Background: Autism is a highly heritable, complex neurodevelopmental disorder, yet identifying its genetic basis has been challenging. To date, numerous susceptibility genes and chromosomal abnormalities have been reported in association with autism, but most discoveries either fail to be replicated or account for only a small effect. Thus, in most cases the underlying causative genetic mechanisms are not fully understood. In the present work, the Autism Genetic Database (AGD) was developed as a literature-driven, web-based, easy-to-access database designed to create a comprehensive repository for all currently reported genes and genomic copy number variations (CNVs) associated with autism, in order to facilitate the assessment of these autism susceptibility genetic factors. Description: AGD is a relational database that organizes data resulting from exhaustive literature searches for reported susceptibility genes and CNVs associated with autism. Furthermore, genomic information about human fragile sites and noncoding RNAs was also downloaded and parsed from miRBase, snoRNA-LBME-db, piRNABank, and the MIT/ICBP siRNA database. A web-client genome browser enables viewing of the features, while a web-client query tool provides access to more specific information about them. Where applicable, links are provided to external databases including GenBank, PubMed, miRBase, snoRNA-LBME-db, piRNABank, and the MIT siRNA database. Conclusion: AGD comprises a comprehensive list of susceptibility genes and copy number variations reported to date in association with autism, as well as all known human noncoding RNA genes and fragile sites. Such an inclusive autism genetic database will facilitate the evaluation of autism susceptibility factors in relation to known human noncoding RNAs and fragile sites and their impact on human disease. As a result, this new autism database offers a valuable tool for the research
Lea, Isabel A.; Gong, Hui; Paleja, Anand; Rashid, Asif; Fostel, Jennifer
The Chemical Effects in Biological Systems (CEBS) database is a comprehensive and unique toxicology resource that compiles individual and summary animal data from the National Toxicology Program (NTP) testing program and other depositors into a single electronic repository. CEBS has undergone significant updates in recent years and currently contains over 11,000 test articles (exposure agents) and over 8,000 studies, including all available NTP carcinogenicity, short-term toxicity and genetic toxicity studies. Study data provided to CEBS are manually curated, accessioned and subjected to quality assurance review prior to release to ensure high quality. The CEBS database has two main components: data collection and data delivery. To accommodate the breadth of data produced by NTP, the CEBS data collection component is an integrated relational design flexible enough to capture any type of electronic data collected to date. The data delivery component of the database comprises a series of dedicated user interface tables containing pre-processed data that support each component of the user interface. The user interface has been updated to include a series of nine Guided Search tools that allow access to NTP summary and conclusion data and larger non-NTP datasets. The CEBS database can be accessed online at http://www.niehs.nih.gov/research/resources/databases/cebs/. PMID:27899660
Huang, Weiliang; Brewer, Luke K; Jones, Jace W; Nguyen, Angela T; Marcu, Ana; Wishart, David S; Oglesby-Sherrouse, Amanda G; Kane, Maureen A; Wilks, Angela
The Pseudomonas aeruginosa Metabolome Database (PAMDB, http://pseudomonas.umaryland.edu) is a searchable, richly annotated metabolite database specific to P. aeruginosa. P. aeruginosa is a soil organism and significant opportunistic pathogen that adapts to its environment through a versatile energy metabolism network. Furthermore, P. aeruginosa is a model organism for the study of biofilm formation, quorum sensing, and bioremediation processes, each of which is dependent on unique pathways and metabolites. The PAMDB is modelled on the Escherichia coli (ECMDB), yeast (YMDB) and human (HMDB) metabolome databases and contains >4370 metabolites and 938 pathways with links to over 1260 genes and proteins. The database information was compiled from electronic databases, journal articles and mass spectrometry (MS) metabolomic data obtained in our laboratories. For each metabolite entered, we provide detailed compound descriptions, names and synonyms, structural and physicochemical information, nuclear magnetic resonance (NMR) and MS spectra, enzyme and pathway information, as well as gene and protein sequences. The database allows extensive searching via chemical names, structure and molecular weight, together with gene, protein and pathway relationships. The PAMDB and its future iterations will provide a valuable resource to biologists, natural product chemists and clinicians in identifying active compounds, potential biomarkers and clinical diagnostics. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Mishchenko, Michael I.; Videen, Gorden; Khlebtsov, Nikolai G.; Wriedt, Thomas
The T-matrix method is one of the most versatile, efficient, and accurate theoretical techniques widely used for numerically exact computer calculations of electromagnetic scattering by single and composite particles, discrete random media, and particles embedded in complex environments. This paper presents the fifth update to the comprehensive database of peer-reviewed T-matrix publications initiated by us in 2004 and includes relevant publications that have appeared since 2012. It also lists several earlier publications not incorporated in the original database, including Peter Waterman's reports from the 1960s illustrating the history of the T-matrix approach and demonstrating that John Fikioris and Peter Waterman were the true pioneers of the multi-sphere method otherwise known as the generalized Lorenz-Mie theory.
Tan Jun; Pu Jiantao; Zheng Bin; Wang Xingwei; Leader, Joseph K.
Purpose: The Lung Image Database Consortium (LIDC) database is the largest public CT image database of lung nodules. In this study, the authors present a comprehensive and up-to-date analysis of this dynamically growing database with the help of a computerized tool, aiming to assist researchers in optimally using this database for lung cancer related investigations. Methods: The authors developed a computer scheme to automatically match the nodule outlines marked manually by radiologists on CT images. A large variety of characteristics of the annotated nodules in the database, including volume, spiculation level, elongation, and interobserver variability, as well as the intersection of delineated nodule voxels and the overlapping ratio between the same nodules marked by different radiologists, are automatically calculated and summarized. The scheme was applied to analyze all 157 examinations with complete annotation data currently available in the LIDC dataset. Results: The scheme summarizes the statistical distributions of the abovementioned geometric and diagnostic features. Among the 391 nodules, (1) 365 (93.35%) have a principal axis length ≤20 mm; (2) 120, 75, 76, and 120 were marked by one, two, three, and four radiologists, respectively; and (3) 122 (32.48%) have maximum volume overlapping ratios ≥80% for the delineations of two radiologists, while 198 (50.64%) have maximum volume overlapping ratios <60%. The results also showed that 72.89% of the nodules were assessed with a malignancy score between 2 and 4, and only 7.93% of these nodules were considered severely malignant (malignancy ≥4). Conclusions: This study demonstrates that the LIDC contains examinations covering a diverse distribution of nodule characteristics and can be a useful resource for assessing the performance of nodule detection and/or segmentation schemes.
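The voxel-overlap comparison described above can be sketched in a few lines. The abstract does not give the exact formula behind the "volume overlapping ratio", so this illustration assumes intersection over union of two readers' voxel sets; the names and toy delineations are invented.

```python
def overlap_ratio(voxels_a, voxels_b):
    """Overlap between two delineations given as sets of (x, y, z)
    voxel coordinates; defined here (an assumption) as |A ∩ B| / |A ∪ B|."""
    union = len(voxels_a | voxels_b)
    return len(voxels_a & voxels_b) / union if union else 0.0

# two toy cube-shaped "delineations" of the same nodule, offset by one voxel
reader1 = {(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)}  # 8 voxels
reader2 = {(x, y, z) for x in (1, 2) for y in (0, 1) for z in (0, 1)}  # 8 voxels
print(round(overlap_ratio(reader1, reader2), 3))  # 0.333: 4 shared voxels, 12 in union
```

Intersection over the smaller delineation is another common choice; under either definition the ratio is 1.0 only when the two readers delineate exactly the same voxels.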
Mishchenko, Michael I.; Zakharova, Nadezhda T.; Khlebtsov, Nikolai G.; Wriedt, Thomas; Videen, Gorden
This paper is the sixth update to the comprehensive thematic database of peer-reviewed T-matrix publications initiated by us in 2004 and includes relevant publications that have appeared since 2013. It also lists several earlier publications not incorporated in the original database and previous updates.
Background: Medicago truncatula has been chosen as a model species for genomic studies. It is closely related to an important legume, alfalfa. Transporters are a large group of membrane-spanning proteins. They deliver essential nutrients, eject waste products, and assist the cell in sensing environmental conditions by forming a complex system of pumps and channels. Although studies have effectively characterized individual M. truncatula transporters in several databases, until now there has been no systematic database that includes all transporters in M. truncatula. Description: The M. truncatula transporter database (MTDB) contains comprehensive information on the transporters in M. truncatula. Based on the TransportTP method, we have presented a novel prediction pipeline. A total of 3,665 putative transporters have been annotated based on the International Medicago Genome Annotation Group (IMGAG) V3.5 and M. truncatula Gene Index (MTGI) V10.0 releases and assigned to 162 families according to the transporter classification system. These families were further classified into seven types according to their transport mode and energy coupling mechanism. Extensive annotations were generated for each protein, including basic protein function, expressed sequence tag (EST) mapping, genome locus, three-dimensional template prediction, transmembrane segments, and domain annotation. A chromosome distribution map and text-based Basic Local Alignment Search Tools were also created. In addition, we have provided a way to explore the expression of putative M. truncatula transporter genes under stress treatments. Conclusions: In summary, the MTDB enables the exploration and comparative analysis of putative transporters in M. truncatula. A user-friendly web interface and regular updates make MTDB valuable to researchers in related fields. The MTDB is freely available to all users at http://bioinformatics.cau.edu.cn/MtTransporter/.
Berry, H.A.; Burson, Z.G.
After a radiological accident occurs, it is highly desirable to promptly begin developing a comprehensive and accountable environmental database, both for immediate health and safety needs and for long-term documentation. Although the need to assess and evaluate the impact of the accident as quickly as possible is always very urgent, the technical integrity of the data must also be assured and maintained. Care must therefore be taken to log, collate, and organize the environmental data into a complete and accountable database. The key components of database development are summarized, as well as the experience gained in organizing and handling environmental data acquired during: (1) the Three Mile Island accident (TMI, 1979); (2) the St. Lucie Reactor Accident Exercise (through the Federal Radiological Measurement and Assessment Center (FRMAC), March 1984); (3) the Sequoyah Fuels Inc. uranium hexafluoride accident near Gore, Oklahoma (January 1986); and (4) the Chernobyl reactor accident in Ukraine, then part of the Soviet Union (April 1986)
Ho, Che Wah
The CHinese ANcient Texts (CHANT) database is a long-term project which began in 1988 to build up a comprehensive database of all ancient Chinese texts up to the sixth century AD. The project is near completion and the entire database, which includes both traditional and excavated materials, will be released on the CHANT Web site (www.chant.org) in mid-2002. With more than a decade of experience in establishing an electronic Chinese literary database, we have gained much insight useful to the...
Ranji, Mohammad; Hadaegh, Ahmad R.
Coccolithophorids are unicellular, marine, golden-brown algae (Haptophyta) commonly found in near-surface waters in patchy distributions. They are part of the phytoplankton, which is responsible for much of the Earth's primary production. Like plants, phytoplankton live on the energy obtained through photosynthesis, which produces oxygen; a substantial amount of the oxygen in the Earth's atmosphere is produced by phytoplankton in this way. The single-celled Emiliania huxleyi is the best-known species of coccolithophorid and is known for extracting bicarbonate (HCO3) from its environment and producing calcium carbonate to form coccoliths. Coccolithophorids are among the world's primary producers, contributing about 15% of average oceanic phytoplankton biomass. They produce elaborate, minute calcite platelets (coccoliths) that cover the cell to form a coccosphere and supply up to 60% of the bulk pelagic calcite deposited on the sea floor. In order to understand the genetics of coccolithophorids and the complexities of their biochemical reactions, we decided to build a database to store a complete profile of these organisms' genomes. Although a variety of such databases currently exist (e.g., http://www.geneservice.co.uk/home/), none has yet been developed to comprehensively address the sequencing efforts underway by the coccolithophorid research community. This database is called CocooExpress and is available to the public (http://bioinfo.csusm.edu) for both data queries and sequence contributions.
Tsuji, Hirokazu; Yokoyama, Norio; Tsukada, Takashi; Nakajima, Hajime
This paper introduces the present status of a comprehensive material performance database for nuclear applications, named the JAERI Material Performance Database (JMPD), and gives examples of its utilization. The JMPD has been developed at JAERI since 1986 with a view to utilizing various kinds of characteristics data of nuclear materials efficiently. The relational database management system PLANNER was employed, and supporting systems for data retrieval and output were expanded. In order to improve the user-friendliness of the retrieval system, menu-selection-type procedures have been developed in which no knowledge of the system or the data structures is required of end users. As to utilization of the JMPD, two types of data analyses are described: (1) a series of statistical analyses was performed in order to estimate the design values of both the yield strength (Sy) and the tensile strength (Su) for aluminum alloys, which are widely used as structural materials for research reactors; and (2) statistical analyses were performed on cyclic crack growth rate data for nuclear pressure vessel steels, and comparisons were made of the variability and/or reproducibility of data obtained by ΔK-increasing and ΔK-constant type tests. (author)
Glick Benjamin S
Background: Molecular biologists work with DNA databases that often include entire genomes. A common requirement is to search a DNA database to find exact matches for a nondegenerate or partially degenerate query. The software programs available for such purposes are normally designed to run on remote servers, but an appealing alternative is to work with DNA databases stored on local computers. We describe a desktop software program termed MICA (K-Mer Indexing with Compact Arrays) that allows large DNA databases to be searched efficiently using very little memory. Results: MICA rapidly indexes a DNA database. On a Macintosh G5 computer, the complete human genome could be indexed in about 5 minutes. The indexing algorithm recognizes all 15 characters of the DNA alphabet and fully captures the information in any DNA sequence, yet for a typical sequence of length L, the index occupies only about 2L bytes. The index can be searched to return a complete list of exact matches for a nondegenerate or partially degenerate query of any length. A typical search of a long DNA sequence involves reading only a small fraction of the index into memory. As a result, searches are fast even when the available RAM is limited. Conclusion: MICA is suitable as a search engine for desktop DNA analysis software.
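A toy version of the k-mer indexing idea can make the search strategy concrete. A plain dictionary stands in for MICA's compact arrays (which are far more memory-efficient), and all names, parameters, and the toy genome below are illustrative:

```python
from collections import defaultdict

def build_index(seq, k):
    """Map every k-mer in `seq` to its start positions (a toy stand-in
    for MICA's compact-array index)."""
    index = defaultdict(list)
    for i in range(len(seq) - k + 1):
        index[seq[i:i + k]].append(i)
    return index

def find_exact(seq, index, query, k):
    """All start positions where `query` (length >= k) matches exactly:
    look up the query's first k-mer, then verify the full match."""
    hits = []
    for pos in index.get(query[:k], []):
        if seq[pos:pos + len(query)] == query:
            hits.append(pos)
    return hits

genome = "ACGTACGTTTACGTACGA"
idx = build_index(genome, k=4)
print(find_exact(genome, idx, "ACGTAC", k=4))  # [0, 10]
```

Queries shorter than k and the degenerate bases of the full 15-letter IUPAC alphabet that MICA handles are omitted here for brevity.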
Winkler, James D; Halweg-Edwards, Andrea L; Erickson, Keesha E; Choudhury, Alaksh; Pines, Gur; Gill, Ryan T
The microbial ability to resist stressful environmental conditions and chemical inhibitors is of great industrial and medical interest. Much of the data related to mutation-based stress resistance, however, is scattered through the academic literature, making it difficult to apply systematic analyses to this wealth of information. To address this issue, we introduce the Resistome database: a literature-curated collection of Escherichia coli genotype-phenotype records covering over 5,000 mutants that resist hundreds of compounds and environmental conditions. We use the Resistome to assess our current state of knowledge regarding resistance and to detect potential synergy or antagonism between resistance phenotypes. Our dataset represents one of the most comprehensive collections of genomic data related to resistance currently available. Future development will focus on the construction of a combined genomic-transcriptomic-proteomic framework for understanding E. coli's resistance biology. The Resistome can be downloaded at https://bitbucket.org/jdwinkler/resistome_release/overview.
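A first step toward the cross-phenotype analysis mentioned above is counting how often two resistance phenotypes co-occur in the same mutant. The sketch below runs on invented records; the mutant and stress names are made up, not actual Resistome entries:

```python
from itertools import combinations

# toy genotype -> set of resisted stresses (illustrative records only)
mutants = {
    "mutA": {"acid", "heat"},
    "mutB": {"acid", "heat", "ethanol"},
    "mutC": {"ethanol"},
    "mutD": {"acid", "heat"},
    "mutE": {"heat"},
}

def cooccurrence(mutants):
    """Count how often each pair of resistance phenotypes appears in the
    same mutant; frequent pairs are candidates for synergy analysis."""
    counts = {}
    for phenos in mutants.values():
        for pair in combinations(sorted(phenos), 2):
            counts[pair] = counts.get(pair, 0) + 1
    return counts

print(cooccurrence(mutants))
# {('acid', 'heat'): 3, ('acid', 'ethanol'): 1, ('ethanol', 'heat'): 1}
```

A real analysis would compare these counts against the frequencies expected if phenotypes were independent, to separate genuine synergy from phenotypes that are simply common.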
Mewes, H Werner; Ruepp, Andreas; Theis, Fabian; Rattei, Thomas; Walter, Mathias; Frishman, Dmitrij; Suhre, Karsten; Spannagl, Manuel; Mayer, Klaus F X; Stümpflen, Volker; Antonov, Alexey
The Munich Information Center for Protein Sequences (MIPS, at the Helmholtz Center for Environmental Health, Neuherberg, Germany) has many years of experience in providing annotated collections of biological data. Selected data sets of high relevance, such as model genomes, are subjected to careful manual curation, while the bulk of high-throughput data is annotated by automatic means. High-quality reference resources developed in the past and still actively maintained include Saccharomyces cerevisiae, Neurospora crassa and Arabidopsis thaliana genome databases as well as several protein interaction data sets (MPACT, MPPI and CORUM). More recent projects are PhenomiR, the database on microRNA-related phenotypes, and MIPS PlantsDB for integrative and comparative plant genome research. The interlinked resources SIMAP and PEDANT provide homology relationships as well as up-to-date and consistent annotation for 38,000,000 protein sequences. PPLIPS and CCancer are versatile tools for proteomics and functional genomics, interfacing to a database of gene-list compilations extracted from the literature. A novel literature-mining tool, EXCERBT, gives access to structured information on classified relations between genes, proteins, phenotypes and diseases extracted from Medline abstracts by semantic analysis. All databases described here, as well as detailed descriptions of our projects, can be accessed through the MIPS WWW server (http://mips.helmholtz-muenchen.de).
Zakharova, Nadezhda T.; Videen, G.; Khlebtsov, Nikolai G.
The T-matrix method is one of the most versatile and efficient theoretical techniques widely used for the computation of electromagnetic scattering by single and composite particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper presents an update to the comprehensive database of peer-reviewed T-matrix publications compiled by us previously and includes the publications that appeared since 2009. It also lists several earlier publications not included in the original database.
Mishchenko, Michael I.; Zakharova, Nadia T.; Videen, Gorden; Khlebtsov, Nikolai G.; Wriedt, Thomas
The T-matrix method is among the most versatile, efficient, and widely used theoretical techniques for the numerically exact computation of electromagnetic scattering by homogeneous and composite particles, clusters of particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper presents an update to the comprehensive database of T-matrix publications compiled by us previously and includes the publications that appeared since 2007. It also lists several earlier publications not included in the original database.
Demarest, Geoffrey B
The Defense Intelligence Agency asked the Foreign Military Studies Office (FMSO) to determine the feasibility of producing a digital database of Colombian real property, and to express the usefulness of such a database...
Avvaru, Akshay Kumar; Saxena, Saketh; Sowpati, Divya Tej; Mishra, Rakesh Kumar
Microsatellites, also known as Simple Sequence Repeats (SSRs), are short tandem repeats of 1-6 nt motifs present in all genomes, particularly eukaryotes. Besides their usefulness as genome markers, SSRs have been shown to perform important regulatory functions, and variations in their length at coding regions are linked to several disorders in humans. Microsatellites show a taxon-specific enrichment in eukaryotic genomes, and some may be functional. MSDB (Microsatellite Database) is a collection of >650 million SSRs from 6,893 species including Bacteria, Archaea, Fungi, Plants, and Animals. This database is by far the most exhaustive resource to access and analyze SSR data of multiple species. In addition to exploring data in a customizable tabular format, users can view and compare the data of multiple species simultaneously using our interactive plotting system. MSDB is developed using the Django framework and MySQL. It is freely available at http://tdb.ccmb.res.in/msdb. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Eighty-one echinoid species are present south of the Antarctic Convergence, where they represent an important component of the benthic fauna. "Antarctic echinoids" is an interactive database synthesising the results of more than 100 years of Antarctic expeditions and comprising information about all echinoid species. It includes illustrated keys for determination of the species, information about their morphology and ecology (text, illustrations and a glossary) and their distribution (maps and histograms of bathymetric distribution); the sources of the information (bibliography, collections and expeditions) are also provided. All these data (taxonomic, morphological, geographic, bathymetric, etc.) can be interactively queried in two main ways: (1) display of listings that can be browsed, sorted according to various criteria, or printed; and (2) interactive requests crossing the different kinds of data. Many other possibilities are offered, and an online help file is also available.
Sverdlov Alexander V
Background: The availability of multiple, essentially complete genome sequences of prokaryotes and eukaryotes spurred both the demand and the opportunity for the construction of an evolutionary classification of genes from these genomes. Such a classification system, based on orthologous relationships between genes, appears to be a natural framework for comparative genomics and should facilitate both functional annotation of genomes and large-scale evolutionary studies. Results: We describe here a major update of the previously developed system for delineation of Clusters of Orthologous Groups of proteins (COGs) from the sequenced genomes of prokaryotes and unicellular eukaryotes, and the construction of clusters of predicted orthologs for 7 eukaryotic genomes, which we named KOGs, for eukaryotic orthologous groups. The COG collection currently consists of 138,458 proteins, which form 4873 COGs and comprise 75% of the 185,505 (predicted) proteins encoded in 66 genomes of unicellular organisms. The eukaryotic orthologous groups (KOGs) include proteins from 7 eukaryotic genomes: three animals (the nematode Caenorhabditis elegans, the fruit fly Drosophila melanogaster, and Homo sapiens), one plant (Arabidopsis thaliana), two fungi (Saccharomyces cerevisiae and Schizosaccharomyces pombe), and the intracellular microsporidian parasite Encephalitozoon cuniculi. The current KOG set consists of 4852 clusters of orthologs, which include 59,838 proteins, or ~54% of the 110,655 analyzed eukaryotic gene products. Compared to the coverage of the prokaryotic genomes with COGs, a considerably smaller fraction of eukaryotic genes could be included in the KOGs; the addition of new eukaryotic genomes is expected to result in a substantial increase in the coverage of eukaryotic genomes with KOGs. Examination of the phyletic patterns of KOGs reveals a conserved core represented in all analyzed species and consisting of ~20% of the KOG set. This conserved portion of the
Attimonelli, M.; Altamura, N.; Benne, R.; Brennicke, A.; Cooper, J. M.; D'Elia, D.; Montalvo, A.; Pinto, B.; de Robertis, M.; Golik, P.; Knoop, V.; Lanave, C.; Lazowska, J.; Licciulli, F.; Malladi, B. S.; Memeo, F.; Monnerot, M.; Pasimeni, R.; Pilbout, S.; Schapira, A. H.; Sloof, P.; Saccone, C.
MitBASE is an integrated and comprehensive database of mitochondrial DNA data which collects, under a single interface, databases for Plant, Vertebrate, Invertebrate, Human, Protist and Fungal mtDNA and a Pilot database on nuclear genes involved in mitochondrial biogenesis in Saccharomyces
Wang, Dapeng; Zhang, Yubin; Fan, Zhonghua; Liu, Guiming; Yu, Jun
Animal genes of different lineages, such as vertebrates and arthropods, are well organized and blended into dynamic chromosomal structures that represent a primary regulatory mechanism for body development and cellular differentiation. The majority of genes in a genome are actually clustered; such clusters are evolutionarily stable to different extents and biologically meaningful when evaluated among genomes within and across lineages. Until now, many questions concerning gene organization, such as the minimal number of genes in a cluster and the driving force leading to gene co-regulation, remain to be addressed. Here, we provide a user-friendly database, LCGbase (a comprehensive database for lineage-based co-regulated genes), hosting information on the evolutionary dynamics of gene clustering and ordering in two animal lineages: vertebrates and arthropods. The database is constructed on a web-based Linux-Apache-MySQL-PHP framework with an effective interactive user-inquiry service. Compared to other gene annotation databases with similar purposes, our database has three clear advantages. First, our database is inclusive, including all high-quality genome assemblies of vertebrates and representative arthropod species. Second, it is human-centric, since we map all gene clusters from other genomes onto the human genome in an order of lineage ranks (such as primates, mammals, warm-blooded animals, and reptiles) and build up from well-defined gene pairs (a minimal cluster in which the two adjacent genes are oriented as co-directional, convergent, or divergent pairs) to large gene clusters. Furthermore, users can search for any adjacent genes and their detailed annotations. Third, the database provides flexible parameter definitions, such as the distance between transcription start sites of two adjacent genes, which is extendable to genes flanking the cluster across species. We also provide useful tools for sequence alignment, gene
Mishchenko, Michael I.; Zakharova, Nadezhda; Khlebtsov, Nikolai G.; Videen, Gorden; Wriedt, Thomas
The T-matrix method is one of the most versatile and efficient direct computer solvers of the macroscopic Maxwell equations and is widely used for the computation of electromagnetic scattering by single and composite particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper is the seventh update to the comprehensive thematic database of peer-reviewed T-matrix publications initiated by us in 2004 and includes relevant publications that have appeared since 2013. It also lists a number of earlier publications overlooked previously.
Liu, Wanting; Xiang, Lunping; Zheng, Tingkai; Jin, Jingjie; Zhang, Gong
Translation is a key regulatory step linking the transcriptome and the proteome. The two major methods of translatome investigation are RNC-seq (sequencing of translating mRNA) and Ribo-seq (ribosome profiling). To facilitate the investigation of translation, we built a comprehensive database, TranslatomeDB (http://www.translatomedb.net/), which provides collection and integrated analysis of published and user-generated translatome sequencing data. The current version includes 2453 Ribo-seq and 10 RNC-seq datasets, together with their 1394 corresponding mRNA-seq datasets, in 13 species. The database emphasizes analysis functions in addition to the dataset collections. Differential gene expression (DGE) analysis can be performed between any two datasets of the same species and type, at both the transcriptome and translatome levels. The translation indices (translation ratio, elongation velocity index, and translational efficiency) can be calculated to quantitatively evaluate translational initiation efficiency and elongation velocity. All datasets were analyzed using a unified, robust, accurate and experimentally verifiable pipeline based on the FANSe3 mapping algorithm and edgeR for DGE analyses. TranslatomeDB also allows users to upload their own datasets and have them analyzed by the identical unified pipeline. We believe that TranslatomeDB is a comprehensive platform and knowledgebase for translatome and proteome research, freeing biologists from the complex searching, analysis, and comparison of huge sequencing datasets and from the need for local computational power. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
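Translational efficiency, one of the indices mentioned above, is commonly computed as the ratio of ribosome-footprint abundance to mRNA abundance for a gene. The sketch below assumes RPKM normalization and invented counts, since the abstract does not spell out TranslatomeDB's exact formula:

```python
def rpkm(counts, length_bp, total_reads):
    """Reads per kilobase of transcript per million mapped reads."""
    return counts * 1e9 / (length_bp * total_reads)

def translational_efficiency(ribo_counts, rna_counts, length_bp,
                             ribo_total, rna_total):
    """TE for one gene: Ribo-seq abundance over mRNA-seq abundance,
    each normalized for gene length and library size."""
    return (rpkm(ribo_counts, length_bp, ribo_total)
            / rpkm(rna_counts, length_bp, rna_total))

# toy gene: 1.5 kb, 500 ribosome footprints, 2000 mRNA reads
te = translational_efficiency(ribo_counts=500, rna_counts=2000, length_bp=1500,
                              ribo_total=2_000_000, rna_total=4_000_000)
print(round(te, 6))  # 0.5
```

Because the gene length cancels in the ratio, TE is driven by the relative read depths; values below 1 suggest the gene is translated less efficiently than the average transcript in these toy libraries.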
Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...
Li, Mengjiao; E, Qimin; Liu, Jialin; Huang, Tingting; Liang, Chuanyu
By collecting and analyzing laryngeal cancer related genes and miRNAs, we built a comprehensive laryngeal cancer related gene database. Unlike current biological information databases with complex and unwieldy structures, it focuses on the themes of genes and miRNAs, making research and teaching more convenient and efficient. Based on the B/S architecture, using Apache as the web server, MySQL for database design and PHP for web development, a comprehensive database of laryngeal cancer related genes was established, providing gene tables, protein tables, miRNA tables and clinical information tables for patients with laryngeal cancer. The established database contains 207 laryngeal cancer related genes, 243 proteins, 26 miRNAs, and their particular information such as mutations, methylations, differential expression, and the empirical references of laryngeal cancer relevant molecules. The database can be accessed and operated via the Internet, through which browsing and retrieval of the information are performed. The database is maintained and updated regularly. The database of laryngeal cancer related genes is resource-integrated and user-friendly, providing a genetic information query tool for the study of laryngeal cancer.
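The gene/miRNA table design described above can be sketched with a small relational schema. In this sketch sqlite3 stands in for MySQL so the example is self-contained, and all table, column, and record names are illustrative, not the published schema:

```python
import sqlite3

# in-memory stand-in for the MySQL backend described in the abstract
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE gene  (gene_id INTEGER PRIMARY KEY, symbol TEXT,
                        mutation TEXT, methylation TEXT);
    CREATE TABLE mirna (mirna_id INTEGER PRIMARY KEY, name TEXT,
                        target_gene INTEGER REFERENCES gene(gene_id));
""")
# illustrative rows, not actual database content
conn.execute("INSERT INTO gene VALUES (1, 'TP53', 'missense', 'unmethylated')")
conn.execute("INSERT INTO mirna VALUES (1, 'miR-21', 1)")

# join gene and miRNA tables, as a gene-centered query tool would
row = conn.execute(
    "SELECT g.symbol, m.name FROM gene g "
    "JOIN mirna m ON m.target_gene = g.gene_id"
).fetchone()
print(row)  # ('TP53', 'miR-21')
```

Keeping genes, proteins, miRNAs, and clinical records in separate linked tables is what lets a thematic database like this answer focused queries without the overhead of a general-purpose bioinformatics schema.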
Background: China is one of the countries with the largest number of patients suffering from Duchenne and Becker muscular dystrophy (DMD/BMD). Although the building of international DMD/BMD databases has laid a foundation for clinical drug development and clinical trials, it has not yet been carried out in China. In this study, a modified registry form of Remudy was applied to 229 DMD/BMD patients in order to establish a comprehensive database, which will lay the groundwork for international cooperation. Methods: A total of 229 DMD/BMD patients diagnosed by genetic testing or muscle biopsy and admitted to the Children's Hospital of Fudan University (CHFU) during the period of August 2011 to December 2013 were enrolled in this study. The data included sex, age, age at diagnosis, geographic distribution of patients, DMD gene mutation types, family history, walking capability, cardiac and respiratory function, steroid treatment and rehabilitation intervention. Results: There were 194 DMD and 35 BMD male patients diagnosed at the age of 0-18 years; most patients were diagnosed at the age of >3-4 years (16.59%, 38/229) or >7-8 years (14.85%, 34/229). Exon deletion was the most frequent genetic mutation for DMD/BMD [65.46% (127/194) and 74.29% (26/35), respectively]. Patients with a family history accounted for 23.14% (53/229). The rate of DMD registrants losing walking capability was 17.53% (34/194), and all the BMD registrants were able to walk. Cardiac function was examined in 46.29% (106/229) of DMD/BMD boys and respiratory function was examined in 17.90% (41/229). The proportion of DMD patients receiving prednisone at a dosage of 0.75 mg/(kg·d) was 26.29% (51/194). Conclusions: This database describes in detail the genotype, clinical manifestation, diagnosis, treatment and rehabilitation status of 229 DMD/BMD patients in China. The database not only provides comprehensive information for DMD/BMD patient management
Jan-Ole Christian; Patrick May; Stefan Kempa; Dirk Walther
*Background* - The unicellular green alga _Chlamydomonas reinhardtii_ is an important eukaryotic model organism for the study of photosynthesis and growth, as well as flagella development and other cellular processes. In the era of high-throughput technologies there is an imperative need to integrate large-scale data sets from high-throughput experimental techniques using computational methods and database resources to provide comprehensive information about the whole cellular system of a sin...
Ishak, Mustapha; Lake, Kayll
We describe a new interactive database (GRDB) of geometric objects in the general area of differential geometry. Database objects include, but are not restricted to, exact solutions of Einstein's field equations. GRDB is designed for researchers (and teachers) in applied mathematics, physics and related fields. The flexible search environment allows the database to be useful over a wide spectrum of interests, for example, from practical considerations of neutron star models in astrophysics to abstract space-time classification schemes. The database is built using a modular and object-oriented design and uses several Java technologies (e.g. Applets, Servlets, JDBC). These are platform-independent and well adapted for applications developed for the World Wide Web. GRDB is accompanied by a virtual calculator (GRTensorJ), a graphical user interface to the computer algebra system GRTensorII, used to perform online coordinate, tetrad or basis calculations. The highly interactive nature of GRDB allows systematic internal self-checking and minimization of the required internal records. This new database is now available online at http://grdb.org
Kim Woo Taek
Full Text Available Abstract Background There is no dedicated database available for Expressed Sequence Tags (ESTs) of the chili pepper (Capsicum annuum), although international interest in a chili pepper EST database is increasing due to the nutritional, economic, and pharmaceutical value of the plant. Recent advances in high-throughput sequencing of the ESTs of chili pepper cv. Bukang have produced hundreds of thousands of complementary DNA (cDNA) sequences. Therefore, a chili pepper EST database was designed and constructed to enable comprehensive analysis of chili pepper gene expression in response to biotic and abiotic stresses. Results We built the Pepper EST database to mine the complexity of chili pepper ESTs. The database was built on 122,582 sequenced ESTs and 116,412 refined ESTs from 21 pepper EST libraries. The ESTs were clustered and assembled into virtual consensus cDNAs, and the cDNAs were assigned to metabolic pathways, Gene Ontology (GO) terms, and the MIPS Functional Catalogue (FunCat). The Pepper EST database is designed to provide a workbench for (i) identifying unigenes in pepper plants, (ii) analyzing expression patterns in different developmental tissues and under conditions of stress, and (iii) comparing the ESTs with those of other members of the Solanaceae family. The Pepper EST database is freely available at http://genepool.kribb.re.kr/pepper/. Conclusion The Pepper EST database is expected to provide a high-quality resource, which will contribute to gaining a systemic understanding of plant diseases and facilitate genetics-based population studies. The database is also expected to contribute to analysis of gene synteny as part of the chili pepper sequencing project by mapping ESTs to the genome.
MacDonald Russell D
Full Text Available Abstract Background The Provincial Transfer Authorization Centre (PTAC) was established as part of the emergency response in Ontario, Canada, to the Severe Acute Respiratory Syndrome (SARS) outbreak in 2003. Prior to 2003, data relating to inter-facility patient transfers were not collected in a systematic manner. Then, in an emergency setting, a comprehensive database with a complex data collection process was established. For the first time in Ontario, population-based data on patient movement between healthcare facilities for a population of twelve million are available. The PTAC database stores all patient transfer data for the province. There are few population-based patient transfer databases, and the PTAC database is believed to be the largest example to house this novel dataset. No patient transfer database had previously been validated. This paper presents the validation of the PTAC database. Methods A random sample of 100 patient inter-facility transfer records was compared to the corresponding institutional patient records from the sending healthcare facilities. Measures of agreement, including sensitivity, were calculated for the 12 common data variables. Results Of the 100 randomly selected patient transfer records, 95 (95%) of the corresponding institutional patient records were located. Data variables in the categories of patient demographics, facility identification, timing of transfer, and reason and urgency of transfer had strong agreement levels. The 10 most commonly used data variables had accuracy rates ranging from 85.3% to 100% and error rates ranging from 0 to 12.6%. These same variables had sensitivity values ranging from 0.87 to 1.0. Conclusion The very high level of agreement between institutional patient records and the PTAC data for the fields compared in this study supports the validity of the PTAC database. For the first time, a population-based patient transfer database has been established. Although it was created
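The per-variable agreement rates reported above boil down to a field-by-field comparison between database records and the corresponding institutional records. A minimal sketch, with invented records and field names rather than PTAC data:

```python
# Field-level agreement between paired records, as in the validation above.
# Records and field names are invented for illustration, not PTAC data.
def agreement_rate(db_records, source_records, field):
    """Fraction of record pairs whose `field` values match exactly."""
    pairs = zip(db_records, source_records)
    matches = sum(1 for d, s in pairs if d[field] == s[field])
    return matches / len(db_records)

# Four hypothetical transfer records compared against institutional charts
db  = [{"urgency": "emergent"}, {"urgency": "urgent"},
       {"urgency": "emergent"}, {"urgency": "scheduled"}]
src = [{"urgency": "emergent"}, {"urgency": "urgent"},
       {"urgency": "urgent"},   {"urgency": "scheduled"}]
print(agreement_rate(db, src, "urgency"))  # 0.75
```

The reported accuracy rates per variable are exactly this quantity, computed over the 95 located record pairs.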
Christina M Lill
Full Text Available More than 800 published genetic association studies have implicated dozens of potential risk loci in Parkinson's disease (PD). To facilitate the interpretation of these findings, we have created a dedicated online resource, PDGene, that comprehensively collects and meta-analyzes all published studies in the field. A systematic literature screen of ~27,000 articles yielded 828 eligible articles from which relevant data were extracted. In addition, individual-level data from three publicly available genome-wide association studies (GWAS) were obtained and subjected to genotype imputation and analysis. Overall, we performed meta-analyses on more than seven million polymorphisms originating from GWAS datasets and/or from smaller-scale PD association studies. Meta-analyses on 147 SNPs were supplemented by unpublished GWAS data from up to 16,452 PD cases and 48,810 controls. Eleven loci showed genome-wide significant (P < 5 × 10⁻⁸) association with disease risk: BST1, CCDC62/HIP1R, DGKQ/GAK, GBA, LRRK2, MAPT, MCCC1/LAMP3, PARK16, SNCA, STK39, and SYT11/RAB25. In addition, we identified novel evidence for genome-wide significant association with a polymorphism in ITGA8 (rs7077361, OR 0.88, P = 1.3 × 10⁻⁸). All meta-analysis results are freely available in a dedicated online database (www.pdgene.org), which is cross-linked with a customized track on the UCSC Genome Browser. Our study provides an exhaustive and up-to-date summary of the status of PD genetics research that can be readily scaled to include the results of future large-scale genetics projects, including next-generation sequencing studies.
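Pooling per-study SNP effects as described above is conventionally done with inverse-variance fixed-effect meta-analysis on the log-odds-ratio scale; that PDGene uses exactly this model is an assumption here, and the (OR, SE) inputs below are invented:

```python
import math

# Inverse-variance fixed-effect meta-analysis on the log-OR scale, the
# standard model for pooling per-study SNP effects. The three (OR, SE)
# pairs are invented for illustration.
def fixed_effect_meta(odds_ratios, standard_errors):
    logs = [math.log(o) for o in odds_ratios]
    weights = [1.0 / se**2 for se in standard_errors]   # 1 / variance
    pooled_log = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    z = pooled_log / pooled_se
    # Two-sided p-value from the normal approximation
    p = math.erfc(abs(z) / math.sqrt(2))
    return math.exp(pooled_log), p

or_pooled, p = fixed_effect_meta([0.85, 0.90, 0.88], [0.03, 0.05, 0.04])
# Test the pooled estimate against the genome-wide significance threshold
print(round(or_pooled, 3), p < 5e-8)
```

The 5 × 10⁻⁸ threshold quoted in the abstract is the conventional genome-wide significance level after multiple-testing correction over roughly a million independent common variants.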
Chen, Dijun; Yuan, Chunhui; Zhang, Jian; Zhang, Zhao; Bai, Lin; Meng, Yijun; Chen, Ling-Ling; Chen, Ming
Natural antisense transcripts (NATs), one type of regulatory RNA, occur prevalently in plant genomes and play significant roles in physiological and pathological processes. Although their important biological functions have been reported widely, a comprehensive database has been lacking until now. Consequently, we constructed a plant NAT database (PlantNATsDB) covering approximately 2 million NAT pairs in 69 plant species. GO annotation and currently available high-throughput small RNA sequencing data were integrated to investigate the biological function of NATs. PlantNATsDB provides various user-friendly web interfaces to facilitate the presentation of NATs and an integrated, graphical network browser to display the complex networks formed by different NATs. Moreover, a 'Gene Set Analysis' module based on GO annotation was designed to identify statistically significantly overrepresented GO categories in a specific NAT network. PlantNATsDB is currently the most comprehensive resource on NATs in the plant kingdom and can serve as a reference database for investigating the regulatory functions of NATs. PlantNATsDB is freely available at http://bis.zju.edu.cn/pnatdb/.
Zhang, Bofei; Hu, Senyang; Baskin, Elizabeth; Patt, Andrew; Siddiqui, Jalal K; Mathé, Ewy A
The value of metabolomics in translational research is undeniable, and metabolomics data are increasingly generated in large cohorts. The functional interpretation of disease-associated metabolites, though, is difficult, and the biological mechanisms that underlie cell-type- or disease-specific metabolomics profiles are oftentimes unknown. To fully exploit metabolomics data and aid in its interpretation, analyzing metabolomics data together with other complementary omics data, including transcriptomics, is helpful. To facilitate such analyses at a pathway level, we have developed RaMP (Relational database of Metabolomics Pathways), which combines biological pathways from the Kyoto Encyclopedia of Genes and Genomes (KEGG), Reactome, WikiPathways, and the Human Metabolome DataBase (HMDB). To the best of our knowledge, an off-the-shelf, public database that maps genes and metabolites to biochemical/disease pathways and can readily be integrated into other existing software is currently lacking. For consistent and comprehensive analysis, RaMP enables batch and complex queries (e.g., list all metabolites involved in glycolysis and lung cancer), can readily be integrated into pathway analysis tools, and supports pathway overrepresentation analysis given a list of genes and/or metabolites of interest. For usability, we have developed a RaMP R package (https://github.com/Mathelab/RaMP-DB), including a user-friendly RShiny web application, that supports basic simple and batch queries, pathway overrepresentation analysis given a list of genes or metabolites of interest, and network visualization of gene-metabolite relationships. The package also includes the raw database file (a MySQL dump), thereby providing a stand-alone downloadable framework for public use and integration with other tools. In addition, the Python code needed to recreate the database on another system is also publicly available (https://github.com/Mathelab/RaMP-BackEnd). Updates for databases in RaMP will be
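Pathway overrepresentation analysis of the kind described above is conventionally computed with a hypergeometric test; that RaMP uses exactly this model is an assumption here, and all counts below are invented:

```python
from math import comb

# Hypergeometric overrepresentation test: the standard model behind
# pathway enrichment tools. The exact model used by RaMP is an
# assumption; the counts below are invented.
def hypergeom_pvalue(universe, in_pathway, selected, overlap):
    """P(X >= overlap) when drawing `selected` items without replacement
    from `universe`, of which `in_pathway` belong to the pathway."""
    total = comb(universe, selected)
    p = 0.0
    for k in range(overlap, min(in_pathway, selected) + 1):
        p += comb(in_pathway, k) * comb(universe - in_pathway, selected - k) / total
    return p

# 5 of a 20-metabolite query hit a 50-member pathway in a
# 1000-metabolite universe (expected overlap by chance: 1)
p = hypergeom_pvalue(universe=1000, in_pathway=50, selected=20, overlap=5)
print(p < 0.01)
```

A batch query over many pathways would run this once per pathway and then correct the resulting p-values for multiple testing (e.g., Benjamini-Hochberg).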
Full Text Available The Intermediate Data Structure (IDS) is a standardised database structure for longitudinal historical databases. Such a common structure facilitates data sharing and comparative research. In this study, we propose an extended version of IDS, named IDS-Geo, that also includes geographic data. The geographic data stored in IDS-Geo are primarily buildings and/or property units, and their main purpose is to link individuals to places in space. When we want to assign such detailed spatial locations to individuals (in times before detailed house addresses were available), we often have to create tailored geographic datasets. In those cases, there are benefits to storing geographic data in the same structure as the demographic data. Moreover, we propose the export of data from IDS-Geo using an eXtensible Markup Language (XML) Schema. IDS-Geo is implemented in a case study using historical property units, for the period 1804 to 1913, stored in a geographically extended version of the Scanian Economic Demographic Database (SEDD). To fit the IDS-Geo data structure, we included an object lifeline representation of all of the property units (based on the snapshot time representation of single historical maps and poll-tax registers). The case study verifies that the IDS-Geo model is capable of handling geographic data that can be linked to demographic data.
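An XML export of a property unit with an object lifeline and a point geometry can be sketched as follows; the element names are illustrative assumptions, not the actual IDS-Geo schema:

```python
import xml.etree.ElementTree as ET

# Sketch of serializing one property unit: a lifeline (valid-from/to dates)
# plus a point geometry linking it to a place. Element and attribute names
# are invented, not the real IDS-Geo XML Schema; values are made up.
unit = ET.Element("PropertyUnit", id="pu-1804-017")
lifeline = ET.SubElement(unit, "Lifeline")
ET.SubElement(lifeline, "Start").text = "1804-01-01"
ET.SubElement(lifeline, "End").text = "1913-12-31"
geom = ET.SubElement(unit, "Geometry", srs="EPSG:4326")
ET.SubElement(geom, "Point").text = "55.70 13.19"   # lat lon
xml_text = ET.tostring(unit, encoding="unicode")
print(xml_text)
```

The lifeline mirrors the object-lifeline representation the abstract describes: one record per property unit with validity dates derived from the snapshot dates of maps and poll-tax registers.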
Takahashi, Arata; Kumamaru, Hiraku; Tomotaki, Ai; Matsumura, Goki; Fukuchi, Eriko; Hirata, Yasutaka; Murakami, Arata; Hashimoto, Hideki; Ono, Minoru; Miyata, Hiroaki
The Japan Congenital Cardiovascular Surgical Database (JCCVSD) is a nationwide registry whose data are used for health quality assessment and clinical research in Japan. We evaluated the completeness of case registration and the accuracy of recorded data components, including postprocedural mortality and complications, in the database via on-site data adjudication. We validated the records from JCCVSD 2010 to 2012, containing congenital cardiovascular surgery data from 111 facilities throughout Japan. We randomly chose nine facilities for site visits by the auditor team and conducted on-site data adjudication, assessing whether the records in JCCVSD matched the data in the source materials. We identified 1,928 cases of eligible surgeries performed at these facilities, of which 1,910 were registered (99.1% completeness), with 6 cases of duplication and 1 inappropriate case registration. Data components including gender, age, and surgery time (hours) were highly accurate, with 98% to 100% concordance. Mortality at discharge and at 30 and 90 postoperative days was 100% accurate. Among the five complications studied, reoperation was the most frequently observed, with 16 and 21 cases recorded in the database and source materials, respectively, yielding a sensitivity of 0.67 and a specificity of 0.99. Validation of the JCCVSD database showed high registration completeness and high accuracy, especially in the categorical data components. Adjudicated mortality was 100% accurate. While limited in number, the recorded cases of postoperative complications all had high specificities but lower sensitivities (0.67-1.00). Continued data quality improvement and assessment activities are necessary for optimizing the utility of these registries.
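The sensitivity and specificity figures above come from a standard 2×2 comparison of database records against source materials. A minimal sketch, with invented counts chosen only to illustrate the arithmetic (not the actual JCCVSD audit figures):

```python
# Sensitivity and specificity from a 2x2 audit table: database record vs.
# source material. Counts below are invented for illustration.
def sens_spec(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # complications in source also in database
    specificity = tn / (tn + fp)   # non-complications correctly absent
    return sensitivity, specificity

# e.g. 14 complications captured, 7 missed, 2 over-recorded,
# 1887 cases correctly recorded as complication-free
sens, spec = sens_spec(tp=14, fp=2, fn=7, tn=1887)
print(round(sens, 2), round(spec, 3))  # 0.67 0.999
```

Because complications are rare relative to the case volume, specificity is nearly 1 even with a few over-recorded cases, while a handful of missed events pulls sensitivity down sharply; this is the pattern the audit reports.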
Kelbert, A.; Blum, C.
Magnetotelluric Transfer Functions (MT TFs) represent most of the information about Earth's electrical conductivity found in the raw electromagnetic data, providing inputs for further inversion and interpretation. To be useful for scientific interpretation, they must also contain carefully recorded metadata. Making these data available in a discoverable and citable fashion would provide the most benefit to the scientific community, but such a development requires that the metadata be not only present in the file but also searchable. The most commonly used MT TF format to date, the historic Society of Exploration Geophysicists Electromagnetic Data Interchange Standard 1987 (EDI), no longer supports some of the needs of modern magnetotellurics, most notably accurate recording of error bars. Moreover, the inherent heterogeneity of EDI and other historic MT TF formats has largely kept the community away from healthy data-sharing practices. Recently, the MT team at Oregon State University, in collaboration with the IRIS Data Management Center, developed a new XML-based format for MT transfer functions and an online system for long-term storage, discovery, and sharing of MT TF data worldwide (IRIS SPUD; www.iris.edu/spud/emtf). The system provides a query page where all of the MT transfer functions collected within the USArray MT experiment and other field campaigns can be searched for and downloaded; automatic on-the-fly conversion to the historic EDI format is also included. To facilitate conversion to the new, more comprehensive and sustainable XML format for MT TFs, and to streamline inclusion of historic data into the online database, we developed a set of open-source format conversion tools, which can be used for rotation of MT TFs as well as general XML-EDI conversion (https://seiscode.iris.washington.edu/projects/emtf-fcu). Here, we report on the newly established collaboration between the USGS Geomagnetism Program and Oregon State University to gather and
Kågesten, Anna; Parekh, Jenita; Tunçalp, Ozge; Turke, Shani; Blum, Robert William
We systematically reviewed peer-reviewed and gray literature on comprehensive adolescent health (CAH) programs (1998-2013), including sexual and reproductive health services. We screened 36,119 records and extracted articles using predefined criteria. We synthesized data into descriptive characteristics and assessed quality by evidence level. We extracted data on 46 programs, of which 19 were defined as comprehensive. Ten met all inclusion criteria. Most were US based; others were implemented in Egypt, Ethiopia, and Mexico. Three programs displayed rigorous evidence; 5 had strong and 2 had modest evidence. Those with rigorous or strong evidence directly or indirectly influenced adolescent sexual and reproductive health. The long-term impact of many CAH programs cannot be proven because of insufficient evaluations. Evaluation approaches that take into account the complex operating conditions of many programs are needed to better understand the mechanisms behind program effects.
Moukhtar, Mirna; Chaar, Wafi; Abdel-Razzak, Ziad; Khalil, Mohamad; Taha, Samir; Chamieh, Hala
Superfamily 1 and Superfamily 2 helicases, two of the largest helicase protein families, play vital roles in many biological processes including replication, transcription, and translation. Studies of helicase proteins in model archaeal microorganisms have largely contributed to the understanding of their function, architecture, and assembly. Based on a large phylogenomics approach, we have identified and classified all SF1 and SF2 protein families in ninety-five sequenced archaeal genomes. Here we developed an online web server linked to a specialized protein database, named ARCPHdb, to provide access to SF1 and SF2 helicase families from archaea. ARCPHdb was implemented using the MySQL relational database. Web interfaces were developed using NetBeans. Data were stored according to UniProt accession numbers, NCBI RefSeq IDs, PDB IDs, and Entrez database entries. A user-friendly interactive web interface has been developed to browse, search, and download archaeal helicase protein sequences, their available 3D structure models, and related documentation from the literature provided by ARCPHdb. The database provides direct links to matching external databases. ARCPHdb is the first online database to compile all protein information on SF1 and SF2 helicases from archaea in one platform, providing an essential resource for all researchers interested in the field. Copyright © 2016 Elsevier Ltd. All rights reserved.
Jin, Suming; Yang, Limin; Danielson, Patrick; Homer, Collin G.; Fry, Joyce; Xian, George
The importance of characterizing, quantifying, and monitoring land cover, land use, and their changes has been widely recognized by global and environmental change studies. Since the early 1990s, three U.S. National Land Cover Database (NLCD) products (circa 1992, 2001, and 2006) have been released as free downloads for users. NLCD 2006 also provides land cover change products between 2001 and 2006. To continue providing updated national land cover and change datasets, a new initiative to develop NLCD 2011 is currently underway. We present a new Comprehensive Change Detection Method (CCDM), designed as a key component for the development of NLCD 2011, and the research results from two exemplar studies. The CCDM integrates spectral-based change detection algorithms, including a Multi-Index Integrated Change Analysis (MIICA) model and a novel change model called Zone, which extract change information from two Landsat image pairs. The MIICA model is the core module of the change detection strategy and uses four spectral indices (CV, RCVMAX, dNBR, and dNDVI) to obtain the changes that occurred between two image dates. The CCDM also includes a knowledge-based system, which uses critical information on historical and current land cover conditions and trends and the likelihood of land cover change to combine the changes from MIICA and Zone. For NLCD 2011, the improved and enhanced change products obtained from the CCDM provide critical information on the location, magnitude, and direction of potential change areas and serve as a basis for further characterizing land cover changes for the nation. An accuracy assessment from the two study areas shows 100% agreement between the CCDM-mapped no-change class and the reference dataset, and 18% and 82% disagreement for the change class for WRS path/rows p22r39 and p33r33, respectively. The strength of the CCDM is that the method is simple, easy to operate, widely applicable, and capable of capturing a variety of natural and
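Two of MIICA's four indices are differenced band ratios between the two image dates: dNBR from the normalized burn ratio (NIR vs. SWIR2) and dNDVI from the NDVI (NIR vs. red). A sketch with invented single-pixel reflectances:

```python
# Per-pixel differenced indices of the kind MIICA uses: dNBR and dNDVI
# between a pre-change and a post-change Landsat date. Band reflectances
# below are invented sample values for one pixel.
def nbr(nir, swir2):
    """Normalized Burn Ratio."""
    return (nir - swir2) / (nir + swir2)

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

pre  = {"red": 0.05, "nir": 0.40, "swir2": 0.10}   # vegetated pixel, date 1
post = {"red": 0.12, "nir": 0.20, "swir2": 0.25}   # disturbed pixel, date 2

d_nbr  = nbr(pre["nir"], pre["swir2"]) - nbr(post["nir"], post["swir2"])
d_ndvi = ndvi(pre["nir"], pre["red"]) - ndvi(post["nir"], post["red"])
print(round(d_nbr, 3), round(d_ndvi, 3))
```

Large positive dNBR and dNDVI values, as in this example, flag vegetation loss between the two dates; in production the same arithmetic runs over whole raster arrays rather than single pixels.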
Full Text Available The current work addresses one of the key building blocks towards an improved understanding of flood processes and associated changes in flood characteristics and regimes in Europe: the development of a comprehensive, extensive European flood database. The presented work results from ongoing cross-border research collaborations initiated with data collection and joint interpretation in mind. A detailed account of the current state, characteristics, and spatial and temporal coverage of the European Flood Database is presented. At this stage, the hydrological data collection is still growing and currently consists of annual maximum and daily mean discharge series from over 7000 hydrometric stations with various data series lengths. Moreover, the database currently comprises data from over 50 different data sources. The time series have been obtained from different national and regional data sources in a collaborative effort under a joint European flood research agreement based on the exchange of data, models, and expertise, and from existing international data collections and open-source websites. These ongoing efforts are contributing to advancing the understanding of regional flood processes beyond individual country boundaries and to more coherent flood research in Europe.
Cihoric, Nikola; Tsikkinis, Alexandros; Miguelez, Cristina Gutierrez; Strnad, Vratislav; Soldatovic, Ivan; Ghadjar, Pirus; Jeremic, Branislav; Dal Pra, Alan; Aebersold, Daniel M; Lössl, Kristina
To evaluate the current status of prospective interventional clinical trials that include brachytherapy (BT) procedures, the records of 175,538 (100%) clinical trials registered at ClinicalTrials.gov were downloaded in September 2014 and a database was established. Trials using BT as an intervention were identified for further analysis. The selected trials were manually categorized according to indication(s), BT source, applied dose rate, primary sponsor type, location, protocol initiator, and funding source. We analyzed trials across 8 available trial protocol elements registered within the database. In total, 245 clinical trials were identified: 147 with BT as the primary investigated treatment modality and 98 that included BT as an optional treatment component or as part of the standard treatment. Academic centers were the most frequent protocol initiators in trials where BT was the primary investigational treatment modality (p < 0.01). High-dose-rate (HDR) BT was the most frequently investigated type of BT dose rate (46.3%), followed by low-dose-rate (LDR) BT (42.0%). Prostate was the most frequently investigated tumor entity in trials with BT as the primary treatment modality (40.1%), followed by breast cancer (17.0%). BT was rarely the primary investigated treatment modality for cervical cancer (6.8%). Most clinical trials using BT are in early phases, investigator-initiated, and with low accrual numbers. Current investigational activities that include BT mainly focus on prostate and breast cancers. Important questions concerning the optimal usage of BT will not be answered in the near future.
Kara A Livingston
Full Text Available Dietary fiber is a broad category of compounds historically defined as partially or completely indigestible plant-based carbohydrates and lignin with, more recently, the additional criterion that fibers incorporated into foods as additives should demonstrate functional human health outcomes to receive a fiber classification. Thousands of research studies have been published examining fibers and health outcomes. Our objectives were to (1) develop a database listing studies testing fiber and the physiological health outcomes identified by experts at the Ninth Vahouny Conference, and (2) use evidence mapping methodology to summarize this body of literature. This paper summarizes the rationale, methodology, and resulting database. The database will help both scientists and policy-makers evaluate evidence linking specific fibers with physiological health outcomes and identify missing information. To build this database, we conducted a systematic literature search for human intervention studies published in English from 1946 to May 2015. Our search strategy included a broad definition of fiber search terms, as well as search terms for nine physiological health outcomes identified at the Ninth Vahouny Fiber Symposium. Abstracts were screened using a priori defined eligibility criteria and a low threshold for inclusion to minimize the likelihood of rejecting articles of interest. Publications were then reviewed in full text, applying additional a priori defined exclusion criteria. The database was built and published on the Systematic Review Data Repository (SRDR™), a web-based, publicly available application. A fiber database was created. This resource will reduce unnecessary replication of effort in conducting systematic reviews by serving both as a central database archiving PICO (population, intervention, comparator, outcome) data on published studies and as a searchable tool through which these data can be extracted and updated.
Full Text Available Abstract Background Transposable elements are the most abundant components of all characterized genomes of higher eukaryotes. It has been documented that these elements not only contribute to the shaping and reshaping of their host genomes, but also play significant roles in regulating gene expression, altering gene function, and creating new genes. Thus, complete identification of transposable elements in sequenced genomes and construction of comprehensive transposable element databases are essential for accurate annotation of genes and other genomic components, for investigation of potential functional interactions between transposable elements and genes, and for study of genome evolution. The recent availability of the soybean genome sequence has provided an unprecedented opportunity for discovery, and structural and functional characterization, of transposable elements in this economically important legume crop. Description Using a combination of structure-based and homology-based approaches, a total of 32,552 retrotransposons (Class I) and 6,029 DNA transposons (Class II) with clear boundaries and insertion sites were structurally annotated and clearly categorized, and a soybean transposable element database, SoyTEdb, was established. These transposable elements have been anchored in and integrated with the soybean physical map and genetic map, and are browsable and visualizable at any scale along the 20 soybean chromosomes, along with predicted genes and other sequence annotations. BLAST search and other infrastructure tools were implemented to facilitate annotation of transposable elements or fragments from soybean and other related legume species. The majority (> 95%) of these elements (particularly a few hundred low-copy-number families) are first described in this study. Conclusion SoyTEdb provides resources and information related to transposable elements in the soybean genome, representing the most comprehensive and the largest manually
Misra, Namrata; Panda, Prasanna Kumar; Parida, Bikram Kumar; Mishra, Barada Kanta
Microalgae have attracted wide attention as one of the most versatile renewable feedstocks for production of biofuel. To develop genetically engineered high lipid yielding algal strains, a thorough understanding of the lipid biosynthetic pathway and the underpinning enzymes is essential. In this work, we have systematically mined the genomes of fifteen diverse algal species belonging to Chlorophyta, Heterokontophyta, Rhodophyta, and Haptophyta, to identify and annotate the putative enzymes of lipid metabolic pathway. Consequently, we have also developed a database, dEMBF (Database of Enzymes of Microalgal Biofuel Feedstock), which catalogues the complete list of identified enzymes along with their computed annotation details including length, hydrophobicity, amino acid composition, subcellular location, gene ontology, KEGG pathway, orthologous group, Pfam domain, intron-exon organization, transmembrane topology, and secondary/tertiary structural data. Furthermore, to facilitate functional and evolutionary study of these enzymes, a collection of built-in applications for BLAST search, motif identification, sequence and phylogenetic analysis have been seamlessly integrated into the database. dEMBF is the first database that brings together all enzymes responsible for lipid synthesis from available algal genomes, and provides an integrative platform for enzyme inquiry and analysis. This database will be extremely useful for algal biofuel research. It can be accessed at http://bbprof.immt.res.in/embf.
We present an analysis of alphabetical co-authorship in the social sciences and humanities (SSH), based on data from the VABB-SHW, a comprehensive database of SSH research output in Flanders (2000-2013). Using an unbiased estimator of the share of intentional alphabetical co-authorship (IAC), we find that alphabetical co-authorship is more engrained in SSH than in science as a whole. Within the SSH, large differences exist between disciplines. The highest proportions of IAC are found for Literature, Economics & business, and History. Furthermore, alphabetical co-authorship varies with publication type: it occurs most often in books, is less common in articles in journals or in books, and is rare in proceedings papers. The use of alphabetical co-authorship appears to be slowly declining. (Author)
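An estimator of the intentional alphabetical share can be sketched as follows: a byline of n authors is alphabetical by chance with probability 1/n!, so the observed alphabetical share is corrected by the share expected under random ordering. The exact estimator used in the study is not given here, so this form is an assumption, and the data are invented:

```python
from math import factorial

# Estimate the share of intentionally alphabetical bylines: observed
# alphabetical share minus the share expected by chance (1/n! for n
# authors), rescaled to [0, 1]. Estimator form assumed from the
# description above; the four papers are invented.
def iac_share(author_counts, alphabetical_flags):
    n_papers = len(author_counts)
    observed = sum(alphabetical_flags) / n_papers
    expected = sum(1 / factorial(n) for n in author_counts) / n_papers
    return (observed - expected) / (1 - expected)

# four papers: number of authors and whether the byline is alphabetical
counts = [2, 2, 3, 4]
flags  = [True, True, True, False]
print(round(iac_share(counts, flags), 3))
```

The chance correction matters most for two-author papers, where half of all random orderings are alphabetical; with many authors, an alphabetical byline is almost certainly intentional.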
Tetarenko, B. E.; Sivakoff, G. R.; Heinke, C. O.; Gladstone, J. C.
With the advent of more sensitive all-sky instruments, the transient universe is being probed in greater depth than ever before. Taking advantage of available resources, we have established a comprehensive database of black hole (and black hole candidate) X-ray binary (BHXB) activity between 1996 and 2015 as revealed by all-sky instruments, scanning surveys, and select narrow-field X-ray instruments on board the INTErnational Gamma-Ray Astrophysics Laboratory, Monitor of All-Sky X-ray Image, Rossi X-ray Timing Explorer, and Swift telescopes; the Whole-sky Alberta Time-resolved Comprehensive black-Hole Database Of the Galaxy or WATCHDOG. Over the past two decades, we have detected 132 transient outbursts, tracked and classified behavior occurring in 47 transient and 10 persistently accreting BHs, and performed a statistical study on a number of outburst properties across the Galactic population. We find that outbursts undergone by BHXBs that do not reach the thermally dominant accretion state make up a substantial fraction (∼40%) of the Galactic transient BHXB outburst sample over the past ∼20 years. Our findings suggest that this “hard-only” behavior, observed in transient and persistently accreting BHXBs, is neither a rare nor recent phenomenon and may be indicative of an underlying physical process, relatively common among binary BHs, involving the mass-transfer rate onto the BH remaining at a low level rather than increasing as the outburst evolves. We discuss how the larger number of these “hard-only” outbursts and detected outbursts in general have significant implications for both the luminosity function and mass-transfer history of the Galactic BHXB population.
Full Text Available For the design and optimisation of offshore wind turbines, knowledge of realistic environmental conditions and the use of well-founded simulation constraints are very important, as both influence the structural behaviour and power output in numerical simulations. However, real high-quality data, especially for research purposes, are scarcely available. This is why, in this work, a comprehensive database of 13 environmental conditions at wind turbine locations in the North and Baltic Sea is derived using data from the FINO research platforms. For simulation constraints, like the simulation length and the duration of initial simulation transients, well-founded recommendations in the literature are also rare. Nevertheless, it is known that the choice of simulation lengths and times of initial transients fundamentally affects the quality and computing time of simulations. For this reason, convergence studies for both parameters are conducted to determine adequate values depending on the type of substructure, the wind speed, and the considered loading (fatigue or ultimate). As the main purpose of both the database and the simulation constraints is to provide realistic data for probabilistic design approaches and to serve as guidance for further studies enabling more realistic and accurate simulations, all results are freely available and easy to apply.
Kelly, Lauren E; Ito, Shinya; Woods, David; Nunn, Anthony J; Taketomo, Carol; de Hoog, Matthijs; Offringa, Martin
Children require special considerations for drug prescribing. Drug information summarized in a formulary containing drug monographs is essential for safe and effective prescribing. Currently, little is known about the information needs of those who prescribe and administer medicines to children. Our primary objective was to identify a list of important and relevant items to be included in a pediatric drug monograph. Following the establishment of an expert steering committee and an environmental scan of adult and pediatric formulary monograph items, 46 participants from 25 countries were invited to complete a 2-round Delphi survey. Questions regarding source of prescribing information and importance of items were recorded. An international consensus meeting to vote on and finalize the items list with the steering committee followed. Thirty-one Delphi participants reported pediatric formularies as the first resource they consult for information on medicines used in children. After the Delphi rounds, 116 items were identified to be included in a comprehensive pediatric drug monograph, including general information, adverse drug reactions, dosages, precautions, drug-drug interactions, formulation, and drug properties. Health care providers identified 116 monograph items as important for prescribing medicines for children by an international consensus-based process. This information will assist in setting standards for the creation of new pediatric drug monographs for international application and for those involved in pediatric formulary development.
servicemembers included in the Blast-Related Auditory Injury Database. * Training injuries, accidents, and other noncombat injuries. †3,452 injuries...medications, exposures to ototoxic chemicals, recreational noise exposure, and other forms of temporary and persistent threshold shift. Combat marines...AC, Vecchiotti M, Kujawa SG, Lee DJ, Quesnel AM. Otologic outcomes after blast injury: The Boston Marathon experience. Otol Neurotol. 2014; 35(10
Herman, J.; Krotkov, N.
The TOMS UV irradiance database (1978 to 2003) has been expanded to include five new products (noon irradiance at 305, 310, 324, and 380 nm, and noon erythemal-weighted irradiance), in addition to the existing erythemal daily exposure, that permit direct comparisons with ground-based measurements from spectrometers and broadband instruments. The new data are available at http://toms.gsfc.nasa.gov. Comparisons of the TOMS estimated irradiances with ground-based instruments are given along with a review of the sources of known errors, especially the recent improvements in accounting for aerosol attenuation. Trend estimations from the new TOMS irradiances permit the clear separation of changes caused by ozone and those caused by aerosols and clouds. Systematic differences in cloud cover are shown to be the most important factor in determining regional differences in UV radiation reaching the ground for locations at the same latitude (e.g., the summertime differences between Australia and the US southwest).
Wang, Shanshan; Pavlicek, William; Roberts, Catherine C; Langer, Steve G; Zhang, Muhong; Hu, Mengqi; Morin, Richard L; Schueler, Beth A; Wellnitz, Clinton V; Wu, Teresa
The U.S. National Press has brought to full public discussion concerns regarding the use of medical radiation, specifically x-ray computed tomography (CT), in diagnosis. A need exists for developing methods whereby assurance is given that all diagnostic medical radiation use is properly prescribed, and all patients' radiation exposure is monitored. The "DICOM Index Tracker©" (DIT) transparently captures desired digital imaging and communications in medicine (DICOM) tags from CT, nuclear imaging equipment, and other DICOM devices across an enterprise. Its initial use is recording, monitoring, and providing automatic alerts to medical professionals of excursions beyond internally determined trigger action levels of radiation. A flexible knowledge base, aware of equipment in use, enables automatic alerts to system administrators of newly identified equipment models or software versions so that DIT can be adapted to the new equipment or software. A dosimetry module accepts mammography breast organ dose, skin air kerma values from XA modalities, exposure indices from computed radiography, etc. upon receipt. The American Association of Physicists in Medicine recommended a methodology for effective dose calculations which are performed with CT units having DICOM structured dose reports. Web interface reporting is provided for accessing the database in real-time. DIT is DICOM-compliant and, thus, is standardized for international comparisons. Automatic alerts currently in use include: email, cell phone text message, and internal pager text messaging. This system extends the utility of DICOM for standardizing the capturing and computing of radiation dose as well as other quality measures.
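The trigger-action-level alerting idea described in this abstract can be sketched roughly as follows; the field names, modalities, and threshold values below are illustrative assumptions for the sketch, not the actual DICOM Index Tracker schema or its internally determined levels.

```python
# Sketch of trigger-action-level alerting on captured dose records.
# Field names and thresholds are hypothetical, chosen only to
# illustrate the concept of comparing each record against an
# internally determined action level per modality.

TRIGGER_LEVELS_MGY = {
    "CT": 80.0,    # e.g., a CTDIvol action level in mGy (assumed)
    "XA": 3000.0,  # e.g., a skin air kerma action level in mGy (assumed)
}

def check_exposure(record):
    """Return an alert dict if the recorded dose exceeds the action level."""
    level = TRIGGER_LEVELS_MGY.get(record["modality"])
    if level is not None and record["dose_mGy"] > level:
        return {
            "patient_id": record["patient_id"],
            "modality": record["modality"],
            "dose_mGy": record["dose_mGy"],
            "action_level_mGy": level,
        }
    return None

records = [
    {"patient_id": "P1", "modality": "CT", "dose_mGy": 95.0},
    {"patient_id": "P2", "modality": "CT", "dose_mGy": 40.0},
]
alerts = [a for a in map(check_exposure, records) if a is not None]
```

In a real deployment such alerts would feed the notification channels the abstract lists (email, text message, pager) rather than a Python list.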
Michel, Laurent; Des Jarlais, Don C; Duong Thi, Huong; Khuat Thi Hai, Oanh; Pham Minh, Khuê; Peries, Marianne; Vallo, Roselyne; Nham Thi Tuyet, Thanh; Hoang Thi, Giang; Le Sao, Mai; Feelemyer, Jonathan; Vu Hai, Vinh; Moles, Jean-Pierre; Laureillard, Didier; Nagot, Nicolas
The aim of this study was to describe drug use patterns, risk-related behaviours and access to methadone treatment among people who inject drugs (PWID), in order to design a large-scale intervention aiming to end the HIV epidemic in Haiphong, Vietnam. A respondent-driven sampling (RDS) survey was first conducted to identify profiles of drug use and HIV risk-related behaviour among PWID. A sample of PWID was then included in a one-year cohort study to describe access to methadone treatment and associated factors. Among the 603 participants enrolled in the RDS survey, 10% were female, all were injecting heroin and 24% were using methamphetamine, including 3 (0.5%) through injection. Different profiles of risk-related behaviours were identified, including one entailing high-risk sexual behaviour (n=37) and another involving drug-related high-risk practices (n=22). High-risk sexual activity was related to binge drinking and methamphetamine use. Among subjects with low sexual risk, sexual intercourse with a main partner with unknown serostatus was often unprotected. Among the 250 PWID included in the cohort, 55.2% initiated methadone treatment during the follow-up (versus 4.4% at RDS); methamphetamine use significantly increased. The factors associated with not being treated with methadone after 52 weeks were fewer injections per month and being a methamphetamine user at RDS. Heroin is still the main drug injected in Haiphong. Methamphetamine use is increasing markedly and is associated with delay in methadone initiation. Drug-related risks are low but sexual risk behaviours are still present. Comprehensive approaches are needed in the short term. Copyright © 2017 Elsevier B.V. All rights reserved.
The Toxic Substances Control Act Test Submissions Database (TSCATS) was developed to make unpublished test data available to the public. The test data are submitted to the U.S. Environmental Protection Agency by industry under the Toxic Substances Control Act. Test is broadly defined to include case reports, episodic incidents, such as spills, and formal test study presentations. The database allows searching of test submissions according to specific chemical identity or type of study when used with an appropriate search retrieval software program. Studies are indexed under three broad subject areas: health effects, environmental effects and environmental fate. Additional controlled vocabulary terms are assigned which describe the experimental protocol and test observations. Records identify reference information needed to locate the source document, as well as the submitting organization and reason for submission of the test data.
Gallacher, Christopher; Thomas, Russell; Lord, Richard; Kalin, Robert M; Taylor, Chris
Coal tars are a mixture of organic and inorganic compounds that were by-products from the manufactured gas and coke making industries. The tar compositions varied depending on many factors such as the temperature of production and the type of retort used. For this reason a comprehensive database of the compounds found in different tar types is of value to understand both how their compositions differ and what potential chemical hazards are present. This study focuses on the heterocyclic and hydroxylated compounds present in a database produced from 16 different tars from five different production processes. Samples of coal tar were extracted using accelerated solvent extraction (ASE) and derivatized post-extraction using N,O-bis(trimethylsilyl)trifluoroacetamide (BSTFA) with 1% trimethylchlorosilane (TMCS). The derivatized samples were analysed using two-dimensional gas chromatography combined with time-of-flight mass spectrometry (GCxGC/TOFMS). A total of 865 heterocyclic compounds and 359 hydroxylated polycyclic aromatic hydrocarbons (PAHs) were detected in 16 tar samples produced by five different processes. The contents of both heterocyclic and hydroxylated PAHs varied greatly with the production process used, with the heterocyclic compounds giving information about the feedstock used. Of the 359 hydroxylated PAHs detected, the majority would not have been detected without the use of derivatization. Coal tars produced using different production processes and feedstocks yielded tars with significantly different heterocyclic and hydroxylated contents. The concentrations of the individual heterocyclic compounds varied greatly even within the different production processes and provided information about the feedstock used to produce the tars. The hydroxylated PAH content of the samples provided important analytical information that would otherwise not have been obtained without the use of derivatization and GCxGC/TOFMS. Copyright © 2017 John Wiley & Sons, Ltd.
Wang, Dan-Ni; Wang, Zhi-Qiang; Yan, Lei; He, Jin; Lin, Min-Ting; Chen, Wan-Jin; Wang, Ning
The development of clinical trials for Duchenne muscular dystrophy (DMD) in China faces many challenges due to limited information about epidemiological data, natural history and clinical management. To provide these detailed data, we developed a comprehensive database based on registered DMD patients from South China and analysed their clinical and mutational characteristics. The database included DMD registrants confirmed by clinical presentation, family history, genetic detection, prognostic outcome, and/or muscle biopsy. Clinical data were collected by a registry form. Mutations of dystrophin were detected by multiplex ligation-dependent probe amplification (MLPA) and Sanger sequencing. Currently, 132 DMD patients from 128 families in South China have been registered, and 91.7% of them were below 10 years old. In mutational detection, large deletions were the most frequent type (57.8%), followed by small deletion/insertion mutations (14.1%), nonsense mutations (13.3%), large duplications (10.9%), and splice site mutations (3.1%). Clinical analysis revealed that most patients reported initial symptoms between 1 and 3 years of age, but the diagnostic age was more frequently between 6 and 8 years. 81.4% of patients were ambulatory. Baseline cardiac assessments at diagnosis were conducted in 39.4% and 29.5% of patients by echocardiograms and electrocardiograms, respectively. Only 22.7% of registrants underwent baseline respiratory assessments. A small number of patients (20.5%) were treated with glucocorticoids. 13.3% of patients were eligible for stop codon read-through therapy, and 48.4% of patients would potentially benefit from exon skipping. The top five exon skips applicable to the largest group of registrants were skipping of exons 51 (14.8% of total mutations), 53 (12.5%), 45 (7.0%), 55 (4.7%), and 44 (3.9%). In conclusion, our database provided information on the natural history, diagnosis and management status of DMD in South China, as well as potential
.... Use of waveform diversity and a comprehensive approach to adaptive processing may not be useful if the sensors deviate from their true positions, due to environmental effects or due to mechanical...
Balaur, Irina; Mazein, Alexander; Saqi, Mansoor; Lysenko, Artem; Rawlings, Christopher J; Auffray, Charles
The goal of this work is to offer a computational framework for exploring data from the Recon2 human metabolic reconstruction model. Advanced user access features have been developed using the Neo4j graph database technology and this paper describes key features such as efficient management of the network data, examples of the network querying for addressing particular tasks, and how query results are converted back to the Systems Biology Markup Language (SBML) standard format. The Neo4j-based metabolic framework facilitates exploration of highly connected and comprehensive human metabolic data and identification of metabolic subnetworks of interest. A Java-based parser component has been developed to convert query results (available in the JSON format) into SBML and SIF formats in order to facilitate further results exploration, enhancement or network sharing. The Neo4j-based metabolic framework is freely available from: https://diseaseknowledgebase.etriks.org/metabolic/browser/ . The Java code files developed for this work are available from the following url: https://github.com/ibalaur/MetabolicFramework . Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
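The query-result conversion step described above (JSON edges to SIF, the tab-separated simple interaction format) can be illustrated with a toy sketch; the actual framework uses a Java parser, and the record fields and node names below are assumptions, not the real Recon2/Neo4j schema.

```python
import json

# Toy sketch: convert a graph-query result, serialized as JSON,
# into SIF ("source<TAB>relation<TAB>target", one edge per line).
# The edge structure and metabolite/enzyme names are illustrative.

query_result = json.loads("""
[{"source": "glc_D[c]", "relation": "substrate_of", "target": "HEX1"},
 {"source": "HEX1", "relation": "produces", "target": "g6p[c]"}]
""")

def to_sif(edges):
    """Render a list of edge dicts as SIF text."""
    return "\n".join(
        f'{e["source"]}\t{e["relation"]}\t{e["target"]}' for e in edges
    )

sif = to_sif(query_result)
```

SIF output of this kind can be loaded directly into network viewers such as Cytoscape, which is one motivation for offering it alongside SBML.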
Krupke, Debra M; Begley, Dale A; Sundberg, John P; Richardson, Joel E; Neuhauser, Steven B; Bult, Carol J
Research using laboratory mice has led to fundamental insights into the molecular genetic processes that govern cancer initiation, progression, and treatment response. Although thousands of scientific articles have been published about mouse models of human cancer, collating information and data for a specific model is hampered by the fact that many authors do not adhere to existing annotation standards when describing models. The interpretation of experimental results in mouse models can also be confounded when researchers do not factor in the effect of genetic background on tumor biology. The Mouse Tumor Biology (MTB) database is an expertly curated, comprehensive compendium of mouse models of human cancer. Through the enforcement of nomenclature and related annotation standards, MTB supports aggregation of data about a cancer model from diverse sources and assessment of how genetic background of a mouse strain influences the biological properties of a specific tumor type and model utility. Cancer Res; 77(21); e67-70. ©2017 AACR . ©2017 American Association for Cancer Research.
Boué, Stéphanie; Talikka, Marja; Westra, Jurjen Willem; Hayes, William; Di Fabio, Anselmo; Park, Jennifer; Schlage, Walter K; Sewer, Alain; Fields, Brett; Ansari, Sam; Martin, Florian; Veljkovic, Emilija; Kenney, Renee; Peitsch, Manuel C; Hoeng, Julia
With the wealth of publications and data available, powerful and transparent computational approaches are required to represent measured data and scientific knowledge in a computable and searchable format. We developed a set of biological network models, scripted in the Biological Expression Language, that reflect causal signaling pathways across a wide range of biological processes, including cell fate, cell stress, cell proliferation, inflammation, tissue repair and angiogenesis in the pulmonary and cardiovascular context. This comprehensive collection of networks is now freely available to the scientific community in a centralized web-based repository, the Causal Biological Network database, which is composed of over 120 manually curated and well annotated biological network models and can be accessed at http://causalbionet.com. The website accesses a MongoDB, which stores all versions of the networks as JSON objects and allows users to search for genes, proteins, biological processes, small molecules and keywords in the network descriptions to retrieve biological networks of interest. The content of the networks can be visualized and browsed. Nodes and edges can be filtered and all supporting evidence for the edges can be browsed and is linked to the original articles in PubMed. Moreover, networks may be downloaded for further visualization and evaluation. Database URL: http://causalbionet.com © The Author(s) 2015. Published by Oxford University Press.
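The keyword search over JSON network documents that this abstract describes can be sketched in miniature as follows; the document fields and network names are invented for illustration and do not reflect the actual Causal Biological Network database schema.

```python
# Minimal sketch of keyword search across network models stored as
# JSON-like documents, matching against names and descriptions.
# Field names ("name", "description") and contents are assumptions.

networks = [
    {"name": "Cell Stress / Oxidative Stress",
     "description": "NFE2L2-centred causal signaling model"},
    {"name": "Tissue Repair and Angiogenesis",
     "description": "VEGFA-driven causal signaling model"},
]

def search(nets, keyword):
    """Return names of networks whose name or description mentions the keyword."""
    kw = keyword.lower()
    return [n["name"] for n in nets
            if kw in n["name"].lower() or kw in n["description"].lower()]

hits = search(networks, "vegfa")
```

In the real system this lookup would be a MongoDB query over the stored JSON objects rather than an in-memory scan.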
Kim, Jungeun; Weber, Jessica A; Jho, Sungwoong; Jang, Jinho; Jun, JeHoon; Cho, Yun Sung; Kim, Hak-Min; Kim, Hyunho; Kim, Yumi; Chung, OkSung; Kim, Chang Geun; Lee, HyeJin; Kim, Byung Chul; Han, Kyudong; Koh, InSong; Chae, Kyun Shik; Lee, Semin; Edwards, Jeremy S; Bhak, Jong
High-coverage whole-genome sequencing data of a single ethnicity can provide a useful catalogue of population-specific genetic variations, and provides a critical resource that can be used to more accurately identify pathogenic genetic variants. We report a comprehensive analysis of the Korean population, and present the Korean National Standard Reference Variome (KoVariome). As a part of the Korean Personal Genome Project (KPGP), we constructed the KoVariome database using 5.5 terabases of whole genome sequence data from 50 healthy Korean individuals in order to characterize the benign ethnicity-relevant genetic variation present in the Korean population. In total, KoVariome includes 12.7M single-nucleotide variants (SNVs), 1.7M short insertions and deletions (indels), 4K structural variations (SVs), and 3.6K copy number variations (CNVs). Among them, 2.4M (19%) SNVs and 0.4M (24%) indels were identified as novel. We also discovered selective enrichment of 3.8M SNVs and 0.5M indels in Korean individuals, which were used to filter out 1,271 coding-SNVs not originally removed from the 1,000 Genomes Project when prioritizing disease-causing variants. KoVariome health records were used to identify novel disease-causing variants in the Korean population, demonstrating the value of high-quality ethnic variation databases for the accurate interpretation of individual genomes and the precise characterization of genetic variations.
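The filtering step described above, removing candidate coding variants that are common in a population reference set before prioritizing disease-causing variants, can be sketched as follows; the variant keys and coordinates are invented toy values, not KoVariome data.

```python
# Sketch of population-based variant filtering: candidate variants
# present in an ethnicity-specific reference set (a stand-in for
# KoVariome here) are removed, since common benign variants are
# unlikely to be disease-causing. Tuples are (chrom, pos, ref, alt);
# all values below are illustrative.

population_common = {
    ("chr1", 12345, "A", "G"),
    ("chr2", 67890, "C", "T"),
}

def prioritize(candidates, reference):
    """Keep only candidate variants absent from the population reference."""
    return [v for v in candidates if v not in reference]

candidates = [("chr1", 12345, "A", "G"), ("chr7", 11111, "G", "T")]
remaining = prioritize(candidates, population_common)
```

This is the same logic by which the authors report removing 1,271 coding SNVs that the 1,000 Genomes Project alone had not filtered out.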
Bruijstens, A.J.; Beuman, W.P.H.; Molen, M. van der; Rijke, J. de; Cloudt, R.P.M.; Kadijk, G.; Camp, O.M.G.C. op den; Bleuanus, W.A.J.
In order to enable evaluation of the current biogas quality situation in the EU, results are presented in a biogas database. Furthermore, the key gas parameter Sonic Bievo Index (influence on open-loop A/F ratio) is defined, as are other key gas parameters such as the Methane Number (knock resistance).
Full Text Available The Soybean Proteome Database (SPD) was created to provide a data repository for functional analyses of soybean responses to flooding stress, thought to be a major constraint for establishment and production of this plant. Since the last publication of the SPD, we thoroughly enhanced the contents of the database, particularly protein samples and their annotations from several organelles. The current release contains 23 reference maps of soybean (Glycine max cv. Enrei) proteins collected from several organs, tissues and organelles including the maps for plasma membrane, cell wall, chloroplast and mitochondrion, which were electrophoresed on two-dimensional polyacrylamide gels. Furthermore, the proteins analyzed with gel-free proteomics techniques have been added and are available online. In addition to protein fluctuations under flooding, those under salt and drought stress have been included in the current release. An omics table also has been provided to reveal relationships among mRNAs, proteins and metabolites with a unified temporal-profile tag in order to facilitate retrieval of the data based on the temporal profiles. An intuitive user interface based on dynamic HTML enables users to browse the network as well as the profiles of multiple omes in an integrated fashion. The SPD is available at: http://proteome.dc.affrc.go.jp/Soybean/.
Full Text Available The glycosylation of proteins is responsible for their structural and functional roles in many cellular activities. This work describes a strategy that combines an efficient release, labeling and liquid chromatography-mass spectral analysis with the use of a comprehensive database to analyze N-glycans. The analytical method described relies on a recently commercialized kit in which quick deglycosylation is followed by rapid labeling and cleanup of labeled glycans. This greatly improves the separation, mass spectrometry (MS) analysis and fluorescence detection of N-glycans. A hypothetical database, constructed using GlycResoft, provides all compositional possibilities of N-glycans based on the common sugar residues found in N-glycans. In the initial version this database contains >8,700 N-glycans, and is compatible with MS instrument software and expandable. N-glycans from four different well-studied glycoproteins were analyzed by this strategy. The results provided much more accurate and comprehensive data than had been previously reported. This strategy was then used to analyze the N-glycans present on the membrane glycoproteins of gastric carcinoma cells with different degrees of differentiation. Accurate and comprehensive N-glycan data from those cells were obtained efficiently, and differences corresponding to their differentiation states were compared. Thus, the novel strategy developed greatly improves the accuracy, efficiency and comprehensiveness of N-glycan analysis.
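Matching observed glycan masses against a compositional database of the kind described above typically means searching within a mass tolerance; a minimal sketch follows, with made-up compositions and masses that are not taken from the GlycResoft-built database.

```python
# Toy sketch of compositional matching: find database entries whose
# theoretical mass lies within a ppm tolerance of an observed mass.
# The composition names and mass values below are illustrative only.

db = {
    "Hex5HexNAc2": 1234.4333,
    "Hex5HexNAc4": 1640.5866,
}

def match(observed_mass, database, ppm=10.0):
    """Return composition names within `ppm` of the observed mass."""
    return [name for name, m in database.items()
            if abs(observed_mass - m) / m * 1e6 <= ppm]

hits = match(1234.434, db)
```

Tightening the ppm tolerance is the usual lever for trading sensitivity against false compositional assignments.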
Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.
This water quality database (viz. GeochemXX.mdb) has been developed as part of the Underground Test Area (UGTA) Program with the cooperation of several agencies actively participating in ongoing evaluation and characterization activities under contract to the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office (NNSA/NSO). The database has been constructed to provide up-to-date, comprehensive, and quality controlled data in a uniform format for the support of current and future projects. This database provides a valuable tool for geochemical and hydrogeologic evaluations of the Nevada Test Site (NTS) and surrounding region. Chemistry data have been compiled for groundwater within the NTS and the surrounding region. These data include major ions, organic compounds, trace elements, radionuclides, various field parameters, and environmental isotopes. Colloid data are also included in the database. The GeochemXX.mdb database is distributed on an annual basis. The "XX" within the database title is replaced by the last two digits of the release year (e.g., Geochem06 for the version released during the 2006 fiscal year). The database is distributed via compact disc (CD) and is also uploaded to the Common Data Repository (CDR) in order to make it available to all agencies with DOE intranet access. This report provides an explanation of the database configuration and summarizes the general content and utility of the individual data tables. In addition to describing the data, subsequent sections of this report provide the data user with an explanation of the quality assurance/quality control (QA/QC) protocols for this database.
Seawater desalination is now widely accepted as an attractive alternative source of freshwater for domestic and industrial uses. Despite the considerable progress made in the relevant technologies, however, desalination remains an energy intensive process in which the energy cost is the paramount factor. Many papers have already been published on desalination economics, but a comprehensive study, based on an exhaustive analysis of combinations of energy sources and desalination processes, using state of the art economic models and realistic assumptions, is still quite rare. The aim of this paper is to fill this gap with a view to providing clear choices of techno-economic options to decision makers in a wide range of countries, be they from developed regions or emerging countries.
Mylne, Adrian; Brady, Oliver J.; Huang, Zhi; Pigott, David M.; Golding, Nick; Kraemer, Moritz U.G.; Hay, Simon I.
Ebola is a zoonotic filovirus that has the potential to cause outbreaks of variable magnitude in human populations. This database collates our existing knowledge of all known human outbreaks of Ebola for the first time by extracting details of their suspected zoonotic origin and subsequent human-to-human spread from a range of published and non-published sources. In total, 22 unique Ebola outbreaks were identified, composed of 117 unique geographic transmission clusters. Details of the index case and geographic spread of secondary and imported cases were recorded as well as summaries of patient numbers and case fatality rates. A brief text summary describing suspected routes and means of spread for each outbreak was also included. While we cannot yet include the ongoing Guinea and DRC outbreaks until they are over, these data and compiled maps can be used to gain an improved understanding of the initial spread of past Ebola outbreaks and help evaluate surveillance and control guidelines for limiting the spread of future epidemics. PMID:25984346
Worth, Catherine L; Kreuchwig, Annika; Kleinau, Gunnar; Krause, Gerd
Dornback, M.; Hourigan, T.; Etnoyer, P.; McGuinn, R.; Cross, S. L.
Research on deep-sea corals has expanded rapidly over the last two decades, as scientists began to realize their value as long-lived structural components of high-biodiversity habitats and archives of environmental information. The NOAA Deep Sea Coral Research and Technology Program's National Database for Deep-Sea Corals and Sponges is a comprehensive resource for georeferenced data on these organisms in U.S. waters. The National Database currently includes more than 220,000 deep-sea coral records representing approximately 880 unique species. Database records from museum archives, commercial and scientific bycatch, and journal publications provide baseline information with relatively coarse spatial resolution dating back as far as 1842. These data are complemented by modern, in-situ submersible observations with high spatial resolution from surveys conducted by NOAA and NOAA partners. Management of high volumes of modern high-resolution observational data can be challenging. NOAA is working with our data partners to incorporate these occurrence data into the National Database, along with images and associated information related to geoposition, time, biology, taxonomy, environment, provenance, and accuracy. NOAA is also working to link associated datasets collected by our program's research, to properly archive them at the NOAA National Data Centers, to build robust metadata records, and to establish a standard protocol to simplify the process. Access to the National Database is provided through an online mapping portal. The map displays point-based records from the database, which can be refined by taxon, region, time, and depth. The queries and extent used to view the map can also be used to download subsets of the database. The database, map, and website are already in use by NOAA, regional fishery management councils, and regional ocean planning bodies, but we envision it as a model that can expand to accommodate data on a global scale.
Zhai, Peng; Yang, Longshu; Guo, Xiao; Wang, Zhe; Guo, Jiangtao; Wang, Xiaoqi; Zhu, Huaiqiu
During the past decade, the development of high-throughput nucleic acid sequencing and mass spectrometry analysis techniques has enabled the characterization of microbial communities through metagenomics, metatranscriptomics, metaproteomics and metabolomics data. To reveal the diversity of microbial communities and the interactions between living conditions and microbes, it is necessary to introduce comparative analysis based upon the integration of all four types of data mentioned above. Comparative meta-omics, especially comparative metagenomics, has been established as a routine process to highlight the significant differences in taxon composition and functional gene abundance among microbiota samples. Meanwhile, biologists are increasingly concerned about the correlations between meta-omics features and environmental factors, which may further decipher the adaptation strategy of a microbial community. We developed a graphical comprehensive analysis software package named MetaComp, comprising a series of statistical analysis approaches with visualized results for metagenomics and other meta-omics data comparison. This software is capable of reading files generated by a variety of upstream programs. After data loading, analyses are offered such as multivariate statistics, hypothesis testing of two-sample, multi-sample and two-group designs, and a novel function: regression analysis of environmental factors. Here, regression analysis regards meta-omic features as independent variables and environmental factors as dependent variables. Moreover, MetaComp is capable of automatically choosing an appropriate two-group sample test based upon the traits of the input abundance profiles. We further evaluate the performance of its choice, and exhibit applications for metagenomics, metaproteomics and metabolomics samples. MetaComp, an integrative software package applicable to all meta-omics data, distills, for the first time, the influence of the living environment on a microbial community by regression analysis
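As a rough illustration of the regression orientation the MetaComp abstract describes (meta-omic features as independent variables, an environmental factor as the dependent variable), the following sketch fits an ordinary least-squares model; all data and names here are invented, not MetaComp's actual implementation.

```python
import numpy as np

# Hypothetical example: regress an environmental factor (dependent variable)
# on meta-omic feature abundances (independent variables). Data are synthetic.
rng = np.random.default_rng(0)
abundances = rng.random((20, 3))            # 20 samples x 3 taxa/features
true_coef = np.array([2.0, -1.0, 0.5])
temperature = abundances @ true_coef + 0.3  # made-up environmental factor

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(len(abundances)), abundances])
coef, *_ = np.linalg.lstsq(X, temperature, rcond=None)
print(np.round(coef, 2))  # intercept followed by one coefficient per feature
```

Because the synthetic response is noise-free, the fitted coefficients recover the generating values; with real abundance profiles the residuals would quantify how much of the environmental variation the features explain.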
Barbosa, Luiz Carlos Bertucci; Garrido, Saulo Santesso; Marchetto, Reinaldo
Sutter Nathan B
Research laboratories studying the genetics of companion animals have no database tools specifically designed to aid in the management of the many kinds of data that are generated, stored and analyzed. We have developed a relational database, "DOG-SPOT," to provide such a tool. Implemented in MS-Access, the database is easy to extend or customize to suit a lab's particular needs. With DOG-SPOT a lab can manage data relating to dogs, breeds, samples, biomaterials, phenotypes, owners, communications, amplicons, sequences, markers, genotypes and personnel. Such an integrated data structure helps ensure high quality data entry and makes it easy to track physical stocks of biomaterials and oligonucleotides.
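The DOG-SPOT abstract above describes linked tables for dogs, samples and genotypes. A minimal sketch of that kind of relational layout, here in SQLite rather than MS-Access, might look as follows; the table and column names are assumptions for illustration, not the actual DOG-SPOT schema.

```python
import sqlite3

# Toy relational layout: dogs link to samples, samples link to genotypes.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dog      (dog_id INTEGER PRIMARY KEY, name TEXT, breed TEXT);
    CREATE TABLE sample   (sample_id INTEGER PRIMARY KEY,
                           dog_id INTEGER REFERENCES dog, tissue TEXT);
    CREATE TABLE genotype (sample_id INTEGER REFERENCES sample,
                           marker TEXT, allele1 TEXT, allele2 TEXT);
""")
con.execute("INSERT INTO dog VALUES (1, 'Rex', 'Beagle')")
con.execute("INSERT INTO sample VALUES (10, 1, 'blood')")
con.execute("INSERT INTO genotype VALUES (10, 'FGF4', 'A', 'G')")

# Joins let a genotype be traced back to the source dog.
row = con.execute("""
    SELECT dog.name, genotype.marker, genotype.allele1, genotype.allele2
    FROM genotype JOIN sample USING (sample_id) JOIN dog USING (dog_id)
""").fetchone()
print(row)  # ('Rex', 'FGF4', 'A', 'G')
```

The foreign-key chain is what gives the "integrated data structure" the abstract credits with ensuring high-quality data entry: a genotype cannot exist without a sample, nor a sample without a dog.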
Moco, S.I.A.; Tseng, L.; Spraul, M.; Chen, Z.; Vervoort, J.J.M.
The improvements in the separation and analysis of complex mixtures by LC-NMR during the last decade have shifted its emphasis from data acquisition to data analysis. For correct data analysis, not only are high-quality datasets necessary, but also adequate software and adequate databases for semi (or
Jennifer C. Jenkins; David C. Chojnacky; Linda S. Heath; Richard A. Birdsey
We compiled a database of 2,640 equations from the literature for predicting the biomass of trees and tree components from diameter measurements of species found in North America. Bibliographic information, geographic locations, diameter limits, diameter and biomass units, equation forms, statistical errors, and coefficients are provided for each equation,...
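To show how such an equation record might be applied, here is a sketch using one common allometric form, biomass = exp(b0 + b1·ln(dbh)); the coefficients, diameter limits and field names below are invented for illustration, not entries from the actual database.

```python
import math

# A hypothetical equation record: coefficients and fitted diameter range.
equation = {
    "species": "example spp.",
    "form": "ln-ln",
    "b0": -2.0, "b1": 2.4,          # made-up coefficients
    "dbh_min_cm": 5.0, "dbh_max_cm": 50.0,
}

def predict_biomass_kg(eq, dbh_cm):
    """Predict dry biomass (kg) from diameter at breast height (cm)."""
    if not eq["dbh_min_cm"] <= dbh_cm <= eq["dbh_max_cm"]:
        # Extrapolating outside the fitted diameter limits is unreliable.
        raise ValueError("diameter outside the equation's fitted range")
    return math.exp(eq["b0"] + eq["b1"] * math.log(dbh_cm))

print(round(predict_biomass_kg(equation, 30.0), 1))
```

Storing the diameter limits alongside the coefficients, as the database does, lets users refuse predictions outside the range the equation was fitted on.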
Gauthier, Nicholas Paul; Larsen, Malene Erup; Wernersson, Rasmus
The past decade has seen the publication of a large number of cell-cycle microarray studies and many more are in the pipeline. However, data from these experiments are not easy to access, combine and evaluate. We have developed a centralized database with an easy-to-use interface, Cyclebase...
Cracids are among the most vulnerable groups of Neotropical birds. Almost half of the species in this family are included in a conservation risk category. Twelve taxa occur in Mexico, six of which are considered at risk at the national level and two of which are globally endangered. Therefore, it is imperative that high-quality, comprehensive, and high-resolution spatial data on the occurrence of these taxa are made available as a valuable tool in the process of defining appropriate management strategies for conservation at local and global levels. We constructed the CracidMex1 database by collating global records of all cracid taxa that occur in Mexico from available electronic databases, museum specimens, publications, “grey literature”, and unpublished records. We generated a database with 23,896 clean, validated, and standardized geographic records. Database quality control was an iterative process that commenced with the consolidation and elimination of duplicate records, followed by the geo-referencing of records when necessary, and their taxonomic and geographic validation using GIS tools and expert knowledge. We followed the geo-referencing protocol proposed by the Mexican National Commission for the Use and Conservation of Biodiversity. We could not estimate the geographic coordinates of 981 records due to inconsistencies or lack of sufficient information in the description of the locality. Given that current records for most of the taxa have some degree of distributional bias, with redundancies at different spatial scales, the CracidMex1 database has allowed us to detect areas where more sampling effort is required to better represent the global spatial occurrence of these cracids. We also found that particular attention needs to be given to taxon identification in areas where congeners or conspecifics co-occur, in order to avoid taxonomic uncertainty. The construction of the CracidMex1 database represents the first
Otsuka, Yuta; Muto, Ai; Takeuchi, Rikiya; Okada, Chihiro; Ishikawa, Motokazu; Nakamura, Koichiro; Yamamoto, Natsuko; Dose, Hitomi; Nakahigashi, Kenji; Tanishima, Shigeki; Suharnan, Sivasundaram; Nomura, Wataru; Nakayashiki, Toru; Aref, Walid G; Bochner, Barry R; Conway, Tyrrell; Gribskov, Michael; Kihara, Daisuke; Rudd, Kenneth E; Tohsato, Yukako; Wanner, Barry L; Mori, Hirotada
Comprehensive experimental resources, such as ORFeome clone libraries and deletion mutant collections, are fundamental tools for the elucidation of gene function. Data sets from omics analyses using these resources provide key information for functional analysis, modeling and simulation in both individual and systematic approaches. With the long-term goal of complete understanding of a cell, we have over the past decade created a variety of clone and mutant sets for functional genomics studies of Escherichia coli K-12. We have made these experimental resources freely available to the academic community worldwide. Accordingly, these resources have now been used in numerous investigations of a multitude of cell processes. Quality control is extremely important for evaluating results generated by these resources. Because the genome annotation we originally used for their construction has changed since 2005, we have updated these genomic resources accordingly. Here, we describe GenoBase (http://ecoli.naist.jp/GB/), which contains key information about comprehensive experimental resources of E. coli K-12, their quality control and several omics data sets generated using these resources. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
Dengue virus (DENV) is a human pathogen and its etiology has been widely established. There are many interactions between DENV and human proteins that have been reported in the literature. However, no publicly accessible resource for efficiently retrieving the information is yet available. In this study, we mined all publicly available dengue-human interactions that have been reported in the literature into a database called DenHunt. We retrieved 682 direct interactions of human proteins with dengue viral components, 382 indirect interactions and 4120 differentially expressed human genes in dengue infected cell lines and patients. We have illustrated the importance of DenHunt by mapping the dengue-human interactions onto the host interactome and observed that the virus targets multiple host functional complexes of important cellular processes such as metabolism, immune system and signaling pathways, suggesting a potential role of these interactions in viral pathogenesis. We also observed that 7 percent of the dengue virus interacting human proteins are also associated with other infectious and non-infectious diseases. Finally, the understanding that comes from such analyses could be used to design better strategies to counteract the diseases caused by dengue virus. The whole dataset has been catalogued in a searchable database, called DenHunt (http://proline.biochem.iisc.ernet.in/DenHunt/).
Karyala, Prashanthi; Metri, Rahul; Bathula, Christopher; Yelamanchi, Syam K; Sahoo, Lipika; Arjunan, Selvam; Sastri, Narayan P; Chandra, Nagasuma
Gaby, John Christian; Buckley, Daniel H
We describe a nitrogenase gene sequence database that facilitates analysis of the evolution and ecology of nitrogen-fixing organisms. The database contains 32,954 aligned nitrogenase nifH sequences linked to phylogenetic trees and associated sequence metadata. The database includes 185 linked multigene entries including full-length nifH, nifD, nifK and 16S ribosomal RNA (rRNA) gene sequences. Evolutionary analyses enabled by the multigene entries support an ancient horizontal transfer of nitrogenase genes between Archaea and Bacteria and provide evidence that nifH has a different history of horizontal gene transfer from the nifDK enzyme core. Further analyses show that lineages in nitrogenase cluster I and cluster III have different rates of substitution within nifD, suggesting that nifD is under different selection pressure in these two lineages. Finally, we find that the genetic divergence of nifH and 16S rRNA genes does not correlate well at the sequence dissimilarity values used commonly to define microbial species, as strains having <3% sequence dissimilarity in their 16S rRNA genes can have up to 23% dissimilarity in nifH. The nifH database has a number of uses, including phylogenetic and evolutionary analyses, the design and assessment of primers/probes and the evaluation of nitrogenase sequence diversity. Database URL: http://www.css.cornell.edu/faculty/buckley/nifh.htm.
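The divergence comparison described above rests on a simple quantity: percent dissimilarity between aligned sequences, computed separately for each marker gene of the same strain pair. A toy sketch of that calculation follows; the sequences are invented placeholders, far shorter than real 16S or nifH genes.

```python
def percent_dissimilarity(a, b):
    """Percentage of aligned positions at which two sequences differ."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    mismatches = sum(x != y for x, y in zip(a, b))
    return 100.0 * mismatches / len(a)

# Same strain pair, two markers: the markers can diverge at different rates.
ssu_a,  ssu_b  = "ACGTACGTACGTACGTACGT", "ACGTACGAACGTACGTACGT"  # 1 of 20 differ
nifh_a, nifh_b = "ACGTACGTACGTACGTACGT", "TCGAACGTACCTACGAACGT"  # 4 of 20 differ

print(percent_dissimilarity(ssu_a, ssu_b))    # 5.0
print(percent_dissimilarity(nifh_a, nifh_b))  # 20.0
```

Tabulating these two numbers across many strain pairs is what reveals the poor correlation the abstract reports: pairs nearly identical in 16S rRNA can still be highly divergent in nifH.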
Ho, Lap; Cheng, Haoxiang; Wang, Jun; Simon, James E; Wu, Qingli; Zhao, Danyue; Carry, Eileen; Ferruzzi, Mario G; Faith, Jeremiah; Valcarcel, Breanna; Hao, Ke; Pasinetti, Giulio M
The development of a given botanical preparation for eventual clinical application requires extensive, detailed characterization of the chemical composition, as well as the biological availability, biological activity, and safety profiles of the botanical. These issues are typically addressed using diverse experimental protocols and model systems. Based on this consideration, in this study we established a comprehensive database and analysis framework for the collection, collation, and integrative analysis of diverse, multiscale data sets. Using this framework, we conducted an integrative analysis of heterogeneous data from in vivo and in vitro investigations of a complex bioactive dietary polyphenol-rich preparation (BDPP) and built an integrated network linking data sets generated from this multitude of diverse experimental paradigms. We established a comprehensive database and analysis framework as well as a systematic and logical means to catalogue and collate the diverse array of information gathered, which is securely stored and added to in a standardized manner to enable fast querying. We demonstrated the utility of the database in (1) a statistical ranking scheme to prioritize responses to treatments and (2) in-depth reconstruction of functionality studies. By examination of these data sets, the system allows analytical querying of heterogeneous data and access to information related to interactions, mechanisms of action, functions, etc., which ultimately provides a global overview of complex biological responses. Collectively, we present an integrative analysis framework that leads to novel insights into the biological activities of a complex botanical such as BDPP, based on data-driven characterizations of interactions between BDPP-derived phenolic metabolites and their mechanisms of action, as well as synergism and/or potential cancellation of biological functions. Our integrative analytical approach provides novel means for a systematic integrative
Wang, Jia; Chen, Dijun; Lei, Yang; Chang, Ji-Wei; Hao, Bao-Hai; Xing, Feng; Li, Sen; Xu, Qiang; Deng, Xiu-Xin; Chen, Ling-Ling
Citrus is one of the most important and widely grown fruit crops, with global production ranking first among all fruit crops in the world. Sweet orange accounts for more than half of Citrus production, both in fresh fruit and processed juice. We have sequenced the draft genome of a double-haploid sweet orange (C. sinensis cv. Valencia) and constructed the Citrus sinensis annotation project (CAP) to store and visualize the sequenced genomic and transcriptome data. CAP provides GBrowse-based organization of sweet orange genomic data, which integrates ab initio gene prediction, EST, RNA-seq and RNA paired-end tag (RNA-PET) evidence-based gene annotation. Furthermore, we provide a user-friendly web interface to show the predicted protein-protein interactions (PPIs) and metabolic pathways in sweet orange. CAP provides comprehensive information beneficial to researchers of sweet orange and other woody plants, and is freely available at http://citrus.hzau.edu.cn/.
Ferro, Myriam; Brugière, Sabine; Salvi, Daniel; Seigneurin-Berny, Daphné; Court, Magali; Moyet, Lucas; Ramus, Claire; Miras, Stéphane; Mellal, Mourad; Le Gall, Sophie; Kieffer-Jaquinod, Sylvie; Bruley, Christophe; Garin, Jérôme; Joyard, Jacques; Masselon, Christophe; Rolland, Norbert
Recent advances in the proteomics field have allowed a series of high throughput experiments to be conducted on chloroplast samples, and the data are available in several public databases. However, the accurate localization of many chloroplast proteins often remains hypothetical. This is especially true for envelope proteins. We went a step further into the knowledge of the chloroplast proteome by focusing, in the same set of experiments, on the localization of proteins in the stroma, the thylakoids, and envelope membranes. LC-MS/MS-based analyses first allowed building the AT_CHLORO database (http://www.grenoble.prabi.fr/protehome/grenoble-plant-proteomics/), a comprehensive repertoire of the 1323 proteins, identified by 10,654 unique peptide sequences, present in highly purified chloroplasts and their subfractions prepared from Arabidopsis thaliana leaves. This database also provides extensive proteomics information (peptide sequences and molecular weight, chromatographic retention times, MS/MS spectra, and spectral count) for a unique chloroplast protein accurate mass and time tag database gathering identified peptides with their respective and precise analytical coordinates, molecular weight, and retention time. We assessed the partitioning of each protein in the three chloroplast compartments by using a semiquantitative proteomics approach (spectral count). These data together with an in-depth investigation of the literature were compiled to provide accurate subplastidial localization of previously known and newly identified proteins. A unique knowledge base containing extensive information on the proteins identified in envelope fractions was thus obtained, allowing new insights into this membrane system to be revealed. Altogether, the data we obtained provide unexpected information about plastidial or subplastidial localization of some proteins that were not suspected to be associated to this membrane system. The spectral counting-based strategy was further
Kusano, Kristofer D; Gabler, Hampton C
The objective of active safety systems is to prevent or mitigate collisions. A critical component in the design of active safety systems is the identification of the target population for a proposed system. The target population for an active safety system is the set of crashes that a proposed system could prevent or mitigate. Target crashes have scenarios in which the sensors and algorithms would likely activate. For example, the rear-end crash scenario, where the front of one vehicle contacts another vehicle traveling in the same direction and in the same lane as the striking vehicle, is one scenario that forward collision warning (FCW) would be most effective in mitigating or preventing. This article presents a novel set of precrash scenarios based on coded variables from NHTSA's nationally representative crash databases in the United States. Using 4 databases (National Automotive Sampling System-General Estimates System [NASS-GES], NASS Crashworthiness Data System [NASS-CDS], Fatality Analysis Reporting System [FARS], and National Motor Vehicle Crash Causation Survey [NMVCCS]), the scenarios developed in this study can be used to quantify the number of police-reported crashes, seriously injured occupants, and fatalities that are applicable to proposed active safety systems. In this article, we use the precrash scenarios to identify the target populations for FCW, pedestrian crash avoidance systems (PCAS), lane departure warning (LDW), and vehicle-to-vehicle (V2V) or vehicle-to-infrastructure (V2I) systems. Crash scenarios were derived using precrash variables (critical event, accident type, precrash movement) present in all 4 data sources. This study found that these active safety systems could potentially mitigate approximately 1 in 5 of all-severity and serious-injury crashes in the United States and 26 percent of fatal crashes. Annually, this corresponds to 1.2 million all-severity, 14,353 serious-injury (MAIS 3+), and 7412 fatal crashes. In addition
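Scenario-based target-population counting of the kind described above amounts to filtering coded crash records on precrash variables and tallying the matches. The sketch below illustrates the idea for the rear-end/FCW scenario; the field names, codes and records are invented for illustration and are not actual NASS-GES coding.

```python
# Hypothetical coded crash records (real databases use numeric variable codes).
crashes = [
    {"accident_type": "rear_end",       "same_lane": True,  "fatal": False},
    {"accident_type": "rear_end",       "same_lane": True,  "fatal": True},
    {"accident_type": "road_departure", "same_lane": False, "fatal": True},
    {"accident_type": "pedestrian",     "same_lane": False, "fatal": False},
]

def is_fcw_target(crash):
    """Rear-end, same-lane crashes are the canonical FCW scenario."""
    return crash["accident_type"] == "rear_end" and crash["same_lane"]

targets = [c for c in crashes if is_fcw_target(c)]
print(len(targets), sum(c["fatal"] for c in targets))  # 2 target crashes, 1 fatal
```

Repeating this filter for each system's scenario definition, over nationally weighted records, yields the crash, injury and fatality counts the article reports.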
Miryala, Sravan Kumar; Anbarasu, Anand; Ramaiah, Sudha
Computational analysis of biomolecular interaction networks is gaining importance as a means to understand the functions of novel genes and proteins. Gene interaction (GI) network analysis and protein-protein interaction (PPI) network analysis play a major role in predicting the functionality of interacting genes or proteins and give insight into the functional relationships and evolutionary conservation of interactions among genes. An interaction network is a graphical representation of the gene/protein interactome, where each gene/protein is a node and each interaction between genes/proteins is an edge. In this review, we discuss the popular open-source databases that serve as data repositories for searching and collecting protein/gene interaction data, as well as the tools available for generating, visualizing and analyzing interaction networks. Various network analysis approaches are also discussed, such as topological and clustering approaches for studying network properties, and functional enrichment servers that illustrate the functions and pathways of genes and proteins. Hence, the distinctive aim of this review is not only to provide an overview of tools and web servers for gene and protein-protein interaction (PPI) network analysis but also to show how to extract useful and meaningful information from interaction networks. Copyright © 2017 Elsevier B.V. All rights reserved.
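The node-and-edge representation described in the review above can be sketched with a plain adjacency structure; the gene names are placeholders, and node degree stands in for the simplest topological property used to spot highly connected hubs.

```python
from collections import defaultdict

# Each pair is one interaction (edge) between two genes/proteins (nodes).
interactions = [("geneA", "geneB"), ("geneA", "geneC"),
                ("geneB", "geneC"), ("geneC", "geneD")]

# Undirected adjacency: each interaction is recorded in both directions.
adjacency = defaultdict(set)
for u, v in interactions:
    adjacency[u].add(v)
    adjacency[v].add(u)

# Degree (number of interaction partners) is a basic topological measure;
# the highest-degree node is a candidate hub.
degree = {node: len(neighbors) for node, neighbors in adjacency.items()}
hub = max(degree, key=degree.get)
print(hub, degree[hub])  # geneC 3
```

Dedicated tools such as those surveyed in the review compute many richer properties (clustering coefficients, shortest paths, modules), but they all start from this same interactome-as-graph representation.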
Adam Y Ye
Transporters are essential in the homeostatic exchange of endogenous and exogenous substances at the systemic, organ, cellular, and subcellular levels. Gene mutations of transporters are often related to pharmacogenetic traits. Recent developments in high-throughput technologies in genomics, transcriptomics and proteomics allow in-depth studies of transporter genes in normal cellular processes and diverse disease conditions. The flood of high-throughput data has resulted in an urgent need for an updated knowledgebase with curated, organized, and annotated human transporters in an easily accessible form. Using a pipeline combining automated keyword query, sequence similarity search and manual curation of transporters, we collected 1,555 human non-redundant transporter genes to develop the Human Transporter Database (HTD; http://htd.cbi.pku.edu.cn). Based on the extensive annotations, global properties of the transporter genes were illustrated, such as expression patterns and polymorphisms in relation to their ligands. We noted that the human transporters were enriched in many fundamental biological processes such as oxidative phosphorylation and cardiac muscle contraction, and were significantly associated with Mendelian and complex diseases such as epilepsy and sudden infant death syndrome. Overall, HTD provides a well-organized interface to facilitate research communities in searching detailed molecular and genetic information on transporters for the development of personalized medicine.
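The collection pipeline the HTD abstract describes merges candidates from two automated routes (keyword query and sequence similarity search) before manual curation. The schematic sketch below shows only the merge-and-deduplicate step; the gene symbols are real transporter genes used as placeholders, and this is not HTD's actual code.

```python
# Candidates from two automated routes; in the real pipeline these sets are
# far larger and each hit is subsequently vetted by manual curation.
keyword_hits    = {"SLC2A1", "ABCB1", "SLC6A4"}
similarity_hits = {"ABCB1", "SLC22A1", "ABCG2"}

# The set union removes duplicates, yielding a non-redundant candidate list.
candidates = sorted(keyword_hits | similarity_hits)
print(len(candidates), candidates)
```

Deduplication at this stage is what makes the final count (1,555 genes in HTD) a non-redundant gene set rather than a tally of overlapping search hits.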
Warnell, F; George, B; McConachie, H; Johnson, M; Hardy, R; Parr, J R
(1) Describe how the Autism Spectrum Database-UK (ASD-UK) was established; (2) investigate the representativeness of the first 1000 children and families who participated, compared to those who chose not to; (3) investigate the reliability of the parent-reported Autism Spectrum Disorder (ASD) diagnoses, and present evidence about the validity of diagnoses, that is, whether children recruited actually have an ASD; (4) present evidence about the representativeness of the ASD-UK children and families, by comparing their characteristics with the first 1000 children and families from the regional Database of children with ASD living in the North East (Dasl(n)e), and children and families identified from epidemiological studies. Recruitment through a network of 50 UK child health teams and self-referral. Parents/carers with a child with ASD, aged 2-16 years, completed questionnaires about ASD and some gave professionals' reports about their children. 1000 families registered with ASD-UK in 30 months. Children of families who participated, and of the 208 who chose not to, were found to be very similar on: gender ratio, year of birth, ASD diagnosis and social deprivation score. The reliability of parent-reported ASD diagnoses of children was very high when compared with clinical reports (over 96%); no database child without ASD was identified. A comparison of gender, ASD diagnosis, age at diagnosis, school placement, learning disability, and deprivation score of children and families from ASD-UK with 1084 children and families from Dasl(n)e, and families from population studies, showed that ASD-UK families are representative of families of children with ASD overall. ASD-UK includes families providing parent-reported data about their child and family, who appear to be broadly representative of UK children with ASD. Families continue to join the databases and more than 3000 families can now be contacted by researchers about UK autism research. Published by the BMJ
Vacher, Michel; Bouakaz, Saida; Bobillier-Chaumon, Marc-Eric; Aman, F; Khan, Rizwan Ahmed; Bekkadja, S; Portet, François; Guillou, Erwan; Rossato, S; Lecouteux, Benjamin
Ambient Assisted Living aims at enhancing the quality of life of older and disabled people at home through Smart Homes. In particular, for elderly people living alone at home, the detection of distress situations after a fall is very important to reassure this population. However, many studies do not include tests in real settings, because data collection in this domain is very expensive and challenging, and because few data sets are available. The CIRDOcorpu...
Hoeltzenbein, Maria; Beck, Evelin; Rajwanshi, Richa; Gøtestam Skorpen, Carina; Berber, Erhan; Schaefer, Christof; Østensen, Monika
Analyze the cumulative evidence for pregnancy outcomes after maternal exposure to tocilizumab, an anti-interleukin-6-receptor monoclonal antibody used for the treatment of rheumatoid arthritis and juvenile idiopathic arthritis. At present, published experience on tocilizumab use during pregnancy is very limited. We have analyzed all pregnancy-related reports documented in the Roche Global Safety Database until December 31, 2014 (n = 501). After exclusion of ongoing pregnancies, duplicates, and cases retrieved from the literature, 399 women were found to have been exposed to tocilizumab shortly before or during pregnancy, with pregnancy outcomes being reported in 288 pregnancies (72.2%). Of these 288 pregnancies, 180 were prospectively reported resulting in 109 live births (60.6%), 39 spontaneous abortions (21.7%), 31 elective terminations of pregnancy (17.2%), and 1 stillbirth. The rate of malformations was 4.5%. Co-medications included methotrexate in 21.1% of the prospectively ascertained cases. Compared to the general population, an increased rate of preterm birth (31.2%) was observed. Retrospectively reported pregnancies (n = 108) resulted in 55 live births (50.9%), 31 spontaneous abortions (28.7%), and 22 elective terminations (20.4%). Three infants/fetuses with congenital anomalies were reported in this group. No increased risks for adverse pregnancy outcomes were observed after paternal exposure in 13 pregnancies with known outcome. No indication for a substantially increased malformation risk was observed. Considering the limitations of global safety databases, the data do not yet prove safety, but provide information for physicians and patients to make informed decisions. This is particularly important after inadvertent exposure to tocilizumab, shortly before or during early pregnancy. Copyright © 2016 Elsevier Inc. All rights reserved.
Turewicz, Michael; Kohl, Michael; Ahrens, Maike; Mayer, Gerhard; Uszkoreit, Julian; Naboulsi, Wael; Bracht, Thilo; Megger, Dominik A; Sitek, Barbara; Marcus, Katrin; Eisenacher, Martin
The analysis of high-throughput mass spectrometry-based proteomics data must address the specific challenges of this technology. To this end, the comprehensive proteomics workflow offered by the de.NBI service center BioInfra.Prot provides indispensable components for the computational and statistical analysis of this kind of data. These components include tools and methods for spectrum identification and protein inference, protein quantification, expression analysis as well as data standardization and data publication. All particular methods of the workflow which address these tasks are state-of-the-art or cutting edge. As has been shown in previous publications, each of these methods is adequate to solve its specific task and gives competitive results. However, the methods included in the workflow are continuously reviewed, updated and improved to adapt to new scientific developments. All of these particular components and methods are available as stand-alone BioInfra.Prot services or as a complete workflow. Since BioInfra.Prot provides manifold fast communication channels to get access to all components of the workflow (e.g., via the BioInfra.Prot ticket system: email@example.com) users can easily benefit from this service and get support by experts. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Yoo, Kwang Ha; Chung, Wou Young; Park, Joo Hun; Hwang, Sung Chul; Kim, Tae Eun; Oh, Min Jung; Kang, Dae Ryong; Rhee, Chin Kook; Yoon, Hyoung Kyu; Kim, Tae Hyung; Kim, Deog Kyeom; Park, Yong Bum; Kim, Sang Ha; Yum, Ho Kee
Proper education regarding inhaler usage and optimal management of chronic obstructive pulmonary disease (COPD) is essential for effectively treating patients with COPD. This study was conducted to evaluate the effects of a comprehensive education program including inhaler training and COPD management. We enrolled 127 patients with COPD on an outpatient basis at 43 private clinics in Korea. The patients were educated on inhaler usage and disease management over three visits across 2 weeks. Physicians and patients were administered a COPD assessment test (CAT) and questionnaires about the correct usage of inhalers and management of COPD before commencement of this program and after their third visit. The outcomes of 127 COPD patients were analyzed. CAT scores (19.6±12.5 vs. 15.1±12.3) improved significantly after this program. Patients whose CAT scores improved had a significantly better understanding of COPD management and the correct technique for using inhalers than those whose CAT scores did not improve. A comprehensive education program including inhaler training and COPD management in a primary care setting improved CAT scores and led to patients' better understanding of COPD management. Copyright©2017. The Korean Academy of Tuberculosis and Respiratory Diseases
Monge-Nájera, Julián; Nielsen-Muñoz, Vanessa; Azofeifa-Mora, Ana Beatriz
BINABITROP is a bibliographical database of more than 38000 records about the ecosystems and organisms of Costa Rica. In contrast with commercial databases, such as Web of Knowledge and Scopus, which exclude most of the scientific journals published in tropical countries, BINABITROP is a comprehensive record of knowledge on the tropical ecosystems and organisms of Costa Rica. We analyzed its contents for three sites (La Selva, Palo Verde and Las Cruces) and recorded scientific field, taxonomic group and authorship. We found that most records dealt with ecology and systematics, and that most authors published only one article in the study period (1963-2011). Most research was published in four journals: Biotropica, Revista de Biología Tropical/International Journal of Tropical Biology and Conservation, Zootaxa and Brenesia. This may be the first study of such a comprehensive database for tropical biology literature.
Wang, Xia; Shen, Yihang; Wang, Shiwei; Li, Shiliang; Zhang, Weilin; Liu, Xiaofeng; Lai, Luhua; Pei, Jianfeng; Li, Honglin
The PharmMapper online tool is a web server for potential drug target identification that reverse-matches the pharmacophore of a query compound against an in-house pharmacophore model database. The original version of PharmMapper includes more than 7000 target pharmacophores derived from complex crystal structures with corresponding protein target annotations. In this article, we present a new version of the PharmMapper web server, whose backend pharmacophore database is six times larger than the earlier one, with a total of 23 236 proteins covering 16 159 druggable pharmacophore models and 51 431 ligandable pharmacophore models. The expanded target data cover 450 indications and 4800 molecular functions, compared to 110 indications and 349 molecular functions in our last update. In addition, the new web server now provides a statistically meaningful ranking of the identified drug targets, achieved through the use of standard scores. It also features an improved user interface. The proposed web server is freely available at http://lilab.ecust.edu.cn/pharmmapper/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
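The "standard scores" mentioned above are z-scores. As an illustrative sketch only (the target names and fit scores below are hypothetical, and PharmMapper's actual scoring pipeline is more involved), raw fit scores can be standardized and the candidate targets ranked by their z-scores:

```python
# Hypothetical sketch: rank candidate targets by the standard score (z-score)
# of their raw pharmacophore fit scores.
from statistics import mean, pstdev

def rank_by_z_score(fit_scores):
    """fit_scores: dict mapping target name -> raw fit score."""
    mu = mean(fit_scores.values())
    sigma = pstdev(fit_scores.values())
    z = {t: (s - mu) / sigma for t, s in fit_scores.items()}
    # Highest standard score first.
    return sorted(z.items(), key=lambda kv: kv[1], reverse=True)

scores = {"targetA": 4.2, "targetB": 3.1, "targetC": 5.0, "targetD": 2.7}
ranked = rank_by_z_score(scores)
print(ranked[0][0])  # target with the highest standard score
```

Standardizing makes scores comparable across pharmacophore models of different sizes, which is what makes the ranking statistically meaningful.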
Harada, Hiroaki; Yamashita, Yoshinori; Misumi, Keizo; Tsubokawa, Norifumi; Nakao, Junichi; Matsutani, Junko; Yamasaki, Miyako; Ohkawachi, Tomomi; Taniyama, Kiyomi
To decrease the risk of postoperative complication, improving general and pulmonary conditioning preoperatively should be considered essential for patients scheduled to undergo lung surgery. The aim of this study is to develop a short-term beneficial program of preoperative pulmonary rehabilitation for lung cancer patients. From June 2009, comprehensive preoperative pulmonary rehabilitation (CHPR) including intensive nutritional support was performed prospectively using a multidisciplinary team-based approach. Postoperative complication rate and the transitions of pulmonary function in CHPR were compared with historical data of conventional preoperative pulmonary rehabilitation (CVPR) conducted since June 2006. The study population was limited to patients who underwent standard lobectomy. Postoperative complication rate in the CVPR (n = 29) and CHPR (n = 21) were 48.3% and 28.6% (p = 0.2428), respectively. Those in patients with Charlson Comorbidity Index scores ≥2 were 68.8% (n = 16) and 27.3% (n = 11), respectively (p = 0.0341) and those in patients with preoperative risk score in Estimation of Physiologic Ability and Surgical Stress scores >0.3 were 57.9% (n = 19) and 21.4% (n = 14), respectively (p = 0.0362). Vital capacities of pre- and post intervention before surgery in the CHPR group were 2.63±0.65 L and 2.75±0.63 L (p = 0.0043), respectively; however, their transition in the CVPR group was not statistically significant (p = 0.6815). Forced expiratory volumes in one second of pre- and post intervention before surgery in the CHPR group were 1.73±0.46 L and 1.87±0.46 L (p = 0.0012), respectively; however, their transition in the CVPR group was not statistically significant (p = 0.6424). CHPR appeared to be a beneficial and effective short-term preoperative rehabilitation protocol, especially in patients with poor preoperative conditions.
Burkhardt, Donald R; McNamara, James A; Baccetti, Tiziano
Several methods of Class II treatment that do not rely on significant patient compliance have become popular during the last decade, including several versions of the Herbst appliance and the pendulum or Pendex molar-distalization appliances. Yet, these 2 general approaches theoretically have opposite treatment effects, one presumably enhancing mandibular growth, and the other moving the maxillary teeth posteriorly. This study examined the treatment effects produced by 2 types of the Herbst appliance (acrylic splint and stainless-steel crown) followed by fixed appliances, and the pendulum appliance followed by fixed appliances. For each of the 3 treatment groups, lateral cephalograms were analyzed before the start of treatment (T1) and after the second phase of treatment (T2). Patients were matched according to age and sex. The comprehensive treatment time for the pendulum group was 31.6 months, and the acrylic and crowned Herbst groups were treated for 29.5 months and 28.0 months, respectively. Overall from T1 to T2, there were no statistically significant differences in mandibular growth among the 3 groups. Skeletal changes accounted for a larger portion of molar correction in the Herbst treatment groups than in the pendulum group. Patients in the pendulum group had an increase in the mandibular plane angle. Conversely, the mandibular plane angle in patients treated with either Herbst appliance closed slightly from T1 to T2. At T2, the chin points (pogonion) of patients in both Herbst groups, however, were located slightly more anteriorly than were the chin points of the pendulum patients. It is likely that the slight downward and backward rotation of the mandible occurring during treatment in the pendulum patients accounted for much of this difference. The treatment effects produced by the 2 types of Herbst appliance were similar at T2, in spite of their differences in design. It is important not to generalize the findings of this comparison beyond the appliance
Olivier, J.G.J.; Peters, J.A.H.W.; Bakker, J.; Berdowski, J.J.; Visschedijk, A.J.H.; Bloos, J.P.J.
EDGAR 2.0 (Emission Database for Global Atmospheric Research) provided global annual emissions for 1990 of the greenhouse gases CO2, CH4 and N2O and the precursor gases CO, NOx, NMVOC and SO2, both per region and on a 1° x 1° grid. Similar inventories were compiled for a number of CFCs, halons, methyl bromide and methyl chloroform. This report discusses the applications of EDGAR 2.0 over the last couple of years as well as the validation and uncertainty analysis carried out. About 700 users have downloaded EDGAR 2.0 data during the last 24 months. In addition, the approach taken to compile EDGAR 3.0 is discussed: an update and extension from 1990 to 1995 for all gases, extended time series for direct greenhouse gases covering 1970-1995, and inclusion of the new 'Kyoto' greenhouse gases HFCs, PFCs and SF6. Selected time profiles for the seasonality of anthropogenic sources are also discussed. The work is linked into and part of the Global Emissions Inventory Activity (GEIA) of IGBP/IGAC.
Kyparissiadis, Antonios; van Heuven, Walter J B; Pitchford, Nicola J; Ledgeway, Timothy
Databases containing lexical properties of any given orthography are crucial for psycholinguistic research. In the last ten years, a number of lexical databases have been developed for Greek; however, these lack important part-of-speech information. Alternative procedures for calculating syllabic measurements and stress information are also needed, as is the combination of several metrics to investigate linguistic properties of the Greek language. To address these issues, we present a new extensive lexical database of Modern Greek (GreekLex 2) with part-of-speech information for each word, accurate syllabification, orthographic information predictive of stress, several measurements of word similarity, and phonetic information. The addition of detailed statistical information about Greek part-of-speech, syllabification, and stress neighbourhood allowed novel analyses of stress distribution within different grammatical categories and syllabic lengths to be carried out. Results showed that the statistical preponderance of stress on the pre-final syllable reported for the Greek language is dependent upon grammatical category. Additionally, analyses showed that more than 90% of the tokens in the database would be stressed correctly solely by relying on stress neighbourhood information. The database and the scripts for orthographic and phonological syllabification as well as phonetic transcription are available at http://www.psychology.nottingham.ac.uk/greeklex/.
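One way to picture the stress-neighbourhood result above: a word's stress position can often be guessed from the modal stress position of its orthographic neighbours. The sketch below is purely illustrative (the romanized toy words, stress codes, and the exact neighbourhood definition are assumptions, not the GreekLex 2 procedure):

```python
# Illustrative sketch (not the actual GreekLex 2 procedure): predict a word's
# stress position as the most common stress position among its orthographic
# neighbours (same-length words differing by exactly one letter).
from collections import Counter

def neighbours(word, lexicon):
    return [w for w in lexicon
            if len(w) == len(word) and w != word
            and sum(a != b for a, b in zip(w, word)) == 1]

def predict_stress(word, stress_of):
    """stress_of: dict mapping word -> stress position
    (e.g. 1 = final, 2 = penultimate, 3 = antepenultimate syllable)."""
    votes = Counter(stress_of[n] for n in neighbours(word, stress_of))
    return votes.most_common(1)[0][0] if votes else None

# Toy lexicon with hypothetical romanized words and stress positions.
stress_of = {"patera": 2, "patero": 2, "pateri": 2, "kalama": 3}
print(predict_stress("patern", stress_of))
```

Here the three neighbours of "patern" all carry penultimate stress, so the prediction is 2; a word with no neighbours yields no prediction.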
To promote both the preservation of the history of industrial science and technology and its creative use, lists and databases were compiled of the holdings of industrial technology museums and material halls in Japan. Recording, preserving, collecting and systematizing the history of industrial technology helps form the bases needed to promote future research and development and international contribution. Museums and material halls are the venues for such comprehensive and practical activities. The data were compiled as a first-step basic database for examining the state of technical succession continuously and systematically over the long term. In the classification of the data, the energy field was divided into electric power, nuclear power, oil, coal, gas, and energy in general. Other fields were classified into metal/mining, electricity/electronics/communication, chemistry/food, shipbuilding/heavy machinery, printing/precision instruments, and textiles/spinning. The transport field was classified into railroads, automobiles/two-wheeled vehicles, aviation/space, and ships. Categories were also set for daily life, civil engineering/architecture, and general topics. In total, 208 museums were surveyed.
Akune, Yukie; Lin, Chi-Hung; Abrahams, Jodie L; Zhang, Jingyu; Packer, Nicolle H; Aoki-Kinoshita, Kiyoko F; Campbell, Matthew P
Glycan structures attached to proteins are comprised of diverse monosaccharide sequences and linkages that are produced from precursor nucleotide-sugars by a series of glycosyltransferases. Databases of these structures are an essential resource for the interpretation of analytical data and the development of bioinformatics tools. However, with no template to predict what structures are possible the human glycan structure databases are incomplete and rely heavily on the curation of published, experimentally determined, glycan structure data. In this work, a library of 45 human glycosyltransferases was used to generate a theoretical database of N-glycan structures comprised of 15 or less monosaccharide residues. Enzyme specificities were sourced from major online databases including Kyoto Encyclopedia of Genes and Genomes (KEGG) Glycan, Consortium for Functional Glycomics (CFG), Carbohydrate-Active enZymes (CAZy), GlycoGene DataBase (GGDB) and BRENDA. Based on the known activities, more than 1.1 million theoretical structures and 4.7 million synthetic reactions were generated and stored in our database called UniCorn. Furthermore, we analyzed the differences between the predicted glycan structures in UniCorn and those contained in UniCarbKB (www.unicarbkb.org), a database which stores experimentally described glycan structures reported in the literature, and demonstrate that UniCorn can be used to aid in the assignment of ambiguous structures whilst also serving as a discovery database. Copyright © 2016 Elsevier Ltd. All rights reserved.
Rubinstein, Wendy S; Maglott, Donna R; Lee, Jennifer M; Kattman, Brandi L; Malheiro, Adriana J; Ovetsky, Michael; Hem, Vichet; Gorelenkov, Viatcheslav; Song, Guangfeng; Wallin, Craig; Husain, Nora; Chitipiralla, Shanmuga; Katz, Kenneth S; Hoffman, Douglas; Jang, Wonhee; Johnson, Mark; Karmanov, Fedor; Ukrainchik, Alexander; Denisenko, Mikhail; Fomous, Cathy; Hudson, Kathy; Ostell, James M
The National Institutes of Health Genetic Testing Registry (GTR; available online at http://www.ncbi.nlm.nih.gov/gtr/) maintains comprehensive information about testing offered worldwide for disorders with a genetic basis. Information is voluntarily submitted by test providers. The database provides details of each test (e.g. its purpose, target populations, methods, what it measures, analytical validity, clinical validity, clinical utility, ordering information) and laboratory (e.g. location, contact information, certifications and licenses). Each test is assigned a stable identifier of the format GTR000000000, which is versioned when the submitter updates information. Data submitted by test providers are integrated with basic information maintained in National Center for Biotechnology Information's databases and presented on the web and through FTP (ftp.ncbi.nih.gov/pub/GTR/_README.html).
Stenson, Peter D; Mort, Matthew; Ball, Edward V; Shaw, Katy; Phillips, Andrew; Cooper, David N
The Human Gene Mutation Database (HGMD®) is a comprehensive collection of germline mutations in nuclear genes that underlie, or are associated with, human inherited disease. By June 2013, the database contained over 141,000 different lesions detected in over 5,700 different genes, with new mutation entries currently accumulating at a rate exceeding 10,000 per annum. HGMD was originally established in 1996 for the scientific study of mutational mechanisms in human genes. However, it has since acquired a much broader utility as a central unified disease-oriented mutation repository utilized by human molecular geneticists, genome scientists, molecular biologists, clinicians and genetic counsellors as well as by those specializing in biopharmaceuticals, bioinformatics and personalized genomics. The public version of HGMD (http://www.hgmd.org) is freely available to registered users from academic institutions/non-profit organizations whilst the subscription version (HGMD Professional) is available to academic, clinical and commercial users under license via BIOBASE GmbH.
Kuranishi, Fumito; Imaoka, Yuki; Sumi, Yuusuke; Uemae, Yoji; Yasuda-Kurihara, Hiroko; Ishihara, Takeshi; Miyazaki, Tsubasa; Ohno, Tadao
No effective treatment has been developed for bone-metastatic breast cancer. We found 3 cases with clinical complete response (cCR) of the bone metastasis and longer overall survival of the retrospectively examined cohort treated comprehensively including autologous formalin-fixed tumor vaccine (AFTV). AFTV was prepared individually for each patient from their own formalin-fixed and paraffin-embedded breast cancer tissues. Three patients maintained cCR status of the bone metastasis for 17 months or more. Rate of cCR for 1 year or more appeared to be 15% (3/20) after comprehensive treatments including AFTV. The median overall survival time (60.0 months) and the 3- to 8-year survival rates after diagnosis of bone metastasis were greater than those of historical control cohorts in Japan (1988-2002) and in the nationwide population-based cohort study of Denmark (1999-2007). Bone-metastatic breast cancer may be curable after comprehensive treatments including AFTV, although larger scale clinical trial is required.
Jha, Ashish Kumar
Glomerular filtration rate (GFR) estimation by the plasma sampling method is considered the gold standard. However, this method is not widely used because of its complex technique and cumbersome calculations, coupled with the lack of user-friendly software. The routinely used serum creatinine method (SrCrM) of GFR estimation also requires online calculators, which cannot be used without internet access. We have developed user-friendly software, "GFR estimation software", which offers GFR estimation by the plasma sampling method as well as by SrCrM. We used Microsoft Windows(®) as the operating system, Visual Basic 6.0 as the front end, and Microsoft Access(®) as the database tool to develop this software. We used Russell's formula for GFR calculation by the plasma sampling method. GFR calculations using serum creatinine were done using the MIRD, Cockcroft-Gault, Schwartz, and Counahan-Barratt methods. The developed software performs the mathematical calculations correctly and is user-friendly. It also enables storage and easy retrieval of the raw data, patient information, and calculated GFR for further processing and comparison. This is user-friendly software to calculate GFR by various plasma sampling methods and blood parameters, and a good system for storing raw and processed data for future analysis.
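Of the serum-creatinine formulas named above, Cockcroft-Gault is the most commonly quoted: creatinine clearance (mL/min) = ((140 − age) × weight in kg) / (72 × serum creatinine in mg/dL), multiplied by 0.85 for females. A minimal sketch of that one formula (the abstract's software implements several others as well, which are not reproduced here):

```python
# Cockcroft-Gault creatinine clearance (mL/min), one of the serum-creatinine
# formulas the software implements. Illustrative sketch only.
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    # Standard correction factor for females.
    return crcl * 0.85 if female else crcl

# Example: 60-year-old, 72 kg, serum creatinine 1.0 mg/dL.
print(round(cockcroft_gault(60, 72, 1.0), 1))  # → 80.0
```

Packaging such formulas in a desktop application is exactly what removes the dependence on online calculators that the abstract describes.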
Willmes, M.; Boel, C.; Grün, R.; Armstrong, R.; Chancerel, A.; Maureille, B.; Courtaud, P.
Strontium isotope ratios (87Sr/86Sr) can be used for the reconstruction of human and animal migrations across geologically different terrains. Sr isotope ratios in rocks are a product of age and composition and thus vary between geologic units. From the eroding environment Sr is transported into the soils, plants and rivers of a region. Humans and animals incorporate Sr from their diet into their bones and teeth, where it substitutes for calcium. Tooth enamel contains Sr isotope signatures acquired during childhood and is most resistant to weathering and overprinting, while the dentine is often diagenetically altered towards the local Sr signature. For the reconstruction of human and animal migrations the tooth enamel 87Sr/86Sr ratio is compared to the Sr isotope signature in the vicinity of the burial site and the surrounding area. This study focuses on the establishment of a comprehensive reference map of bioavailable 87Sr/86Sr ratios for France. In a next step we will compare human and animal teeth from key archaeological sites to this reference map to investigate mobility. So far, we have analysed plant and soil samples from ~200 locations across France including the Aquitaine basin, the western and northern parts of the Paris basin, as well as three transects through the Pyrenees Mountains. The isotope data, geologic background information (BRGM 1:1M), field images, and detailed method descriptions are available through our online database iRhum (http://rses.anu.edu.au/research/ee). This database can also be used in forensic studies and food sciences. As an archaeological case study teeth from 16 adult and 8 juvenile individuals were investigated from an early Bell Beaker (2500-2000 BC) site at Le Tumulus des Sables, south-west France (Gironde). The teeth were analysed for Sr isotope ratios using laser ablation ICP-MS. Four teeth were also analysed using solution ICP-MS, which showed a significant offset to the laser ablation results. This requires further
Report on comprehensive surveys of nationwide geothermal resources in fiscal 1979. Conceptual design of a database system; 1979 nendo zenkoku chinetsu shigen sogo chosa hokokusho. Database system gainen sekkei
Conceptual design was made of a database system as part of the comprehensive surveys of nationwide geothermal resources. Underground hot water at depths of several kilometers below the ground surface is a utilizable geothermal energy source. Exploration by ground surface survey is much less expensive than test drilling but, being an indirect method, carries greater estimation error. However, integrating data from a number of exploration methods can improve the overall accuracy of estimation. In the conceptual design of the geothermal resource information system, the functions of this large-scale database were used as the framework. Functions for data collection, distribution, interactive man-machine communication, modeling, and environment surveillance were incorporated. Consideration was also given to further diversified utilization patterns and to supporting users in remote areas and end users. What is important in designing the system is that the constituent hardware and software elements should function as one organically combined system rather than working independently. In addition, sufficient expandability and flexibility are indispensable. (NEDO)
Guo, An Chi; Jewison, Timothy; Wilson, Michael; Liu, Yifeng; Knox, Craig; Djoumbou, Yannick; Lo, Patrick; Mandal, Rupasri; Krishnamurthy, Ram; Wishart, David S.
The Escherichia coli Metabolome Database (ECMDB, http://www.ecmdb.ca) is a comprehensively annotated metabolomic database containing detailed information about the metabolome of E. coli (K-12). Modelled closely on the Human and Yeast Metabolome Databases, the ECMDB contains >2600 metabolites with links to ~1500 different genes and proteins, including enzymes and transporters. The information in the ECMDB has been collected from dozens of textbooks, journal articles and electronic databases.
Kadumuri, Rajashekar Varma; Vadrevu, Ramakrishna
Due to their crucial role in function, folding, and stability, protein loops are being targeted for grafting/designing to create novel functionality, alter existing functionality, and improve stability and foldability. To facilitate thorough analysis and effective search options for extracting and comparing loops for sequence and structural compatibility, we developed LoopX, a comprehensively compiled library of sequence and conformational features of ∼700,000 loops from protein structures. The database, equipped with a graphical user interface, offers diverse query tools and search algorithms, with various rendering options to visualize sequence- and structural-level information along with hydrogen-bonding patterns and backbone φ, ψ dihedral angles of both the target and candidate loops. Two new features, (i) conservation of the polar/nonpolar environment and (ii) conservation of sequence and conformation of specific residues within the loops, have also been incorporated into the search and retrieval of compatible loops for a chosen target loop. Thus, the LoopX server not only serves as a database and visualization tool for sequence and structural analysis of protein loops but also aids in extracting and comparing candidate loops for a given target loop based on user-defined search options.
Abstract Background Management and care of the acutely ill patient has improved over recent years due to the introduction of systematic assessment and accelerated treatment protocols. We have, however, sparse knowledge of the association between patient status at admission to hospital and patient outcome. A likely explanation is the difficulty in retrieving all relevant information from one database. The objective of this article was (1) to describe the formation and design of the 'Acute Admission Database', and (2) to characterize the cohort included. Methods All adult patients triaged at the Emergency Department at Hillerød Hospital and admitted either to the observationary unit or to a general ward in-hospital were prospectively included during a period of 22 weeks. The triage system used was a Danish adaptation of the Swedish triage system, ADAPT. Data from 3 different sources were merged using a unique identifier, the Central Personal Registry (CPR) number: (1) data from patient admission: time and date, vital signs, presenting complaint and triage category; (2) blood sample results taken at admission, including a venous acid-base status; and (3) outcome measures, e.g. length of stay, admission to Intensive Care Unit, and mortality within 7 and 28 days after admission. Results In primary triage, patients were categorized as red (4.4%), orange (25.2%), yellow (38.7%) and green (31.7%). Abnormal vital signs were present at admission in 25% of the patients, most often temperature (10.5%), saturation of peripheral oxygen (9.2%), Glasgow Coma Score (6.6%) and respiratory rate (4.8%). A venous acid-base status was obtained in 43% of all patients. The majority (78%) had a pH within the normal range (7.35-7.45), 15% had acidosis (pH < 7.35), and the remainder had alkalosis (pH > 7.45). Median length of stay was 2 days (range 1-123). The proportion of patients admitted to Intensive Care Unit was 1.6% (95% CI 1.2-2.0), 1.8% (95% CI 1.5-2.2) died within 7 days, and 4.2% (95% CI 3.7-4.7) died within 28 days after admission.
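The merge described in the Methods, joining three record sources on one unique patient identifier, can be pictured with a minimal sketch. The field names and the CPR value below are illustrative placeholders, not the database's actual schema:

```python
# Sketch of joining three record sources on a unique patient identifier
# (the Danish CPR number). Field names and values are hypothetical.
admission = {"0101701234": {"triage": "orange", "resp_rate": 22}}
blood     = {"0101701234": {"ph": 7.31}}
outcome   = {"0101701234": {"los_days": 3, "icu": False}}

def merge_on_cpr(*sources):
    merged = {}
    for source in sources:
        for cpr, fields in source.items():
            # Collect all fields for the same patient under one key.
            merged.setdefault(cpr, {}).update(fields)
    return merged

record = merge_on_cpr(admission, blood, outcome)["0101701234"]
```

A shared key like the CPR number is what makes it possible to relate admission status, admission blood chemistry, and outcome for the same patient across otherwise separate registries.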
This study investigates teaching methods for color comprehension and color categorization among blind students, as compared with their non-blind peers, and examines whether the two groups understand and represent the same color concepts and categories. Then, after digit codes for teaching color comprehension and assistive technology for the blind had…
New, P W; Currie, K E
Questionnaire development, validation and completion. The aim was to develop a comprehensive survey of sexuality issues including validated self-report versions of the International Spinal Cord Injury male sexual function and female sexual and reproductive function basic data sets (SR-iSCI-sexual function). Participants were people with spinal cord damage (SCD) living in the community in Australia, surveyed from August 2013 to June 2014. An iterative process involving rehabilitation medicine clinicians, a nurse specialising in sexuality issues in SCD, and people with SCD was used to develop a comprehensive survey that included the SR-iSCI-sexual function. Participants were recruited through a spinal rehabilitation review clinic and community organisations that support people with SCD. Surveys were completed by 154 people. Most were male (n=101, 65.6%). Respondents' median age was 50 years (interquartile range (IQR) 38-58), and they were a median of 10 years (IQR 4-20) after the onset of SCD. Sexual problems unrelated to SCD were reported by 12 (8%) respondents, and 114 (75.5%) reported sexual problems because of SCD. Orgasms were much less likely (χ²=13.1, P=0.006) to be normal in males (n=5, 5%) than in females (n=11, 22%). Males had significantly worse (χ²=26.0, P=0.001) psychogenic genital functioning (normal: n=9, 9%) than females (normal: n=13, 26%) and worse (χ²=10.8, P=0.013) reflex genital functioning. Normal ejaculation was reported by only three (3%) men. Most (n=26, 52%) women reported a reduced or absent menstruation pattern since SCD. The SR-iSCI-sexual function provides a useful tool for researchers and clinicians to collect information regarding patient-reported sexual functioning after SCD and to facilitate comparative studies.
FY 1996 report on the basic survey project on the enhancement of energy efficiency in developing countries - database construction project. Volume 1. Outline of the survey and collection of the data to be included in database; 1996 nendo hatten tojokoku energy koritsuka kiso chosa jigyo (database kochiku jigyo) hokokusho. 1. Chosa no gaiyo oyobi database ni shurokusuru data no shushu
Continuing from the previous fiscal year, construction and study of the database were carried out with the aim of energy conservation for 8 countries and regions: Japan, China, Indonesia, the Philippines, Thailand, Malaysia, Taiwan and Korea. In the study of the data items to be included, the 192 data items extracted in the conceptual design were re-classified into 6 large groups and 91 medium groups. As for data collection, in group A countries, counterparts were requested to collect data, and 1,342-1,740 new data items were collected. In group B countries, Mitsubishi Research Institute, Inc. collected most of the data in cooperation with research institutes and investigating organizations in each country: 957 new data items in Thailand, 814 in Malaysia and 1,312 in Japan. In group C countries, 169 and 317 common data items were collected in Vietnam and India, respectively. As for plans to promote the database, studies with each country yielded 16 promotional measures, including raising awareness of NEDO-DB, education in how to use it, and training in how to operate it. (NEDO)
Baux, David; Faugère, Valérie; Larrieu, Lise; Le Guédard-Méreuze, Sandie; Hamroun, Dalil; Béroud, Christophe; Malcolm, Sue; Claustres, Mireille; Roux, Anne-Françoise
Using the Universal Mutation Database (UMD) software, we have constructed "UMD-USHbases", a set of relational databases of nucleotide variations for seven genes involved in Usher syndrome (MYO7A, CDH23, PCDH15, USH1C, USH1G, USH3A and USH2A). Mutations in the Usher syndrome type I causing genes are also recorded in non-syndromic hearing loss cases and mutations in USH2A in non-syndromic retinitis pigmentosa. Usher syndrome provides a particular challenge for molecular diagnostics because of the clinical and molecular heterogeneity. As many mutations are missense changes, and all the genes also contain apparently non-pathogenic polymorphisms, well-curated databases are crucial for accurate interpretation of pathogenicity. Tools are provided to assess the pathogenicity of mutations, including conservation of amino acids and analysis of splice-sites. Reference amino acid alignments are provided. Apparently non-pathogenic variants in patients with Usher syndrome, at both the nucleotide and amino acid level, are included. The UMD-USHbases currently contain more than 2,830 entries including disease causing mutations, unclassified variants or non-pathogenic polymorphisms identified in over 938 patients. In addition to data collected from 89 publications, 15 novel mutations identified in our laboratory are recorded in MYO7A (6), CDH23 (8), or PCDH15 (1) genes. Information is given on the relative involvement of the seven genes, the number and distribution of variants in each gene. UMD-USHbases give access to a software package that provides specific routines and optimized multicriteria research and sorting tools. These databases should assist clinicians and geneticists seeking information about mutations responsible for Usher syndrome.
Bell, D A
Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The
Daugaard, Gedske; Kier, Maria Gry Gundgaard; Bandak, Mikkel; Mortensen, Mette Saksø; Larsson, Heidi; Søgaard, Mette; Toft, Birgitte Groenkaer; Engvad, Birte; Agerbæk, Mads; Holm, Niels Vilstrup; Lauritsen, Jakob
The nationwide Danish Testicular Cancer database consists of a retrospective research database (DaTeCa database) and a prospective clinical database (Danish Multidisciplinary Cancer Group [DMCG] DaTeCa database). The aim is to improve the quality of care for patients with testicular cancer (TC) in Denmark, that is, by identifying risk factors for relapse, toxicity related to treatment, and focusing on late effects. All Danish male patients with a histologically verified germ cell cancer diagnosis in the Danish Pathology Registry are included in the DaTeCa databases. Data collection has been performed from 1984 to 2007 and from 2013 onward, respectively. The retrospective DaTeCa database contains detailed information with more than 300 variables related to histology, stage, treatment, relapses, pathology, tumor markers, kidney function, lung function, etc. A questionnaire related to late effects has been developed, which includes questions regarding social relationships, life situation, general health status, family background, diseases, symptoms, use of medication, marital status, psychosocial issues, fertility, and sexuality. TC survivors alive in October 2014 were invited to fill in this questionnaire including 160 validated questions. Collection of questionnaires is still ongoing. A biobank including blood/sputum samples for future genetic analyses has been established. Both samples related to the DaTeCa and DMCG DaTeCa databases are included. The prospective DMCG DaTeCa database includes variables regarding histology, stage, prognostic group, and treatment. The DMCG DaTeCa database has existed since 2013 and is a young clinical database. It is necessary to extend the data collection in the prospective database in order to answer quality-related questions. Data from the retrospective database will be added to the prospective data. This will result in a large and very comprehensive database for future studies on TC patients.
Formal approaches to the semantics of databases and database languages can have immediate and practical consequences in extending database integration technologies to include a vastly greater range...
Ambler, Scott W
Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...
U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...
Beckers, Matthew; Mohorianu, Irina; Stocks, Matthew; Applegate, Christopher; Dalmay, Tamas; Moulton, Vincent
Recently, high-throughput sequencing (HTS) has revealed compelling details about the small RNA (sRNA) population in eukaryotes. These 20 to 25 nt noncoding RNAs can influence gene expression by acting as guides for the sequence-specific regulatory mechanism known as RNA silencing. The increase in sequencing depth and number of samples per project enables a better understanding of the role sRNAs play by facilitating the study of expression patterns. However, the intricacy of the biological hypotheses coupled with a lack of appropriate tools often leads to inadequate mining of the available data and thus, an incomplete description of the biological mechanisms involved. To enable a comprehensive study of differential expression in sRNA data sets, we present a new interactive pipeline that guides researchers through the various stages of data preprocessing and analysis. This includes various tools, some of which we specifically developed for sRNA analysis, for quality checking and normalization of sRNA samples as well as tools for the detection of differentially expressed sRNAs and identification of the resulting expression patterns. The pipeline is available within the UEA sRNA Workbench, a user-friendly software package for the processing of sRNA data sets. We demonstrate the use of the pipeline on an H. sapiens data set; additional examples on a B. terrestris data set and on an A. thaliana data set are described in the Supplemental Information. A comparison with existing approaches is also included, which exemplifies some of the issues that need to be addressed for sRNA analysis and how the new pipeline may be used to do this. © 2017 Beckers et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
Larrieu, Theodore [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Slominski, Christopher [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Keesee, Marie [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Turner, Dennison [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Joyce, Michele [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States)
The newly commissioned 12 GeV CEBAF accelerator relies on a flexible, scalable and comprehensive database to define the accelerator. This database delivers the configuration for CEBAF operational tools, including hardware checkout, the downloadable optics model, control screens, and much more. The presentation will describe the flexible design of the CEBAF Element Database (CED), its features and assorted use case examples.
The importance of comprehensive food composition databases is more critical than ever in helping to address global food security. The USDA National Nutrient Database for Standard Reference is the “gold standard” for food composition databases. The presentation will include new developments in stren...
Full Text Available Abstract Background Dystrophin is a large essential protein of skeletal and heart muscle. It is a filamentous scaffolding protein with numerous binding domains. Mutations in the DMD gene, which encodes dystrophin, mostly result in the deletion of one or several exons and cause Duchenne (DMD) and Becker (BMD) muscular dystrophies. The most common DMD mutations are frameshift mutations resulting in an absence of dystrophin from tissues. In-frame DMD mutations are less frequent and result in a protein with partial wild-type dystrophin function. The aim of this study was to highlight structural and functional modifications of dystrophin caused by in-frame mutations. Methods and results We developed a dedicated database for dystrophin, the eDystrophin database. It contains 209 different non-frame-shifting mutations found in 945 patients from a French cohort and previous studies. Bioinformatics tools provide models of the three-dimensional structure of the protein at deletion sites, making it possible to determine whether the mutated protein retains the typical filamentous structure of dystrophin. An analysis of the structure of mutated dystrophin molecules showed that hybrid repeats were reconstituted at the deletion site in some cases. These hybrid repeats harbored the typical triple coiled-coil structure of native repeats, which may be correlated with better function in muscle cells. Conclusion This new database focuses on the dystrophin protein and its modification due to in-frame deletions in BMD patients. The observation of hybrid repeat reconstitution in some cases provides insight into phenotype-genotype correlations in dystrophin diseases and possible strategies for gene therapy. The eDystrophin database is freely available: http://edystrophin.genouest.org/.
Lapointe, Martine; Rogic, Anita; Bourgoin, Sarah; Jolicoeur, Christine; Séguin, Diane
In recent years, sophisticated technology has significantly increased the sensitivity and analytical power of genetic analyses so that very little starting material may now produce viable genetic profiles. This sensitivity however, has also increased the risk of detecting unknown genetic profiles assumed to be that of the perpetrator, yet originate from extraneous sources such as from crime scene workers. These contaminants may mislead investigations, keeping criminal cases active and unresolved for long spans of time. Voluntary submission of DNA samples from crime scene workers is fairly low, therefore we have created a promotional method for our staff elimination database that has resulted in a significant increase in voluntary samples since 2011. Our database enforces privacy safeguards and allows for optional anonymity to all staff members. We also offer information sessions at various police precincts to advise crime scene workers of the importance and success of our staff elimination database. This study, a pioneer in its field, has obtained 327 voluntary submissions from crime scene workers to date, of which 46 individual profiles (14%) have been matched to 58 criminal cases. By implementing our methods and respect for individual privacy, forensic laboratories everywhere may see similar growth and success in explaining unidentified genetic profiles in stagnate criminal cases. Crown Copyright © 2015. Published by Elsevier Ireland Ltd. All rights reserved.
Full Text Available Abstract Background The role of the angiotensin-converting enzyme (ACE) gene insertion/deletion (I/D) polymorphism in modifying the response to treatment modalities in coronary artery disease is controversial. Methods PubMed was searched and a database of 58 studies with detailed information regarding the ACE I/D polymorphism and response to treatment in coronary artery disease was created. Eligible studies were synthesized using meta-analysis methods, including cumulative meta-analysis. Heterogeneity and study quality issues were explored. Results Forty studies involved invasive treatments (coronary angioplasty or coronary artery bypass grafting) and 18 used conservative treatment options (including anti-hypertensive drugs, lipid-lowering therapy and cardiac rehabilitation procedures). Clinical outcomes were investigated by 11 studies, while 47 studies focused on surrogate endpoints. The most studied outcome was restenosis following coronary angioplasty (34 studies). Heterogeneity among studies was observed. For the effect of the ACE I/D polymorphism on the response to treatment for the remaining outcomes (coronary events, endothelial dysfunction, left ventricular remodeling, progression/regression of atherosclerosis), individual studies showed significance; however, results were discrepant and inconsistent. Conclusion In view of the available evidence, genetic testing of the ACE I/D polymorphism prior to clinical decision-making is not currently justified. The relation between ACE genetic variation and response to treatment in CAD remains an unresolved issue. The results of long-term and properly designed prospective studies hold the promise of pharmacogenetically tailored therapy in CAD.
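The meta-analytic synthesis described above typically pools per-study effect estimates by inverse-variance weighting. A minimal fixed-effect sketch follows; the study estimates (log odds ratios and standard errors) are invented for illustration, not data from the 58 included studies:

```python
import math

def fixed_effect_pool(estimates):
    """Inverse-variance (fixed-effect) pooling of per-study effect estimates.

    `estimates` is a list of (log_odds_ratio, standard_error) tuples; returns
    the pooled log odds ratio and its standard error.
    """
    weights = [1.0 / se ** 2 for _, se in estimates]
    pooled = sum(w * est for (est, _), w in zip(estimates, weights)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, se_pooled

# Three hypothetical studies of restenosis risk by ACE I/D genotype.
pooled, se = fixed_effect_pool([(0.40, 0.20), (0.10, 0.15), (0.25, 0.30)])
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se  # 95% CI on the log scale
```

More precise studies get larger weights, so the pooled estimate is pulled toward them; a random-effects model would additionally widen the interval to absorb the between-study heterogeneity the abstract mentions.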
National Oceanic and Atmospheric Administration, Department of Commerce — WOOD was developed to be a comprehensive publicly-available oceanographic bio-optical database providing global coverage. It includes nearly 250 major data sources...
Chen, Fei; Dong, Wei; Zhang, Jiawei; Guo, Xinyue; Chen, Junhao; Wang, Zhengjia; Lin, Zhenguo; Tang, Haibao; Zhang, Liangsheng
Angiosperms, the flowering plants, provide essential resources for human life, such as food, energy, oxygen, and materials. They have also shaped the evolution of humans, animals, and the planet Earth. Despite the numerous genome reports and advances in sequencing technologies, no review has covered all the released angiosperm genomes and the genome databases through which these data are shared. Building on the rapid advances and innovations in database construction over the last few years, here we provide a comprehensive review of three major types of angiosperm genome databases: databases for a single species, for a specific angiosperm clade, and for multiple angiosperm species. The scope, tools, and data of each type of database and their features are concisely discussed. Genome databases for a single species or a clade of species are especially popular with specific groups of researchers, while a timely updated comprehensive database is more powerful for addressing major scientific questions at the genome scale. Considering the low coverage of flowering plants in any available database, we propose the construction of a comprehensive database to facilitate large-scale comparative studies of angiosperm genomes and to promote collaborative studies of important questions in plant biology.
Full Text Available A comprehensive spectral-biogeochemical database was developed for the Wabash River and the Tippecanoe River in Indiana, United States. This database includes spectral measurements of river water, coincident in situ measurements of water quality parameters (chlorophyll (chl), non-algal particles (NAP), and colored dissolved organic matter (CDOM)), nutrients (total nitrogen (TN), total phosphorus (TP), and dissolved organic carbon (DOC)), water-column inherent optical properties (IOPs), water depths, substrate types, and bottom reflectance spectra collected in summer 2014. With this dataset, the temporal variability of water quality observations was first analyzed and studied. Second, radiative transfer models were inverted to retrieve water quality parameters using a look-up table (LUT) based spectrum matching methodology. Results found that the temporal variability of water quality parameters and nutrients in the Wabash River was closely associated with hydrologic conditions. Meanwhile, there were no significant correlations found between these parameters and streamflow for the Tippecanoe River, due to the two upstream reservoirs, which increase the settling of sediment and uptake of nutrients. The poor relationship between CDOM and DOC indicates that most DOC in the rivers was from human sources such as wastewater. It was also found that the source of water (surface runoff or combined sewer overflow (CSO)), water temperature, and nutrients were important factors controlling instream concentrations of phytoplankton. The LUT-retrieved NAP concentrations were in good agreement with field measurements, with a slope close to 1.0, and the average estimation error was 4.1% of independently obtained lab measurements. The error for chl estimation was larger (37.7%), which is attributed to the fact that the specific absorption spectrum of chl was not well represented in this study. The LUT retrievals for CDOM experienced large variability, probably due to the small data
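Look-up table (LUT) based spectrum matching, as used for the retrievals above, amounts to finding the modeled spectrum closest to the measured one and reporting the parameters that generated it. A toy sketch with invented numbers (a real LUT would hold many radiative-transfer-modeled spectra):

```python
import math

# Toy LUT: each entry pairs a modeled reflectance spectrum (values at a few
# wavelengths) with the water quality parameters that produced it.
# All values are illustrative, not from the Wabash River dataset.
lut = [
    ([0.02, 0.05, 0.04], {"chl": 5.0,  "nap": 2.0}),
    ([0.03, 0.08, 0.06], {"chl": 12.0, "nap": 4.1}),
    ([0.05, 0.06, 0.09], {"chl": 3.0,  "nap": 9.5}),
]

def retrieve(measured):
    """Return the parameters of the LUT spectrum closest (by RMSE) to `measured`."""
    def rmse(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))
    best_spectrum, best_params = min(lut, key=lambda entry: rmse(entry[0], measured))
    return best_params

params = retrieve([0.031, 0.079, 0.061])  # closest to the second LUT entry
```

The retrieval quality therefore depends entirely on how well the LUT spans the real parameter space, which is consistent with the larger chl error reported above when the specific absorption spectrum was poorly represented.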
Nakagawa, So; Takahashi, Mahoko Ueda
In mammals, approximately 10% of genome sequences correspond to endogenous viral elements (EVEs), which are derived from ancient viral infections of germ cells. Although most EVEs have been inactivated, some open reading frames (ORFs) of EVEs acquired functions in their hosts. However, EVE ORFs usually remain unannotated in the genomes, and no databases are available for EVE ORFs. To investigate the function and evolution of EVEs in mammalian genomes, we developed EVE ORF databases for 20 genomes of 19 mammalian species. A total of 736,771 non-overlapping EVE ORFs were identified and archived in a database named gEVE (http://geve.med.u-tokai.ac.jp). The gEVE database provides nucleotide and amino acid sequences, genomic loci and functional annotations of EVE ORFs for all 20 genomes. In analyzing RNA-seq data with the gEVE database, we successfully identified the expressed EVE genes, suggesting that the gEVE database facilitates genomic analyses of various mammalian species. Database URL: http://geve.med.u-tokai.ac.jp. © The Author(s) 2016. Published by Oxford University Press.
National Oceanic and Atmospheric Administration, Department of Commerce — This database was established to oversee documents issued in support of fishery research activities including experimental fishing permits (EFP), letters of...
Sibilitz, Kirstine Laerum; Berg, Selina Kikkenborg; Hansen, Tina Birgitte
Heart valve surgery, either valve replacement or repair, remains the treatment of choice. However, post-surgery, the transition to daily living may become a physical, mental and social challenge. We hypothesize that a comprehensive cardiac rehabilitation program can improve physical capacity and self-assessed mental health and reduce hospitalization and healthcare costs after heart valve surgery. METHODS: This randomized clinical trial, CopenHeartVR, aims to investigate whether cardiac rehabilitation in addition to usual care is superior to treatment as usual after heart valve surgery. The trial will randomly allocate 210 patients 1:1 to an intervention or a control group, using central randomization, blinded outcome assessment and blinded statistical analyses. The intervention consists of 12 weeks of physical exercise and a psycho-educational intervention comprising five consultations. The primary outcome is peak oxygen uptake…
Morello, Samuel A.; Ricks, Wendell R.
The aviation safety issues database was instrumental in the refinement and substantiation of the National Aviation Safety Strategic Plan (NASSP). The issues database is a comprehensive set of issues from an extremely broad base of aviation functions, personnel, and vehicle categories, both nationally and internationally. Several aviation safety stakeholders such as the Commercial Aviation Safety Team (CAST) have already used the database. This broader interest was the genesis to making the database publically accessible and writing this report.
Baccetti, Tiziano; Franchi, Lorenzo; Stahl, Franka
The aim of this clinical trial was to compare the effects of 2 protocols for single-phase comprehensive treatment of Class II Division 1 malocclusion (bonded Herbst followed by fixed appliances [BH + FA] vs headgear followed by fixed appliances and Class II elastics [HG + FA]) at the pubertal growth spurt. Fifty-six Class II patients were enrolled in the trial and allocated by personal choice to 2 practices, where they underwent 1 of 2 treatment protocols (28 patients were treated consecutively with BH + FA, and 28 patients were treated consecutively with HG + FA). All patients started treatment at puberty (cervical stage [CS] 3 or CS 4) and completed treatment after puberty (CS 5 or CS 6). Lateral cephalograms were taken before therapy and 6 months after the end of comprehensive therapy, with an average interval of 28 months. Longitudinal observations of a matched group of 28 subjects with untreated Class II malocclusions were compared with the 2 treated groups. Analysis of variance (ANOVA) with post-hoc tests was used for statistical comparisons. Discriminant analysis was applied to identify preferential candidates for the BH + FA protocol on the basis of profile changes (advancement of the soft tissues of the chin). The success rate (full occlusal correction of the malocclusion after treatment) was 92.8% in both treatment groups. The BH + FA group showed a significant increase in mandibular protrusion. The increase in effective mandibular length (Co-Gn) was significantly greater in both treatment groups when compared with natural growth changes in the Class II controls. Significantly greater improvement in sagittal maxillomandibular relationships was found in the BH + FA group. Retrusion of maxillary incisors and mesial movement of mandibular molars were significant in the HG + FA group. The BH + FA group showed significantly greater forward movements of soft-tissue B-point and pogonion compared with both the HG + FA and the control groups. Two pretreatment
US Agency for International Development — The Anticorruption Projects Database (Database) includes information about USAID projects with anticorruption interventions implemented worldwide between 2007 and...
Hallas, Jesper; Poulsen, Maja Hellfritzsch; Hansen, Morten Rix
The Odense University Pharmacoepidemiological Database (OPED) is a prescription database established in 1990 by the University of Southern Denmark, covering reimbursed prescriptions from the county of Funen in Denmark and the region of Southern Denmark (1.2 million inhabitants). It is still active and thereby has more than 25 years of continuous coverage. In this MiniReview, we review its history, content, quality, coverage, governance and some of its uses. OPED's data include the Danish Civil Registration Number (CPR), which enables unambiguous linkage with virtually all other health-related registers in Denmark. Among its research uses, we review record-linkage studies of drug effects, advanced drug utilization studies, some examples of method development, and use of OPED as a sampling frame to recruit patients for field studies or clinical trials. With the advent of other, more comprehensive…
Orris, Greta J.; Cocker, Mark D.; Dunlap, Pamela; Wynn, Jeff C.; Spanski, Gregory T.; Briggs, Deborah A.; Gass, Leila; Bliss, James D.; Bolm, Karen S.; Yang, Chao; Lipin, Bruce R.; Ludington, Stephen; Miller, Robert J.; Słowakiewicz, Mirosław
Potash is mined worldwide to provide potassium, an essential nutrient for food crops. Evaporite-hosted potash deposits are the largest source of salts that contain potassium in water-soluble form, including potassium chloride, potassium-magnesium chloride, potassium sulfate, and potassium nitrate. Thick sections of evaporitic salt that form laterally continuous strata in sedimentary evaporite basins are the most common host for stratabound and halokinetic potash-bearing salt deposits. Potash-bearing basins may host tens of millions to more than 100 billion metric tons of potassium oxide (K2O). Examples of these deposits include those in the Elk Point Basin in Canada, the Pripyat Basin in Belarus, the Solikamsk Basin in Russia, and the Zechstein Basin in Germany.
Koppers, A. A.; Minnett, R. C.; Tauxe, L.; Constable, C.; Donadini, F.
The Magnetics Information Consortium (MagIC) is commissioned to implement and maintain an online portal to a relational database populated by rock and paleomagnetic data. The goal of MagIC is to archive all measurements and derived properties for studies of paleomagnetic directions (inclination, declination) and intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). Organizing data for presentation in peer-reviewed publications or for ingestion into databases is a time-consuming task, and to facilitate these activities, three tightly integrated tools have been developed: MagIC-PY, the MagIC Console Software, and the MagIC Online Database. A suite of Python scripts is available to help users port their data into the MagIC data format. They allow the user to add important metadata, perform basic interpretations, and average results at the specimen, sample and site levels. These scripts have been validated for use as Open Source software under the UNIX, Linux, PC and Macintosh© operating systems. We have also developed the MagIC Console Software program to assist in collating rock and paleomagnetic data for upload to the MagIC database. The program runs in Microsoft Excel© on both Macintosh© computers and PCs. It performs routine consistency checks on data entries, and assists users in preparing data for uploading into the online MagIC database. The MagIC website is hosted under EarthRef.org at http://earthref.org/MAGIC/ and has two search nodes, one for paleomagnetism and one for rock magnetism. Both nodes provide query building based on location, reference, methods applied, material type and geological age, as well as a visual FlashMap interface to browse and select locations. Users can also browse the database by data type (inclination, intensity, VGP, hysteresis, susceptibility) or by data compilation to view all contributions associated with previous databases, such as PINT, GMPDB or TAFI or other user
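Averaging results at the specimen, sample and site levels, one of the tasks the MagIC scripts perform, cannot be done on raw angles, since declinations near 0° and 360° would average to 180°. The standard approach converts each (declination, inclination) pair to a unit vector, sums the vectors, and converts back. A minimal sketch of that idea (not the MagIC-PY code itself):

```python
import math

def mean_direction(directions):
    """Vector-average a list of (declination, inclination) pairs in degrees."""
    x = y = z = 0.0
    for dec, inc in directions:
        d, i = math.radians(dec), math.radians(inc)
        # Unit vector: x north, y east, z down (paleomagnetic convention).
        x += math.cos(i) * math.cos(d)
        y += math.cos(i) * math.sin(d)
        z += math.sin(i)
    r = math.sqrt(x * x + y * y + z * z)  # resultant length
    mean_dec = math.degrees(math.atan2(y, x)) % 360.0
    mean_inc = math.degrees(math.asin(z / r))
    return mean_dec, mean_inc

# Three hypothetical specimen directions from one sample:
dec, inc = mean_direction([(350.0, 55.0), (5.0, 60.0), (358.0, 57.0)])
```

The resultant length `r` also feeds the usual Fisher dispersion statistics (k, α95) reported alongside site means, though those are omitted from this sketch.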
The availability of online bibliographic databases greatly facilitates literature searching in political science. The advantages of searching databases online include combination of concepts, comprehensiveness, multiple-database searching, free-text searching, currency, current awareness services, document delivery service, and convenience.…
Jimenez Infante, Francy M.
The OM43 clade within the family Methylophilaceae of Betaproteobacteria represents a group of methylotrophs playing important roles in the metabolism of C1 compounds in marine environments and other aquatic environments around the globe. Using dilution-to-extinction cultivation techniques, we successfully isolated a novel species of this clade (designated here as MBRS-H7) from the ultra-oligotrophic open ocean waters of the central Red Sea. Phylogenomic analyses indicate that MBRS-H7 is a novel species, which forms a distinct cluster together with isolate KB13 from Hawaii (H-RS cluster) that is separate from that represented by strain HTCC2181 (from the Oregon coast). Phylogenetic analyses using the robust 16S–23S internal transcribed spacer revealed a potential ecotype separation of the marine OM43 clade members, which was further confirmed by metagenomic fragment recruitment analyses that showed trends of higher abundance in low chlorophyll and/or high temperature provinces for the H-RS cluster, but a preference for colder, highly productive waters for the HTCC2181 cluster. This potential environmentally driven niche differentiation is also reflected in the metabolic gene inventories, which in the case of H-RS include those conferring resistance to high levels of UV irradiation, temperature, and salinity. Interestingly, we also found different energy conservation modules between these OM43 subclades, namely the existence of the NADH:quinone oxidoreductase NUO system in the H-RS and the non-homologous NQR system in HTCC2181, which might have implications on their overall energetic yields.
U.S. Environmental Protection Agency — The Comprehensive Environmental Response, Compensation and Liability Information System (CERCLIS) (Superfund) Public Access Database (CPAD) contains a selected set of...
Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A.; Trukhachev, Vladimir I.; Kostyukova, Elena I.; Gerasimov, Alexey N.; Kitas, George D.
Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and d...
The Supply Chain Initiatives Database (SCID) presents innovative approaches to engaging industrial suppliers in efforts to save energy, increase productivity and improve environmental performance. This comprehensive and freely-accessible database was developed by the Institute for Industrial Productivity (IIP). IIP acknowledges Ecofys for their valuable contributions. The database contains case studies searchable according to the types of activities buyers are undertaking to motivate suppliers, target sector, organization leading the initiative, and program or partnership linkages.
Hendrich, Lars; Morinière, Jérôme; Haszprunar, Gerhard; Hebert, Paul D N; Hausmann, Axel; Köhler, Frank; Balke, Michael
Beetles are the most diverse group of animals and are crucial for ecosystem functioning. In many countries, they are well established for environmental impact assessment, but even in the well-studied Central European fauna, species identification can be very difficult. A comprehensive and taxonomically well-curated DNA barcode library could remedy this deficit and could also link hundreds of years of traditional knowledge with next generation sequencing technology. However, such a beetle library is missing to date. This study provides the globally largest DNA barcode reference library for Coleoptera for 15 948 individuals belonging to 3514 well-identified species (53% of the German fauna) with representatives from 97 of 103 families (94%). This study is the first comprehensive regional test of the efficiency of DNA barcoding for beetles with a focus on Germany. Sequences ≥500 bp were recovered from 63% of the specimens analysed (15 948 of 25 294) with short sequences from another 997 specimens. Whereas most specimens (92.2%) could be unambiguously assigned to a single known species by sequence diversity at CO1, 1089 specimens (6.8%) were assigned to more than one Barcode Index Number (BIN), creating 395 BINs which need further study to ascertain whether they represent cryptic species, mitochondrial introgression, or simply regional variation in widespread species. We found 409 specimens (2.6%) that shared a BIN assignment with another species; most of these cases involved a pair of closely allied species, with 43 BINs affected. Most of these taxa were separated by their barcodes even though sequence divergences were low. Only 155 specimens (0.97%) showed identical or overlapping clusters. © 2014 John Wiley & Sons Ltd.
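The BIN comparisons above rest on pairwise CO1 sequence divergence. As a toy illustration (not BOLD's actual RESL algorithm, which is considerably more sophisticated), a query barcode can be assigned to the nearest reference species if the uncorrected p-distance falls below a threshold; the 2% cutoff here is an assumption for illustration only.

```python
def p_distance(seq1, seq2):
    """Uncorrected p-distance: fraction of differing sites between
    two aligned, equal-length sequences."""
    assert len(seq1) == len(seq2), "sequences must be aligned"
    diffs = sum(a != b for a, b in zip(seq1, seq2))
    return diffs / len(seq1)

def assign_nearest(query, references, threshold=0.02):
    """Assign the query to the closest reference species if its divergence
    is at or below the (assumed) threshold; otherwise return None."""
    species = min(references, key=lambda sp: p_distance(query, references[sp]))
    return species if p_distance(query, references[species]) <= threshold else None

# Invented 100-bp toy barcodes, not real CO1 sequences:
refs = {"Carabus auratus": "ACGT" * 25, "Dytiscus marginalis": "TGCA" * 25}
query = "ACGT" * 24 + "ACGA"          # one mismatch in 100 sites = 1% divergence
print(assign_nearest(query, refs))    # → Carabus auratus
```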
Rodgers, Kirk D.
The U.S. Geological Survey, in cooperation with the Reservoir Fisheries Habitat Partnership, combined multiple national databases to create one comprehensive national reservoir database and to calculate new morphological metrics for 3,828 reservoirs. These new metrics include, but are not limited to, shoreline development index, index of basin permanence, development of volume, and other descriptive metrics based on established morphometric formulas. The new database also contains modeled chemical and physical metrics. Because of the nature of the existing databases used to compile the Reservoir Morphology Database and the inherent missing data, some metrics were not populated. One comprehensive database will assist water-resource managers in their understanding of local reservoir morphology and water chemistry characteristics throughout the continental United States.
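Two of the morphometric metrics named above have standard limnological definitions: the shoreline development index compares shoreline length to the circumference of a circle of equal area, and development of volume compares the basin to a cone via the mean-to-maximum depth ratio. A small sketch under those textbook formulas (not the USGS database code):

```python
import math

def shoreline_development_index(shore_len_m, area_m2):
    """SDI = L / (2 * sqrt(pi * A)): ratio of shoreline length to the
    circumference of a circle of equal area; 1.0 for a circular basin."""
    return shore_len_m / (2.0 * math.sqrt(math.pi * area_m2))

def volume_development(mean_depth_m, max_depth_m):
    """Dv = 3 * (mean depth / max depth); 1.0 for a perfectly conical basin."""
    return 3.0 * mean_depth_m / max_depth_m

# A circular reservoir has SDI of 1 by construction:
r = 1000.0
print(shoreline_development_index(2 * math.pi * r, math.pi * r * r))  # ≈ 1.0
```

Irregular, dendritic reservoirs produce SDI values well above 1, which is why the index is useful as a shoreline-habitat descriptor.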
Slobodanka Ključanin; Zdravko Galić
The concept of producing a prototype of interoperable cartographic database is explored in this paper, including the possibilities of integration of different geospatial data into the database management system and their visualization on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relation database, spatial analysis, definition and visualization of the database content in the form of a map on t...
Blaženović, Ivana; Kind, Tobias; Torbašinović, Hrvoje; Obrenović, Slobodan; Mehta, Sajjan S; Tsugawa, Hiroshi; Wermuth, Tobias; Schauer, Nicolas; Jahn, Martina; Biedendieck, Rebekka; Jahn, Dieter; Fiehn, Oliver
In mass spectrometry-based untargeted metabolomics, rarely more than 30% of the compounds are identified. Without the true identity of these molecules it is impossible to draw conclusions about the biological mechanisms, pathway relationships and provenance of compounds. The only way at present to address this discrepancy is to use in silico fragmentation software to identify unknown compounds by comparing and ranking theoretical MS/MS fragmentations from target structures to experimental tandem mass spectra (MS/MS). We compared the performance of four publicly available in silico fragmentation algorithms (MetFragCL, CFM-ID, MAGMa+ and MS-FINDER) that participated in the 2016 CASMI challenge. We found that optimizing the use of metadata, weighting factors and the manner of combining different tools eventually defined the ultimate outcomes of each method. We comprehensively analysed how outcomes of different tools could be combined and reached a final success rate of 93% for the training data, and 87% for the challenge data, using a combination of MAGMa+, CFM-ID and compound importance information along with MS/MS matching. Matching MS/MS spectra against the MS/MS libraries without using any in silico tool yielded 60% correct hits, showing that the use of in silico methods is still important.
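The paper's central point is that how tool outputs are normalized, weighted, and merged determines the final success rate. One generic way to merge rankings (a simplified stand-in, not the authors' exact scheme) is to min-max normalize each tool's candidate scores and sum them with per-tool weights:

```python
def combine_rankings(tool_scores, weights=None):
    """Merge candidate scores from several in silico tools into one ranking.
    tool_scores: {tool: {candidate: score}}, higher score = better match.
    Each tool's scores are min-max normalized before weighting; this is
    one simple combination scheme among many."""
    weights = weights or {tool: 1.0 for tool in tool_scores}
    combined = {}
    for tool, scores in tool_scores.items():
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0                     # avoid division by zero
        for cand, s in scores.items():
            combined[cand] = combined.get(cand, 0.0) + weights[tool] * (s - lo) / span
    return sorted(combined, key=combined.get, reverse=True)

# Invented scores for three candidate structures from two tools:
ranking = combine_rankings({
    "toolA": {"cand1": 0.9, "cand2": 0.5, "cand3": 0.1},
    "toolB": {"cand1": 0.8, "cand2": 0.9, "cand3": 0.2},
})
print(ranking)
```

A candidate ranked highly by both tools rises to the top even if neither tool alone put it first, which is the intuition behind combining MAGMa+ and CFM-ID outputs.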
Jewison, Timothy; Knox, Craig; Neveu, Vanessa; Djoumbou, Yannick; Guo, An Chi; Lee, Jacqueline; Liu, Philip; Mandal, Rupasri; Krishnamurthy, Ram; Sinelnikov, Igor; Wilson, Michael; Wishart, David S.
The Yeast Metabolome Database (YMDB, http://www.ymdb.ca) is a richly annotated ‘metabolomic’ database containing detailed information about the metabolome of Saccharomyces cerevisiae. Modeled closely after the Human Metabolome Database, the YMDB contains >2000 metabolites with links to 995 different genes/proteins, including enzymes and transporters. The information in YMDB has been gathered from hundreds of books, journal articles and electronic databases. In addition to its comprehensive literature-derived data, the YMDB also contains an extensive collection of experimental intracellular and extracellular metabolite concentration data compiled from detailed Mass Spectrometry (MS) and Nuclear Magnetic Resonance (NMR) metabolomic analyses performed in our lab. This is further supplemented with thousands of NMR and MS spectra collected on pure, reference yeast metabolites. Each metabolite entry in the YMDB contains an average of 80 separate data fields including comprehensive compound description, names and synonyms, structural information, physico-chemical data, reference NMR and MS spectra, intracellular/extracellular concentrations, growth conditions and substrates, pathway information, enzyme data, gene/protein sequence data, as well as numerous hyperlinks to images, references and other public databases. Extensive searching, relational querying and data browsing tools are also provided that support text, chemical structure, spectral, molecular weight and gene/protein sequence queries. Because of S. cerevisiae's importance as a model organism for biologists and as a biofactory for industry, we believe this kind of database could have considerable appeal not only to metabolomics researchers, but also to yeast biologists, systems biologists, the industrial fermentation industry, as well as the beer, wine and spirit industry. PMID:22064855
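The molecular-weight query YMDB supports can be pictured with a miniature relational sketch. The table, IDs, and entries below are hypothetical stand-ins (the abstract does not describe YMDB's actual schema); only the query pattern, a BETWEEN range on monoisotopic mass, is the point:

```python
import sqlite3

# Hypothetical miniature schema with three real metabolites as sample rows.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE metabolites (
    id TEXT PRIMARY KEY, name TEXT, formula TEXT, mono_mass REAL)""")
conn.executemany("INSERT INTO metabolites VALUES (?, ?, ?, ?)", [
    ("YMDB00001", "glycerol", "C3H8O3", 92.0473),
    ("YMDB00002", "trehalose", "C12H22O11", 342.1162),
    ("YMDB00003", "ethanol", "C2H6O", 46.0419),
])

def mass_query(conn, target, tol=0.01):
    """Molecular-weight query: return metabolites within +/- tol Da."""
    cur = conn.execute(
        "SELECT id, name FROM metabolites WHERE mono_mass BETWEEN ? AND ?",
        (target - tol, target + tol))
    return cur.fetchall()

print(mass_query(conn, 92.05, tol=0.01))  # matches glycerol
```

A tolerance window rather than exact equality is the standard pattern for mass queries, since measured masses carry instrument error.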
Modak, Nabanita; Spence, Kelley; Sood, Saloni; Rosati, Jacky Ann
Air emissions from the U.S. pulp and paper sector have been federally regulated since 1978; however, regulations are periodically reviewed and revised to improve efficiency and effectiveness of existing emission standards. The Industrial Sectors Integrated Solutions (ISIS) model for the pulp and paper sector is currently under development at the U.S. Environmental Protection Agency (EPA), and can be utilized to facilitate multi-pollutant, sector-based analyses that are performed in conjunction with regulatory development. The model utilizes a multi-sector, multi-product dynamic linear modeling framework that evaluates the economic impact of emission reduction strategies for multiple air pollutants. The ISIS model considers facility-level economic, environmental, and technical parameters, as well as sector-level market data, to estimate the impacts of environmental regulations on the pulp and paper industry. Specifically, the model can be used to estimate U.S. and global market impacts of new or more stringent air regulations, such as impacts on product price, exports and imports, market demands, capital investment, and mill closures. One major challenge to developing a representative model is the need for an extensive amount of data. This article discusses the collection and processing of data for use in the model, as well as the methods used for building the ISIS pulp and paper database that facilitates the required analyses to support the air quality management of the pulp and paper sector.
Database: Principles Programming Performance provides an introduction to the fundamental principles of database systems. This book focuses on database programming and the relationships between principles, programming, and performance.Organized into 10 chapters, this book begins with an overview of database design principles and presents a comprehensive introduction to the concepts used by a DBA. This text then provides grounding in many abstract concepts of the relational model. Other chapters introduce SQL, describing its capabilities and covering the statements and functions of the programmi
Comprehensive Care: Understand the importance of comprehensive MS care. A complex disease requires a comprehensive approach. Today multiple sclerosis (MS) is not a ...
U.S. Department of Health & Human Services — The Cell Centered Database (CCDB) is a web accessible database for high resolution 2D, 3D and 4D data from light and electron microscopy, including correlated imaging.
This paper reports on a computerized relational database which is the basis for a hazardous materials information management system which is comprehensive, effective, flexible and efficient. The system includes product information for Material Safety Data Sheets (MSDSs), labels, shipping, and the environment and is used in Dowell Schlumberger (DS) operations worldwide for a number of programs including planning, training, emergency response and regulatory compliance
Park, Hae-Min; Park, Ju-Hyeong; Kim, Yoon-Woo; Kim, Kyoung-Jin; Jeong, Hee-Jin; Jang, Kyoung-Soon; Kim, Byung-Gee; Kim, Yun-Gon
In recent years, the improvement of mass spectrometry-based glycomics techniques (i.e. highly sensitive, quantitative and high-throughput analytical tools) has enabled us to obtain large datasets of glycans. Here we present a database named the Xeno-glycomics database (XDB) that contains cell- or tissue-specific pig glycomes analyzed with mass spectrometry-based techniques, including comprehensive pig glycan information on chemical structures, mass values, types and relative quantities. It was designed with a user-friendly web-based interface that allows users to query the database according to pig tissue/cell types or glycan masses. This database will contribute by providing qualitative and quantitative information on glycomes characterized from various pig cells/organs in xenotransplantation and might eventually provide new targets in the era of α1,3-galactosyltransferase gene-knockout pigs. The database can be accessed on the web at http://bioinformatics.snu.ac.kr/xdb.
Ding Xiaoming; Li Lin; Zhao Shiping
The Nuclear Power Economic Database (NPEDB), based on ORACLE V6.0, consists of three parts: an economic database of nuclear power stations, an economic database of the nuclear fuel cycle, and an economic database of nuclear power planning and the nuclear environment. The nuclear power station database includes data on general economics, technology, capital cost, benefits, etc. The nuclear fuel cycle database includes data on technology and nuclear fuel prices. The nuclear power planning and environment database includes data on energy history, forecasts, energy balance, electric power, and energy facilities.
1 - Description of program or function: This database is a repository of comprehensive licensing and technical reviews of siting regulatory processes and acceptance criteria for advanced light water reactor (ALWR) nuclear power plants. The program is designed to be used by applicants for an early site permit or a combined construction permit/operating license (10 CFR 52, Subparts A and C) as input for the development of the application. The database is a complete, menu-driven, self-contained package that can search and sort the supplied data by topic, keyword, or other input. The software is designed for operation on IBM-compatible computers with DOS. 2 - Method of solution: The database is an R:BASE Runtime program with all the necessary database files included.
Joseph, Jerrine; Rajendran, Vasanthi; Hassan, Sameer; Kumar, Vanaja
The Mycobacteriophage genome database (MGDB) is an exclusive repository of the 64 completely sequenced mycobacteriophages with annotated information. It is a comprehensive compilation of various gene parameters captured from several databases and pooled together to empower mycobacteriophage researchers. The MGDB (Version 1.0) comprises 6086 genes from 64 mycobacteriophages classified into 72 families based on the ACLAME database. Manual curation was aided by information available from public databases and was enriched further by analysis. Its web interface allows browsing as well as querying of the classification. The main objective is to collect and organize the complexity inherent in mycobacteriophage protein classification in a rational way. The other objective is to allow browsing of existing and new genomes and to describe their functional annotation. The database is available for free at http://mpgdb.ibioinformatics.org/mpgdb.php.
Burnham, Judy F
The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can afford only one, the choice must be based on institutional needs.
Shirts, Brian H; Wood, Joel; Yolken, Robert H; Nimgaonkar, Vishwajit L
Genetic association studies of several candidate cytokine genes have been motivated by evidence of immune dysfunction among patients with schizophrenia. Intriguing but inconsistent associations have been reported with polymorphisms of three positional candidate genes, namely IL1beta, IL1RN, and IL10. We used comprehensive sequencing data from the Seattle SNPs database to select tag SNPs that represent all common polymorphisms in the Caucasian population at these loci. Associations with 28 tag SNPs were evaluated in 478 cases and 501 unscreened control individuals, while accounting for population sub-structure using the genomic control method. The samples were also stratified by gender, diagnostic category, and exposure to infectious agents. Significant association was not detected after correcting for multiple comparisons. However, meta-analysis of our data combined with previously published association studies of rs16944 (IL1beta -511) suggests that the C allele confers modest risk for schizophrenia among individuals reporting Caucasian ancestry, but not Asians (Caucasians, n=819 cases, 1292 controls; p=0.0013, OR=1.24, 95% CI 1.09, 1.41).
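The pooled odds ratio reported for rs16944 comes from combining per-study effect sizes. A standard fixed-effect (inverse-variance) pooling on the log-OR scale, sketched here with invented study values rather than the paper's actual data, recovers each study's standard error from the width of its 95% CI:

```python
import math

def pooled_or(studies):
    """Fixed-effect (inverse-variance) pooling of odds ratios.
    studies: list of (OR, ci_lower, ci_upper) tuples with 95% CIs.
    The log-OR standard error is recovered from the CI width, since a
    95% CI spans 2 * 1.96 standard errors on the log scale."""
    num = den = 0.0
    for or_, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2            # inverse-variance weight
        num += w * math.log(or_)
        den += w
    return math.exp(num / den)

# Two invented studies for illustration:
print(round(pooled_or([(1.30, 1.05, 1.61), (1.18, 1.02, 1.37)]), 2))
```

Larger studies have narrower CIs, hence smaller standard errors and greater weight, which is the essential property of inverse-variance pooling.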
National Oceanic and Atmospheric Administration, Department of Commerce — The Marine Minerals Geochemical Database was created by NGDC as a part of a project to construct a comprehensive computerized bibliography and geochemical database...
Geologic map and map database of northeastern San Francisco Bay region, California, [including] most of Solano County and parts of Napa, Marin, Contra Costa, San Joaquin, Sacramento, Yolo, and Sonoma Counties
Graymer, Russell Walter; Jones, David Lawrence; Brabb, Earl E.
This digital map database, compiled from previously published and unpublished data, and new mapping by the authors, represents the general distribution of bedrock and surficial deposits in the mapped area. Together with the accompanying text file (nesfmf.ps, nesfmf.pdf, nesfmf.txt), it provides current information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:62,500 or smaller.
Fabiano Luiz Erzinger
Full Text Available Background: The creation of an electronic database facilitates the storage of information and streamlines the exchange of data, making the exchange of knowledge easier for future research. Objective: To construct an electronic database containing comprehensive and up-to-date clinical and surgical data on the most common arterial aneurysms, to help advance scientific research. Methods: The most important specialist textbooks and articles found in journals and in internet databases were reviewed in order to define the basic structure of the protocol. Data were computerized using the SINPE© system for integrated electronic protocols and tested in a pilot study. Results: The data entered into the system were first used to create a Master protocol, organized into a structure of top-level directories covering a large proportion of the content on vascular diseases, as follows: patient history; physical examination; supplementary tests and examinations; diagnosis; treatment; and clinical course. By selecting items from the Master protocol, Specific protocols were then created for the 22 arterial sites most often involved by aneurysms. The program provides a method for collecting data on patients, including clinical characteristics (patient history and physical examination), supplementary tests and examinations, treatments received, and follow-up care after treatment. Any information of interest on these patients that is contained in the protocol can then be used to query the database and select data for studies. Conclusions: It proved possible to construct a database of clinical and surgical data on the arterial aneurysms of greatest interest and, by adapting the data to specific software, the database was integrated into the SINPE© system, thereby providing a standardized method for collecting data on these patients and tools for retrieving this information in an organized manner for use in scientific studies.
Biofuel Database (Web, free access) This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.
National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases such...
The operation of the Clinical Radiology Imaging System (CRIS) in Pediatric Radiology at UCLA relies on the orderly flow of text and image data among the three basic subsystems: acquisition, storage, and display. CRIS provides the radiologist, clinician, and technician with data at clinical image workstations by maintaining a comprehensive database. CRIS is made up of subsystems, each composed of one or more programs or tasks which operate in parallel on a VAX-11/750 minicomputer in Pediatric Radiology. Tasks are coordinated through dynamic data structures that include system event flags and disk-resident queues. This report outlines: (1) the CRIS data model, (2) the flow of information among CRIS components, (3) the underlying database structures which support the acquisition, display, and storage of text and image information, and (4) current database statistics.
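The task coordination described, parallel tasks synchronized through event flags and queues, maps naturally onto a producer/consumer pattern. The sketch below is a modern stand-in (Python threads and an in-memory queue, not the original VMS event flags or disk-resident queues), with invented record names:

```python
import queue
import threading

# Acquisition pushes study records onto a queue; a storage task consumes
# them, standing in for writes to the CRIS database.
work = queue.Queue()
stored = []

def storage_task():
    while True:
        record = work.get()
        if record is None:        # sentinel: shut the task down
            work.task_done()
            break
        stored.append(record)     # stand-in for a database write
        work.task_done()

t = threading.Thread(target=storage_task)
t.start()
for rec in ["study-001", "study-002"]:   # hypothetical study identifiers
    work.put(rec)
work.put(None)
t.join()
print(stored)  # → ['study-001', 'study-002']
```

The queue decouples the producing and consuming tasks so each can run at its own pace, which is the same motivation behind the disk-resident queues in the original system.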
The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…
Nikolov, Nikolai Georgiev; Pavlov, Todor; Niemelä, Jay Russell
Computer-based representation of chemicals makes it possible to organize data in chemical databases: collections of chemical structures and associated properties. Databases are widely used wherever efficient processing of chemical information is needed, including search, storage, retrieval, and dissemination. Structure and functionality of chemical databases are considered. The typical kinds of information found in a chemical database are considered: identification, structural, and associated data. Functionality of chemical databases is presented, with examples of search and access types. More details are included about the OASIS database and platform and the Danish (Q)SAR Database online. Various types of chemical database resources are discussed, together with a list of examples.
Mewes, H W; Hani, J; Pfeiffer, F; Frishman, D
The MIPS group [Munich Information Center for Protein Sequences of the German National Center for Environment and Health (GSF)] at the Max-Planck-Institute for Biochemistry, Martinsried near Munich, Germany, is involved in a number of data collection activities, including a comprehensive database of the yeast genome, a database reflecting the progress in sequencing the Arabidopsis thaliana genome, the systematic analysis of other small genomes and the collection of protein sequence data within the framework of the PIR-International Protein Sequence Database (described elsewhere in this volume). Through its WWW server (http://www.mips.biochem.mpg.de ) MIPS provides access to a variety of generic databases, including a database of protein families as well as automatically generated data by the systematic application of sequence analysis algorithms. The yeast genome sequence and its related information was also compiled on CD-ROM to provide dynamic interactive access to the 16 chromosomes of the first eukaryotic genome unraveled. PMID:9399795
Tian, Lixun; Liao, Ningfang; Chai, Ali
This paper introduces the Common Hyperspectral Image Database (CHIDB), built with a demand-oriented database design method, which brings together ground-based spectra, standardized hyperspectral cubes, and spectral analysis to serve a range of applications. The paper presents an integrated approach to retrieving spectral and spatial patterns from remotely sensed imagery using state-of-the-art data mining and advanced database technologies; data mining ideas and functions were incorporated into CHIDB to make it better suited to agricultural, geological, and environmental applications. A broad range of data from multiple regions of the electromagnetic spectrum is supported, including ultraviolet, visible, near-infrared, thermal infrared, and fluorescence. CHIDB is based on the .NET Framework and designed on an MVC architecture comprising five main functional modules: data importer/exporter, image/spectrum viewer, data processor, parameter extractor, and on-line analyzer. The original data are all stored in SQL Server 2008 for efficient search, query, and update, and advanced spectral image processing techniques are used, such as parallel processing in C#. Finally, an application case in agricultural disease detection is presented.
Kannegaard, Pia Nimann; Vinding, Kirsten L; Hare-Bruun, Helle
AIM OF DATABASE: The aim of the National Database of Geriatrics is to monitor the quality of interdisciplinary diagnostics and treatment of patients admitted to a geriatric hospital unit. STUDY POPULATION: The database population consists of patients who were admitted to a geriatric hospital unit. … Geriatric patients cannot be defined by specific diagnoses. A geriatric patient is typically a frail, multimorbid elderly patient with decreasing functional ability and social challenges. The database includes 14,000-15,000 admissions per year, and database completeness has been stable at 90% during the past … percentage of discharges with a rehabilitation plan, and the proportion of cases in which an interdisciplinary conference has taken place. Data are recorded by doctors, nurses, and therapists in a database and linked to the Danish National Patient Register. DESCRIPTIVE DATA: Descriptive patient-related data include …
Welch, M.J.; Welles, B.W.
Accident statistics on all modes of transportation are available as risk assessment analytical tools through several federal agencies. This paper reports on an examination of the accident databases conducted through personal contact with the federal staff responsible for administering the database programs. This activity, sponsored by the Department of Energy through Sandia National Laboratories, is an overview of the national accident data on highway, rail, air, and marine shipping. For each mode, the definition or reporting requirements of an accident are determined and the method of entering the accident data into the database is established. Availability of the database to others, ease of access, costs, and whom to contact were prime questions put to each of the database program managers. Additionally, how each agency uses the accident data was of major interest.
Full Text Available … are in the reverse direction. *1 A comprehensive two-hybrid analysis to explore the yeast protein interactome. *2 The yeast proteome database (YPD) and Caenorhabditis elegans proteome database (WormPD): comprehensive … Nucleic Acids Res. 2000 Jan 1;28(1):73-6. *3 A comprehensive analysis of protein-protein interactions in Saccharomyces cerevisiae.
Boutselakis, H; Dimitropoulos, D; Fillon, J; Golovin, A; Henrick, K; Hussain, A; Ionides, J; John, M; Keller, P A; Krissinel, E; McNeil, P; Naim, A; Newman, R; Oldfield, T; Pineda, J; Rachedi, A; Copeland, J; Sitnov, A; Sobhany, S; Suarez-Uruena, A; Swaminathan, J; Tagari, M; Tate, J; Tromm, S; Velankar, S; Vranken, W
The E-MSD macromolecular structure relational database (http://www.ebi.ac.uk/msd) is designed to be a single access point for protein and nucleic acid structures and related information. The database is derived from Protein Data Bank (PDB) entries. Relational database technologies are used in a comprehensive cleaning procedure to ensure data uniformity across the whole archive. The search database contains an extensive set of derived properties, goodness-of-fit indicators, and links to other EBI databases including InterPro, GO, and SWISS-PROT, together with links to SCOP, CATH, PFAM and PROSITE. A generic search interface is available, coupled with a fast secondary structure domain search tool.
Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D
Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarity with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. Database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find the source selection criteria particularly useful and may apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls.
PMID:27134485
Discussion of dictionaries as databases focuses on the digitizing of The Oxford English dictionary (OED) and the use of Standard Generalized Mark-Up Language (SGML). Topics include the creation of a consortium to digitize the OED, document structure, relational databases, text forms, sequence, and discourse. (LRW)
Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working, as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even for geographically distributed clients, if data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and …
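The load-distribution idea can be sketched as a client that rotates reads across replicas and fails over past unavailable ones (a minimal, generic illustration; `ReplicatedReader` and its `fetch` callback are hypothetical, not part of any particular replication system):

```python
import itertools

class ReplicatedReader:
    """Round-robin reads across database replicas, skipping failed ones."""

    def __init__(self, replicas):
        self.replicas = list(replicas)  # e.g. connection handles or URLs
        self._cycle = itertools.cycle(range(len(self.replicas)))

    def read(self, query, fetch):
        """Try replicas in round-robin order; `fetch(replica, query)` is a
        caller-supplied function that may raise ConnectionError on failure."""
        last_error = None
        for _ in range(len(self.replicas)):
            replica = self.replicas[next(self._cycle)]
            try:
                return fetch(replica, query)
            except ConnectionError as exc:  # failed replica: try the next one
                last_error = exc
        raise last_error  # every replica is down
```

The sketch shows both benefits at once: successive reads land on different replicas (load distribution), and a dead replica is silently skipped (fault tolerance). Keeping the write path consistent across replicas is the hard part the abstract alludes to, and is deliberately not shown here.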
National Oceanic and Atmospheric Administration, Department of Commerce — The Snowstorm Database is a collection of over 500 snowstorms dating back to 1900 and updated operationally. Only storms having large areas of heavy snowfall (10-20...
National Oceanic and Atmospheric Administration, Department of Commerce — The dealer reporting databases contain the primary data reported by federally permitted seafood dealers in the northeast. Electronic reporting was implemented May 1,...
Kristensen, Helen Grundtvig; Stjernø, Henrik
Article about the national database for nursing research established at the Danish Institute for Health and Nursing Research (Dansk Institut for Sundheds- og Sygeplejeforskning). The aim of the database is to gather knowledge about research and development activities within nursing.
Demirguc-Kunt, Asli; Klapper, Leora; Singer, Dorothe; Ansar, Saniya; Hess, Jake
The Global Findex database is the world's most comprehensive set of data on how people make payments, save money, borrow and manage risk. Launched in 2011, it includes more than 100 financial inclusion indicators in a format allowing users to compare access to financial services among adults worldwide -- including by gender, age and household income. This third edition of the database was compiled in 2017 using nationally representative surveys in more than 140 developing and high-income...
The Anaerobic Digester Database provides basic information about anaerobic digesters on livestock farms in the United States, organized in Excel spreadsheets. It includes projects that are under construction, operating, or shut down.
National Oceanic and Atmospheric Administration, Department of Commerce — The Tethys database houses the metadata associated with the acoustic data collection efforts by the Passive Acoustic Group. These metadata include dates, locations...
General Services Administration — The Fine Arts Database records information on federally owned art in the control of the GSA; this includes the location, current condition and information on artists.
Zhang, Qingzhou; Yang, Bo; Chen, Xujiao; Xu, Jing; Mei, Changlin; Mao, Zhiguo
We present a bioinformatics database named Renal Gene Expression Database (RGED), which contains comprehensive gene expression data sets from renal disease research. The web-based interface of RGED allows users to query the gene expression profiles in various kidney-related samples, including renal cell lines, human kidney tissues and murine model kidneys. Researchers can explore profiles of particular genes, examine the relationships between genes of interest, and identify biomarkers or even drug targets in kidney diseases. The aim of this work is to provide a user-friendly utility for the renal disease research community to query expression profiles of genes of interest without the requirement of advanced computational skills. Availability and implementation: The website is implemented in PHP, R, MySQL and Nginx and is freely available at http://rged.wall-eva.net. Database URL: http://rged.wall-eva.net PMID:25252782
The National Transportation Atlas Databases 2012 (NTAD2012) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...
The National Transportation Atlas Databases 2011 (NTAD2011) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...
The National Transportation Atlas Databases 2009 (NTAD2009) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...
The National Transportation Atlas Databases 2010 (NTAD2010) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...
Full Text Available Abstract Background Transcription factors play the crucial role of regulating gene expression and influence almost all biological processes. Systematically identifying and annotating transcription factors can greatly aid further understanding of their functions and mechanisms. In this article, we present SoyDB, a user-friendly database containing comprehensive knowledge of soybean transcription factors. Description The soybean genome was recently sequenced by the Department of Energy-Joint Genome Institute (DOE-JGI) and is publicly available. Mining of this sequence identified 5,671 soybean genes as putative transcription factors. These genes were comprehensively annotated as an aid to the soybean research community. We developed SoyDB - a knowledge database for all the transcription factors in the soybean genome. The database contains protein sequences, predicted tertiary structures, putative DNA binding sites, domains, homologous templates in the Protein Data Bank (PDB), protein family classifications, multiple sequence alignments, consensus protein sequence motifs, a web logo for each family, and web links to the soybean transcription factor database PlantTFDB, known EST sequences, and other general protein databases including Swiss-Prot, Gene Ontology, KEGG, EMBL, TAIR, InterPro, SMART, PROSITE, NCBI, and Pfam. The database can be accessed via an interactive and convenient web server, which supports full-text search, PSI-BLAST sequence search, database browsing by protein family, and automatic classification of a new protein sequence into one of 64 annotated transcription factor families by hidden Markov models. Conclusions A comprehensive soybean transcription factor database was constructed and made publicly accessible at http://casp.rnet.missouri.edu/soydb/.
Vanschoren, Joaquin; Blockeel, Hendrik
Next to running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queriable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.
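A toy relational schema makes the idea concrete: once runs are stored centrally, meta-level questions become single queries (a sketch using SQLite; the `experiment` table and its columns are invented for illustration and do not reflect the actual experiment database described in the chapter):

```python
import sqlite3

# In-memory "experiment database": one row per algorithm run.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE experiment (
    algorithm TEXT, dataset TEXT, accuracy REAL)""")
con.executemany(
    "INSERT INTO experiment VALUES (?, ?, ?)",
    [("C4.5", "iris", 0.94), ("C4.5", "wine", 0.90),
     ("SVM", "iris", 0.96), ("SVM", "wine", 0.97)])

# Meta-level question: which algorithm performs best on average
# across all stored runs?
rows = con.execute("""
    SELECT algorithm, AVG(accuracy) AS mean_acc
    FROM experiment
    GROUP BY algorithm
    ORDER BY mean_acc DESC""").fetchall()
```

A real experiment database would additionally record parameter settings, dataset properties, and hardware details, precisely so that queries like this can control for them.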
Dong, Wei; Yang, Litao; Shen, Kailin; Kim, Banghyun; Kleter, Gijs A; Marvin, Hans J P; Guo, Rong; Liang, Wanqi; Zhang, Dabing
Since more than one hundred genetically modified organism (GMO) events have been developed and approved for commercialization worldwide, GMO analysis methods are essential for the enforcement of GMO labelling regulations. Protein- and nucleic acid-based detection techniques have been developed and utilized for GMO identification and quantification. However, information supporting the harmonization and standardization of GMO analysis methods at the global level is needed. The GMO Detection method Database (GMDD) has collected almost all previously developed and reported GMO detection methods, grouped by strategy (screen-, gene-, construct-, and event-specific), and provides a user-friendly search service for detection methods by GMO event name, exogenous gene, protein information, etc. In this database, users can obtain the sequences of exogenous integrations, which will facilitate the design of PCR primers and probes. Information on endogenous genes, certified reference materials, reference molecules, and the validation status of developed methods is also included. Furthermore, registered users can submit new detection methods and sequences, and newly submitted information will be released soon after being checked. GMDD contains comprehensive information on GMO detection methods. The database will make GMO analysis much easier.
The aim of this thesis is to provide a comprehensive overview of database system security. The reader is introduced to the basics of information security and its development. The following chapter defines the concept of database system security using the ISO/IEC 27000 standard. The findings from this chapter form a comprehensive list of requirements on database security. One chapter also deals with the legal aspects of this domain. The second part of the thesis offers a comparison of four object-relational database s…
This second edition of the Directory of IAEA Databases has been prepared within the Division of Scientific and Technical Information (NESI). Its main objective is to describe the computerized information sources available to staff members. The directory contains all databases produced at the IAEA, including databases stored on the mainframe, LANs and PCs. All IAEA Division Directors have been requested to register the existence of their databases with NESI. For the second edition, database owners were requested to review the existing entries for their databases and answer four additional questions. The four additional questions concerned the type of database (e.g. bibliographic, text, statistical, etc.), the category of database (e.g. administrative, nuclear data, etc.), the available documentation, and the type of media used for distribution. In the individual entries on the following pages the answers to the first two questions (type and category) are always listed, but the answers to the second two questions (documentation and media) are listed only when the information has been made available
This paper describes a project recently completed for EPRI by Impell. The purpose of the project was to develop a reference database of fire tests performed on non-typical fire rated assemblies. The database is designed for use by utility fire protection engineers to locate test reports for power plant fire rated assemblies. As utilities prepare to respond to Information Notice 88-04, the database will identify utilities, vendors or manufacturers who have specific fire test data. The database contains fire test report summaries for 729 tested configurations. For each summary, a contact is identified from whom a copy of the complete fire test report can be obtained. Five types of configurations are included: doors, dampers, seals, wraps and walls. The database is computerized. One version for IBM; one for Mac. Each database is accessed through user-friendly software which allows adding, deleting, browsing, etc. through the database. There are five major database files. One each for the five types of tested configurations. The contents of each provides significant information regarding the test method and the physical attributes of the tested configuration. 3 figs
Pritychenko, B.; Betak, E.; Kellett, M.A.; Singh, B.; Totans, J.
The Nuclear Science References (NSR) database together with its associated Web interface is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 200,000 articles since the beginning of nuclear science. The weekly updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center (http://www.nndc.bnl.gov/nsr) and the International Atomic Energy Agency (http://www-nds.iaea.org/nsr).
A database for radiological characterization of incoming Material Test Reactor (MTR) fuel has been developed for application to the Receiving Basin for Offsite Fuels (RBOF) and L-Basin spent fuel storage facilities at the Savannah River Site (SRS). This database provides a quick quantitative check to determine whether SRS-bound spent fuel is radiologically bounded by the Reference Fuel Assembly used in the L-Basin and RBOF authorization bases. The developed database considers pertinent characteristics of domestic and foreign research reactor fuel, including exposure, fuel enrichment, irradiation time, cooling time, and fuel-to-moderator ratio. The supplied tables replace the time-consuming studies associated with authorization of SRS-bound spent fuel with simple hand calculations. Additionally, the comprehensive database provides the means to overcome resource limitations, since a series of simple yet conservative hand calculations can now be performed in a timely manner, replacing computational and technical staff requirements.
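The bounding check itself reduces to comparing each pertinent characteristic against the Reference Fuel Assembly's envelope (a hypothetical sketch; the parameter names and limits below are illustrative and are not taken from the SRS authorization basis):

```python
# Hypothetical bounding limits for a reference assembly, expressed as
# (min, max) pairs; None means that side of the envelope is unbounded.
REFERENCE_BOUNDS = {
    "exposure_mwd": (None, 200_000),
    "enrichment_pct": (None, 93.5),
    "cooling_time_days": (180, None),  # longer cooling is conservative
}

def is_bounded(fuel):
    """True if every characteristic falls within the reference envelope."""
    for key, (lo, hi) in REFERENCE_BOUNDS.items():
        value = fuel[key]
        if lo is not None and value < lo:
            return False
        if hi is not None and value > hi:
            return False
    return True
```

The database described in the abstract tabulates exactly this kind of envelope so that a fuel custodian can perform the check by hand rather than rerunning depletion calculations.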
Seibæk, Lene; Jakobsen, Dorthe Hjort; Høgdall, Claus
… Database (DGCD) established a nursing database in 2011. The aim of DGCD Nursing is to monitor the quality of preoperative and postoperative care and to generate data for research. MATERIAL AND METHODS: In accordance with the current data protection legislation, real-time data are entered by clinical nurses … at all national cancer centers. The DGCD Nursing includes data on preoperative and postoperative care, and nurses are independently represented in the steering committee. The aim of the present article is to present the first results from DGCD Nursing and the national care improvements that have followed … pain score, vital functions, and psychosocial support. CONCLUSIONS: At the national level, DGCD offers a comprehensive overview of the total patient pathway within gynecological cancer surgery. The DGCD Nursing has added to the quality and implementation of evidence-based preoperative and postoperative …
A comprehensive two-dimensional gel protein database of noncultured unfractionated normal human epidermal keratinocytes: towards an integrated approach to the study of cell proliferation, differentiation and skin diseases
Celis, J E; Madsen, Peder; Rasmussen, H H
A two-dimensional (2-D) gel database of cellular proteins from noncultured, unfractionated normal human epidermal keratinocytes has been established. A total of 2651 [35S]methionine-labeled cellular proteins (1868 isoelectric focusing, 783 nonequilibrium pH gradient electrophoresis) were resolved...
Akishina, E. P.; Aleksandrov, E. I.; Aleksandrov, I. N.; Filozova, I. A.; Ivanov, V. V.; Zrelov, P. V. [Lab. of Information Technologies, JINR, Dubna (Russian Federation); Friese, V.; Mueller, W. [GSI, Darmstadt (Germany)
We consider a concept of databases for the CBM experiment. For this purpose, an analysis of the databases for large experiments at the LHC at CERN has been performed. Special features of various DBMSs utilized in physics experiments, including relational and object-oriented DBMSs as the most applicable ones for the tasks of these experiments, were analyzed. A set of databases for the CBM experiment, DBMSs for their development, as well as use cases for the considered databases are suggested.
Kansas Data Access and Support Center — The Rail Network is a comprehensive database of the nation's railway system at the 1:100,000 scale or better. The data set covers all 50 States plus the District of...
Discusses new pricing models for some online services and considers the possibilities for the traditional online database market. Topics include multimedia music databases, including copyright implications; other retail-oriented databases; and paying for free databases with advertising. (LRW)
Kisara, Katsuto; Konno, Tomomi; Niino, Masayuki
The Functionally Graded Materials Database (hereinafter referred to as the FGMs Database) was opened to the public via the Internet in October 2002, and since then it has been managed by the Japan Aerospace Exploration Agency (JAXA). As of October 2006, the database includes 1,703 research information entries, with data on 2,429 researchers, 509 institutions, and so on. Reading materials such as "Applicability of FGMs Technology to Space Plane" and "FGMs Application to Space Solar Power System (SSPS)" were prepared in FY 2004 and 2005, respectively. The English version of "FGMs Application to Space Solar Power System (SSPS)" is now under preparation. The present paper explains the FGMs Database, describing the research information data, the sitemap, and how to use it. From the access analysis, user access results and users' interests are discussed.
Li, Qingyan; Lian, Shuabin; Dai, Zhiming; Xiang, Qian; Dai, Xianhua
Emilian M. DOBRESCU
Full Text Available A national database (NDB) or an international one (abbreviated IDB, also often called a "data bank") represents a method of storing information and data on an external storage device, with the possibility of easy extension and a quick way to find this information. Therefore, by IDB we understand not only a bibliometric or bibliographic index, which is a collection of references and normally represents the "soft" side, but also the respective IDB's "hard" side, which is the support and storage technology. Usually a database - a very comprehensive notion in computer science - is a bibliographic index compiled with a specific purpose, objectives, and means. In reality, national and international databases are operated through management systems, usually electronic and informational, based on advanced manipulation technologies in the virtual space. Online encyclopedias can also be considered important international databases (IDBs). WorldCat, for example, is a world catalogue that includes the identification data for the books held by circa 71,000 libraries in 112 countries, data classified through the Online Computer Library Center (OCLC) with the participation of the libraries in the respective countries, especially the national libraries.
... DEPARTMENT OF COMMERCE Patent and Trademark Office Native American Tribal Insignia Database ACTION... comprehensive database containing the official insignia of all federally- and State- recognized Native American... to create this database. The USPTO database of official tribal insignias assists trademark attorneys...
Dudek, Christian-Alexander; Hartlich, Juliane; Brötje, David; Jahn, Dieter
Abstract Bacteria adapt to changes in their environment via differential gene expression mediated by DNA binding transcriptional regulators. The PRODORIC2 database hosts one of the largest collections of DNA binding sites for prokaryotic transcription factors. It is the result of the thoroughly redesigned PRODORIC database. PRODORIC2 is more intuitive and user-friendly. Besides significant technical improvements, the new update offers more than 1000 new transcription factor binding sites and 110 new position weight matrices for genome-wide pattern searches with the Virtual Footprint tool. Moreover, binding sites deduced from high-throughput experiments were included. Data for 6 new bacterial species including bacteria of the Rhodobacteraceae family were added. Finally, a comprehensive collection of sigma- and transcription factor data for the nosocomial pathogen Clostridium difficile is now part of the database. PRODORIC2 is publicly available at http://www.prodoric2.de. PMID:29136200
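A genome-wide pattern search with a position weight matrix, as performed by tools such as Virtual Footprint, amounts to sliding a log-odds matrix along a sequence and reporting windows that score above a threshold (a minimal sketch; the matrix values below are invented for illustration and are not PRODORIC2 data):

```python
# Toy position weight matrix for a 4-base motif: one column per motif
# position, log-odds score per nucleotide (invented values).
PWM = [
    {"A": 1.2, "C": -1.0, "G": -1.0, "T": -0.5},
    {"A": -1.0, "C": 1.1, "G": -0.8, "T": -1.0},
    {"A": -0.9, "C": -1.0, "G": 1.3, "T": -1.0},
    {"A": 1.0, "C": -1.0, "G": -0.7, "T": -0.9},
]

def scan(sequence, pwm, threshold):
    """Return (position, score) for every window scoring >= threshold."""
    width = len(pwm)
    hits = []
    for i in range(len(sequence) - width + 1):
        window = sequence[i:i + width]
        score = sum(col[base] for col, base in zip(pwm, window))
        if score >= threshold:
            hits.append((i, score))
    return hits
```

Real matrices are derived from collections of experimentally verified binding sites like those curated in PRODORIC2, and the threshold trades sensitivity against false positives.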
U.S. Environmental Protection Agency — Contaminants at Comprehensive Environmental Response, Compensation and Liability Information System (CERCLIS) (Superfund) Sites - The CERCLIS Public Access Database...
U.S. Environmental Protection Agency — Non-NPL Sites - The Comprehensive Environmental Response, Compensation and Liability Information System (CERCLIS) (Superfund) Public Access Database contains a...
Jones, EdD, Darolyn "Lyn"
Reading comprehension gets easier as students learn what kind of reader they are, discover how to keep facts in their head, and much more. Bonus Online Component: includes additional games, including Beat the Clock, a line match game, and a word scramble.
deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher
This software provides storage, retrieval, and analysis functionality for managing satellite altimetry data. It improves the efficiency and analysis capabilities of existing database software with improved flexibility and documentation. It offers flexibility in the type of data that can be stored. There is efficient retrieval either across the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as Topex and Jason. It was, however, designed to work with a wide variety of satellite measurement data (e.g., the Gravity Recovery And Climate Experiment, GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.
Full Text Available Data warehouse technology includes a set of concepts and methods that offer users useful information for decision making. The necessity to build a data warehouse arises from the necessity to improve the quality of information in the organization. The data, proceeding from different sources and having a variety of forms - both structured and unstructured - are filtered according to business rules and integrated into a single large data collection. Using informatics solutions, managers have understood that data stored in operational systems - including databases - are an informational gold mine that must be exploited. Data warehouses have been developed to answer the increasing demands for complex analysis, which could not be properly met with operational databases. The present paper emphasizes some of the criteria that application developers can use in order to choose between a database solution and a data warehouse solution.
Trager, E. A.
The purpose of this paper is to describe several aspects of a Human Performance Event Database (HPED) that is being developed by the Nuclear Regulatory Commission. These include the background, the database structure and the basis for that structure, the process for coding and entering event records, the results of preliminary analyses of information in the database, and plans for the future. In 1992, the Office for Analysis and Evaluation of Operational Data (AEOD) within the NRC decided to develop a database for information on human performance during operating events. The database was needed to classify and categorize the information and to help feed back operating experience to licensees and others. An NRC interoffice working group prepared a list of human performance information that should be reported for events; the list was based on the Human Performance Investigation Process (HPIP) that had been developed by the NRC as an aid in investigating events. The structure of the HPED was based on that list. The HPED currently includes data on events described in augmented inspection team (AIT) and incident investigation team (IIT) reports from 1990 through 1996, AEOD human performance studies from 1990 through 1993, recent NRR special team inspections, and licensee event reports (LERs) that were prepared for the events. (author)
Raghava Gajendra PS
Full Text Available Abstract Background Oxygenases belong to the oxidoreductive group of enzymes (E.C. Class 1), which oxidize substrates by transferring oxygen from molecular oxygen (O2) and utilize FAD/NADH/NADPH as the co-substrate. Oxygenases can further be grouped into two categories, i.e. monooxygenases and dioxygenases, on the basis of the number of oxygen atoms used for oxidation. They play a key role in the metabolism of organic compounds by increasing their reactivity or water solubility or by bringing about cleavage of the aromatic ring. Findings We compiled a database of biodegradative oxygenases (OxDBase) which provides a compilation of oxygenase data sourced from the primary literature in the form of a web-accessible database. There are two separate search engines, one each for the monooxygenase and dioxygenase sections of the database. Each enzyme entry contains its common name and synonyms, the reaction in which the enzyme is involved, family and subfamily, structure and gene links, and literature citations. The entries are also linked to several external databases, including BRENDA, KEGG, ENZYME and UM-BBD, providing wide background information. At present the database contains information on over 235 oxygenases, including both dioxygenases and monooxygenases. This database is freely available online at http://www.imtech.res.in/raghava/oxdbase/. Conclusion OxDBase is the first database dedicated only to oxygenases, and it provides comprehensive information about them. Given the importance of oxygenases in the chemical synthesis of drug intermediates and the oxidation of xenobiotic compounds, OxDBase would be a very useful tool in the field of synthetic chemistry as well as bioremediation.
Zhang, Qingzhou; Yang, Bo; Chen, Xujiao; Xu, Jing; Mei, Changlin; Mao, Zhiguo
We present a bioinformatics database named Renal Gene Expression Database (RGED), which contains comprehensive gene expression data sets from renal disease research. The web-based interface of RGED allows users to query gene expression profiles in various kidney-related samples, including renal cell lines, human kidney tissues and murine model kidneys. Researchers can explore the profiles of particular genes, examine the relationships between genes of interest and identify biomarkers or even drug targets in kidney diseases. The aim of this work is to provide a user-friendly utility for the renal disease research community to query expression profiles of genes of interest without requiring advanced computational skills. The website is implemented in PHP, R, MySQL and Nginx and is freely available at http://rged.wall-eva.net. © The Author(s) 2014. Published by Oxford University Press.
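The kind of query such an interface answers - expression values for one gene across kidney-related samples - can be sketched as plain SQL. The table layout, column names, and sample values below are assumptions for illustration, and sqlite3 stands in for the MySQL backend the site actually uses.

```python
import sqlite3

# Hypothetical sketch of a gene-expression lookup: one gene, all
# kidney-related samples, highest expression first. Schema invented.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE expression (
                 gene TEXT, sample TEXT, tissue TEXT, value REAL)""")
con.executemany("INSERT INTO expression VALUES (?, ?, ?, ?)", [
    ("PKD1",  "GSM001", "human kidney tissue", 8.2),
    ("PKD1",  "GSM002", "renal cell line",     6.9),
    ("NPHS2", "GSM001", "human kidney tissue", 4.1),
])
rows = con.execute(
    "SELECT sample, tissue, value FROM expression "
    "WHERE gene = ? ORDER BY value DESC", ("PKD1",)).fetchall()
# rows -> [('GSM001', 'human kidney tissue', 8.2),
#          ('GSM002', 'renal cell line', 6.9)]
```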
Narui, Shigeko; Habara, Tadashi; Izawa, Michiyo; Naramoto, Miyoko; Kajiro, Tadashi
To clarify how the nuclear fusion area of the INIS database compares with INSPEC, a survey was made of the overlapping literature included in both databases and of the literature included only in INSPEC. All 5,774 items in categories a50.00 and a28.50R of INSPEC in 1980 were checked to determine whether each was also included in fusion category A14 of INIS during the four years 1979 to 1982. The ratios of literature also included in INIS were 52% for journal articles and 84% for technical reports. The ratio for journals is considered in connection with differences in scope and coverage, as well as in the input systems of the two databases. High comprehensiveness for technical reports is achievable in INIS. A comparison of the languages of the literature included in both databases, and of publication time lags, is briefly described. (author)
Fang, Jianwen; Dong, Yinghua; Salamat-Miller, Nazila; Middaugh, C Russell
The interactions between polyanions (PAs) and polyanion-binding proteins (PABPs) have been found to play significant roles in many essential biological processes including intracellular organization, transport and protein folding. Furthermore, many neurodegenerative disease-related proteins are PABPs. Thus, a better understanding of PA/PABP interactions may not only enhance our understanding of biological systems but also provide new clues to these deadly diseases. The literature in this field is widely scattered, suggesting the need for a comprehensive and searchable database of PABPs. The DB-PABP is a comprehensive, manually curated and searchable database of experimentally characterized PABPs. It is freely available and can be accessed online at http://pabp.bcf.ku.edu/DB_PABP/. The DB-PABP was implemented as a MySQL relational database. An interactive web interface was created using Java Server Pages (JSP). The search page of the database is organized into a main search form and a section for utilities. The main search form enables custom searches via four menus: protein names, polyanion names, the source species of the proteins and the methods used to discover the interactions. Available utilities include a commonality matrix, a function for listing PABPs by the number of interacting polyanions and a string search for author surnames. The DB-PABP is maintained at the University of Kansas. We encourage users to provide feedback and submit new data and references.
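The four-menu search described above amounts to combining whichever filters the user sets into one WHERE clause. The sketch below shows that pattern with an invented schema and invented example rows, using sqlite3 in place of the MySQL/JSP stack the site is built on.

```python
import sqlite3

# Sketch of a four-menu search (protein, polyanion, species, method)
# combined into one parameterized query. Schema and rows are invented.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE pabp (
                 protein TEXT, polyanion TEXT, species TEXT, method TEXT)""")
con.executemany("INSERT INTO pabp VALUES (?, ?, ?, ?)", [
    ("tau",       "heparin", "Homo sapiens", "SPR"),
    ("prion PrP", "DNA",     "Mus musculus", "EMSA"),
])

def search(protein=None, polyanion=None, species=None, method=None):
    """Combine whichever menus the user set into one WHERE clause."""
    clauses, params = [], []
    for col, val in [("protein", protein), ("polyanion", polyanion),
                     ("species", species), ("method", method)]:
        if val is not None:
            clauses.append(f"{col} = ?")   # column names are fixed, not user input
            params.append(val)
    sql = "SELECT protein, polyanion FROM pabp"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return con.execute(sql, params).fetchall()

hits = search(species="Homo sapiens")   # -> [('tau', 'heparin')]
```

Leaving every menu unset degenerates to a browse of the whole table, which is how such interfaces typically unify "browse" and "search".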
Daugaard, Gedske; Kier, Maria Gry Gundgaard; Bandak, Mikkel
AIM: The nationwide Danish Testicular Cancer database consists of a retrospective research database (DaTeCa database) and a prospective clinical database (Danish Multidisciplinary Cancer Group [DMCG] DaTeCa database). The aim is to improve the quality of care for patients with testicular cancer (TC) in Denmark, that is, by identifying risk factors for relapse, toxicity related to treatment, and focusing on late effects. STUDY POPULATION: All Danish male patients with a histologically verified germ cell cancer diagnosis in the Danish Pathology Registry are included in the DaTeCa databases. Data collection has been performed from 1984 to 2007 and from 2013 onward, respectively. MAIN VARIABLES AND DESCRIPTIVE DATA: The retrospective DaTeCa database contains detailed information with more than 300 variables related to histology, stage, treatment, relapses, pathology, tumor markers, kidney function …
Full Text Available Imago Mundi – The International Journal for the History of Cartography is dedicated exclusively to the research of early maps in all of their aspects. It publishes papers related to the history and interpretation of those maps and their production in any part of the world.
The International Nuclear Information System (INIS) of the International Atomic Energy Agency offers a possible way to promote Uruguayan geological documents and foreign authors who publish their research in Uruguay. The article presents not only the history of INIS and its integration, products, services and operation, but also a safe tool for the preservation of scientific and technological knowledge. (author)
Li, Shu-Fen; Zhang, Guo-Jun; Zhang, Xue-Jin; Yuan, Jin-Hong; Deng, Chuan-Liang; Gu, Lian-Feng; Gao, Wu-Jun
Dioecious plants usually harbor 'young' sex chromosomes, providing an opportunity to study the early stages of sex chromosome evolution. Transposable elements (TEs) are mobile DNA elements frequently found in plants and are suggested to play important roles in plant sex chromosome evolution. The genomes of several dioecious plants have been sequenced, offering an opportunity to annotate and mine the TE data. However, comprehensive and unified annotation of TEs in these dioecious plants is still lacking. In this study, we constructed a dioecious plant transposable element database (DPTEdb). DPTEdb is a specific, comprehensive and unified relational database and web interface. We used a combination of de novo, structure-based and homology-based approaches to identify TEs from the genome assemblies of previously published data, as well as our own. The database currently integrates eight dioecious plant species and a total of 31 340 TEs along with classification information. DPTEdb provides user-friendly web interfaces to browse, search and download the TE sequences in the database. Users can also use tools, including BLAST, GetORF, HMMER, Cut sequence and JBrowse, to analyze TE data. Given the role of TEs in plant sex chromosome evolution, the database will contribute to the investigation of TEs in structural, functional and evolutionary dynamics of the genome of dioecious plants. In addition, the database will supplement the research of sex diversification and sex chromosome evolution of dioecious plants.Database URL: http://genedenovoweb.ticp.net:81/DPTEdb/index.php. © The Author(s) 2016. Published by Oxford University Press.
Cullings, Harry M.; Fujita, Shoichiro; Preston, Dale L.; Grant, Eric J.; Shizuma, Kiyoshi; Hoshi, Masaharu; Maruyama, Takashi; Lowder, Wayne M.
The Radiation Effects Research Foundation maintains a database containing detailed information on every known measurement of environmental materials in the cities of Hiroshima and Nagasaki for gamma-ray thermoluminescence or neutron activation produced by incident radiation from the atomic bomb detonations. The intent was to create a single information resource that would consistently document, as completely as possible in each case, a standard array of data for every known measurement. This database provides a uniquely comprehensive and carefully designed reference for the dosimetry reassessment. (J.P.N.)
Fahad Gilani, Syed; Reid, Jon; Raghuram, Ranga; Huddleston, James; Hammer Pedersen, Jacob
This book is for every C# programmer. It assumes no prior database experience and teaches through hands-on examples how to create and use relational databases with the standard database language SQL and how to access them with C#.Assuming only basic knowledge of C# 3.0, Beginning C# 3.0 Databases teaches all the fundamentals of database technology and database programming readers need to quickly become highly proficient database users and application developers. A comprehensive tutorial on both SQL Server 2005 and ADO.NET 3.0, this book explains and demonstrates how to create database objects
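The basic workflow such a book teaches - create a database object, insert rows with parameterized SQL, and read them back - is language-independent. The book itself works in C# with SQL Server and ADO.NET; the sketch below mirrors the same steps in Python's built-in sqlite3 module purely for illustration.

```python
import sqlite3

# Create-insert-query workflow, sketched with sqlite3 standing in for
# the book's SQL Server / ADO.NET examples. Table name is invented.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
# Parameterized insert - the SQL-injection-safe pattern the book teaches.
con.execute("INSERT INTO customers (name) VALUES (?)", ("Contoso",))
con.commit()
name = con.execute("SELECT name FROM customers WHERE id = 1").fetchone()[0]
```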
Jia, Zirui; Xiao, Yao; Ma, Wenjun; Wang, Junhui
Abstract Although transposable elements (TEs) play significant roles in the structural, functional and evolutionary dynamics of the genomes of salicaceous plants, the accurate identification, definition and classification of TEs are still inadequate. In this study, we identified 18 393 TEs from Populus trichocarpa, Populus euphratica and Salix suchowensis using a combination of signature-based, similarity-based and de novo methods, and annotated them into 1621 families. A comprehensive and user-friendly web-based database, SPTEdb, was constructed and made available to researchers. SPTEdb enables users to browse, retrieve and download TE sequences from the database. Meanwhile, several analysis tools, including BLAST, HMMER, GetORF and Cut sequence, were also integrated into SPTEdb to help users mine the TE data easily and effectively. In summary, SPTEdb will facilitate the study of TE biology and functional genomics in salicaceous plants. Database URL: http://genedenovoweb.ticp.net:81/SPTEdb/index.php PMID:29688371
Chapman, James B.; Kapp, Paul
A database containing previously published geochronologic, geochemical, and isotopic data on Mesozoic to Quaternary igneous rocks in the Himalayan-Tibetan orogenic system is presented. The database is intended to serve as a repository for new and existing igneous rock data and is publicly accessible through a web-based platform that includes an interactive map and a data table interface with search, filtering, and download options. To illustrate the utility of the database, the age, location, and εHf(t) composition of magmatism from the central Gangdese batholith in the southern Lhasa terrane are compared. The data identify three high-flux events, which peak at 93, 50, and 15 Ma. They are characterized by inboard arc migration and a temporal and spatial shift to more evolved isotopic compositions.
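Identifying flux peaks from a table of ages, as described above, is essentially a histogram problem: bin the ages and look for bins that stand out. The sketch below shows the idea on invented ages clustered near the paper's reported peaks; the bin width and threshold are arbitrary choices for the example, not the authors' method.

```python
# Crude peak-finding over a list of igneous ages (Ma). The ages are
# invented to cluster near the reported 93, 50 and 15 Ma peaks; the
# 10 Myr bin width and count threshold are arbitrary for illustration.
ages_ma = [94, 93, 93, 92, 70, 51, 50, 50, 49, 30, 16, 15, 15, 14]

def histogram(ages, width=10):
    """Count ages per fixed-width bin, keyed by the bin's lower edge."""
    counts = {}
    for a in ages:
        b = (a // width) * width        # e.g. 93 -> bin 90 (90-99 Ma)
        counts[b] = counts.get(b, 0) + 1
    return counts

counts = histogram(ages_ma)
peaks = sorted(b for b, n in counts.items() if n >= 3)
# peaks -> [10, 50, 90]: the bins holding the ~15, ~50 and ~93 Ma clusters
```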
Jørgensen, Peter Holmberg; Lausten, Gunnar Schwarz; Pedersen, Alma B
AIM: The aim of the database is to gather information about sarcomas treated in Denmark in order to continuously monitor and improve the quality of sarcoma treatment in a local, a national, and an international perspective. STUDY POPULATION: Patients in Denmark diagnosed with a sarcoma, both skeletal and extraskeletal, are to be registered since 2009. MAIN VARIABLES: The database contains information about appearance of symptoms; date of receiving referral to a sarcoma center; date of first visit; whether surgery has been performed elsewhere before referral, diagnosis, and treatment; tumor … of Diseases - tenth edition codes and TNM Classification of Malignant Tumours, and date of death (after yearly coupling to the Danish Civil Registration System). Data quality and completeness are currently secured. CONCLUSION: The Danish Sarcoma Database is population based and includes sarcomas occurring …
Videbech, Poul Bror Hemming; Deleuran, Anette
AIM OF DATABASE: The purpose of the Danish Depression Database (DDD) is to monitor and facilitate the improvement of the quality of the treatment of depression in Denmark. Furthermore, the DDD has been designed to facilitate research. STUDY POPULATION: Inpatients as well as outpatients with depression, aged above 18 years, and treated in the public psychiatric hospital system were enrolled. MAIN VARIABLES: Variables include whether the patient has been thoroughly somatically examined and has been interviewed about the psychopathology by a specialist in psychiatry. The Hamilton score as well as an evaluation of the risk of suicide are measured before and after treatment. Whether psychiatric aftercare has been scheduled for inpatients and the rate of rehospitalization are also registered. DESCRIPTIVE DATA: The database was launched in 2011. Every year since then ~5,500 inpatients and 7,500 outpatients …
The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.
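The physical-design objects a database specification like the one above enumerates - tables, indexes, rules, defaults - have direct DDL counterparts. The sketch below shows one of each with invented names; it is a generic illustration of those object kinds, not the actual ADANS data model.

```python
import sqlite3

# One table with a default and a rule (CHECK constraint), plus an
# index - the object kinds a Database Specification catalogs. All
# names and values here are invented for illustration.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE mission (
                 mission_id   INTEGER PRIMARY KEY,
                 mission_type TEXT NOT NULL DEFAULT 'airlift',
                 priority     INTEGER CHECK (priority BETWEEN 1 AND 5))""")
con.execute("CREATE INDEX idx_mission_type ON mission (mission_type)")

# Omitting mission_type exercises the default; priority 3 passes the rule.
con.execute("INSERT INTO mission (mission_id, priority) VALUES (1, 3)")
row = con.execute(
    "SELECT mission_type, priority FROM mission WHERE mission_id = 1"
).fetchone()
# row -> ('airlift', 3): the default and the rule both applied
```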
Shao Juying; Fang Zhaoxia
The article describes the development of the Nuclear Power Operation Database, which includes the Domestic and Overseas Nuclear Event Scale Database, the Overseas Nuclear Power Operation Abnormal Event Database, the Overseas Nuclear Power Operation General Reliability Database, and the Qinshan Nuclear Power Operation Abnormal Event Database. The development covered data collection and analysis, database construction and code design, and database management system selection. The application of the database in supporting the safety analysis of NPPs already in commercial operation is also introduced.
Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.
The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and for developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software-interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provide real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out-of-date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community
Abdulganiyu Abdu Yusuf; Zahraddeen Sufyanu; Kabir Yusuf Mamman; Abubakar Umar Suleiman
Bioinformatics is the application of computational tools to capture and interpret biological data. It has wide applications in drug development, crop improvement, agricultural biotechnology and forensic DNA analysis. There are various databases available to researchers in bioinformatics. These databases are customized for specific needs and range in size, scope, and purpose. The main drawbacks of bioinformatics databases include redundant information, constant change, data spread over m...
Davies, L.M.; Gillemot, F.; Yanko, L.; Lyssakov, V.
The IRPVM-DB (International Reactor Pressure Vessel Material Database), initiated by the IAEA IWG LMNPP, is going to collect the available surveillance and research data worldwide on RPV material ageing. This paper presents the purpose of the database; it summarizes the type and the relationship of the data included; it gives information about data access and protection; and finally, it summarizes the state of the art of the database. (author). 1 ref., 2 figs
Cleland, Donald L.
The nature of comprehension is defined and clarified. The literature is surveyed to show that the development of concepts is important in intellectual activities. It is pointed out that concepts are built from percepts, images, sensations, and memories, and that the steps which are employed as concepts are built and refined include perceiving,…
Hjeresen, D.L.; Roybal, S.L.
This report contains information about Los Alamos National Laboratory's Comprehensive Environmental Management Plan. The topics covered include waste minimization, waste generation, environmental concerns, the laboratory's public relations, and how this plan will help answer the demands on the laboratory as its mission changes.
The Exoplanet Orbit Database. Authors: Jason T. Wright, Onsi Fakhouri, Geoffrey W. Marcy, Eunkyu Han. We present a database of well-determined orbital parameters of exoplanets. This database comprises those parameters and the method used for each planet's discovery. This Exoplanet Orbit Database includes all planets …
Szklarczyk, Damian; Franceschini, Andrea; Kuhn, Michael
We present an update on the online database resource Search Tool for the Retrieval of Interacting Genes (STRING); it provides uniquely comprehensive coverage and ease of access to both experimental as well as predicted interaction information. Interactions in STRING are provided with a confidence score … models, extensive data updates and strongly improved connectivity and integration with third-party resources. Version 9.0 of STRING covers more than 1100 completely sequenced organisms; the resource can be reached at http://string-db.org.
The infrastructure-as-code revolution in IT is also affecting database administration. With this practical book, developers, system administrators, and junior to mid-level DBAs will learn how the modern practice of site reliability engineering applies to the craft of database architecture and operations. Authors Laine Campbell and Charity Majors provide a framework for professionals looking to join the ranks of today’s database reliability engineers (DBRE). You’ll begin by exploring core operational concepts that DBREs need to master. Then you’ll examine a wide range of database persistence options, including how to implement key technologies to provide resilient, scalable, and performant data storage and retrieval. With a firm foundation in database reliability engineering, you’ll be ready to dive into the architecture and operations of any modern database. This book covers: Service-level requirements and risk management Building and evolving an architecture for operational visibility ...
Full Text Available Peter Holmberg Jørgensen,1 Gunnar Schwarz Lausten,2 Alma B Pedersen3 1Tumor Section, Department of Orthopedic Surgery, Aarhus University Hospital, Aarhus, 2Tumor Section, Department of Orthopedic Surgery, Rigshospitalet, Copenhagen, 3Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, Denmark Aim: The aim of the database is to gather information about sarcomas treated in Denmark in order to continuously monitor and improve the quality of sarcoma treatment in a local, a national, and an international perspective. Study population: Patients in Denmark diagnosed with a sarcoma, both skeletal and extraskeletal, are to be registered since 2009. Main variables: The database contains information about appearance of symptoms; date of receiving referral to a sarcoma center; date of first visit; whether surgery has been performed elsewhere before referral, diagnosis, and treatment; tumor characteristics such as location, size, malignancy grade, and growth pattern; details on treatment (kind of surgery, amount of radiation therapy, type and duration of chemotherapy); complications of treatment; local recurrence and metastases; and comorbidity. In addition, several quality indicators are registered in order to measure the quality of care provided by the hospitals and make comparisons between hospitals and with international standards. Descriptive data: Demographic patient-specific data such as age, sex, region of living, comorbidity, World Health Organization's International Classification of Diseases (tenth edition) codes and TNM Classification of Malignant Tumours, and date of death (after yearly coupling to the Danish Civil Registration System). Data quality and completeness are currently secured. Conclusion: The Danish Sarcoma Database is population based and includes sarcomas occurring in Denmark since 2009. It is a valuable tool for monitoring sarcoma incidence and quality of treatment and its improvement, postoperative …
Feizi, Amir; Banaei-Esfahani, Amir; Nielsen, Jens
The Human Cancer Secretome Database (HCSD) is a comprehensive database for human cancer secretome data. The cancer secretome describes proteins secreted by cancer cells, and structuring information about the cancer secretome will enable further analysis of how this is related to tumor biology … database is limiting the ability to query the increasing community knowledge. We therefore developed the Human Cancer Secretome Database (HCSD) to fill this gap. HCSD contains >80 000 measurements for about 7000 nonredundant human proteins collected from up to 35 high-throughput studies on 17 cancer …
Wang, Chao; Zhang, Jun; Cai, Mingdeng; Zhu, Zhenggang; Gu, Wenjie; Yu, Yingyan; Zhang, Xiaoyan
The Database of Human Gastric Cancer (DBGC) is a comprehensive database that integrates various human gastric cancer-related data resources. Human gastric cancer-related transcriptomics projects, proteomics projects, mutations, biomarkers and drug-sensitive genes from different sources were collected and unified in this database. Moreover, epidemiological statistics of gastric cancer patients in China and clinicopathological information annotated with gastric cancer cases were also integrated into the DBGC. We believe that this database will greatly facilitate research regarding human gastric cancer in many fields. DBGC is freely available at http://bminfor.tongji.edu.cn/dbgc/index.do PMID:26566288
Rasmussen, Mette; Tønnesen, Hanne
Background: The Danish Smoking Cessation Database (SCDB) was established in 2001 as the first national healthcare register within the field of health promotion. Aim of the database: The aim of the SCDB is to document and evaluate smoking cessation (SC) interventions to assess and improve their quality. The database was also designed to function as a basis for register-based research projects. Study population: The population includes smokers in Denmark who have been receiving a face-to-face SC intervention offered by an SC clinic affiliated with the SCDB. SC clinics can be any organisation … The database is increasingly used in register-based research.
Introduction to Database Systems: Functions of a Database; Database Management System; Database Components; Database Development Process. Conceptual Design and Data Modeling: Introduction to Database Design Process; Understanding Business Process; Entity-Relationship Data Model; Representing Business Process with Entity-Relationship Model. Table Structure and Normalization: Introduction to Tables; Table Normalization. Transforming Data Models to Relational Databases: DBMS Selection; Enforcing Constraints; Creating Database for Business Process. Physical Design and Database …
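As a small illustration of the table-normalization topic listed above: a value that repeats across rows of a flat table is factored into its own table and referenced by key. All table and column names below are invented for the example.

```python
import sqlite3

# Normalization in miniature: the department name is factored out of a
# flat employee table into its own table and referenced by dept_id, so
# it is stored once no matter how many employees share it.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE department (dept_id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE employee (
    emp_id  INTEGER PRIMARY KEY,
    name    TEXT,
    dept_id INTEGER REFERENCES department(dept_id));
INSERT INTO department VALUES (1, 'Sales');
INSERT INTO employee VALUES (10, 'Ada', 1), (11, 'Grace', 1);
""")
# Joining the tables recovers the denormalized view when needed.
n = con.execute("""SELECT COUNT(*) FROM employee e
                   JOIN department d ON d.dept_id = e.dept_id
                   WHERE d.name = 'Sales'""").fetchone()[0]
```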
Qureshi, Abid; Thakur, Nishant; Kumar, Manoj
Besides antiretroviral drugs, peptides have also demonstrated potential to inhibit the Human immunodeficiency virus (HIV). For example, T20 has been found to effectively block HIV entry and was approved by the FDA as a novel anti-HIV peptide (AHP). We have collated all experimental information on AHPs in a single platform. HIPdb is a manually curated database of experimentally verified HIV-inhibiting peptides targeting various steps or proteins involved in the life cycle of HIV, e.g. fusion, integration, reverse transcription, etc. This database provides experimental information on 981 peptides. These are of varying length, obtained from natural as well as synthetic sources and tested on different cell lines. Important fields included are peptide sequence, length, source, target, cell line, inhibition/IC50, assay and reference. The database provides user-friendly browse, search, sort and filter options. It also contains useful services like BLAST and 'Map' for alignment with user-provided sequences. In addition, predicted structures and physicochemical properties of the peptides are also included. HIPdb is freely available at http://crdd.osdd.net/servers/hipdb. The comprehensive information in this database will be helpful in selecting and designing effective anti-HIV peptides. Thus it may prove a useful resource to researchers for peptide-based therapeutics development.
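The filter-and-sort options described above operate on records shaped like the fields the entry lists (sequence, length, target, IC50). The sketch below shows that pattern; the peptide sequences and values are invented, not real HIPdb entries.

```python
# Filter-and-sort over peptide records shaped like HIPdb's fields.
# The sequences, targets and IC50 values are invented for illustration.
peptides = [
    {"sequence": "ACDEFGHIK", "target": "fusion",      "ic50_uM": 0.8},
    {"sequence": "LMNPQRSTV", "target": "integration", "ic50_uM": 5.0},
    {"sequence": "WYACDEF",   "target": "fusion",      "ic50_uM": 2.1},
]
for p in peptides:
    p["length"] = len(p["sequence"])   # derive the 'length' field

# Filter: fusion inhibitors only; sort: most potent (lowest IC50) first.
fusion_hits = sorted((p for p in peptides if p["target"] == "fusion"),
                     key=lambda p: p["ic50_uM"])
best = fusion_hits[0]["sequence"]   # 'ACDEFGHIK'
```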
Calm, J.M. [Calm (James M.), Great Falls, VA (United States)]
The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants to assist manufacturers and those using alternative refrigerants to make comparisons and determine differences. The underlying purpose is to accelerate the phase-out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on many refrigerants including propane, ammonia, water, carbon dioxide, propylene, ethers, and others, as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics, as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.
This paper reviews the state-of-the art in comprehensive performance codes for fixed-wing aircraft. The importance of system analysis in flight performance is discussed. The paper highlights the role of aerodynamics, propulsion, flight mechanics, aeroacoustics, flight operation, numerical optimisation, stochastic methods and numerical analysis. The latter discipline is used to investigate the sensitivities of the sub-systems to uncertainties in critical state parameters or functional parameters. The paper discusses critically the data used for performance analysis, and the areas where progress is required. Comprehensive analysis codes can be used for mission fuel planning, envelope exploration, competition analysis, a wide variety of environmental studies, marketing analysis, aircraft certification and conceptual aircraft design. A comprehensive program that uses the multi-disciplinary approach for transport aircraft is presented. The model includes a geometry deck, a separate engine input deck with the main parameters, a database of engine performance from an independent simulation, and an operational deck. The comprehensive code has modules for deriving the geometry from bitmap files, an aerodynamics model for all flight conditions, a flight mechanics model for flight envelopes and mission analysis, an aircraft noise model and engine emissions. The model is validated at different levels. Validation of the aerodynamic model is done against the scale models DLR-F4 and F6. A general model analysis and flight envelope exploration are shown for the Boeing B-777-300 with GE-90 turbofan engines with intermediate passenger capacity (394 passengers in 2 classes). Validation of the flight model is done by sensitivity analysis on the wetted area (or profile drag), on the specific air range, the brake-release gross weight and the aircraft noise. A variety of results is shown, including specific air range charts, take-off weight-altitude charts, payload-range performance
The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents on compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. A computerized version is available that includes retrieval software.
... Database ACTION: Proposed collection; comment request. SUMMARY: The United States Patent and Trademark... the report was that the USPTO create and maintain an accurate and comprehensive database containing... this recommendation, the Senate Committee on Appropriations directed the USPTO to create this database...
Gray, J.R.; Bernard, J.M.; Stewart, D.W.; McFaul, E.J.; Laurent, K.W.; Schwarz, G.E.; Stinson, J.T.; Jonas, M.M.; Randle, T.J.; Webb, J.W.
The importance of dependable, long-term water supplies, coupled with the need to quantify rates of capacity loss of the Nation's reservoirs due to sediment deposition, were the most compelling reasons for developing the REServoir-SEDimentation survey information (RESSED) database and website. Created under the auspices of the Advisory Committee on Water Information's Subcommittee on Sedimentation by the U.S. Geological Survey and the Natural Resources Conservation Service, the RESSED database is the most comprehensive compilation of data from reservoir bathymetric and dry-basin surveys in the United States. As of March 2010, the database, which contains data compiled on the 1950s vintage Soil Conservation Service's Form SCS-34 data sheets, contained results from 6,616 surveys on 1,823 reservoirs in the United States and two surveys on one reservoir in Puerto Rico. The data span the period 1755–1997, with 95 percent of the surveys performed from 1930 to 1990. The reservoir surface areas range from sub-hectare-scale farm ponds to 658 km2 Lake Powell. The data in the RESSED database can be useful for a number of purposes, including calculating changes in reservoir-storage characteristics, quantifying sediment budgets, and estimating erosion rates in a reservoir's watershed. The March 2010 version of the RESSED database has a number of deficiencies, including a cryptic and out-of-date database architecture; some geospatial inaccuracies (although most have been corrected); other data errors; an inability to store all data in a readily retrievable manner; and an inability to store all data types that currently exist. Perhaps most importantly, the March 2010 version of the RESSED database provides no publicly available means to submit new data and corrections to existing data. To address these and other deficiencies, the Subcommittee on Sedimentation, through the U.S. Geological Survey and the U.S. Army Corps of Engineers, began a collaborative project in
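One of the uses named above, calculating changes in reservoir-storage characteristics, reduces to simple arithmetic once two surveys of the same reservoir are available. A minimal sketch (the function and the figures in the example are illustrative, not drawn from RESSED):

```python
def capacity_loss_rate(cap_first_m3, cap_later_m3, year_first, year_later):
    """Average annual storage-capacity loss between two surveys, in m3/yr."""
    if year_later <= year_first:
        raise ValueError("surveys must be in chronological order")
    return (cap_first_m3 - cap_later_m3) / (year_later - year_first)

# Hypothetical reservoir: 1.50e6 m3 in 1950 shrinking to 1.20e6 m3 by 1990
rate = capacity_loss_rate(1.50e6, 1.20e6, 1950, 1990)  # 7500.0 m3/yr
```

Dividing the loss by the survey interval gives a mean rate; combined with a sediment bulk density, the same difference yields a sediment-mass budget for the watershed.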
Lulat, Zainab; Blain-McLeod, Julie; Grinspun, Doris; Penney, Tasha; Harripaul-Yhap, Anastasia; Rey, Michelle
The appropriate nursing staff mix is imperative to the provision of quality care. Nurse staffing levels and staff mix vary from country to country, as well as between care settings. Understanding how staffing skill mix impacts patient, organizational, and financial outcomes is critical in order to allow policymakers and clinicians to make evidence-informed staffing decisions. This paper reports on the methodology for creation of an electronic database of studies exploring the effectiveness of Registered Nurses (RNs) on clinical and patient outcomes, organizational and nurse outcomes, and financial outcomes. Comprehensive literature searches were conducted in four electronic databases. Inclusion criteria for the database included studies published from 1946 to 2016, peer-reviewed international literature, and studies focused on RNs in all health-care disciplines, settings, and sectors. Masters-prepared nurse researchers conducted title and abstract screening and relevance review to determine eligibility of studies for the database. High-level analysis was conducted to determine key outcomes and the frequency at which they appeared within the database. Of the initial 90,352 records, a total of 626 abstracts were included within the database. Studies were organized into three groups corresponding to clinical and patient outcomes, organizational and nurse-related outcomes, and financial outcomes. Organizational and nurse-related outcomes represented the largest category in the database with 282 studies, followed by clinical and patient outcomes with 244 studies, and lastly financial outcomes, which included 124 studies. The comprehensive database of evidence for RN effectiveness is freely available at https://rnao.ca/bpg/initiatives/RNEffectiveness. The database will serve as a resource for the Registered Nurses' Association of Ontario, as well as a tool for researchers, clinicians, and policymakers for making evidence-informed staffing decisions. © 2018 The Authors
Cohn, Elizabeth; Larson, Elaine
To critically analyze studies published within the past decade about participants' comprehension of informed consent in clinical research and to identify promising intervention strategies. Integrative review of literature. The Cumulative Index of Nursing and Allied Health Literature (CINAHL), PubMed, and the Cochrane Database of Systematic Reviews and Cochrane Central Register of Controlled Trials were searched. Inclusion criteria included studies (a) published between January 1, 1996 and January 1, 2007, (b) designed as descriptive or interventional studies of comprehension of informed consent for clinical research, (c) conducted in nonpsychiatric adult populations who were either patients or volunteer participants, (d) written in English, and (e) published in peer-reviewed journals. Of the 980 studies identified, 319 abstracts were screened, 154 studies were reviewed, and 23 met the inclusion criteria. Thirteen studies (57%) were descriptive, and 10 (43%) were interventional. Interventions tested included simplified written consent documents, multimedia approaches, and the use of a trained professional (consent educator) to assist in the consent process. Collectively, no single intervention strategy was consistently associated with improved comprehension. Studies also varied in regard to the definition of comprehension and the tools used to measure it. Despite increasing regulatory scrutiny, deficiencies still exist in participant comprehension of the research in which they participate, as well as differences in how comprehension is measured and assessed. No single intervention was identified as consistently successful for improving participant comprehension, and results indicated that any successful consent process should at a minimum include various communication modes and is likely to require one-to-one interaction with someone knowledgeable about the study.
Full Text Available Abstract Background The ability to access, search and analyse secondary structures of a large set of known RNA molecules is very important for deriving improved RNA energy models, for evaluating computational predictions of RNA secondary structures and for a better understanding of RNA folding. Currently there is no database that can easily provide these capabilities for almost all RNA molecules with known secondary structures. Results In this paper we describe RNA STRAND – the RNA secondary STRucture and statistical ANalysis Database, a curated database containing known secondary structures of any type and organism. Our new database provides a wide collection of known RNA secondary structures drawn from public databases, searchable and downloadable in a common format. Comprehensive statistical information on the secondary structures in our database is provided using the RNA Secondary Structure Analyser, a new tool we have developed to analyse RNA secondary structures. The information thus obtained is valuable for understanding to what extent and with what probability certain structural motifs can appear. We outline several ways in which the data provided in RNA STRAND can facilitate research on RNA structure, including the improvement of RNA energy models and evaluation of secondary structure prediction programs. In order to keep up-to-date with new RNA secondary structure experiments, we offer the necessary tools to add solved RNA secondary structures to our database and invite researchers to contribute to RNA STRAND. Conclusion RNA STRAND is a carefully assembled database of trusted RNA secondary structures, with easy on-line tools for searching, analyzing and downloading user selected entries, and is publicly available at http://www.rnasoft.ca/strand.
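Motif statistics of the kind described above start from recovering the base pairs encoded in a secondary structure. A minimal sketch for the common dot-bracket notation (this is standard parsing, not the RNA Secondary Structure Analyser itself, which the source does not describe at this level):

```python
def base_pairs(dot_bracket):
    """Return (i, j) index pairs of paired bases in a dot-bracket string.

    '(' opens a pair, ')' closes the most recent open one, '.' is unpaired.
    """
    stack, pairs = [], []
    for i, ch in enumerate(dot_bracket):
        if ch == '(':
            stack.append(i)
        elif ch == ')':
            if not stack:
                raise ValueError("unbalanced structure")
            pairs.append((stack.pop(), i))
    if stack:
        raise ValueError("unbalanced structure")
    return pairs

# A tiny hairpin "((..))" pairs positions (1, 4) and (0, 5)
hairpin = base_pairs("((..))")
```

Counts of stems, loop sizes, and other motif frequencies can then be tallied over all structures in a collection.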
Calm, J.M. [Calm (James M.), Great Falls, VA (United States)
The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates. Citations in this report are divided into the following topics: thermophysical properties; materials compatibility; lubricants and tribology; application data; safety; test and analysis methods; impacts; regulatory actions; substitute refrigerants; identification; absorption and adsorption; research programs; and miscellaneous documents. Information is also presented on ordering instructions for the computerized version.
The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants in making comparisons and determining differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on various refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.
Full Text Available Searching metabolites against databases according to their masses is often the first step in metabolite identification for a mass spectrometry-based untargeted metabolomics study. Major metabolite databases include the Human Metabolome DataBase (HMDB), the Madison Metabolomics Consortium Database (MMCD), Metlin, and LIPID MAPS. Since each one of these databases covers only a fraction of the metabolome, integration of the search results from these databases is expected to yield a more comprehensive coverage. However, the manual combination of multiple search results is generally difficult when identification of hundreds of metabolites is desired. We have implemented a web-based software tool that enables simultaneous mass-based search against the four major databases, and the integration of the results. In addition, more complete chemical identifier information for the metabolites is retrieved by cross-referencing multiple databases. The search results are merged based on IUPAC International Chemical Identifier (InChI) keys. Besides a simple list of m/z values, the software can accept ion annotation information as input for enhanced metabolite identification. The performance of the software is demonstrated on mass spectrometry data acquired in both positive and negative ionization modes. Compared with search results from individual databases, MetaboSearch provides better coverage of the metabolome and more complete chemical identifier information. The software tool is available at http://omics.georgetown.edu/MetaboSearch.html.
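The merge step described above can be sketched as a dictionary keyed on InChIKey; the field names ('inchikey', 'source', 'name') are assumptions for illustration, not the actual MetaboSearch schema:

```python
def merge_hits(*result_sets):
    """Merge metabolite hits from several databases, keyed on InChIKey.

    Each result set is a list of dicts with 'inchikey' and 'source' fields;
    hits sharing an InChIKey are combined, pooling their source names and
    keeping the first name seen.
    """
    merged = {}
    for hits in result_sets:
        for hit in hits:
            key = hit['inchikey']
            entry = merged.setdefault(key, {'inchikey': key, 'sources': set()})
            entry['sources'].add(hit['source'])
            if hit.get('name') and 'name' not in entry:
                entry['name'] = hit['name']
    return merged

# Hypothetical hits for the same compound from two databases
hmdb_hits = [{'inchikey': 'WQZGKKKJIJFFOK-GASJEMHNSA-N',
              'source': 'HMDB', 'name': 'D-glucose'}]
metlin_hits = [{'inchikey': 'WQZGKKKJIJFFOK-GASJEMHNSA-N',
                'source': 'Metlin'}]
combined = merge_hits(hmdb_hits, metlin_hits)
```

Because the InChIKey is a structure-derived identifier, hits for the same compound collapse to one entry regardless of which database reported them.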
Klose, M.; Damm, B.
The Federal Republic of Germany has long been among the few European countries that lack a national landslide database. Systematic collection and inventorying of landslide data nonetheless has a long research history in Germany, albeit one focused on the development of databases with local or regional coverage. This has changed in recent years with the launch of a database initiative aimed at closing the data gap existing at the national level. The present contribution reports on this project, which is based on a landslide database that evolved over the last 15 years into a database covering large parts of Germany. A strategy of systematic retrieval, extraction, and fusion of landslide data is at the heart of the methodology, providing the basis for a database with a broad potential of application. The database offers a data pool of more than 4,200 landslide data sets with over 13,000 single data files and dates back to the 12th century. All types of landslides are covered by the database, which stores not only core attributes, but also various complementary data, including data on landslide causes, impacts, and mitigation. The current database migration to PostgreSQL/PostGIS is focused on unlocking the full scientific potential of the database, while enabling data sharing and knowledge transfer via a web GIS platform. In this contribution, the goals and the research strategy of the database project are highlighted at first, with a summary of best practices in database development providing perspective. Next, the focus is on key aspects of the methodology, which is followed by the results of different case studies in the German Central Uplands. The case study results exemplify database application in analysis of vulnerability to landslides, impact statistics, and hazard or cost modeling.
ir. Sander van Laar
A formal description of a database consists of the description of the relations (tables) of the database together with the constraints that must hold on the database. Furthermore the contents of a database can be retrieved using queries. These constraints and queries for databases can very well be
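The relations/constraints/queries triad described above can be illustrated with an in-memory SQLite database (a toy schema for illustration, not taken from the source):

```python
import sqlite3

# One relation (table), one declarative integrity constraint, one query.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE employee (
        id     INTEGER PRIMARY KEY,
        name   TEXT NOT NULL,
        salary REAL CHECK (salary >= 0)   -- constraint that must hold
    )""")
con.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                [(1, "Ada", 4200.0), (2, "Lin", 3900.0)])

# Retrieving contents via a query
(top,) = con.execute(
    "SELECT name FROM employee ORDER BY salary DESC LIMIT 1").fetchone()
```

Any INSERT that violates the CHECK or NOT NULL constraints is rejected by the database engine, which is what "constraints that must hold on the database" means operationally.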
Grimm, E.C.; Bradshaw, R.H.W.; Brewer, S.; Flantua, S.; Giesecke, T.; Lézine, A.M.; Takahara, H.; Williams, J.W., Jr.; Elias, S.A.; Mock, C.J.
During the past 20 years, several pollen database cooperatives have been established. These databases are now constituent databases of the Neotoma Paleoecology Database, a public domain, multiproxy, relational database designed for Quaternary-Pliocene fossil data and modern surface samples. The
Advisory Circulars full-text web-search database (2,092 records), covering data collection and distribution policies. Document database website provided by MicroSearch.
The InterAction Database includes demographic and prescription information for more than 500,000 patients in the northern and middle Netherlands and has been integrated with other systems to enhance data collection and analysis.
Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim
The AMMA project includes aircraft, ground-based and ocean measurements, an intensive use of satellite data and diverse modelling studies. Therefore, the AMMA database aims at storing a great amount and a large variety of data, and at providing the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: - AMMA field campaign datasets; - historical data in West Africa from 1850 (operational networks and previous scientific programs); - satellite products from past and future satellites, (re-)mapped on a regular latitude/longitude grid and stored in NetCDF format (CF Convention); - model outputs from atmosphere or ocean operational (re-)analysis and forecasts, and from research simulations. The outputs are processed in the same way as the satellite products. Before accessing the data, any user has to sign the AMMA data and publication policy. This charter only covers the use of data in the framework of scientific objectives and categorically excludes the redistribution of data to third parties and usage for commercial applications. Collaboration between data producers and users, and mention of the AMMA project in any publication, are also required. The AMMA database and the associated on-line tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris and OMP, Toulouse). Users can access data of both data centres through a single web portal. This website is composed of different modules: - Registration: forms to register, read and sign the data use charter when a user visits for the first time - Data access interface: a user-friendly tool for building a data extraction request by selecting various criteria such as location, time, parameters... The request can
Full Text Available The database “Historical Artificial Radionuclides in the Pacific Ocean and its Marginal Seas”, or HAM database, has been created. The database includes 90Sr, 137Cs, and 239,240Pu concentration data from the seawater of the Pacific Ocean and its marginal seas, with some measurements spanning the sea surface to the bottom. The data in the HAM database were collected from about 90 literature citations, which include published papers; annual reports by the Hydrographic Department, Maritime Safety Agency, Japan; and unpublished data provided by individuals. The concentration data for 90Sr, 137Cs, and 239,240Pu accumulated from 1957 to 1998. The present HAM database includes 7737 records for 137Cs concentration data, 3972 records for 90Sr concentration data, and 2666 records for 239,240Pu concentration data. The spatial distribution of sampling stations in the HAM database is heterogeneous: more than 80% of the data for each radionuclide are from the Pacific Ocean and the Sea of Japan, while a relatively small portion of the data is from the South Pacific. The HAM database will allow these radionuclides to be used as significant chemical tracers for oceanographic study as well as for the assessment of the environmental effects of anthropogenic radionuclides over these five decades. Furthermore, these radionuclides can be used to verify oceanic general circulation models on a time scale of several decades.
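Using such a compilation as a tracer data set typically begins with filtering records by radionuclide and region. A minimal sketch with hypothetical records (the field names and values are illustrative, not taken from HAM):

```python
# Hypothetical HAM-style concentration records
records = [
    {"nuclide": "137Cs", "region": "Sea of Japan",  "year": 1975, "bq_m3": 8.2},
    {"nuclide": "90Sr",  "region": "South Pacific", "year": 1968, "bq_m3": 2.1},
    {"nuclide": "137Cs", "region": "South Pacific", "year": 1980, "bq_m3": 1.4},
]

def select(records, nuclide=None, region=None):
    """Return records matching the given radionuclide and/or region."""
    return [r for r in records
            if (nuclide is None or r["nuclide"] == nuclide)
            and (region is None or r["region"] == region)]

cs_south = select(records, nuclide="137Cs", region="South Pacific")
```

Time series built from such slices are what allow the radionuclides to serve as decadal-scale tracers and as checks on circulation models.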
"This important resource offers a detailed description about the practical considerations and applications in database programming using Java NetBeans 6.8 with authentic examples and detailed explanations. This book provides readers with a clear picture as to how to handle the database programming issues in the Java NetBeans environment. The book is ideal for classroom and professional training material. It includes a wealth of supplemental material that is available for download including Powerpoint slides, solution manuals, and sample databases"--
Comprehensive Hard Materials deals with the production, uses and properties of the carbides, nitrides and borides of these metals and those of titanium, as well as tools of ceramics, the superhard boron nitrides and diamond and related compounds. Articles include the technologies of powder production (including their precursor materials), milling, granulation, cold and hot compaction, sintering, hot isostatic pressing, hot-pressing, injection moulding, as well as on the coating technologies for refractory metals, hard metals and hard materials. The characterization, testing, quality assurance and applications are also covered. Comprehensive Hard Materials provides meaningful insights on materials at the leading edge of technology. It aids continued research and development of these materials and as such it is a critical information resource to academics and industry professionals facing the technological challenges of the future. Hard materials operate at the leading edge of technology, and continued res...
Ugrinska, A.; Mustafa, B.
Full text: Abstract databases available on the Internet free of charge were searched for nuclear medicine content. The only comprehensive database found was PubMed. Analysis of nuclear medicine journals included in PubMed was performed. PubMed contains 25 medical journals that contain the phrase 'nuclear medicine' in different languages in their title. Searching the Internet with Google, we found four more peer-reviewed journals with the phrase 'nuclear medicine' in their title. In addition, we are fully aware that many articles related to nuclear medicine are published in national medical journals devoted to general medicine. For example, in the year 2000, colleagues from the Institute of Pathophysiology and Nuclear Medicine, Skopje, Macedonia published 10 articles, none of which could be found in PubMed. This suggests that a large amount of research work is not accessible to people professionally involved in nuclear medicine. Therefore, we have created a database framework for abstracts that cannot be found in PubMed. The database is organized in a user-friendly manner. There are two main sections: 'post an abstract' and 'search for abstracts'. Authors of articles are expected to submit their work in the section 'post an abstract'. During the submission process, authors fill in separate boxes for the Title in English, Title in original language, Country of origin, Journal name, Volume, Issue and Pages. Authors choose up to five keywords from a drop-down menu. If the abstract is not published in English, authors are encouraged to translate it. The section 'search for abstracts' is searchable by Author, Keywords, and words and phrases in the English title. The abstract database currently resides on an MS Access back-end, with a front-end in ASP (Active Server Pages). In the future, we plan to migrate the database to an MS SQL Server, which should provide a faster and more reliable framework for hosting a
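The submission-form fields and search behaviour described above suggest a single-table schema along these lines (sketched in SQLite for illustration; the actual MS Access schema is not given in the source, and the sample row is hypothetical):

```python
import sqlite3

# One table mirroring the submission form's boxes
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE abstract (
        id         INTEGER PRIMARY KEY,
        title_en   TEXT NOT NULL,
        title_orig TEXT,
        country    TEXT,
        journal    TEXT,
        volume     TEXT,
        issue      TEXT,
        pages      TEXT,
        keywords   TEXT   -- up to five, comma-separated
    )""")
con.execute("INSERT INTO abstract (title_en, country, journal, keywords) "
            "VALUES ('Thyroid uptake study', 'Macedonia', "
            "'(hypothetical journal)', 'thyroid,scintigraphy')")

# 'Search for abstracts': match keywords or words in the English title
rows = con.execute("SELECT title_en FROM abstract "
                   "WHERE keywords LIKE '%thyroid%' "
                   "   OR title_en LIKE '%thyroid%'").fetchall()
```

A LIKE-based search is adequate at small scale; a migration to SQL Server would allow full-text indexing over the title and keyword columns.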
Characterized by clear and accessible explanations, numerous examples and sample sentences, a new section on register and tone, and useful appendices covering topics including age and time, A Comprehensive French Grammar, Sixth Edition is an indispensable tool for advanced students of French language and literature.A revised edition of this established, bestselling French grammarIncludes a new section on register and medium and offers expanded treatment of French punctuationFeatures numerous examples and sample sentences, and useful appendices covering topics including age, time, and dimension
Do impotent men with diabetes have more severe erectile dysfunction and worse quality of life than the general population of impotent patients? Results from the Exploratory Comprehensive Evaluation of Erectile Dysfunction (ExCEED) database.
Penson, David F; Latini, David M; Lubeck, Deborah P; Wallace, Katrine L; Henning, James M; Lue, Tom F
Little is known regarding how diabetic men with erectile dysfunction (ED) differ from the general population of impotent men. The primary objective of this study was to compare disease-specific health-related quality of life (HRQOL) and severity of ED in impotent men with and without diabetes. Validated functional and HRQOL questionnaires (including the International Index of Erectile Function, the Sexual Self-Efficacy Scale, and the Psychological Impact of Erectile Dysfunction scales) were administered to patients in an ED disease registry. Men with ED and a history of diabetes (n = 20) were compared with men with ED and no history of diabetes (n = 90) at baseline and at the 12-month follow-up. Diabetic impotent men reported worse erectile function and intercourse satisfaction at baseline, and ED had a greater impact on their emotional life. Diabetic men with ED showed significantly different trends over time in the Erectile Function and Emotional Life-Psychological Impact domains (P < 0.067). Impotent men with diabetes present with worse ED than nondiabetic men with ED, resulting in worse disease-specific HRQOL in the diabetic men. Although diabetic patients initially respond well to ED treatment, responses do not appear to be durable over time. Therefore, clinicians must provide longer-term follow-up when treating ED in diabetic patients.
Calm, J.M. (Calm (James M.), Great Falls, VA (United States))
The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents to accelerate availability of the information and will be completed or replaced in future updates.
Igarashi, Yoshinobu; Nakatsu, Noriyuki; Yamashita, Tomoya; Ono, Atsushi; Ohno, Yasuo; Urushidani, Tetsuro; Yamada, Hiroshi
Toxicogenomics focuses on assessing the safety of compounds using gene expression profiles. Gene expression signatures from large toxicogenomics databases are expected to perform better than small databases in identifying biomarkers for the prediction and evaluation of drug safety based on a compound's toxicological mechanisms in animal target organs. Over the past 10 years, the Japanese Toxicogenomics Project consortium (TGP) has been developing a large-scale toxicogenomics database consisting of data from 170 compounds (mostly drugs) with the aim of improving and enhancing drug safety assessment. Most of the data generated by the project (e.g. gene expression, pathology, lot number) are freely available to the public via Open TG-GATEs (Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System). Here, we provide a comprehensive overview of the database, including both gene expression data and metadata, with a description of experimental conditions and procedures used to generate the database. Open TG-GATEs is available from http://toxico.nibio.go.jp/english/index.html. PMID:25313160
... and US Department of Agriculture Dietary Supplement Ingredient Database (DSID) ... values can be saved to build a small database or add to an existing database for national, ...
Consumption Database: The California Energy Commission has created this on-line database for informal reporting of energy consumption ... classifications. The database also provides easy downloading of energy consumption data into Microsoft Excel (XLSX) format.
Full Text Available Abstract Background Misfolding and aggregation of proteins into ordered fibrillar structures is associated with a number of severe pathologies, including Alzheimer's disease, prion diseases, and type II diabetes. The rapid accumulation of knowledge about the sequences and structures of these proteins allows in silico methods to be used to investigate the molecular mechanisms of their abnormal conformational changes and assembly. However, such an approach requires the collection of accurate data, which are inconveniently dispersed among several generalist databases. Results We therefore created a free online knowledge database (AMYPdb) dedicated to amyloid precursor proteins, and we have performed large-scale sequence analysis of the included data. Currently, AMYPdb integrates data on 31 families, including 1,705 proteins from nearly 600 organisms. It displays links to more than 2,300 bibliographic references and 1,200 3D structures. A Wiki system is available to insert data into the database, providing a sharing and collaboration environment. We generated and analyzed 3,621 amino acid sequence patterns, reporting highly specific patterns for each amyloid family, along with patterns likely to be involved in protein misfolding and aggregation. Conclusion AMYPdb is a comprehensive online database aiming at the centralization of bioinformatic data regarding all amyloid proteins and their precursors. Our sequence pattern discovery and analysis approach unveiled protein regions of significant interest. AMYPdb is freely accessible.
Ball, M.J.; Delagi, N.; Horton, B.; Ivey, J.C.; Leedy, R.; Li, X.; Marshall, B.; Robinson, S.L.; Tompkins, J.C.
The Test Department of the Magnet Systems Division of the Superconducting Super Collider Laboratory (SSCL) is developing a central database of SSC magnet information that will be available to all magnet scientists at the SSCL or elsewhere, via network connections. The database contains information on the magnets' major components, configuration information (specifying which individual items were used in each cable, coil, and magnet), measurements made at major fabrication stages, and the test results on completed magnets. These data will facilitate the correlation of magnet performance with the properties of its constituents. Recent efforts have focused on the development of procedures for user-friendly access to the data, including displays in the format of the production "traveler" data sheets, standard summary reports, and a graphical interface for ad hoc queries and plots
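Correlating magnet performance with constituent properties is, in relational terms, a join between configuration and test-result tables. A minimal sketch (table and column names are illustrative assumptions, not the SSCL schema):

```python
import sqlite3

# Toy configuration and test tables for relating performance to components
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE magnet (id INTEGER PRIMARY KEY, cable_lot TEXT);
    CREATE TABLE quench_test (
        magnet_id        INTEGER REFERENCES magnet(id),
        quench_current_a REAL
    );
    INSERT INTO magnet VALUES (1, 'LOT-A'), (2, 'LOT-B');
    INSERT INTO quench_test VALUES (1, 6500.0), (2, 6950.0);
""")

# Join test results back to the cable lot used in each magnet
rows = con.execute("""
    SELECT m.cable_lot, q.quench_current_a
    FROM magnet m JOIN quench_test q ON q.magnet_id = m.id
    ORDER BY q.quench_current_a DESC
""").fetchall()
```

With configuration data keyed to individual cables, coils, and magnets, the same join pattern supports any correlation of fabrication-stage measurements with completed-magnet test results.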
Skrzypek, Marek S; Nash, Robert S; Wong, Edith D; MacPherson, Kevin A; Hellerstedt, Sage T; Engel, Stacia R; Karra, Kalpana; Weng, Shuai; Sheppard, Travis K; Binkley, Gail; Simison, Matt; Miyasato, Stuart R; Cherry, J Michael
Abstract The Saccharomyces Genome Database (SGD; http://www.yeastgenome.org) is an expertly curated database of literature-derived functional information for the model organism budding yeast, Saccharomyces cerevisiae. SGD constantly strives to synergize new types of experimental data and bioinformatics predictions with existing data, and to organize them into a comprehensive and up-to-date information resource. The primary mission of SGD is to facilitate research into the biology of yeast and...
US Agency for International Development — The Collecting Taxes Database contains performance and structural indicators about national tax systems. The database contains quantitative revenue performance...
This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...
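The contrast the thesis draws between relational and NoSQL systems can be shown with a toy example. The table, field names, and data below are invented for illustration, and a plain Python list of dicts stands in for a schema-free document store:

```python
import sqlite3

# Relational model: fixed schema, declarative SQL queries (toy example).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(1, "Ana", "Prague"), (2, "Jan", "Brno")])
rows = conn.execute("SELECT name FROM users WHERE city = ?", ("Prague",)).fetchall()
print(rows)  # [('Ana',)]

# Document model (NoSQL style): schema-free records, filtered in application code.
documents = [
    {"_id": 1, "name": "Ana", "city": "Prague", "tags": ["admin"]},  # extra field is fine
    {"_id": 2, "name": "Jan", "city": "Brno"},
]
matches = [d["name"] for d in documents if d.get("city") == "Prague"]
print(matches)  # ['Ana']
```

The second record set needs no schema migration to add a field, which is the flexibility NoSQL systems trade against the relational model's enforced structure.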
Full Text Available Yeast Interacting Proteins Database: Full Data of Yeast Interacting Proteins Database (Original Version). DOI 10.18908/lsdba.nbdc00742-004. Description of data contents: the entire data in the Yeast Interacting Proteins Database, drawing on several sources including YPD (Yeast Proteome Database, Costanzo, M. C., et al.) and cross-referencing each systematic name in the SGD (Saccharomyces Genome Database; http://www.yeastgenome.org/). Fields include the bait gene name.
Raup, B. H.; Khalsa, S. S.; Armstrong, R.
Info, GML (Geography Markup Language) and GMT (Generic Mapping Tools). This "clip-and-ship" function allows users to download only the data they are interested in. Our flexible web interfaces to the database, which include various support layers (e.g. a layer to help collaborators identify satellite imagery over their region of expertise), will facilitate enhanced analysis of glacier systems, their distribution, and their impacts on other Earth systems.
Full Text Available The comprehensive collection of available light curves, the prediction possibilities, and the online model fitting procedure available via the Exoplanet Transit Database have become very popular in the community. In this paper we summarize the changes that we made in the ETD during the last year (including the Kepler candidates in the prediction section, modeling of an unknown planet in the model-fit section, and some other small improvements). These new tools cannot be found in the main ETD paper.
Full Text Available Abstract Background The Oryza sativa L. indica subspecies is the most widely cultivated rice. During the last few years, we have collected over 20,000 putative full-length cDNAs and over 40,000 ESTs isolated from various cDNA libraries of two indica varieties, Guangluai 4 and Minghui 63. A database of the rice indica cDNAs was therefore built to provide a comprehensive web data source for searching and retrieving the indica cDNA clones. Results The Rice Indica cDNA Database (RICD) is an online MySQL-PHP driven database with a user-friendly web interface. It allows investigators to query the cDNA clones by keyword, genome position, nucleotide or protein sequence, and putative function. It also provides a series of information for each cDNA, including sequences, protein domain annotations, similarity search results, SNP and InDel information, and hyperlinks to gene annotation in both The Rice Annotation Project Database (RAP-DB) and The TIGR Rice Genome Annotation Resource, the expression atlas in RiceGE, and the variation report in Gramene. Conclusion The online rice indica cDNA database provides a cDNA resource with comprehensive information to researchers for functional analysis of the indica subspecies and for comparative genomics. The RICD database is available through our website http://www.ncgr.ac.cn/ricd.
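A keyword query of the kind RICD supports can be sketched against a minimal relational table; the schema, clone IDs, and annotations below are hypothetical stand-ins, not RICD's actual MySQL schema:

```python
import sqlite3

# Hypothetical clone table; RICD's real schema is not published in the abstract.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE cdna_clones (
    clone_id TEXT PRIMARY KEY,
    chromosome TEXT,
    putative_function TEXT)""")
conn.executemany("INSERT INTO cdna_clones VALUES (?, ?, ?)", [
    ("GLA4_0001", "chr1", "putative kinase"),
    ("MH63_0042", "chr5", "transcription factor"),
])

# Keyword search over the annotation text, as in a web "query by keyword" form.
keyword = "kinase"
hits = conn.execute(
    "SELECT clone_id FROM cdna_clones WHERE putative_function LIKE ?",
    (f"%{keyword}%",)).fetchall()
print(hits)  # [('GLA4_0001',)]
```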
Yankovich, T.; Beresford, N.A.; Fesenko, S.; Fesenko, J.; Phaneuf, M.; Dagher, E.; Outola, I.; Andersson, P.; Thiessen, K.; Ryan, J.; Wood, M.D.; Bollhöfer, A.
Environmental assessments to evaluate potential risks to humans and wildlife often involve modelling to predict contaminant exposure through key pathways. Such models require input of parameter values, including concentration ratios, to estimate contaminant concentrations in biota based on measurements or estimates of concentrations in environmental media, such as water. Due to the diversity of species and the range in physicochemical conditions in natural ecosystems, concentration ratios can vary by orders of magnitude, even within similar species. Therefore, to improve model input parameter values for application in aquatic systems, freshwater concentration ratios were collated or calculated from national grey literature, Russian-language publications, and refereed papers. Collated data were then input into an international database that is being established by the International Atomic Energy Agency. The freshwater database enables entry of information for all radionuclides listed in ICRP (1983), in addition to the corresponding stable elements, and comprises a total of more than 16,500 concentration ratio (CRwo-water) values. Although data were available for all broad wildlife groups (with the exception of birds), data were sparse for many organism types; for example, zooplankton, crustaceans, insects and insect larvae, amphibians, and mammals each had CRwo-water values for fewer than eight elements. Coverage was most comprehensive for fish, vascular plants, and molluscs. To our knowledge, the freshwater database that has now been established represents the most comprehensive set of CRwo-water values for freshwater species currently available for use in radiological environmental assessments.
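The concentration-ratio approach the abstract describes reduces to a single multiplication; the CR and water-concentration values in this sketch are made up purely to illustrate the orders-of-magnitude spread mentioned above:

```python
def concentration_in_biota(cr_wo_water, c_water):
    """Whole-organism activity concentration (e.g. Bq/kg fresh weight)
    estimated from a water concentration (e.g. Bq/L) and a
    whole-organism-to-water concentration ratio (CRwo-water)."""
    return cr_wo_water * c_water

# Hypothetical CR values two orders of magnitude apart, as can occur
# even among similar freshwater species:
low = concentration_in_biota(50.0, c_water=0.2)
high = concentration_in_biota(5000.0, c_water=0.2)
print(low, high)  # predicted biota concentrations differ 100-fold
```

The 100-fold spread in the predictions is why assessments often use distributions (or geometric means) of collated CR values rather than single point estimates.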
Facts for Families No. 52; Updated October 2017: Comprehensive Psychiatric Evaluation. ... with serious emotional and behavioral problems need a comprehensive psychiatric evaluation. Comprehensive psychiatric evaluations usually require a ...
Segers, P.C.J.; Segers, E.; Broek, P. van den
The present chapter gives an overview of the literature on hypertext comprehension, children's hypertext comprehension and individual variation therein, ending with a perspective for future research. Hypertext comprehension requires the reader to make bridging inferences between the different parts
Nupur, L N U; Vats, Asheema; Dhanda, Sandeep Kumar; Raghava, Gajendra P S; Pinnaka, Anil Kumar; Kumar, Ashwani
Carotenoids have important functions in bacteria, ranging from harvesting light energy to neutralizing oxidants and acting as virulence factors. However, information pertaining to the carotenoids is scattered throughout the literature. Furthermore, information about the genes/proteins involved in the biosynthesis of carotenoids has tremendously increased in the post-genomic era. A web server providing the information about microbial carotenoids in a structured manner is required and will be a valuable resource for the scientific community working with microbial carotenoids. Here, we have created a manually curated, open access, comprehensive compilation of bacterial carotenoids named ProCarDB (Prokaryotic Carotenoid Database). ProCarDB includes 304 unique carotenoids arising from 50 biosynthetic pathways distributed among 611 prokaryotes. ProCarDB provides important information on carotenoids, such as 2D and 3D structures, molecular weight, molecular formula, SMILES, InChI, InChIKey, IUPAC name, KEGG Id, PubChem Id, and ChEBI Id. The database also provides NMR data, UV-vis absorption data, IR data, MS data and HPLC data that play key roles in the identification of carotenoids. An important feature of this database is the extension of biosynthetic pathways based on the literature and on the presence of the genes/enzymes in different organisms. The information contained in the database was mined from published literature and databases such as KEGG, PubChem, ChEBI, LipidBank, LPSN, and Uniprot. The database integrates user-friendly browsing and searching with carotenoid analysis tools to help the user. We believe that this database will serve as a major information centre for researchers working on bacterial carotenoids.
practices familiar to conservationists can be effective in areas where private property dominates or where it is mixed with public lands. These practices... broadly accurate. According to many familiar with the system of appraisals and the market for real property, an appraisal of around fifty percent of... sale of cadastral information to another government entity, the Medellín Public Enterprises (Empresas Públicas de Medellín, EPM). The cadastral
Olsen, Lars Rønn; Tongchusak, Songsak; Lin, Honghuang
Tumor T cell antigens are both diagnostically and therapeutically valuable molecules. A large number of new peptides are examined as potential tumor epitopes each year, yet there is no infrastructure for storing and accessing the results of these experiments. We have retroactively cataloged more ...
Christiansen, Christian Fynbo; Møller, Morten Hylander; Nielsen, Henrik
AIM OF DATABASE: The aim of this database is to improve the quality of care in Danish intensive care units (ICUs) by monitoring key domains of intensive care and to compare these with predefined standards. STUDY POPULATION: The Danish Intensive Care Database (DID) was established in 2007... and standardized mortality ratios for death within 30 days after admission using case-mix adjustment (initially using age, sex, and comorbidity level, and, since 2013, using SAPS II) for all patients and for patients with septic shock. DESCRIPTIVE DATA: The DID currently includes 335,564 ICU admissions during 2005...
Primate Info Net Related Databases. PrimateLit: A bibliographic database for primatology. The PrimateLit database is no longer being updated. The database is a collaborative project of the Wisconsin Primate Research Center, supported by the National Center for Research Resources (NCRR), National Institutes of Health.
Goldsmith, C. Franklin; Magoon, Gregory R.; Green, William H.
High-accuracy ab initio thermochemistry is presented for 219 small molecules relevant in combustion chemistry, including many radical, biradical, and triplet species. These values are critical for accurate kinetic modeling. The RQCISD(T)/cc-PV∞QZ//B3LYP/6-311++G(d,p) method was used to compute the electronic energies. A bond additivity correction for this method has been developed to remove systematic errors in the enthalpy calculations, using the Active Thermochemical Tables as reference values. On the basis of comparison with the benchmark data, the 3σ uncertainty in the standard-state heat of formation is 0.9 kcal/mol, or within chemical accuracy. An uncertainty analysis is presented for the entropy and heat capacity. In many cases, the present values are the most accurate and comprehensive numbers available. The present work is compared to several published databases. In some cases, there are large discrepancies and errors in published databases; the present work helps to resolve these problems. © 2012 American Chemical Society.
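The bond additivity correction described above amounts to subtracting a per-bond empirical term from the raw ab initio enthalpy so that systematic errors cancel. The correction values in this sketch are invented placeholders, not the paper's fitted BAC parameters (which are regressed against Active Thermochemical Tables benchmarks):

```python
# Hypothetical per-bond corrections (kcal/mol); illustrative numbers only.
bac_per_bond = {"C-H": 0.12, "C-O": -0.30, "O-H": 0.05}

def corrected_enthalpy(h_raw, bonds):
    """h_raw: raw computed heat of formation (kcal/mol);
    bonds: mapping of bond type -> count of that bond in the molecule.
    The total correction is a sum over bonds, hence 'additivity'."""
    correction = sum(bac_per_bond[b] * n for b, n in bonds.items())
    return h_raw - correction

# Methanol sketch: 3 C-H bonds, 1 C-O bond, 1 O-H bond (h_raw is made up).
print(corrected_enthalpy(-48.0, {"C-H": 3, "C-O": 1, "O-H": 1}))
```

Because the correction is a linear function of bond counts, a small set of fitted parameters can correct an entire family of molecules at once.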
Cort, B.; Perkins, B.; Cort, G.
The NMT-5 Criticality Database maintains criticality-related data and documentation to ensure the safety of workers handling special nuclear materials at the Plutonium Facility (TA-55) at Los Alamos National Laboratory. The database contains pertinent criticality safety limit information for more than 150 separate locations at which special nuclear materials are handled. Written in 4th Dimension for the Macintosh, it facilitates the production of signs for posting at these areas, tracks the history of postings and related authorizing documentation, and generates in Microsoft Word a current, comprehensive representation of all signs and supporting documentation, such as standard operating procedures and signature approvals. It facilitates the auditing process and is crucial to full and effective compliance with Department of Energy regulations. It has been recommended for installation throughout the Nuclear Materials Technology Division at Los Alamos
Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok
The KALIMER database is an advanced, Web-based database for the integrated management of liquid metal reactor design technology development. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under mid-term and long-term nuclear R&D. IOC is a linkage control system between sub-projects used to share and integrate the research results for KALIMER. The 3D CAD database provides a schematic overview of the KALIMER design structure. Finally, the reserved documents database was developed to manage the documents and reports produced over the course of the project.
Pratt, Renée M. E.; Smatt, Cindi T.; Wynn, Donald E.
This applied database exercise utilizes a scenario-based case study to teach the basics of Microsoft Access and database management in introduction to information systems and introduction to database courses. The case includes background information on a start-up business (i.e., Carol's Travel Club), a description of functional business requirements,…
Clack, Doris H.
Explores issues related to bibliographic database authority control, including the nature of standards, quality control, library cooperation, centralized and decentralized databases and authority control systems, and economic considerations. The implications of authority control for linking large scale databases are discussed. (18 references)…
Wang, Yin-Ying; Chen, Wei-Hua; Xiao, Pei-Pei; Xie, Wen-Bin; Luo, Qibin; Bork, Peer; Zhao, Xing-Ming
Drug resistance is becoming a serious problem that leads to the failure of standard treatments; it generally develops because of genetic mutations of certain molecules. Here, we present GEAR (a database of Genomic Elements Associated with drug Resistance) that aims to provide comprehensive information about genomic elements (including genes, single-nucleotide polymorphisms and microRNAs) that are responsible for drug resistance. Right now, GEAR contains 1631 associations between 201 human drugs and 758 genes, 106 associations between 29 human drugs and 66 miRNAs, and 44 associations between 17 human drugs and 22 SNPs. These relationships are first extracted from the primary literature with text mining and then manually curated. The drug resistome deposited in GEAR provides insights into the genetic factors underlying drug resistance. In addition, new indications and potential drug combinations can be identified based on the resistome. The GEAR database can be freely accessed through http://gear.comp-sysbio.org. PMID:28294141
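A drug-to-element association store of the kind GEAR curates can be modeled as simple typed triples; the drugs and elements below are illustrative examples, not entries taken from GEAR:

```python
from collections import defaultdict

# Toy resistome: (drug, element_type, element) triples in the spirit of
# GEAR's drug-gene/miRNA/SNP associations; entries are illustrative only.
associations = [
    ("imatinib", "gene", "ABCB1"),
    ("imatinib", "snp", "rs683369"),
    ("cisplatin", "gene", "ERCC1"),
    ("cisplatin", "mirna", "miR-21"),
]

# Index by drug so every resistance element for a drug is retrievable at once.
by_drug = defaultdict(list)
for drug, etype, element in associations:
    by_drug[drug].append((etype, element))

print(sorted(by_drug["imatinib"]))  # [('gene', 'ABCB1'), ('snp', 'rs683369')]
```

Keeping the element type in the triple lets one table hold genes, SNPs, and miRNAs together while still supporting per-type counts like those quoted in the abstract.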
O'Brien, Emmet A.; Zhang, Yue; Wang, Eric; Marie, Veronique; Badejoko, Wole; Lang, B. Franz; Burger, Gertraud
The organelle genome database GOBASE, now in its 21st release (June 2008), contains all published mitochondrion-encoded sequences (~913,000) and chloroplast-encoded sequences (~250,000) from a wide range of eukaryotic taxa. For all sequences, information on related genes, exons, introns, gene products and taxonomy is available, as well as selected genome maps and RNA secondary structures. Recent major enhancements to database functionality include: (i) addition of an interface for RNA editing...
Garmany, John; Clark, Terry
INTRODUCTION TO LOGICAL DATABASE DESIGN: Understanding a Database; Database Architectures; Relational Databases; Creating the Database; System Development Life Cycle (SDLC); Systems Planning: Assessment and Feasibility; System Analysis: Requirements; System Analysis: Requirements Checklist; Models Tracking and Schedules; Design Modeling; Functional Decomposition Diagram; Data Flow Diagrams; Data Dictionary; Logical Structures and Decision Trees; System Design: Logical. SYSTEM DESIGN AND IMPLEMENTATION: The ER Approach; Entities and Entity Types; Attribute Domains; Attributes; Set-Valued Attributes; Weak Entities; Constraint
Initially launched in 1983, the CHEMTOX Database was among the first microcomputer databases containing hazardous chemical information. The database is used in many industries and government agencies in more than 17 countries. Updated quarterly, the CHEMTOX Database provides detailed environmental and safety information on 7500-plus hazardous substances covered by dozens of regulatory and advisory sources. This brief listing describes the method of accessing data and provides ordering information for those wishing to obtain the CHEMTOX Database
Sakurai, Nozomu; Ara, Takeshi; Enomoto, Mitsuo; Motegi, Takeshi; Morishita, Yoshihiko; Kurabayashi, Atsushi; Iijima, Yoko; Ogata, Yoshiyuki; Nakajima, Daisuke; Suzuki, Hideyuki; Shibata, Daisuke
A metabolome--the collection of comprehensive quantitative data on metabolites in an organism--has been increasingly utilized for applications such as data-intensive systems biology, disease diagnostics, biomarker discovery, and assessment of food quality. A considerable number of tools and databases have been developed to date for the analysis of data generated by various combinations of chromatography and mass spectrometry. We report here a web portal named KOMICS (The Kazusa Metabolomics Portal), where the tools and databases that we developed are available for free to academic users. KOMICS includes the tools and databases for preprocessing, mining, visualization, and publication of metabolomics data. Improvements in the annotation of unknown metabolites and dissemination of comprehensive metabolomic data are the primary aims behind the development of this portal. For this purpose, PowerGet and FragmentAlign include a manual curation function for the results of metabolite feature alignments. A metadata-specific wiki-based database, Metabolonote, functions as a hub of web resources related to the submitters' work. This feature is expected to increase citation of the submitters' work, thereby promoting data publication. As an example of the practical use of KOMICS, a workflow for a study on Jatropha curcas is presented. The tools and databases available at KOMICS should contribute to enhanced production, interpretation, and utilization of metabolomic Big Data.
Bagger, Frederik Otzen; Rapin, Nicolas; Theilgaard-Mönch, Kim
as well as from more differentiated cell types. Moreover, data from distinct subtypes of human acute myeloid leukemia are included in the database, allowing researchers to directly compare gene expression of leukemic cells with those of their closest normal counterpart. Normalization and batch correction... lead to full integrity of the data in the database. The HemaExplorer has a comprehensive visualization interface that makes it useful as a daily tool for biologists and cancer researchers to assess the expression patterns of genes encountered in research or literature. HemaExplorer is relevant for all... research within the fields of leukemia, immunology, cell differentiation and the biology of the haematopoietic system.
Sanders, Roger E
In DB2 9 for Linux, UNIX, and Windows Database Administration Certification Study Guide, Roger E. Sanders, one of the world's leading DB2 authors and an active participant in the development of IBM's DB2 certification exams, covers everything a reader needs to know to pass the DB2 9 UDB DBA Certification Test (731). This comprehensive study guide steps you through all of the topics that are covered on the test, including server management, data placement, database access, analyzing DB2 activity, DB2 utilities, high availability, security, and much more. Each chapter contains an extensive set of p...
Moda, Tiago L; Torres, Leonardo G; Carrara, Alexandre E; Andricopulo, Adriano D
The study of pharmacokinetic properties (PK) is of great importance in drug discovery and development. In the present work, PK/DB (a new freely available database for PK) was designed with the aim of creating robust databases for pharmacokinetic studies and in silico absorption, distribution, metabolism and excretion (ADME) prediction. Comprehensive, web-based and easy to access, PK/DB manages 1203 compounds which represent 2973 pharmacokinetic measurements, including five models for in silico ADME prediction (human intestinal absorption, human oral bioavailability, plasma protein binding, blood-brain barrier and water solubility). http://www.pkdb.ifsc.usp.br
The engine engineering database system is a CAD-oriented applied database management system with the capability of managing distributed data. The paper discusses the security of the engine engineering database management system (EDBMS). Through studying and analyzing database security, a series of security rules are drawn up which reach the B1-level security standard and include discretionary access control (DAC), mandatory access control (MAC) and audit. The EDBMS implements functions of DAC, ...
Wolf, Mikyung Kim; Crosson, Amy C.; Resnick, Lauren B.
This study examined the quality of classroom talk and its relation to academic rigor in reading-comprehension lessons. Additionally, the study aimed to characterize effective questions to support rigorous reading comprehension lessons. The data for this study included 21 reading-comprehension lessons in several elementary and middle schools from…
Michel, L.; Nguyen, H. N.; Motch, C.
Many astronomers wish to share datasets with their community but lack the manpower to develop databases having the functionalities required for high-level scientific applications. The SAADA project aims at automating the creation and deployment process of such databases. A generic but scientifically relevant data model has been designed which allows one to build databases by providing only a limited number of product mapping rules. Databases created by SAADA rely on a relational database supporting JDBC and covered by a Java layer including a lot of generated code. Such databases can simultaneously host spectra, images, source lists and plots. Data are grouped in user-defined collections whose content can be seen as one unique set per data type even if their formats differ. Datasets can be correlated with each other using qualified links. These links help, for example, to handle the nature of a cross-identification (e.g., a distance or a likelihood) or to describe their scientific content (e.g., by associating a spectrum with a catalog entry). The SAADA query engine is based on a language well suited to the data model which can handle constraints on linked data, in addition to classical astronomical queries. These constraints can be applied to the linked objects (number, class and attributes) and/or to the link qualifier values. Databases created by SAADA are accessed through a rich web interface or a Java API. We are currently developing an interoperability module implementing VO protocols.
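The qualified-link idea above can be sketched in a few lines: the link between two records carries its own metadata, and queries can constrain that metadata rather than the records themselves. All names and fields here are illustrative Python stand-ins, not the actual SAADA Java API:

```python
# Two linked records: a catalog entry and a spectrum (hypothetical data).
catalog_entry = {"id": "src_42", "ra": 10.68, "dec": 41.27}
spectrum = {"id": "spec_7", "file": "spec_7.fits"}

# A qualified link: the qualifiers describe the cross-identification itself,
# e.g. its angular distance and likelihood.
link = {
    "from": catalog_entry["id"],
    "to": spectrum["id"],
    "qualifiers": {"distance_arcsec": 0.8, "likelihood": 0.97},
}

# A query constraint applied to the link qualifier rather than the objects:
is_good_match = link["qualifiers"]["distance_arcsec"] < 1.0
print(is_good_match)  # True
```

Storing the qualifier on the link, not on either record, is what lets the same spectrum be cross-identified with several sources at different distances.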
Wescoat, James; Fletcher, Sarah Marie; Novellino, Marianna
National drinking water programs seek to address monitoring challenges that include self-reporting, data sampling, data consistency and quality, and sufficient frequency to assess the sustainability of water systems. India stands out for its comprehensive rural water database known as Integrated Management Information System (IMIS), which conducts annual monitoring of drinking water coverage, water quality, and related program components from the habitation level to the district, state, and n...
Kollegger, James G.; And Others
In the first of three articles, the producer of Energyline, Energynet, and Tele/Scope recalls the development of the databases and database business strategies. The second describes the development of biomedical online databases, and the third discusses future developments, including full text databases, database producers as online host, and…
Petropulos, Dolores; Bittner, David; Murawski, Robert; Golden, Bert
The SmallSat has an unrealized potential in both private industry and the federal government. Currently over 70 companies, 50 universities and 17 governmental agencies are involved in SmallSat research and development. In 1994, the U.S. Army Missile and Defense mapped the moon using smallSat imagery. Since then smart phones have introduced this imagery to the people of the world as diverse industries watched this trend. The deployment cost of smallSats is also greatly reduced compared to traditional satellites due to the fact that multiple units can be deployed in a single mission. Imaging payloads have become more sophisticated, smaller and lighter. In addition, the growth of small technology obtained from private industries has led to the more widespread use of smallSats. This includes greater revisit rates in imagery, significantly lower costs, the ability to update technology more frequently and the ability to decrease vulnerability to enemy attacks. The popularity of smallSats shows a changing mentality in this fast-paced world of tomorrow. What impact has this created on the NASA communication networks now and in future years? In this project, we are developing the SmallSat Relational Database, which can support a simulation of smallSats within the NASA SCaN Compatibility Environment for Networks and Integrated Communications (SCENIC) Modeling and Simulation Lab. The NASA Space Communications and Networks (SCaN) Program can use this modeling to project required network support needs in the next 10 to 15 years. The SmallSat Relational Database could model smallSats just as the other SCaN databases model the more traditional larger satellites, with a few exceptions. One is that the smallSat database is designed to be built-to-order. The SmallSat database holds various hardware configurations that can be used to model a smallSat. It will require significant effort to develop, as the research material can only be populated by hand to obtain the unique data.
Wang, Jy-An John [ORNL]
Materials behavior caused by neutron irradiation under fission and/or fusion environments can be little understood without practical examination. An easily accessible material information system with a large material database running on effective computers is necessary for the design of nuclear materials and for analyses or simulations of the phenomena. The Embrittlement Data Base (EDB) developed at ORNL is such a comprehensive collection of data. The EDB contains power reactor pressure vessel surveillance data, material test reactor data, foreign reactor data (obtained through bilateral agreements authorized by the NRC), and fracture toughness data. The lessons learned from building the EDB program and the associated database management activity regarding material database design methodology, architecture, and the embedded QA protocol are described in this report. The development of the IAEA International Database on Reactor Pressure Vessel Materials (IDRPVM) and a comparison of the EDB and IAEA IDRPVM databases are provided in the report. The recommended database QA protocol and database infrastructure are also stated in the report.
The metagenomic data obtained from marine environments are significantly useful for understanding marine microbial communities. In comparison with the conventional amplicon-based approach to metagenomics, the recent shotgun sequencing-based approach has become a powerful tool that provides an efficient way of grasping the diversity of an entire microbial community at a sampling point in the sea. However, this approach accelerates the accumulation of metagenome data as well as an increase in data complexity. Moreover, when the metagenomic approach is used for monitoring temporal change of marine environments at multiple locations in the seawater, metagenomic data will accumulate at an enormous speed. Because this kind of situation has started becoming a reality at many marine research institutions and stations all over the world, it seems obvious that data management and analysis will be confronted by the so-called Big Data issues, such as how the database can be constructed in an efficient way and how useful knowledge should be extracted from a vast amount of data. In this review, we summarize the outline of all the major databases of marine metagenomes that are currently publicly available, noting that no database is devoted exclusively to marine metagenomes and that, unexpectedly, only six metagenome databases include marine metagenome data at all. We also extend our explanation to what we call reference databases, which will be useful for constructing a marine metagenome database as well as for complementing it with important information. Then, we point out a number of challenges to be conquered in constructing the marine metagenome database.
Matthews, Kevin M., Jr.; Crocker, Lori; Cupples, J. Scott
As manned space exploration takes on the task of traveling beyond low Earth orbit, many problems arise that must be solved in order to make the journey possible. One major task is protecting humans from the harsh space environment. The current method of protecting astronauts during Extravehicular Activity (EVA) is the specially designed Extravehicular Mobility Unit (EMU). As more rigorous EVA conditions need to be endured at new destinations, the suit will need to be tailored and improved to accommodate the astronaut. The objective of the EMU Lessons Learned Database (LLD) is to create a tool that assists in the development of next-generation EMUs, along with maintenance and improvement of the current EMU, by compiling data from Failure Investigation and Analysis Reports (FIARs), which contain information on past suit failures. FIARs use a system of codes that give more information on the aspects of each failure, but someone unfamiliar with the EMU will be unable to decipher the information. A goal of the EMU LLD is not only to compile the information but to present it in a user-friendly, organized, searchable database accessible to users at every level of familiarity with the EMU, newcomers and veterans alike. The EMU LLD originally started as an Excel database, which allowed easy navigation and analysis of the data through pivot charts. Creating an entry requires access to the Problem Reporting And Corrective Action database (PRACA), which contains the original FIAR data for all hardware. FIAR data are then transferred to, defined, and formatted in the LLD. Work is being done to create a web-based version of the LLD in order to increase accessibility across Johnson Space Center (JSC), which includes converting entries from Excel to HTML format. FIARs related to the EMU have been completed in the Excel version, and focus has now shifted to expanding FIAR data in the LLD to include EVA tools and support hardware such as
Ingeholm, Peter; Gögenur, Ismail; Iversen, Lene H
AIM OF DATABASE: The aim of the database, which has existed for registration of all patients with colorectal cancer in Denmark since 2001, is to improve the prognosis for this patient group. STUDY POPULATION: All Danish patients with newly diagnosed colorectal cancer who are either diagnosed or treated in a surgical department of a public Danish hospital. MAIN VARIABLES: Surgical, radiological, oncological, and pathological variables, including tumor type, lymph node status, surgical margin status, and other pathological risk factors. DESCRIPTIVE DATA: The database has had >95% completeness in including patients with colorectal adenocarcinoma, with >54,000 patients registered so far, approximately one-third rectal cancers and two-thirds colon cancers, and an overrepresentation of men among rectal cancer patients. The resolution of data is high for description of the patient at the time of diagnosis, surgical interventions, and short-term outcomes. The database does not have high-resolution oncological data and does not register recurrences after primary surgery. The Danish Colorectal Cancer Group provides high-quality data and has been documenting an increase in short- and long-term survival.
Jung, Sook; Staton, Margaret; Lee, Taein; Blenda, Anna; Svancara, Randall; Abbott, Albert; Main, Dorrie
The Genome Database for Rosaceae (GDR) is a central repository of curated and integrated genetics and genomics data of Rosaceae, an economically important family which includes apple, cherry, peach, pear, raspberry, rose and strawberry. GDR contains annotated databases of all publicly available Rosaceae ESTs, the genetically anchored peach physical map, Rosaceae genetic maps and comprehensively annotated markers and traits. The ESTs are assembled to produce unigene sets of each genus and the entire Rosaceae. Other annotations include putative function, microsatellites, open reading frames, single nucleotide polymorphisms, gene ontology terms and anchored map position where applicable. Most of the published Rosaceae genetic maps can be viewed and compared through CMap, the comparative map viewer. The peach physical map can be viewed using WebFPC/WebChrom, and also through our integrated GDR map viewer, which serves as a portal to the combined genetic, transcriptome and physical mapping information. ESTs, BACs, markers and traits can be queried by various categories and the search result sites are linked to the mapping visualization tools. GDR also provides online analysis tools such as a batch BLAST/FASTA server for the GDR datasets, a sequence assembly server and microsatellite and primer detection tools. GDR is available at http://www.rosaceae.org.
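The microsatellite annotation that GDR provides can be illustrated with a short sketch. The function below is an illustration only, not GDR's actual detection tool; the motif-length and repeat-count thresholds are arbitrary choices. It scans a DNA string for simple sequence repeats using a backreference regular expression.

```python
import re

def find_ssrs(seq, min_repeats=4, max_motif=6):
    """Scan a DNA sequence for simple sequence repeats (microsatellites).

    Returns (motif, repeat_count, start_position) tuples. Illustrative
    sketch only; thresholds are arbitrary, not GDR's settings.
    """
    seq = seq.upper()
    hits = []
    # A motif of 1..max_motif bases, repeated at least min_repeats times total.
    pattern = re.compile(r"([ACGT]{1,%d}?)\1{%d,}" % (max_motif, min_repeats - 1))
    for m in pattern.finditer(seq):
        motif = m.group(1)
        count = len(m.group(0)) // len(motif)
        hits.append((motif, count, m.start()))
    return hits

print(find_ssrs("TTGACACACACACATTGC"))  # a (CA)n repeat embedded in flanking sequence
```

The non-greedy motif group makes the regex prefer the shortest repeating unit, so a (CA)n run is reported as motif "AC" rather than "ACAC".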
Discusses methods of evaluating commercial online databases and provides examples that illustrate their hidden dimensions. Topics addressed include size, including the number of records or the number of titles; the number of years covered; and the frequency of updates. Comparisons of Readers' Guide Abstracts and Magazine Article Summaries are…
Redmon, Robert J; Denig, William F; Kilcommons, Liam M; Knipp, Delores J
Since the mid-1970s, the Defense Meteorological Satellite Program (DMSP) spacecraft have operated instruments for monitoring the space environment from low Earth orbit. As the program evolved, so too have the measurement capabilities, such that modern DMSP spacecraft include a comprehensive suite of instruments providing estimates of precipitating electron and ion fluxes, cold/bulk plasma composition and moments, the geomagnetic field, and optical emissions in the far and extreme ultraviolet. We describe the creation of a new public database of precipitating electrons and ions from the Special Sensor J (SSJ) instrument, complete with original counts, calibrated differential fluxes adjusted for penetrating radiation, estimates of the total kinetic energy flux and characteristic energy, uncertainty estimates, and accurate ephemerides. These are provided in a common and self-describing format that covers 30+ years of DMSP spacecraft from F06 (launched in 1982) through F18 (launched in 2009). This new database is accessible at the National Centers for Environmental Information (NCEI) and the Coordinated Data Analysis Web (CDAWeb). We describe how the new database is being applied to high-latitude studies of: the co-location of kinetic and electromagnetic energy inputs, ionospheric conductivity variability, field-aligned currents, and auroral boundary identification. We anticipate that this new database will support a broad range of space science endeavors from single observatory studies to coordinated system science investigations.
Redmon, Robert J.; Denig, William F.; Kilcommons, Liam M.; Knipp, Delores J.
Since the mid-1970s, the Defense Meteorological Satellite Program (DMSP) spacecraft have operated instruments for monitoring the space environment from low Earth orbit. As the program evolved, so have the measurement capabilities such that modern DMSP spacecraft include a comprehensive suite of instruments providing estimates of precipitating electron and ion fluxes, cold/bulk plasma composition and moments, the geomagnetic field, and optical emissions in the far and extreme ultraviolet. We describe the creation of a new public database of precipitating electrons and ions from the Special Sensor J (SSJ) instrument, complete with original counts, calibrated differential fluxes adjusted for penetrating radiation, estimates of the total kinetic energy flux and characteristic energy, uncertainty estimates, and accurate ephemerides. These are provided in a common and self-describing format that covers 30+ years of DMSP spacecraft from F06 (launched in 1982) to F18 (launched in 2009). This new database is accessible at the National Centers for Environmental Information and the Coordinated Data Analysis Web. We describe how the new database is being applied to high-latitude studies of the colocation of kinetic and electromagnetic energy inputs, ionospheric conductivity variability, field-aligned currents, and auroral boundary identification. We anticipate that this new database will support a broad range of space science endeavors from single observatory studies to coordinated system science investigations.
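The derived quantities this database provides, total kinetic energy flux and characteristic energy, follow from integrating the calibrated differential fluxes over energy; the characteristic energy is conventionally the energy flux divided by the number flux. The sketch below shows that moment calculation under simplifying assumptions: the channel energies and widths are illustrative values, not the actual SSJ channel definitions.

```python
def moments_from_spectrum(energies_eV, diff_flux, channel_widths_eV):
    """Integrate a differential number flux spectrum J(E) over energy channels.

    J(E) is in particles / (cm^2 s sr eV); returns (number_flux,
    energy_flux_eV, characteristic_energy_eV). The characteristic energy
    is the flux-weighted mean energy: energy flux / number flux.
    """
    number_flux = sum(j * w for j, w in zip(diff_flux, channel_widths_eV))
    energy_flux = sum(e * j * w
                      for e, j, w in zip(energies_eV, diff_flux, channel_widths_eV))
    e_char = energy_flux / number_flux if number_flux > 0 else 0.0
    return number_flux, energy_flux, e_char

# Hypothetical three-channel electron spectrum (not real SSJ channels)
n, ef, ec = moments_from_spectrum([100.0, 1000.0, 10000.0],
                                  [5.0e4, 1.0e4, 1.0e2],
                                  [50.0, 500.0, 5000.0])
```

A soft (low-energy-dominated) spectrum pulls the characteristic energy well below the highest channel, as the example illustrates.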
Database name: PSCDB. Classification: Structure Databases - Protein structure. Contact: Takayuki Amemiya, National Institute of Advanced Industrial Science and Technology (AIST). Database maintenance site: Graduate School of Informat[...]. Reference: [...] D554-D558. Web services: not available. User registration: not required. (LSDB Archive record)
The first edition of the Directory of IAEA Databases is intended to describe the computerized information sources available to IAEA staff members. It contains a listing of all databases produced at the IAEA, together with information on their availability
(American Indian Health Board) The Native Health Database (NHD) supports basic and advanced searching of its holdings; a tutorial video on searching the database is available.
US Agency for International Development — The E3 Staff database is maintained by the E3 PDMS (Professional Development & Management Services) office. The database is MySQL. It is manually updated by E3 staff as...
Recently, library staff arranged and compiled the original research papers written by researchers during the 33 years since the National Institute of Radiological Sciences (NIRS) was established. This paper describes how the internal database of original research papers was created. It is a small example of a hand-made database, accumulated by staff members using whatever knowledge of computers and programming they had. (author)
Turnbull, J.A.; Menut, P.; Sartori, E.
The paper describes the International Fuel Performance Experimental (IFPE) database on nuclear fuel performance. The aim of the project is to provide a comprehensive and well-qualified database on Zr-clad UO2 fuel for model development and code validation in the public domain. The data encompass both normal and off-normal operation and include prototypic commercial irradiations as well as experiments performed in material testing reactors. To date, the database contains some 380 individual cases, the majority of which provide data on fission gas release (FGR), either from in-pile pressure measurements or from PIE techniques including puncturing, electron probe microanalysis (EPMA) and X-ray fluorescence (XRF) measurements. The paper outlines parameters affecting fission gas release and highlights individual datasets addressing these issues. (authors)
Jackson, Kim G; Clarke, Dave T; Murray, Peter; Lovegrove, Julie A; O'Malley, Brendan; Minihane, Anne M; Williams, Christine M
Dysregulation of lipid and glucose metabolism in the postprandial state are recognised as important risk factors for the development of cardiovascular disease and type 2 diabetes. Our objective was to create a comprehensive, standardised database of postprandial studies to provide insights into the physiological factors that influence postprandial lipid and glucose responses. Data were collated from subjects (n = 467) taking part in single and sequential meal postprandial studies conducted by researchers at the University of Reading, to form the DISRUPT (DIetary Studies: Reading Unilever Postprandial Trials) database. Subject attributes including age, gender, genotype, menopausal status, body mass index, blood pressure and a fasting biochemical profile, together with postprandial measurements of triacylglycerol (TAG), non-esterified fatty acids, glucose, insulin and TAG-rich lipoprotein composition are recorded. A particular strength of the studies is the frequency of blood sampling, with on average 10-13 blood samples taken during each postprandial assessment, and the fact that identical test meal protocols were used in a number of studies, allowing pooling of data to increase statistical power. The DISRUPT database is the most comprehensive postprandial metabolism database that exists worldwide and preliminary analysis of the pooled sequential meal postprandial dataset has revealed both confirmatory and novel observations with respect to the impact of gender and age on the postprandial TAG response. Further analysis of the dataset using conventional statistical techniques along with integrated mathematical models and clustering analysis will provide a unique opportunity to greatly expand current knowledge of the aetiology of inter-individual variability in postprandial lipid and glucose responses.
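Postprandial responses such as the TAG curves held in DISRUPT are commonly summarized as total and incremental (baseline-corrected) areas under the curve using the trapezoidal rule, which the dense 10-13 sample profiles make straightforward. The sketch below uses hypothetical data, not values from the database.

```python
def auc_trapezoid(times_h, values):
    """Total area under a postprandial response curve (trapezoidal rule)."""
    return sum((values[i] + values[i + 1]) / 2.0 * (times_h[i + 1] - times_h[i])
               for i in range(len(times_h) - 1))

def incremental_auc(times_h, values):
    """Area under the curve above the fasting (time-zero) baseline."""
    baseline = values[0]
    return auc_trapezoid(times_h, [v - baseline for v in values])

# Hypothetical TAG (mmol/L) over an 8-h sequential-meal protocol
t = [0, 1, 2, 3, 4, 6, 8]
tag = [1.0, 1.4, 1.9, 2.2, 2.0, 1.6, 1.1]
print(auc_trapezoid(t, tag), incremental_auc(t, tag))  # units: mmol/L * h
```

The incremental AUC equals the total AUC minus baseline times the observation window, which is a useful cross-check when pooling datasets.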
Ann L Griffen
Comparing bacterial 16S rDNA sequences to GenBank and other large public databases via BLAST often provides results of little use for identification and taxonomic assignment of the organisms of interest. The human microbiome, and in particular the oral microbiome, includes many taxa, and accurate identification of sequence data is essential for studies of these communities. For this purpose, a phylogenetically curated 16S rDNA database of the core oral microbiome, CORE, was developed. The goal was to include a comprehensive and minimally redundant representation of the bacteria that regularly reside in the human oral cavity with computationally robust classification at the level of species and genus. Clades of cultivated and uncultivated taxa were formed based on sequence analyses using multiple criteria, including maximum-likelihood-based topology and bootstrap support, genetic distance, and previous naming. A number of classification inconsistencies for previously named species, especially at the level of genus, were resolved. The performance of the CORE database for identifying clinical sequences was compared to that of three publicly available databases, GenBank nr/nt, RDP and HOMD, using a set of sequencing reads that had not been used in creation of the database. CORE offered improved performance compared to other public databases for identification of human oral bacterial 16S sequences by a number of criteria. In addition, the CORE database and phylogenetic tree provide a framework for measures of community divergence, and the focused size of the database offers advantages of efficiency for BLAST searching of large datasets. The CORE database is available as a searchable interface and for download at http://microbiome.osu.edu.
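Identification against a curated reference set like CORE amounts to finding the best-matching reference for each read. As a toy illustration only (CORE itself relies on phylogenetically curated clades and BLAST, not this heuristic), the sketch below assigns a read to the reference with the highest Jaccard similarity of k-mer sets; the reference names and sequences are invented.

```python
def kmer_set(seq, k=8):
    """All distinct k-mers present in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def classify(read, references, k=8):
    """Assign a read to the reference with the highest Jaccard k-mer similarity.

    Toy nearest-reference classifier; CORE's actual assignment uses
    phylogenetically curated clades, not this heuristic.
    """
    rk = kmer_set(read, k)
    best, best_score = None, -1.0
    for name, ref_seq in references.items():
        sk = kmer_set(ref_seq, k)
        score = len(rk & sk) / len(rk | sk) if rk | sk else 0.0
        if score > best_score:
            best, best_score = name, score
    return best, best_score

# Invented toy references, far shorter than real 16S sequences
refs = {"Taxon_X": "ACGTACGTACGTACGTACGT", "Taxon_Y": "TTTTGGGGCCCCAAAATTTT"}
print(classify("ACGTACGTACGTAC", refs))
```

Real pipelines compare full-length 16S sequences with alignment-based scores; the k-mer version only conveys the nearest-reference idea.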
Gustavsson, Marcus; Seijmonsbergen, Arie C.; Kolstrup, Else
This paper presents the structure and contents of a standardised geomorphological GIS database that stores comprehensive scientific geomorphological data and constitutes the basis for processing and extracting spatial thematic data. The geodatabase contains spatial information on morphography/morphometry, hydrography, lithology, genesis, processes and age. A unique characteristic of the GIS geodatabase is that it is constructed in parallel with a new comprehensive geomorphological mapping system designed with GIS applications in mind. This close coupling enables easy digitalisation of the information from the geomorphological map into the GIS database for use in both scientific and practical applications. The selected platform, in which the geomorphological vector, raster and tabular data are stored, is the ESRI Personal geodatabase. Additional data such as an image of the original geomorphological map, DEMs or aerial orthographic images are also included in the database. The structure of the geomorphological database presented in this paper is exemplified for a study site around Liden, central Sweden.
Ensuring database stability and steady performance in the modern world of agile computing is a major challenge. Changes at any level of the computing infrastructure, such as OS parameters and packages, kernel versions, database parameters and patches, or even schema changes, can potentially harm production services. This presentation shows how automatic and regular testing of Oracle databases can be achieved in such an agile environment.
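The pattern behind such automatic regression testing is to run a fixed suite of reference queries before and after every infrastructure change and diff the results. A minimal sketch follows, using SQLite as a stand-in for Oracle; the query suite and table are hypothetical.

```python
import sqlite3

def run_suite(conn, queries):
    """Execute each named reference query and capture its full result set."""
    return {name: conn.execute(sql).fetchall() for name, sql in queries.items()}

def regression_diff(before, after):
    """Names of queries whose results changed between two test runs."""
    return [name for name in before if before[name] != after.get(name)]

# Illustrative stand-in: the presentation concerns Oracle, but the
# pattern is engine-agnostic.
queries = {"row_count": "SELECT COUNT(*) FROM t"}
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,)])
baseline = run_suite(conn, queries)
conn.execute("INSERT INTO t VALUES (3)")  # simulate an environment change
print(regression_diff(baseline, run_suite(conn, queries)))  # → ['row_count']
```

In practice the suite would also capture execution plans and timings, not just result sets, so performance regressions are caught alongside correctness regressions.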
Pels, H.J.; Lans, van der R.F.; Pels, H.J.; Meersman, R.A.
This article introduces the main concepts that play a role in databases and gives an overview of the objectives, functions, and components of database systems. Although the function of a database is intuitively fairly clear, it is nevertheless, from a technological point of view, complex
discard such data after a thorough analysis. The Data Bank also organised a comprehensive review of cross-section data. An efficient review system and associated strategy were developed to systematically compare more than 10 000 cross-section data sets from EXFOR with the corresponding values in the main evaluated nuclear data libraries, including JEFF. The review initially covered all neutron-induced threshold and activation reactions such as (n,n'), (n,2n), (n,p) and (n,α). The resulting statistical information showed various interesting trends in the data, including a list of suspicious data sets for which the cross-section values deviate greatly from the major evaluated nuclear data libraries and/or other measurements. The original publications associated with these data have also been systematically checked. This work confirmed that most of the experimental data were compiled correctly in the EXFOR database, and it identified a few compilation mistakes that have since been corrected. A second part of the review devoted to the (n,γ) cross-section is underway. This part of the review is challenging because of the large fluctuations of data in the resonance region that make the comparison more difficult. If successful, the review could be completed with other non-threshold cross-sections such as (n,f), (n,tot) and (n,n). All of these initiatives have been very useful to maintain the highest level of quality in the EXFOR database. In addition, future development versions of the JEFF library can be automatically benchmarked against other evaluated libraries and against a more reliable experimental database. Such work will contribute to improving the quality of evaluated nuclear data for the benefit of all users. (author)
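The core of such a systematic comparison is interpolating the evaluated curve at each experimental energy and flagging datasets whose deviation is large. The sketch below is a simplified stand-in for the review strategy, not the Data Bank's actual screening code; the deviation metric and tolerance are arbitrary illustrative choices.

```python
import statistics

def interp(x, xs, ys):
    """Linear interpolation of an evaluated cross-section curve (xs ascending)."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])

def flag_suspicious(datasets, eval_e, eval_xs, tol=0.30):
    """Flag datasets whose median relative deviation from the evaluated
    curve exceeds tol. Simplified illustration of the screening idea."""
    flagged = []
    for name, points in datasets.items():
        devs = [abs(xs - interp(e, eval_e, eval_xs)) / interp(e, eval_e, eval_xs)
                for e, xs in points]
        if statistics.median(devs) > tol:
            flagged.append(name)
    return flagged
```

Using the median deviation rather than the mean keeps a single outlying point from condemning an otherwise consistent dataset, which matters in the resonance region where point-to-point fluctuations are large.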
Arcot, Divya K.
Exposure to particular hazardous materials in a work environment is dangerous to the employees who work directly with or around the materials as well as those who come in contact with them indirectly. In order to maintain a national standard for safe working environments and protect worker health, the Occupational Safety and Health Administration (OSHA) has set forth numerous precautionary regulations. NASA has been proactive in adhering to these regulations by implementing standards which are often stricter than regulation limits and administering frequent health risk assessments. The primary objective of this project is to create the infrastructure for an Asbestos Exposure Assessment Database specific to NASA Johnson Space Center (JSC) which will compile all of the exposure assessment data into a well-organized, navigable format. The data include sample types, sample durations, crafts of those from whom samples were collected, Job Performance Requirement (JPR) numbers, Phase Contrast Microscopy (PCM) and Transmission Electron Microscopy (TEM) results and qualifiers, Personal Protective Equipment (PPE), and names of the industrial hygienists who performed the monitoring. This database will allow NASA to provide OSHA with specific information demonstrating that JSC's work procedures are protective enough to minimize the risk of future disease from the exposures. The data have been collected by the NASA contractors Computer Sciences Corporation (CSC) and Wyle Laboratories. The personal exposure samples were collected from devices worn by laborers working at JSC and by building occupants located in asbestos-containing buildings.
Database name: RMOS. Classification: Plant databases - Rice Microarray Data and other Gene Expression Databases. Contact: Shoshi Kikuchi, Genome Research Unit. Organism: Oryza sativa (Taxonomy ID: 4530). Description: the Ric[...]. Referenced databases: Rice Expression Database (RED), Rice full-length cDNA Database (KOME), Rice Genome Integrated Map Database (INE), Rice Mutant Panel Database (Tos17), Rice Genome Annotation Database. (LSDB Archive record)
Courtwright, Andrew M; Gabriel, Peter E
A clinical database is a repository of patient medical and sociodemographic information focused on one or more specific health conditions or exposures. Although clinical databases may be used for research purposes, their primary goal is to collect and track patient data for quality improvement, quality assurance, and/or actual clinical management. This article aims to provide an introduction and practical advice on the development of small-scale clinical databases for chest physicians and practice groups. Through example projects, we discuss the pros and cons of available technical platforms, including Microsoft Excel and Access, relational database management systems such as Oracle and PostgreSQL, and Research Electronic Data Capture. We consider approaches to deciding the base unit of data collection, creating consensus around variable definitions, and structuring routine clinical care to complement database aims. We conclude with an overview of regulatory and security considerations for clinical databases. Copyright © 2018 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
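A minimal sketch of the base-unit decision described above, using SQLite (via Python's sqlite3) as one lightweight relational option alongside the platforms the article compares: each encounter row is keyed to a patient table, and consensus-defined variables get typed columns. All table and column names here are hypothetical.

```python
import sqlite3

# One row per encounter, keyed to a patient table; the base unit of data
# collection is fixed up front, as the article recommends.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY,
    year_of_birth INTEGER,
    sex TEXT CHECK (sex IN ('F', 'M', 'other'))
);
CREATE TABLE encounter (
    encounter_id INTEGER PRIMARY KEY,
    patient_id INTEGER NOT NULL REFERENCES patient(patient_id),
    visit_date TEXT NOT NULL,          -- ISO 8601, fixed in the data dictionary
    fev1_percent_predicted REAL        -- a consensus-defined clinical variable
);
""")
conn.execute("INSERT INTO patient VALUES (1, 1960, 'F')")
conn.execute("INSERT INTO encounter VALUES (10, 1, '2024-03-01', 71.5)")
rows = conn.execute("""
    SELECT p.patient_id, COUNT(e.encounter_id)
    FROM patient p LEFT JOIN encounter e USING (patient_id)
    GROUP BY p.patient_id
""").fetchall()
print(rows)  # → [(1, 1)]
```

The CHECK constraint and the documented date format are tiny examples of enforcing variable definitions in the schema itself rather than in free-text convention.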
This is a comprehensive investigation of the influence of system factors on the utilisation of Research4Life databases. It is part of a doctoral dissertation. Research4Life databases are innovative technologies investigated here in a new context: utilisation by NARI scientists for research. The study adopted the descriptive ...
Blin, Kai; Medema, Marnix H.; Kottmann, Renzo
Secondary metabolites produced by microorganisms are the main source of bioactive compounds that are in use as antimicrobial and anticancer drugs, fungicides, herbicides and pesticides. In the last decade, the increasing availability of microbial genomes has established genome mining as a very...
Afolabi, Muhammed O; Okebe, Joseph U; McGrath, Nuala; Larson, Heidi J; Bojang, Kalifa; Chandramohan, Daniel
Previous reviews on participants' comprehension of informed consent information have focused on developed countries. Experience has shown that ethical standards developed on Western values may not be appropriate for African settings where research concepts are unfamiliar. We undertook this review to describe how informed consent comprehension is defined and measured in African research settings. We conducted a comprehensive search involving five electronic databases: Medline, Embase, Global Health, EthxWeb and Bioethics Literature Database (BELIT). We also examined African Index Medicus and Google Scholar for relevant publications on informed consent comprehension in clinical studies conducted in sub-Saharan Africa. 29 studies satisfied the inclusion criteria; meta-analysis was possible in 21 studies. We further conducted a direct comparison of participants' comprehension on domains of informed consent in all eligible studies. Comprehension of key concepts of informed consent varies considerably from country to country and depends on the nature and complexity of the study. Meta-analysis showed that 47% of a total of 1633 participants across four studies demonstrated comprehension about randomisation (95% CI 13.9-80.9%). Similarly, 48% of 3946 participants in six studies had understanding about placebo (95% CI 19.0-77.5%), while only 30% of 753 participants in five studies understood the concept of therapeutic misconception (95% CI 4.6-66.7%). Measurement tools for informed consent comprehension were developed with little or no validation. Assessment of comprehension was carried out at variable times after disclosure of study information. No uniform definition of informed consent comprehension exists to form the basis for development of an appropriate tool to measure comprehension in African participants. Comprehension of key concepts of informed consent is poor among study participants across Africa. There is a vital need to develop a uniform definition for
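Pooled estimates such as those above combine event counts across studies. The sketch below shows only the crudest pooling with a normal-approximation confidence interval; the very wide intervals reported in the review indicate substantial between-study heterogeneity, for which a random-effects model would be needed in practice. The counts are hypothetical.

```python
import math

def pooled_proportion_ci(events, totals, z=1.96):
    """Crude pooled proportion across studies with a normal-approximation CI.

    Ignores between-study heterogeneity entirely; illustrative only, not
    the meta-analytic model used in the review.
    """
    n = sum(totals)
    p = sum(events) / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical comprehension counts from four studies
p, lo, hi = pooled_proportion_ci([30, 80, 120, 50], [60, 200, 240, 100])
```

Because simple pooling treats all participants as one sample, its interval is far narrower than a random-effects interval would be when study-level proportions disagree, which is exactly the situation the review describes.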
Yu, Jeffrey Xu; Chang, Lijun
It has become highly desirable to provide users with flexible ways to query and search information over databases as simply as keyword search in Google. This book surveys recent developments in keyword search over databases and focuses on finding structural information among objects in a database using a set of keywords. The structural information to be returned can be either trees or subgraphs representing how the objects that contain the required keywords are interconnected in a relational database or in an XML database. Structural keyword search is completely different from
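A structural keyword-search answer connects the tuples that contain the keywords through foreign-key links. The sketch below is a greatly simplified stand-in for the Steiner-tree-style algorithms such books survey: it models tuples as graph nodes and returns the union of shortest paths from a candidate root tuple to one matching tuple per keyword. Node names and the graph are invented.

```python
from collections import deque

def bfs_path(graph, src, dst):
    """Shortest tuple-to-tuple path following foreign-key edges."""
    prev, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in graph[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def connect_keywords(graph, keyword_nodes, root):
    """Union of shortest paths from a root tuple to one node per keyword.

    A toy heuristic, not an optimal Steiner-tree algorithm: real systems
    score candidate roots and minimize total tree weight.
    """
    tree = set()
    for nodes in keyword_nodes.values():
        paths = [p for p in (bfs_path(graph, root, n) for n in nodes) if p]
        tree.update(min(paths, key=len))
    return tree
```

Repeating this for every plausible root and keeping the smallest resulting tree approximates the "interconnection tree" answers described above.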
Nguyen-Nielsen, Mary; Høyer, Søren; Friis, Søren
AIM OF DATABASE: The Danish Prostate Cancer Database (DAPROCAdata) is a nationwide clinical cancer database that has prospectively collected data on patients with incident prostate cancer in Denmark since February 2010. The overall aim of the DAPROCAdata is to improve the quality of prostate cancer care in Denmark by systematically collecting key clinical variables for the purposes of health care monitoring, quality improvement, and research. STUDY POPULATION: All Danish patients with histologically verified prostate cancer are included in the DAPROCAdata. MAIN VARIABLES: Variables include Gleason scores, cancer staging, prostate-specific antigen values, and therapeutic measures (active surveillance, surgery, radiotherapy, endocrine therapy, and chemotherapy). DESCRIPTIVE DATA: In total, 22,332 patients with prostate cancer were registered in DAPROCAdata as of April 2015 [...]
Errol A. Blake
Database security has evolved; data security professionals have developed numerous techniques and approaches to assure data confidentiality, integrity, and availability. This paper will show that traditional database security, which has focused primarily on creating user accounts and managing user privileges to database objects, is not enough to protect data confidentiality, integrity, and availability. This paper, a compilation of different journals, articles, and classroom discussions, will focus on unifying the process of securing data or information whether it is in use, in storage, or being transmitted. Promoting a change in database curriculum development trends may also play a role in helping secure databases. This paper takes the approach that a conscientious effort to unify the database security process, including the database management system (DBMS) selection process, following regulatory compliance requirements, analyzing and learning from the mistakes of others, implementing networking security technologies, and securing the database, may prevent database breaches.
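One concrete element of "securing the database" beyond account and privilege management is defending against SQL injection with parameterized queries. A minimal illustration follows (SQLite via Python's sqlite3; the table, user, and malicious input are all hypothetical, and this is not code from the paper).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# A classic injection payload: naive string formatting would turn the
# WHERE clause into a tautology and leak every row.
malicious = "x' OR '1'='1"

# A bound parameter keeps the input as plain data, never query logic.
rows = conn.execute("SELECT role FROM users WHERE name = ?",
                    (malicious,)).fetchall()
print(rows)  # the injection attempt matches nothing
```

The same placeholder discipline applies in any DBMS; only the placeholder syntax (`?`, `%s`, `:name`) differs by driver.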
Basic, I.; Vrbanic, I.; Zabric, I.; Savli, S.
The aspects of plant ageing management (AM) have gained increasing attention over the last ten years. Numerous technical studies have been performed to study the impact of ageing mechanisms on the safe and reliable operation of nuclear power plants. National research activities have been initiated or are in progress to provide the technical basis for decision-making processes. The long-term operation of nuclear power plants is influenced by economic considerations, the socio-economic environment including public acceptance, developments in research and the regulatory framework, the availability of technical infrastructure to maintain and service the systems, structures and components, as well as qualified personnel. Besides national activities, there are a number of international activities, in particular under the umbrella of the IAEA, the OECD and the EU. The paper discusses the process, procedure and database developed for Slovenian Nuclear Safety Administration (SNSA) surveillance of the ageing process of the Krsko Nuclear Power Plant. (author)
Alternative names: Metabolic panel - comprehensive; Chem-20; SMA20; Sequential multi-channel analysis with computer-20; SMAC20; Metabolic panel 20. Reference: Chernecky CC, Berger BJ. Comprehensive metabolic panel (CMP) - blood. In: Laboratory Tests and Diagnostic Procedures. 6th ed. St Louis, MO: [...]
Database name: RPD (Rice Proteome Database). Classification: Proteomics Resources; Plant databases - Rice. Contact: Setsuko Komatsu, Institute of Crop Science, National Agriculture and Food Research Organization. Organism: Oryza sativa (Taxonomy ID: 4530). Description: the Rice Proteome Database contains information on proteins [...] and entered in the Rice Proteome Database; the database is searchable by keyword [...]. (LSDB Archive record)
Database name: JSNP. Creator affiliation: Japan Science and Technology Agency. Organism: Homo sapiens (Taxonomy ID: 9606). Description: a database of about 197,000 polymorphisms in Japanese populations [...]. Reference: [...](1):605-610. Database maintenance site: Institute of Medical Science [...]. User registration: not required. (LSDB Archive record)
Database name: ASTRA. Classification: Nucleotide Sequence Databases - Gene structure [...]. Organisms: Arabidopsis thaliana (Taxonomy ID: 3702); Oryza sativa (Taxonomy ID: 4530). Description: the database represents classified [...]. Reference: [...](10):1211-6. Database maintenance site: National Institute of Ad[...]. User registration: not required. (LSDB Archive record)
Database name: RED (Rice Expression Database). Classification: Plant databases - Rice; Microarray, Gene Expression. Contact: Shoshi Kikuchi, Genome Research Unit. Organism: Oryza sativa (Taxonomy ID: 4530). Reference: "Rice Expression Database: the gateway to rice functional genomics", Trends in Plant Science (2002) Dec; 7(12):563-564. (LSDB Archive record)
Database name: PLACE (A Database of Plant Cis-acting Regulatory DNA Elements). Classification: Plant databases. Contact: National Institute of Agrobiological Sciences, Kannondai, Tsukuba, Ibaraki 305-8602, Japan. Organism: Tracheophyta (Taxonomy ID: 58023). Reference: Nucleic Acids Res. 1999, Vol. 27, No. 1:297-300. User registration: not required. (LSDB Archive record)
Ingeholm, Peter; Gögenur, Ismail; Iversen, Lene H
The aim of the database, which has existed for registration of all patients with colorectal cancer in Denmark since 2001, is to improve the prognosis for this patient group. All Danish patients with newly diagnosed colorectal cancer who are either diagnosed or treated in a surgical department of a public Danish hospital are included. The database comprises an array of surgical, radiological, oncological, and pathological variables. The surgeons record data such as diagnostics performed, including type and results of radiological examinations, lifestyle factors, comorbidity and performance, treatment including the surgical procedure, urgency of surgery, and intra- and postoperative complications within 30 days after surgery. The pathologists record data such as tumor type, number of lymph nodes and metastatic lymph nodes, surgical margin status, and other pathological risk factors. The database has had >95% completeness in including patients with colorectal adenocarcinoma, with >54,000 patients registered so far, approximately one-third rectal cancers and two-thirds colon cancers, and an overrepresentation of men among rectal cancer patients. The stage distribution has been more or less constant until 2014, with a tendency toward a lower rate of stage IV and a higher rate of stage I after introduction of the national screening program in 2014. The 30-day mortality rate after elective surgery has been reduced from >7% in 2001-2003 to [...]. The database is a national population-based clinical database with high patient and data completeness for the perioperative period. The resolution of data is high for description of the patient at the time of diagnosis, including comorbidities, and for characterizing diagnosis, surgical interventions, and short-term outcomes. The database does not have high-resolution oncological data and does not register recurrences after primary surgery. The Danish Colorectal Cancer Group provides high-quality data and has been documenting an increase in short- and long-term survival.
Full Text Available Arabidopsis Phenome Database: Database Description. Database classification: Plant databases - Arabidopsis thaliana. Organism: Arabidopsis thaliana (Taxonomy ID: 3702). Maintainer: BioResource Center, Hiroshi Masuya. Database description: the Arabidopsis thaliana phenome ... their effective application. We developed the new Arabidopsis Phenome Database integrating two novel databases ... useful materials for their experimental research. The other, the "Database of Curated Plant Phenome", focusing...
Siesling, Sabine; van der Zwan, Jan Maarten; Izarzugaza, Isabel; Jaal, Jana; Treasure, Tom; Foschi, Roberto; Ricardi, Umberto; Groen, Harry; Tavilla, Andrea; Ardanaz, Eva
Rare thoracic cancers include those of the trachea, thymus and mesothelioma (including peritoneum mesothelioma). The aim of this study was to describe the incidence, prevalence and survival of rare thoracic tumours using a large database, which includes cancer patients diagnosed from 1978 to 2002,
Ion beam analysis techniques are non-destructive analytical techniques used to identify the composition and provide elemental depth profiles in surface layers of materials. The applications of such techniques are diverse and include environmental control, cultural heritage and conservation, and fusion technologies. Their reliability and accuracy depend strongly on our knowledge of the nuclear reaction cross sections, and this publication describes the coordinated effort to measure, compile, and evaluate cross section data relevant to these techniques and make these data available to the user community through a comprehensive online database. It includes detailed assessments of experimental cross sections as well as attempts to benchmark these data against appropriate integral measurements.
Sofu, T.; Ley, H.; Turski, R.B.
As an integral part of DOE's International Nuclear Safety Center (INSC) at Argonne National Laboratory, the INSC Database has been established to provide an interactively accessible information resource for the world's nuclear facilities and to promote free and open exchange of nuclear safety information among nations. The INSC Database is a comprehensive resource database aimed at a scope and level of detail suitable for safety analysis and risk evaluation for the world's nuclear power plants and facilities. It also provides an electronic forum for international collaborative safety research for the Department of Energy and its international partners. The database is intended to provide plant design information, material properties, computational tools, and results of safety analysis. Initial emphasis in data gathering is given to Soviet-designed reactors in Russia, the former Soviet Union, and Eastern Europe. The implementation is performed under the Oracle database management system, and the World Wide Web serves as the access path for remote users. An interface between the Oracle database and the Web server is established through a custom-designed Web-Oracle gateway, which is used mainly to perform queries on the stored data in the database tables.
Revised by Sloan, Jan; Henry, Christopher D.; Hopkins, Melanie; Ludington, Steve; Original database by Zartman, Robert E.; Bush, Charles A.; Abston, Carl
The National Geochronological Data Base (NGDB) was established by the United States Geological Survey (USGS) to collect and organize published isotopic (also known as radiometric) ages of rocks in the United States. The NGDB (originally known as the Radioactive Age Data Base, RADB) was started in 1974. A committee appointed by the Director of the USGS was given the mission to investigate the feasibility of compiling the published radiometric ages for the United States into a computerized data bank for ready access by the user community. A successful pilot program, which was conducted in 1975 and 1976 for the State of Wyoming, led to a decision to proceed with the compilation of the entire United States. For each dated rock sample reported in published literature, a record containing information on sample location, rock description, analytical data, age, interpretation, and literature citation was constructed and included in the NGDB. The NGDB was originally constructed and maintained on a mainframe computer, and later converted to a Helix Express relational database maintained on an Apple Macintosh desktop computer. The NGDB and a program to search the data files were published and distributed on Compact Disc-Read Only Memory (CD-ROM) in standard ISO 9660 format as USGS Digital Data Series DDS-14 (Zartman and others, 1995). As of May 1994, the NGDB consisted of more than 18,000 records containing over 30,000 individual ages, which is believed to represent approximately one-half the number of ages published for the United States through 1991. Because the organizational unit responsible for maintaining the database was abolished in 1996, and because we wanted to provide the data in more usable formats, we have reformatted the data, checked and edited the information in some records, and provided this online version of the NGDB. This report describes the changes made to the data and formats, and provides instructions for the use of the database in geographic
Grams, W H
The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: Data from t...
Full Text Available Almost every organization has a database at its centre. The database supports different activities, whether production, sales and marketing, or internal operations. Every day, a database is consulted to support strategic decisions. Satisfying such needs therefore requires high-quality security and availability. These needs can be met using a DBMS (Database Management System), which is, in fact, the software that manages a database. Technically speaking, it is software which uses a standard method of cataloguing, recovering, and running different data queries. A DBMS manages the input data, organizes it, and provides ways of modifying or extracting the data for its users or other programs. Managing a database is an operation that requires periodic updates, optimization, and monitoring.
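The cataloguing, extraction, and modification operations a DBMS provides can be sketched with Python's built-in sqlite3 module; the table and data below are illustrative, not from any particular system:

```python
import sqlite3

# An in-memory database stands in for the organization's DBMS.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Cataloguing: the DBMS records the schema of each relation.
cur.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, product TEXT, amount REAL)")

# Input and organization: rows are inserted through the DBMS.
cur.executemany("INSERT INTO sales (product, amount) VALUES (?, ?)",
                [("widget", 120.0), ("gadget", 75.5), ("widget", 60.0)])
conn.commit()

# Extraction: a query of the kind that supports a strategic decision.
cur.execute("SELECT product, SUM(amount) FROM sales GROUP BY product ORDER BY product")
totals = cur.fetchall()

# Modification: periodic maintenance of the stored data.
cur.execute("UPDATE sales SET amount = amount * 1.1 WHERE product = 'widget'")
conn.commit()
```

The same pattern (declare schema, insert, query, update) applies to any relational DBMS; only the connection call changes.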
Hirakawa, Hideki; Mun, Terry; Sato, Shusei
Since the genome sequence of Lotus japonicus, a model plant of the family Fabaceae, was determined in 2008 (Sato et al. 2008), the genomes of other members of the Fabaceae family, soybean (Glycine max) (Schmutz et al. 2010) and Medicago truncatula (Young et al. 2011), have been sequenced. In this section, we introduce representative, publicly accessible online resources related to plant materials, integrated databases containing legume genome information, and databases for genome sequence and derived marker information of legume species including L. japonicus.
The interaction of database and AI technologies is crucial to such applications as data mining, active databases, and knowledge-based expert systems. This volume collects the primary readings on the interactions, actual and potential, between these two fields. The editors have chosen articles to balance significant early research and the best and most comprehensive articles from the 1980s. An in-depth introduction discusses basic research motivations, giving a survey of the history, concepts, and terminology of the interaction. Major themes, approaches and results, open issues and future
Full Text Available Human immunodeficiency virus (HIV) is responsible for millions of deaths every year. The current treatment involves the use of multiple antiretroviral agents that may harm patients due to their toxic nature. RNA interference (RNAi), a potent candidate for the future treatment of HIV, uses short interfering RNA (siRNA/shRNA) for silencing HIV genes. In this study, attempts have been made to create a database, HIVsirDB, of siRNAs responsible for silencing HIV genes. HIVsirDB is a manually curated database of HIV-inhibiting siRNAs that provides comprehensive information about each siRNA or shRNA. Information was collected and compiled from literature and public resources. This database contains around 750 siRNAs, including 75 partially complementary siRNAs differing by one or more bases from the target sites and over 100 escape mutant sequences. The HIVsirDB structure contains sixteen fields including siRNA sequence, HIV strain, targeted genome region, efficacy, and conservation of target sequences. To facilitate users, many tools have been integrated into this database, including (i) siRNAmap for mapping siRNAs on a target sequence, (ii) HIVsirblast for BLAST searches against the database, and (iii) siRNAalign for aligning siRNAs. HIVsirDB is a freely accessible database of siRNAs which can silence or degrade HIV genes. It covers 26 types of HIV strains and 28 cell types. This database will be very useful for developing models for predicting the efficacy of HIV-inhibiting siRNAs. In summary, this is a useful resource for researchers working in the field of siRNA-based HIV therapy. The HIVsirDB database is accessible at http://crdd.osdd.net/raghava/hivsir/.
van den Heever, Marc; Mittal, Anubhav; Haydock, Matthew; Windsor, John
Acute pancreatitis (AP) is a complex disease with multiple aetiological factors, wide-ranging severity, and multiple challenges to effective triage and management. Databases, data mining, and machine learning algorithms (MLAs), including artificial neural networks (ANNs), may assist by storing and interpreting data from multiple sources, potentially improving clinical decision-making. The aims were to 1) identify database technologies used to store AP data, 2) collate and categorise variables stored in AP databases, 3) identify the MLA technologies, including ANNs, used to analyse AP data, and 4) identify clinical and non-clinical benefits and obstacles in establishing a national or international AP database. A comprehensive systematic search of online reference databases was performed. The predetermined inclusion criteria were all papers discussing 1) databases, 2) data mining, or 3) MLAs, pertaining to AP, independently assessed by two reviewers with conflicts resolved by a third author. Forty-three papers were included. Three data mining technologies and five ANN methodologies were reported in the literature. There were 187 collected variables identified. ANNs increase the accuracy of severity prediction: one study showed ANNs had a sensitivity of 0.89 and specificity of 0.96 six hours after admission, compared with 0.80 and 0.85, respectively, for APACHE II (cutoff score ≥8). Problems with databases included incomplete or missing clinical data and diagnostic reliability. This is the first systematic review examining the use of databases, MLAs and ANNs in the management of AP. The clinical benefits these technologies have over current systems and other advantages to adopting them are identified. Copyright © 2013 IAP and EPC. Published by Elsevier B.V. All rights reserved.
Juntunen, R. (Risto)
Abstract In a distributed database, data is spread throughout the network into separate nodes with different DBMS systems (Date, 2000). According to the CAP theorem, the three database properties of consistency, availability, and partition tolerance cannot all be achieved simultaneously in distributed database systems. Two of these properties can be achieved, but not all three at the same time (Brewer, 2000). Since this theorem there has b...
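The CAP trade-off can be illustrated with a toy two-replica key-value store; the "CP" and "AP" modes below are deliberate simplifications of how real systems behave under a network partition, and all names are hypothetical:

```python
# Two replicas of a key-value store separated by a network partition.
# Under partition, a CP system refuses writes it cannot replicate
# (sacrificing availability); an AP system accepts them and lets
# the replicas diverge until the partition heals (sacrificing
# consistency).

class Replica:
    def __init__(self):
        self.data = {}

def write(primary, secondary, key, value, partitioned, mode):
    if mode == "CP":
        if partitioned:
            return False            # cannot replicate: reject the write
        primary.data[key] = value   # replicate synchronously
        secondary.data[key] = value
        return True
    else:  # "AP"
        primary.data[key] = value   # always accept the write
        if not partitioned:
            secondary.data[key] = value
        return True                 # replicas may now disagree

a, b = Replica(), Replica()
cp_ok = write(a, b, "x", 1, partitioned=True, mode="CP")  # rejected
ap_ok = write(a, b, "x", 1, partitioned=True, mode="AP")  # accepted; b is stale
```

After the AP write, replica `a` holds the value while `b` does not, which is exactly the inconsistency the theorem says must be tolerated if writes stay available during a partition.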
Full Text Available Kristian Antonsen,1 Charlotte Vallentin Rosenstock,2 Lars Hyldborg Lundstrøm2 1Board of Directors, Copenhagen University Hospital, Bispebjerg and Frederiksberg Hospital, Capital Region of Denmark, Denmark; 2Department of Anesthesiology, Copenhagen University Hospital, Nordsjællands Hospital-Hillerød, Capital Region of Denmark, Denmark Aim of database: The aim of the Danish Anaesthesia Database (DAD) is the nationwide collection of data on all patients undergoing anesthesia. Collected data are used for quality assurance and quality development, and serve as a basis for research projects. Study population: The DAD was founded in 2004 as a part of the Danish Clinical Registries (Regionernes Kliniske Kvalitetsudviklings Program [RKKP]). Patients undergoing general anesthesia, regional anesthesia with or without combined general anesthesia, as well as patients under sedation are registered. Data are retrieved from public and private anesthesia clinics, single centers as well as multihospital corporations across Denmark. In 2014 a total of 278,679 unique entries representing a national coverage of ~70% were recorded, and data completeness is steadily increasing. Main variables: Records are aggregated for determining 13 defined quality indicators and eleven defined complications, all covering the anesthetic process from the preoperative assessment through anesthesia and surgery until the end of the postoperative recovery period. Descriptive data: Registered variables include patients' individual social security numbers (assigned to all Danes), direct patient-related lifestyle factors enabling a quantification of patients' comorbidity, and variables that are strictly related to the type, duration, and safety of the anesthesia. Data and specific data combinations can be extracted within each department in order to monitor patient treatment. In addition, an annual DAD report serves as a benchmark for departments nationwide. Conclusion: The DAD is covering the...
Bagger, Frederik Otzen; Rapin, Nicolas; Theilgaard-Mönch, Kim
The HemaExplorer (http://servers.binf.ku.dk/hemaexplorer) is a curated database of processed mRNA gene expression profiles (GEPs) that provides an easy display of gene expression in haematopoietic cells. HemaExplorer contains GEPs derived from mouse/human haematopoietic stem and progenitor cells as well as from more differentiated cell types. Moreover, data from distinct subtypes of human acute myeloid leukemia are included in the database, allowing researchers to directly compare gene expression of leukemic cells with those of their closest normal counterpart. Normalization and batch correction lead to full integrity of the data in the database. The HemaExplorer has a comprehensive visualization interface that can make it useful as a daily tool for biologists and cancer researchers to assess the expression patterns of genes encountered in research or literature. HemaExplorer is relevant for all...
Laborte, Alice G; Gutierrez, Mary Anne; Balanza, Jane Girly; Saito, Kazuki; Zwart, Sander J; Boschetti, Mirco; Murty, M V R; Villano, Lorena; Aunario, Jorrel Khalil; Reinke, Russell; Koo, Jawoo; Hijmans, Robert J; Nelson, Andrew
Knowing where, when, and how much rice is planted and harvested is crucial information for understanding the effects of policy, trade, and global and technological change on food security. We developed RiceAtlas, a spatial database on the seasonal distribution of the world's rice production. It consists of data on rice planting and harvesting dates by growing season and estimates of monthly production for all rice-producing countries. Sources used for planting and harvesting dates include global and regional databases, national publications, online reports, and expert knowledge. Monthly production data were estimated based on annual or seasonal production statistics, and planting and harvesting dates. RiceAtlas has 2,725 spatial units. Compared with available global crop calendars, RiceAtlas is nearly ten times more spatially detailed and has nearly seven times more spatial units, with at least two seasons of calendar data, making RiceAtlas the most comprehensive and detailed spatial database on rice calendar and production.
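The estimation step, deriving monthly production from annual statistics plus planting and harvesting dates, can be sketched as follows; the uniform split across the harvest window and the figures used are illustrative assumptions, not RiceAtlas's actual methodology:

```python
# Spread a season's annual production evenly over its harvesting
# months, a simplified stand-in for the RiceAtlas estimation step.
# The method and the numbers below are illustrative only.

def monthly_production(annual_tonnes, harvest_start, harvest_end):
    """harvest_start/harvest_end are month numbers 1-12, inclusive,
    possibly wrapping past December (e.g. 11 -> 2)."""
    if harvest_end >= harvest_start:
        months = list(range(harvest_start, harvest_end + 1))
    else:  # harvest season wraps around the new year
        months = list(range(harvest_start, 13)) + list(range(1, harvest_end + 1))
    share = annual_tonnes / len(months)
    return {m: share for m in months}

# A hypothetical spatial unit: 300,000 t harvested November through February.
est = monthly_production(300_000, 11, 2)
```

A real calendar database would refine this with season-specific statistics and non-uniform harvest intensity, but the wrap-around month arithmetic is the core of mapping seasonal calendars onto a monthly grid.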
Full Text Available SAHG: Database Description. Database classification: Structure Databases; Protein properties. Organism: Homo sapiens (Taxonomy ID: 9606). Contact: Chie Motono (Tel: +81-3-3599-8067). Database maintenance site: The Molecular Profiling Research Center for D... User registration: not available. (LSDB Archive)
Department of Transportation — The Intermodal Passenger Connectivity Database (IPCD) is a nationwide data table of passenger transportation terminals, with data on the availability of connections...
Department of Veterans Affairs — The Residency Allocation Database is used to determine allocation of funds for residency programs offered by Veterans Affairs Medical Centers (VAMCs). Information...
U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census block...
The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.
National Oceanic and Atmospheric Administration, Department of Commerce — This database contains trip-level reports submitted by vessels participating in Exempted Fishery projects with IVR reporting requirements.
Bonnet, Philippe; Gehrke, Johannes; Seshadri, Praveen
These systems lack flexibility because data is extracted in a predefined way; also, they do not scale to a large number of devices because large volumes of raw data are transferred regardless of the queries that are submitted. In our new concept of a sensor database system, queries dictate which data is extracted from the sensors. In this paper, we define the concept of sensor databases mixing stored data represented as relations and sensor data represented as time series. Each long-running query formulated over a sensor database defines a persistent view, which is maintained during a given time interval. We also describe the design and implementation of the COUGAR sensor database system.
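The persistent-view idea, a long-running query maintained incrementally as readings arrive rather than shipped as raw data, can be sketched as follows; the schema, query predicate, and readings are hypothetical, not COUGAR's actual interface:

```python
# A long-running query over a sensor database defines a persistent
# view: a stored relation describes the sensors, while incoming
# time-series readings incrementally update the view during the
# query's time interval.

# Stored relation: static sensor metadata.
sensors = {1: {"location": "north"}, 2: {"location": "south"}}

# Persistent view: running maximum temperature per qualifying sensor.
view = {}

def on_reading(sensor_id, temperature):
    """Maintain the view incrementally as each reading arrives,
    instead of transferring all raw data to a central server."""
    if sensors[sensor_id]["location"] == "north":  # the query's predicate
        cur = view.get(sensor_id)
        view[sensor_id] = temperature if cur is None else max(cur, temperature)

# Simulated time series of (sensor_id, temperature) readings.
for sid, temp in [(1, 18.5), (2, 30.0), (1, 21.0), (1, 19.2)]:
    on_reading(sid, temp)
```

Only the view state is kept, so readings from sensors outside the predicate (here, sensor 2) never leave the network edge, which is precisely the scaling argument the paper makes.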
Bernstein, P.A.; DeWitt, D.; Heuer, A.
There has been a growing interest in improving the publication processes for database research papers. This panel reports on recent changes in those processes and presents an initial cut at historical data for the VLDB Journal and ACM Transactions on Database Systems.
Nazarova, K.; Wasilewski, P.; Didenko, A.; Genshaft, Y.; Pashkevich, I.
A Magnetic Petrology Database (MPDB) is now being compiled at NASA/Goddard Space Flight Center in cooperation with Russian and Ukrainian institutions. The purpose of this database is to provide the geomagnetic community with a comprehensive and user-friendly method of accessing magnetic petrology data via the Internet for more realistic interpretation of satellite magnetic anomalies. Magnetic petrology data had been accumulated at NASA/Goddard Space Flight Center, the United Institute of Physics of the Earth (Russia), and the Institute of Geophysics (Ukraine) over several decades and now consist of many thousands of records in our archives. The MPDB was, and continues to be, in big demand, especially since the recent launch into near-Earth orbit of the mini-constellation of three satellites: Oersted (1999), Champ (2000), and SAC-C (2000), which will provide lithospheric magnetic maps with better spatial and amplitude resolution (about 1 nT). The MPDB is focused on lower crustal and upper mantle rocks and will include data on mantle xenoliths, serpentinized ultramafic rocks, granulites, iron quartzites, and rocks from Archean-Proterozoic metamorphic sequences from all around the world. A substantial amount of data is coming from the area of the unique Kursk Magnetic Anomaly and the Kola Deep Borehole (which recovered 12 km of continental crust). A prototype MPDB can be found on the Geodynamics Branch web server of Goddard Space Flight Center at http://core2.gsfc.nasa.gov/terr_mag/magnpetr.html. The MPDB employs a searchable relational design and consists of 7 interrelated tables. The schema of the database is shown at http://core2.gsfc.nasa.gov/terr_mag/doc.html. The MySQL database server was utilized to implement the MPDB. SQL (Structured Query Language) is used to query the database. To present the results of queries on the Web and for Web programming, we utilized the PHP scripting language and CGI scripts. The prototype MPDB is designed to search the database by major satellite magnetic
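A query against a searchable relational design of interrelated tables, such as the MPDB's, can be sketched as follows; Python's sqlite3 stands in for MySQL here, and the table names, columns, and values are hypothetical, not the MPDB's actual schema:

```python
import sqlite3

# Two interrelated tables sketching a relational petrology design.
# All names and numbers are illustrative; sqlite3 substitutes for MySQL.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE rock_type (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE sample (
    id INTEGER PRIMARY KEY,
    rock_type_id INTEGER REFERENCES rock_type(id),
    susceptibility REAL   -- magnetic susceptibility (illustrative values)
);
INSERT INTO rock_type VALUES (1, 'granulite'), (2, 'iron quartzite');
INSERT INTO sample VALUES (10, 1, 0.002), (11, 2, 0.150), (12, 2, 0.090);
""")

# The kind of SQL a web front end would issue through the gateway.
cur.execute("""
    SELECT rt.name, AVG(s.susceptibility)
    FROM sample s JOIN rock_type rt ON s.rock_type_id = rt.id
    GROUP BY rt.name ORDER BY rt.name
""")
rows = cur.fetchall()
```

The join-and-aggregate shape is the essential operation a 7-table relational schema enables: properties stored once per rock type are combined with many per-sample measurements at query time.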
Barg, M. I.; Stobie, E. B.; Ferro, A. J.; O'Neil, E. J.
In the spring of 2000, at the request of the ADASS Program Organizing Committee (POC), we began organizing information from previous ADASS conferences in an effort to create a centralized database. The beginnings of this database originated from data (invited speakers, participants, papers, etc.) extracted from HyperText Markup Language (HTML) documents from past ADASS host sites. Unfortunately, not all HTML documents are well formed and parsing them proved to be an iterative process. It was evident at the beginning that if these Web documents were organized in a standardized way, such as XML (Extensible Markup Language), the processing of this information across the Web could be automated, more efficient, and less error prone. This paper will briefly review the many programming tools available for processing XML, including Java, Perl and Python, and will explore the mapping of relational data from our MySQL database to XML.
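The mapping of relational data to XML described above can be sketched with Python's standard library; the table, element names, and rows are illustrative, not the actual ADASS database schema:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Map rows from a relational table to well-formed XML, the kind of
# standardized representation that makes automated Web processing
# feasible. Schema and content are hypothetical examples.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE paper (id INTEGER, title TEXT, speaker TEXT)")
cur.executemany("INSERT INTO paper VALUES (?, ?, ?)",
                [(1, "Archiving ADASS", "A. Smith"),
                 (2, "XML Pipelines", "B. Jones")])

root = ET.Element("conference")
for pid, title, speaker in cur.execute("SELECT id, title, speaker FROM paper"):
    p = ET.SubElement(root, "paper", id=str(pid))   # row -> element
    ET.SubElement(p, "title").text = title          # column -> child element
    ET.SubElement(p, "speaker").text = speaker

xml_doc = ET.tostring(root, encoding="unicode")
```

Because the output is well-formed by construction, downstream consumers can parse it mechanically, avoiding exactly the iterative hand-repair that malformed HTML required.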
Helgstrand, Frederik; Jorgensen, Lars Nannestad
Aim: The Danish Ventral Hernia Database (DVHD) provides national surveillance of current surgical practice and clinical postoperative outcomes. The intention is to reduce postoperative morbidity and hernia recurrence, evaluate new treatment strategies, and facilitate nationwide implementation of ... Data related to the surgical repair are recorded, and data registration is mandatory. Data may be merged with other Danish health registries and information from patient questionnaires or clinical examinations. Descriptive data: More than 37,000 operations have been registered. Data have demonstrated high agreement with patient ... The database covers a high proportion of operations and is an excellent tool for observing changes over time, including adjustment for several confounders. This national database registry has impacted clinical practice in Denmark and led to a high number of scientific publications in recent years.
Full Text Available RMG: Database Description. Database classification: Nucleotide Sequence Databases. Organism: Oryza sativa Japonica Group (Taxonomy ID: 39947). Contact: National Institute of Agrobiological Sciences, Ibaraki 305-8602, Japan. Reference journal: Mol Genet Genomics (2002) 268: 434-445. User registration: not available. (LSDB Archive)
Full Text Available KOME: Database Description. Database classification: Plant databases - Rice. Organism: Oryza sativa (Taxonomy ID: 4530). Contact: Shoshi Kikuchi, Plant Genome Research Unit. Database description: information about approximately ... Reference: Hayashizaki Y, Kikuchi S. PLoS One. 2007 Nov 28; 2(11):e1235. Related resources: rice mutant panel database (Tos17); A Database of Plant Cis-acting Regulatory ... (LSDB Archive)
Full Text Available Arabidopsis Phenome Database: Update History. 2017/02/27: English archive site opened. Arabidopsis Phenome Database (http://jphenome.info/?page_id=95) opened. (LSDB Archive)
Full Text Available SKIP Stemcell Database: Update History. 2017/03/13: English archive site opened. 2013/03/29: SKIP Stemcell Database (https://www.skip.med.keio.ac.jp/SKIPSearch/top?lang=en) opened. (LSDB Archive)
Full Text Available Yeast Interacting Proteins Database: Update History. 2010/03/29: English archive site opened. 2000/12/4: Yeast Interacting Proteins Database (http://itolab.cb.k.u-tokyo.ac.jp/Y2H/) released. (LSDB Archive)
Laplante, Philip A
The Comprehensive Dictionary of Electrical Engineering is a complete lexicon covering all the fields of electrical engineering. Areas examined include: applied electrical engineering, microwave engineering, control engineering, power engineering, digital systems engineering, device electronics, and much more! The book provides workable definitions for practicing engineers, serves as a reference and research tool for students, and offers practical information for scientists and engineers in other disciplines.
Hubbard, T; Barker, D; Birney, E; Cameron, G; Chen, Y; Clark, L; Cox, T; Cuff, J; Curwen, V; Down, T; Durbin, R; Eyras, E; Gilbert, J; Hammond, M; Huminiecki, L; Kasprzyk, A; Lehvaslaiho, H; Lijnzaad, P; Melsopp, C; Mongin, E; Pettett, R; Pocock, M; Potter, S; Rust, A; Schmidt, E; Searle, S; Slater, G; Smith, J; Spooner, W; Stabenau, A; Stalker, J; Stupka, E; Ureta-Vidal, A; Vastrik, I; Clamp, M
The Ensembl (http://www.ensembl.org/) database project provides a bioinformatics framework to organise biology around the sequences of large genomes. It is a comprehensive source of stable automatic annotation of the human genome sequence, with confirmed gene predictions that have been integrated with external data sources, and is available as either an interactive web site or as flat files. It is also an open source software engineering project to develop a portable system able to handle very large genomes and associated requirements from sequence analysis to data storage and visualisation. The Ensembl site is one of the leading sources of human genome sequence annotation and provided much of the analysis for publication by the international human genome project of the draft genome. The Ensembl system is being installed around the world in both companies and academic sites on machines ranging from supercomputers to laptops.
Rasmussen, Filip Anselm; Thygesen, Kristian Sommer
We present a comprehensive first-principles study of the electronic structure of 51 semiconducting monolayer transition-metal dichalcogenides and -oxides in the 2H and 1T hexagonal phases. The quasiparticle (QP) band structures with spin-orbit coupling are calculated in the G(0)W(0) approximation and used as input to a 2D hydrogenic model to estimate exciton binding energies, and comparison is made with different density functional theory descriptions. Pitfalls related to the convergence of GW calculations for two-dimensional (2D) materials are discussed together with possible solutions. The monolayer band edge positions relative to vacuum are used to estimate the band alignment. Throughout the paper we focus on trends and correlations in the electronic structure rather than detailed analysis of specific materials. All the computed data is available in an open database.
Full Text Available Trypanosomes Database: Download. First of all, please read the license of this database. Simple search and download available, or download via FTP (the FTP server is sometimes jammed). (LSDB Archive)
The bachelor thesis deals with the creation of a database design for a standard kindergarten, the installation of the designed database into the database system Oracle Database 10g Express Edition, and a demonstration of administration tasks in this database system. The database was verified by means of a purpose-built access application.
Full Text Available Abstract Background The genus Pseudomonas comprises more than 100 species of environmental, clinical, agricultural, and biotechnological interest. Although the recommended method for discriminating bacterial species is DNA-DNA hybridisation, alternative techniques based on multigenic sequence analysis are becoming common practice in bacterial species discrimination studies. Since there is no general criterion for determining which genes are more useful for species resolution, the number of strains and genes analysed is increasing continuously. As a result, sequences of different genes are dispersed throughout several databases. This sequence information needs to be collected in a common database, in order to be useful for future identification-based projects. Description The PseudoMLSA Database is a comprehensive database of multiple gene sequences from strains of Pseudomonas species. The core of the database is composed of selected gene sequences from all Pseudomonas type strains validly assigned to the genus through 2008. The database is aimed to be useful for MultiLocus Sequence Analysis (MLSA) procedures, for the identification and characterisation of any Pseudomonas bacterial isolate. The sequences are available for download via a direct connection to the National Center for Biotechnology Information (NCBI). Additionally, the database includes an online BLAST interface for flexible nucleotide queries and similarity searches with the user's datasets, and provides a user-friendly output for easily parsing, navigating, and analysing BLAST results. Conclusions The PseudoMLSA Database amasses strain and sequence information of validly described Pseudomonas species, and allows free querying of the database via a user-friendly, web-based interface available at http://www.uib.es/microbiologiaBD/Welcome.html. The web-based platform enables easy retrieval at strain or gene sequence information level, including references to published peer
Yang, Yaohua; Feng, Jie; Li, Tao; Ge, Feng; Zhao, Jindong
Cyanobacteria are an important group of organisms that carry out oxygenic photosynthesis and play vital roles in both the carbon and nitrogen cycles of the Earth. The annotated genome of Synechococcus sp. PCC 7002, an ideal model cyanobacterium, is available. A series of transcriptomic and proteomic studies of Synechococcus sp. PCC 7002 cells grown under different conditions have been reported. However, no database of such integrated omics studies has been constructed. Here we present CyanOmics, a database based on the results of Synechococcus sp. PCC 7002 omics studies. CyanOmics comprises one genomic dataset, 29 transcriptomic datasets and one proteomic dataset and should prove useful for systematic and comprehensive analysis of all those data. Powerful browsing and searching tools are integrated to help users directly access information of interest with enhanced visualization of the analytical results. Furthermore, BLAST is included for sequence-based similarity searching, and Cluster 3.0, as well as the R hclust function, is provided for cluster analyses, to increase CyanOmics's usefulness. To the best of our knowledge, it is the first integrated omics analysis database for cyanobacteria. This database should further the understanding of the transcriptional patterns and proteomic profiling of Synechococcus sp. PCC 7002 and other cyanobacteria. Additionally, the entire database framework is applicable to any sequenced prokaryotic genome and could be applied to other integrated omics analysis projects. Database URL: http://lag.ihb.ac.cn/cyanomics. © The Author(s) 2015. Published by Oxford University Press.
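As a rough illustration of what the bundled cluster tools (Cluster 3.0, R's hclust) do with expression profiles, here is a minimal single-linkage hierarchical clustering sketch in pure Python; the gene names and profile values are invented for illustration and are not taken from CyanOmics:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def single_linkage(profiles):
    """Agglomerative single-linkage clustering; returns the merge order."""
    clusters = {name: [name] for name in profiles}
    merges = []
    while len(clusters) > 1:
        best = None
        for a in clusters:
            for b in clusters:
                if a >= b:
                    continue
                # Single linkage: distance between closest members
                d = min(euclidean(profiles[x], profiles[y])
                        for x in clusters[a] for y in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        merged = clusters.pop(a) + clusters.pop(b)
        clusters[a + "+" + b] = merged
        merges.append((a, b, round(d, 3)))
    return merges

# Toy expression profiles of three hypothetical genes across 4 conditions
profiles = {
    "geneA": [1.0, 2.0, 3.0, 4.0],
    "geneB": [1.1, 2.1, 2.9, 4.2],
    "geneC": [5.0, 1.0, 0.5, 8.0],
}
print(single_linkage(profiles))
```

The two genes with similar profiles (geneA, geneB) merge first at a small distance, reproducing the basic behaviour of hclust's agglomerative strategy.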
Full Text Available Christian Fynbo Christiansen,1 Morten Hylander Møller,2 Henrik Nielsen,1 Steffen Christensen3 1Department of Clinical Epidemiology, Institute of Clinical Medicine, Aarhus University Hospital, Aarhus, 2Department of Intensive Care 4131, Copenhagen University Hospital Rigshospitalet, Copenhagen, 3Department of Intensive Care, Aarhus University Hospital, Aarhus, Denmark Aim of database: The aim of this database is to improve the quality of care in Danish intensive care units (ICUs) by monitoring key domains of intensive care and comparing them with predefined standards. Study population: The Danish Intensive Care Database (DID) was established in 2007 and includes virtually all ICU admissions in Denmark since 2005. The DID obtains data from the Danish National Registry of Patients, with complete follow-up through the Danish Civil Registration System. Main variables: For each ICU admission, the DID includes data on the date and time of ICU admission, type of admission, organ supportive treatments, date and time of discharge, status at discharge, and mortality up to 90 days after admission. Descriptive variables include age, sex, Charlson comorbidity index score, and, since 2010, the Simplified Acute Physiology Score (SAPS II). The variables are recorded with 90%–100% completeness in recent years, except for the SAPS II score, which is 73%–76% complete. The DID currently includes five quality indicators. Process indicators include out-of-hours discharge and transfer to other ICUs for capacity reasons. Outcome indicators include ICU readmission within 48 hours and standardized mortality ratios for death within 30 days after admission using case-mix adjustment (initially using age, sex, and comorbidity level, and, since 2013, using SAPS II) for all patients and for patients with septic shock. Descriptive data: The DID currently includes 335,564 ICU admissions during 2005–2015 (average 31,958 ICU admissions per year). Conclusion: The DID provides a
SRD 102 HIV Structural Database (Web, free access) The HIV Protease Structural Database is an archive of experimentally determined 3-D structures of Human Immunodeficiency Virus 1 (HIV-1), Human Immunodeficiency Virus 2 (HIV-2) and Simian Immunodeficiency Virus (SIV) Proteases and their complexes with inhibitors or products of substrate cleavage.
Vassilev, Kiril; Pedashenko, Hristo; Alexandrova, Alexandra; Tashev, Alexandar; Ganeva, Anna; Gavrilova, Anna; Gradevska, Asya; Assenov, Assen; Vitkova, Antonina; Grigorov, Borislav; Gussev, Chavdar; Filipova, Eva; Aneva, Ina; Knollová, Ilona; Nikolov, Ivaylo; Georgiev, Georgi; Gogushev, Georgi; Tinchev, Georgi; Pachedjieva, Kalina; Koev, Koycho; Lyubenova, Mariyana; Dimitrov, Marius; Apostolova-Stoyanova, Nadezhda; Velev, Nikolay; Zhelev, Petar; Glogov, Plamen; Natcheva, Rayna; Tzonev, Rossen; Boch, Steffen; Hennekens, Stephan M.; Georgiev, Stoyan; Stoyanov, Stoyan; Karakiev, Todor; Kalníková, Veronika; Shivarov, Veselin; Russakova, Veska; Vulchev, Vladimir
The Balkan Vegetation Database (BVD; GIVD ID: EU-00-019; http://www.givd.info/ID/EU-00-019) is a regional database that consists of phytosociological relevés from different vegetation types from six countries on the Balkan Peninsula (Albania, Bosnia and Herzegovina, Bulgaria, Kosovo, Montenegro
R. Veenhoven (Ruut)
ABSTRACT The World Database of Happiness is an ongoing register of research on subjective appreciation of life. Its purpose is to make the wealth of scattered findings accessible, and to create a basis for further meta-analytic studies. The database involves four sections:
A Dialogue-inspired database with documentation, network (individual and institutional profiles) and current news, paper presented at the research seminar: Electronic access to fiction, Copenhagen, November 11-13, 1996.
SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access) This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.
The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use information is compiled from multiple sources while product information is gathered from publicly available Material Safety Data Sheets (MSDS). EPA researchers are evaluating the possibility of expanding the database with additional product and use information.
Describes a specialist bibliographic database of literature in the field of artificial intelligence created by the Turing Institute (Glasgow, Scotland) using the BRS/Search information retrieval software. The subscription method for end-users--i.e., annual fee entitles user to unlimited access to database, document provision, and printed awareness…
Wishart, David S; Jewison, Timothy; Guo, An Chi; Wilson, Michael; Knox, Craig; Liu, Yifeng; Djoumbou, Yannick; Mandal, Rupasri; Aziat, Farid; Dong, Edison; Bouatra, Souhaila; Sinelnikov, Igor; Arndt, David; Xia, Jianguo; Liu, Philip; Yallou, Faizath; Bjorndahl, Trent; Perez-Pineiro, Rolando; Eisner, Roman; Allen, Felicity; Neveu, Vanessa; Greiner, Russ; Scalbert, Augustin
The Human Metabolome Database (HMDB) (www.hmdb.ca) is a resource dedicated to providing scientists with the most current and comprehensive coverage of the human metabolome. Since its first release in 2007, the HMDB has been used to facilitate research for nearly 1000 published studies in metabolomics, clinical biochemistry and systems biology. The most recent release of HMDB (version 3.0) has been significantly expanded and enhanced over the 2009 release (version 2.0). In particular, the number of annotated metabolite entries has grown from 6500 to more than 40,000 (a 600% increase). This enormous expansion is a result of the inclusion of both 'detected' metabolites (those with measured concentrations or experimental confirmation of their existence) and 'expected' metabolites (those for which biochemical pathways are known or human intake/exposure is frequent but the compound has yet to be detected in the body). The latest release also has greatly increased the number of metabolites with biofluid or tissue concentration data, the number of compounds with reference spectra and the number of data fields per entry. In addition to this expansion in data quantity, new database visualization tools and new data content have been added or enhanced. These include better spectral viewing tools, more powerful chemical substructure searches, an improved chemical taxonomy and better, more interactive pathway maps. This article describes these enhancements to the HMDB, which was previously featured in the 2009 NAR Database Issue.
NoSQL database scaling is a decision in which system resources or financial expenses are traded for database performance or other benefits. By scaling a database, database performance and resource usage might increase or decrease, and such changes might have a negative impact on an application that uses the database. This work analyzes how database scaling affects database resource usage and performance. As a result, calculations are obtained, using which database scaling types and differe...
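One concrete cost of horizontal scaling can be sketched quickly: with naive modulo sharding, adding a shard forces most keys to move between nodes, which temporarily degrades performance. The sketch below is a generic illustration of that effect, not an example taken from the thesis:

```python
import hashlib

def shard_for(key: str, n_shards: int) -> int:
    """Map a key to a shard with a stable hash (naive modulo sharding)."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return h % n_shards

keys = [f"user:{i}" for i in range(10_000)]

# Compare key placement before and after scaling from 3 to 4 shards.
before = [shard_for(k, 3) for k in keys]
after = [shard_for(k, 4) for k in keys]
moved = sum(b != a for b, a in zip(before, after))
print(f"{moved / len(keys):.0%} of keys move when a shard is added")
```

Roughly three quarters of the keys relocate; this is the motivation for schemes such as consistent hashing, which move only about 1/n of the keys per added shard.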
Abadie, L; Van Herwijnen, Eric; Jacobsson, R; Jost, B; Neufeld, N
The aim of the LHCb configuration database is to store information about all the controllable devices of the detector. The experiment's control system (which uses PVSS) will configure, start up and monitor the detector from the information in the configuration database. The database will contain devices with their properties, connectivity and hierarchy. The ability to store and rapidly retrieve huge amounts of data, and the navigability between devices, are important requirements. We have collected use cases to ensure the completeness of the design. Using the entity-relationship modelling technique, we describe the use cases as classes with attributes and links. We designed the schema for the tables using relational diagrams. This methodology has been applied to the TFC (switches) and DAQ system. Other parts of the detector will follow later. The database has been implemented using Oracle to benefit from central CERN database support. The project also foresees the creation of tools to populate, maintain, and co...
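A minimal sketch of such a device schema (properties, parent/child hierarchy, point-to-point connectivity) is shown below using SQLite for brevity; the actual LHCb database uses Oracle, and every table, column, and device name here is invented for illustration:

```python
import sqlite3

# In-memory sketch of a configuration database: devices with a type,
# a parent link (hierarchy), and a separate connectivity table.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE device (
    id        INTEGER PRIMARY KEY,
    name      TEXT UNIQUE NOT NULL,
    dev_type  TEXT NOT NULL,
    parent_id INTEGER REFERENCES device(id)   -- hierarchy
);
CREATE TABLE connectivity (
    from_id INTEGER REFERENCES device(id),
    to_id   INTEGER REFERENCES device(id)
);
""")
con.executemany(
    "INSERT INTO device (name, dev_type, parent_id) VALUES (?, ?, ?)",
    [("daq_root", "system", None),    # id 1
     ("switch_01", "switch", 1),      # id 2
     ("readout_07", "board", 1)],     # id 3
)
con.execute("INSERT INTO connectivity VALUES (?, ?)", (2, 3))

# Navigate the hierarchy: list children of the root device
children = [r[0] for r in con.execute(
    "SELECT name FROM device WHERE parent_id = 1 ORDER BY name")]
print(children)
```

The self-referencing `parent_id` column gives the device hierarchy, while the separate `connectivity` table models cabling between arbitrary devices, which is the navigability requirement mentioned above.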
Lucieli Dias Pedreschi Chaves
Full Text Available ABSTRACT Objective: To reflect on nursing supervision as a management tool for care comprehensiveness by nurses, considering its potential and limits in the current scenario. Method: A reflective study based on discourse about nursing supervision, presenting theoretical and practical concepts and approaches. Results: Limits on the exercise of supervision are related to the organization of healthcare services based on the functional and clinical model of care, in addition to possible gaps in the nurse training process and work overload. Regarding its potential, researchers emphasize that supervision is a tool for coordinating care and management actions, which may favor care comprehensiveness and stimulate positive attitudes toward cooperation and contribution within teams, co-responsibility, and educational development at work. Final considerations: Nursing supervision may help enhance care comprehensiveness by prompting continuous reflection on incorporating the dynamics of the healthcare work process and user needs into care networks.
The book discusses the emerging topic of comprehensive energy management in electric vehicles from the viewpoint of academia and from the industrial perspective. It provides a seamless coverage of all relevant systems and control algorithms for comprehensive energy management, their integration on a multi-core system and their reliability assurance (validation and test). Relevant European projects contributing to the evolvement of comprehensive energy management in fully electric vehicles are also included.
Caivano, Jose L.
The paper describes the methodology and results of a project under development, aimed at the elaboration of an interactive bibliographical database on color in all fields of application: philosophy, psychology, semiotics, education, anthropology, physical and natural sciences, biology, medicine, technology, industry, architecture and design, arts, linguistics, geography, history. The project is initially based upon an already developed bibliography, published in different journals, updated on various occasions, and now available on the Internet, with more than 2,000 entries. The interactive database will amplify that bibliography, incorporating hyperlinks and contents (indexes, abstracts, keywords, introductions, or eventually the complete document), and devising mechanisms for information retrieval. The sources to be included are: books, doctoral dissertations, multimedia publications, reference works. The main arrangement will be chronological, but the design of the database will allow rearrangements or selections by different fields: subject, Decimal Classification System, author, language, country, publisher, etc. A further project is to develop another database, including color-specialized journals or newsletters and articles on color published in international journals, arranged in this case by journal name and date of publication, but also allowing rearrangements or selections by author, subject and keywords.
Full Text Available Abstract Background Trichophyton rubrum is the most common dermatophyte species and the most frequent cause of fungal skin infections in humans worldwide. It is a major concern because foot and nail infections caused by this organism are extremely difficult to cure. A large set of expression data, including expressed sequence tags (ESTs) and transcriptional profiles of this important fungal pathogen, are now available. Careful analysis of these data can give valuable information about potential virulence factors, antigens and novel metabolic pathways. We intend to create an integrated database, TrED, to facilitate the study of dermatophytes and enhance the development of effective diagnostic and treatment strategies. Description All publicly available ESTs and expression profiles of T. rubrum during conidial germination in time-course experiments and challenged with antifungal agents are deposited in the database. In addition, comparative genomics hybridization results of 22 dermatophytic fungi strains from three genera, Trichophyton, Microsporum and Epidermophyton, are also included. ESTs are clustered and assembled to elongate the sequence length and abate redundancy. TrED provides functional analysis based on GenBank, Pfam, and KOG databases, along with KEGG pathway and GO vocabulary. It is integrated with a suite of custom web-based tools that facilitate querying and retrieving various EST properties, visualization and comparison of transcriptional profiles, and sequence-similarity searching by BLAST. Conclusion TrED is built upon a relational database, with a web interface offering analytic functions, to provide integrated access to various expression data of T. rubrum and comparative results of dermatophytes. It is intended to be a comprehensive resource and platform to assist functional genomic studies in dermatophytes. TrED is available from URL: http://www.mgc.ac.cn/TrED/.
J. E. Daniell
Full Text Available The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies, and expand greatly upon existing global databases; and to better understand the trends in vulnerability, exposure, and possible future impacts of such historic earthquakes.
In the view of the authors, the lack of consistency and the errors in other earthquake loss databases frequently cited and used in analyses were major shortcomings that needed to be improved upon.
Over 17 000 sources of information have been utilised, primarily in the last few years, to present data from over 12 200 damaging earthquakes historically, with over 7000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured).
Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. The 1923 Great Kanto ($214 billion USD damage; 2011 HNDECI-adjusted dollars) compared to the 2011 Tohoku (>$300 billion USD at time of writing), 2008 Sichuan and 1995 Kobe earthquakes show the increasing concern for economic loss in urban areas as the trend should be expected to increase. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to form comparisons.
This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global reinsurance field.
Full Text Available Database name: SKIP Stemcell Database. Database classification: Human Genes and Diseases; Stemcell. Organism: Homo sapiens (Taxonomy ID: 9606). Database maintenance site: Center for Medical Genetics, School of Medicine, Keio University. Contact address: http://www.skip.med.keio.ac.jp/en/contact/. Web services: not available. User registration: not required.
Chen, B. Q.; Yang, M.; Jiang, B. W.
A database for the pulsating variable stars is constructed for Chinese astronomers to study the variable stars conveniently. The database includes about 230000 variable stars in the Galactic bulge, LMC and SMC observed by the MACHO (MAssive Compact Halo Objects) and OGLE (Optical Gravitational Lensing Experiment) projects at present. The software used for the construction is LAMP, i.e., Linux+Apache+MySQL+PHP. A web page is provided to search the photometric data and the light curve in the database through the right ascension and declination of the object. More data will be incorporated into the database.
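The coordinate search that the web page offers can be sketched as a simple bounding-box query over a relational table; the star names and coordinates below are toy values, and a production search would also handle right-ascension wrap-around at 0/360° and the cos(dec) scale factor:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE star (name TEXT, ra REAL, dec REAL)")  # degrees
con.executemany("INSERT INTO star VALUES (?, ?, ?)", [
    ("V1", 270.12, -29.95),   # toy entries, not real catalogue values
    ("V2", 270.45, -30.40),
    ("V3", 81.30,  -69.70),
])

def box_search(ra0, dec0, half_width):
    """Naive rectangular search around (ra0, dec0), all in degrees."""
    rows = con.execute(
        "SELECT name FROM star "
        "WHERE ra BETWEEN ? AND ? AND dec BETWEEN ? AND ?",
        (ra0 - half_width, ra0 + half_width,
         dec0 - half_width, dec0 + half_width))
    return [r[0] for r in rows]

print(box_search(270.3, -30.1, 0.5))
```

This mirrors the LAMP setup's MySQL layer: the PHP front end would issue essentially the same SELECT against the photometry tables and then render the matching light curves.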
Following Cullen Recommendation 39 which states that: ''The regulatory body should be responsible for maintaining a database with regard to hydrocarbon leaks, spills, and ignitions in the Industry and for the benefit of Industry'', HSE Offshore Safety Division (HSE-OSD) has now been operating the Hydrocarbon Releases (HCR) Database for approximately 3 years. This paper deals with the reporting of Offshore Hydrocarbon Releases, the setting up of the HCR Database, the collection of associated equipment population data, and the main features and benefits of the database, including discussion on the latest output information. (author)
Akishina, E.P.; Alexandrov, E.I.; Alexandrov, I.N.; Filozova, I.A.; Ivanov, V.V.; Friese, V.
This paper presents the implementation of the component database for the CBM experiment. The considered database is designed to effectively manage a large number of components for different CBM detectors during their manufacture, installation and operation. This database contains information about the production company and quality indicators, including test results, as well as information on the whereabouts of each component and its status. A functional model, a design of the database schema, a description of tables and catalogs, and a graphical user interface system are shown.
Kandel, Abraham; Bunke, Horst
Adding the time dimension to real-world databases produces Time Series Databases (TSDB) and introduces new aspects and difficulties to data mining and knowledge discovery. This book covers the state-of-the-art methodology for mining time series databases. The novel data mining methods presented in the book include techniques for efficient segmentation, indexing, and classification of noisy and dynamic time series. A graph-based method for anomaly detection in time series is described, and the book also studies the implications of a novel and potentially useful representation of time series as strings. The problem of detecting changes in data mining models that are induced from temporal databases is additionally discussed.
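The string representation of time series mentioned above can be illustrated with a minimal SAX-like discretization (z-normalise the series, then map each value to a letter via fixed breakpoints); the book's actual representation may differ in detail, and the breakpoints below are a standard 4-symbol choice, not taken from the book:

```python
import statistics

def to_string(series, alphabet="abcd"):
    """Discretize a z-normalised series into a string (a SAX-like sketch)."""
    mu = statistics.fmean(series)
    sigma = statistics.pstdev(series) or 1.0   # guard against flat series
    z = [(x - mu) / sigma for x in series]
    # Breakpoints splitting N(0,1) into four roughly equiprobable bins
    breakpoints = [-0.67, 0.0, 0.67]
    out = []
    for v in z:
        i = sum(v > b for b in breakpoints)    # count breakpoints below v
        out.append(alphabet[i])
    return "".join(out)

print(to_string([1, 2, 3, 4, 5, 4, 3, 2]))
```

Once series are strings, string algorithms (edit distance, substring search, suffix structures) become available for indexing, classification, and change detection.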
Wei, J; Jain, A; Peggs, S; Pilat, F; Bottura, L; Sabbi, G L; MacKay, W W
The US-LHC Magnet Database is designed for production-magnet quality assurance, field and alignment error impact analysis, cryostat assembly assistance, and ring installation assistance. The database consists of tables designed to store magnet field and alignment measurements data and quench data. This information will also be essential for future machine operations including local IR corrections. (7 refs).
Describes the development of an in-house bibliographic database by the U.S. Army Corp of Engineers Cold Regions Research and Engineering Laboratory on arctic wetlands research. Topics discussed include planning; identifying relevant search terms and commercial online databases; downloading citations; criteria for software selection; management…
Clausen, Jürgen; Albrecht, Henning
The aim of the present report is to provide an overview of the first database on clinical research in veterinary homeopathy, compiled through detailed searches of the database 'Veterinary Clinical Research-Database in Homeopathy' (http://www.carstens-stiftung.de/clinresvet/index.php). The database contains about 200 entries of randomised clinical trials, non-randomised clinical trials, observational studies, drug provings, case reports and case series. Twenty-two clinical fields are covered and eight different groups of species are included. The database is free of charge and open to all interested veterinarians and researchers. It enables researchers and veterinarians, sceptics and supporters, to get a quick overview of the status of veterinary clinical research in homeopathy, facilitates the preparation of systematic reviews, and may stimulate replications or even new studies. 2010 Elsevier Ltd. All rights reserved.
The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for the Tank Waste Remediation System (TWRS) Final Safety Analysis Report (FSAR). The FSAR is part of the approved TWRS Authorization Basis (AB). This document describes, identifies, and defines the contents and structure of the TWRS FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The TWRS Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The database supports the preparation of Chapters 3, 4, and 5 of the TWRS FSAR and the USQ process, and consists of two major, interrelated data sets: (1) Hazard Evaluation Database--Data from the results of the hazard evaluations; and (2) Hazard Topography Database--Data from the system familiarization and hazard identification.
Swinney, Christian C; Allison, Zain
Spaceflight and the associated gravitational fluctuations may impact various components of the central nervous system. These include changes in intracranial pressure, the spine, and neurocognitive performance. The implications of altered astronaut performance on critical spaceflight missions are potentially significant. The current body of research on this important topic is extremely limited, and a comprehensive review has not been published. Herein, the authors address this notable gap, as well as the role of the neurosurgeon in optimizing potential diagnostic and therapeutic modalities. A literature search was conducted using the PubMed, EMBASE, and Google Scholar databases, with no time constraints. Significant manuscripts on physiologic changes associated with spaceflight and microgravity were identified and reviewed. Manifestations were separated into 1 of 3 general categories, including changes in intracranial pressure, the spine, and neurocognitive performance. A comprehensive literature review yielded 27 studies with direct relevance to the impact of microgravity and spaceflight on nervous system physiology. This included 7 studies related to intracranial pressure fluctuations, 17 related to changes in the spinal column, and 3 related to neurocognitive change. The microgravity environment encountered during spaceflight impacts intracranial physiology. This includes changes in intracranial pressure, the spinal column, and neurocognitive performance. Herein, we present a systematic review of the published literature on this issue. Neurosurgeons should have a key role in the continued study of this important topic, contributing to both diagnostic and therapeutic understanding. Copyright © 2017 Elsevier Inc. All rights reserved.
Cochrane, Guy; Karsch-Mizrachi, Ilene; Nakamura, Yasukazu
Under the International Nucleotide Sequence Database Collaboration (INSDC; http://www.insdc.org), globally comprehensive public domain nucleotide sequence is captured, preserved and presented. The partners of this long-standing collaboration work closely together to provide data formats and conventions that enable consistent data submission to their databases and support regular data exchange around the globe. Clearly defined policy and governance in relation to free access to data and relationships with journal publishers have positioned INSDC databases as a key provider of the scientific record and a core foundation for the global bioinformatics data infrastructure. While growth in sequence data volumes comes no longer as a surprise to INSDC partners, the uptake of next-generation sequencing technology by mainstream science that we have witnessed in recent years brings a step-change to growth, necessarily making a clear mark on INSDC strategy. In this article, we introduce the INSDC, outline data growth patterns and comment on the challenges of increased growth.
Arenas, Marcelo; Gutierrez, Claudio; Pérez, Jorge
The goal of this paper is to give an overview of the basics of the theory of RDF databases. We provide a formal definition of RDF that includes the features that distinguish this model from other graph data models. We then move into the fundamental issue of querying RDF data. We start by considering the RDF query language SPARQL, which is a W3C Recommendation since January 2008. We provide an algebraic syntax and a compositional semantics for this language, study the complexity of the evaluation problem for different fragments of SPARQL, and consider the problem of optimizing the evaluation of SPARQL queries, showing that a natural fragment of this language has some good properties in this respect. We furthermore study the expressive power of SPARQL, by comparing it with some well-known query languages such as relational algebra. We conclude by considering the issue of querying RDF data in the presence of RDFS vocabulary. In particular, we present a recently proposed extension of SPARQL with navigational capabilities.
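The compositional semantics of basic graph patterns discussed above can be sketched as a join of triple-pattern matches over a set of triples. This toy evaluator is illustrative only: it covers conjunctive patterns and omits SPARQL's OPTIONAL, FILTER, and UNION operators, and the triples are invented:

```python
# Triples: (subject, predicate, object); variables start with "?"
TRIPLES = {
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "name", "Alice"),
}

def match_pattern(pattern, binding):
    """Yield bindings extending `binding` that match one triple pattern."""
    for triple in TRIPLES:
        b = dict(binding)
        ok = True
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                if b.get(term, value) != value:   # conflicting binding
                    ok = False
                    break
                b[term] = value
            elif term != value:                   # constant mismatch
                ok = False
                break
        if ok:
            yield b

def bgp(patterns):
    """Evaluate a basic graph pattern as a left-to-right join."""
    bindings = [{}]
    for p in patterns:
        bindings = [b2 for b in bindings for b2 in match_pattern(p, b)]
    return bindings

# Two-hop pattern: who do the people Alice knows, know?
result = bgp([("alice", "knows", "?x"), ("?x", "knows", "?y")])
print(sorted((b["?x"], b["?y"]) for b in result))
```

The join of bindings across patterns is exactly the compositional building block on which the algebraic SPARQL semantics studied in the paper is defined.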
Kantak, Anil V.
A propagation researcher or a systems engineer who intends to use the results of a propagation experiment is generally faced with various database tasks such as the selection of the computer software, the hardware, and the writing of the programs to pass the data through the models of interest. This task is repeated every time a new experiment is conducted or the same experiment is carried out at a different location, generating different data. Thus the users of this data have to spend a considerable portion of their time learning how to implement the computer hardware and the software towards the desired end. This situation may be eased considerably if an easily accessible propagation database is created that has all the accepted (standardized) propagation phenomena models approved by the propagation research community. Also, the handling of data will become easier for the user. Such a database construction can only stimulate the growth of propagation research if it is available to all the researchers, so that the results of an experiment conducted by one researcher can be examined independently by another, without different hardware and software being used. The database may be made flexible so that the researchers need not be confined only to the contents of the database. Another way in which the database may help the researchers is that they will not have to document the software and hardware tools used in their research, since the propagation research community will already know the database. The following sections show a possible database construction, as well as properties of the database for propagation research.
Trahern, C.G.; Zhou, J.
When completed, the Superconducting Super Collider will be the world's largest accelerator complex. In order to build this system on schedule, the use of database technologies will be essential. In this paper we discuss one of the database efforts underway at the SSC, the lattice database. The SSC lattice database provides a centralized source for the design of each major component of the accelerator complex. This includes the two collider rings, the High Energy Booster, Medium Energy Booster, Low Energy Booster, and the LINAC, as well as transfer and test beam lines. These designs have been created using a menagerie of programs such as SYNCH, DIMAD, MAD, TRANSPORT, MAGIC, TRACE3D, and TEAPOT. However, once a design has been completed, it is entered into a uniform database schema in the database system. In this paper we discuss the reasons for creating the lattice database and its implementation via the commercial database system SYBASE. Each lattice in the lattice database is composed of a set of tables whose data structure can describe any of the SSC accelerator lattices. In order to allow the user community access to the databases, a programmatic interface known as dbsf (for database to several formats) has been written. Dbsf creates ASCII input files appropriate to the above mentioned accelerator design programs. In addition it has a binary dataset output using the Self Describing Standard data discipline provided with the Integrated Scientific Tool Kit software tools. Finally we discuss the graphical interfaces to the lattice database. The primary interface, known as OZ, is a simulation environment as well as a database browser
The goal of this project is to organize and centralize the data about software tools available to CERN employees, as well as to provide a system that simplifies the license management process by providing information about the available licenses and their expiry dates. The project development process consists of two steps: modeling the products (software tools), product licenses, legal agreements and other data related to these entities in a relational database, and developing the front-end user interface so that users can interact with the database. The result is an ASP.NET MVC web application with interactive views for displaying and managing the data in the underlying database.
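The entity model described above (products, licenses, agreements, expiry dates) can be sketched as a relational schema. The table and column names below are illustrative assumptions, not the actual CERN schema, and SQLite stands in for the real back end:

```python
import sqlite3

# Minimal sketch of the product/license model; names are assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product (
    id     INTEGER PRIMARY KEY,
    name   TEXT NOT NULL,
    vendor TEXT
);
CREATE TABLE license (
    id         INTEGER PRIMARY KEY,
    product_id INTEGER NOT NULL REFERENCES product(id),
    agreement  TEXT,
    expires_on TEXT   -- ISO date, so string comparison orders correctly
);
""")
conn.execute("INSERT INTO product VALUES (1, 'AnalysisTool', 'ExampleVendor')")
conn.execute("INSERT INTO license VALUES (1, 1, 'site licence', '2015-06-30')")

# Expiry report: licenses that lapse before a given date.
rows = conn.execute("""
    SELECT p.name, l.expires_on
    FROM license l JOIN product p ON p.id = l.product_id
    WHERE l.expires_on < ?
    ORDER BY l.expires_on
""", ("2016-01-01",)).fetchall()
print(rows)  # [('AnalysisTool', '2015-06-30')]
```

The one-to-many product/license relationship is what lets the front end show all licenses (and their expiry dates) for a selected tool with a single join.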
Iftikhar, Nadeem; Pedersen, Torben Bach
This paper presents the LandIT database, a result of the LandIT project, an industrial collaboration project that developed technologies for communication and data integration between farming devices and systems for data integration and reporting purposes. The LandIT database is in principle based on the ISOBUS standard; however, the standard is extended with additional requirements, such as gradual data aggregation and flexible exchange of farming data. This paper describes the conceptual and logical schemas of the proposed database, based on a real-life farming case study.
A computer programme that builds atom-bond connection tables from nomenclatures has been developed. Chemical substances are input with their nomenclature and a variety of trivial names or experimental code numbers. The chemical structures in the database are stored stereospecifically and can be searched and displayed according to stereochemistry. Source data are from laws and regulations of Japan, the RTECS of the US, and so on. The database plays a central role within the integrated fact database service of JICST and makes interrelational retrieval possible.
Park, T.S. [Korea Energy Economics Institute, Euiwang (Korea, Republic of)
An integrated energy database should be prepared in advance for managing energy statistics comprehensively. However, since much manpower and budget are required for developing an integrated energy database, it is difficult to establish one within a short period of time. Therefore, as a first stage of the energy database work, this study aims to draw up methods for analyzing existing statistical data lists and consolidating insufficient data, and at the same time to analyze general concepts and the data structure of the database. I also studied the data content and items of energy databases in operation in international energy-related organizations such as the IEA and APEC, and in Japan and the USA, as overseas cases, as well as domestic conditions in energy databases and the hardware operating systems of the Japanese databases. I analyzed the compilation system of Korean energy databases, discussed the KEDB system, which is representative of total energy databases, and present design concepts for new energy databases. In addition, I present the establishment directions and contents of future Korean energy databases, the data that should be collected for supply and demand statistics, and the establishment of a data collection organization, etc., by analyzing the Korean energy statistical data and comparing it with the system of the OECD/IEA. 26 refs., 15 figs., 11 tabs.
For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These included Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make them easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has already been bypassed many times, so the requirement may not really be needed but rather should be changed to allow the variance's conditions permanently. The project was not restricted to the design and development of the database system; it also covered exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.
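The frequency analysis described (finding which variance entries recur most, to flag requirements worth revising) amounts to counting duplicates. A minimal sketch, with invented requirement identifiers standing in for the real variance records:

```python
from collections import Counter

# Hypothetical variance records: each names the requirement it bypasses.
variances = [
    "REQ-101", "REQ-205", "REQ-101", "REQ-330",
    "REQ-101", "REQ-205", "REQ-101",
]

counts = Counter(variances)
# Requirements bypassed most often are candidates for permanent revision.
for requirement, n in counts.most_common(2):
    print(requirement, n)
# REQ-101 4
# REQ-205 2
```

The same tallying can be done in Access or in the exported spreadsheet with a group-by/pivot; the logic is identical.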
Bader, Markus; Lamers, Monique
Research on human language comprehension has been heavily influenced by properties of the English language. Since case plays only a minor role in English, its role in language comprehension has only recently become a topic of extensive research in psycholinguistics. In the psycholinguistic
Blasiak, W.; Godlewska, M.; Rosiek, R.; Wcislo, D.
The paper presents the results of research on the relationship between self-assessed comprehension of physics lectures and final grades of junior high school students (aged 13-15), high school students (aged 16-18) and physics students at the Pedagogical University of Cracow, Poland (aged 21). Students' declared level of comprehension was measured…
Sherwood Alison R
Abstract Background Biodiversity databases serve the important role of highlighting species-level diversity from defined geographical regions. Databases that are specially designed to accommodate the types of data gathered during regional surveys are valuable in allowing full data access and display to researchers not directly involved with the project, while serving as a Laboratory Information Management System (LIMS). The Hawaiian Freshwater Algal Database, or HfwADB, was modified from the Hawaiian Algal Database to showcase non-marine algal specimens collected from the Hawaiian Archipelago by accommodating the additional level of organization required for samples including multiple species. Description The Hawaiian Freshwater Algal Database is a comprehensive and searchable database containing photographs and micrographs of samples and collection sites, geo-referenced collecting information, taxonomic data and standardized DNA sequence data. All data for individual samples are linked through unique 10-digit accession numbers (“Isolate Accession”), the first five digits of which correspond to the collection site (“Environmental Accession”). Users can search online for sample information by accession number, various levels of taxonomy, habitat or collection site. HfwADB is hosted at the University of Hawaii, and was made publicly accessible in October 2011. At the present time the database houses data for over 2,825 samples of non-marine algae from 1,786 collection sites across the Hawaiian Archipelago. These samples include cyanobacteria, red and green algae and diatoms, as well as lesser representation from some other algal lineages. Conclusions HfwADB is a digital repository that acts as a Laboratory Information Management System for Hawaiian non-marine algal data. Users can interact with the repository through the web to view relevant habitat data (including geo-referenced collection locations) and download images of collection sites, specimen
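The accession scheme described (a 10-digit Isolate Accession whose first five digits form the Environmental Accession of the collection site) can be sketched as a small parser. The function name and return shape are illustrative assumptions, not HfwADB's actual API:

```python
def split_accession(isolate_accession: str) -> tuple:
    """Split a 10-digit Isolate Accession into (site, isolate) parts.

    Per the abstract, the first five digits are the Environmental
    Accession (collection site). Name and return shape are assumptions.
    """
    if len(isolate_accession) != 10 or not isolate_accession.isdigit():
        raise ValueError("expected a 10-digit accession number")
    return isolate_accession[:5], isolate_accession[5:]

site, isolate = split_accession("0012300045")
print(site, isolate)  # 00123 00045
```

Encoding the site in the accession prefix is what lets all samples from one collection event be grouped by a simple prefix match.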
Deciding which intersections in the state of Kentucky warrant safety improvements requires a comprehensive inventory : with information on every intersection in the public roadway network. The Kentucky Transportation Cabinet (KYTC) : had previously c...
Syntheses of paleoclimate data in North America are essential for understanding long-term spatiotemporal variability in climate and for properly assessing risk on decadal and longer timescales. Existing reconstructions of the past 2,000 years rely almost exclusively on tree-ring records, which can underestimate low-frequency variability and rarely extend beyond the last millennium. Meanwhile, many records from the full spectrum of paleoclimate archives are available and hold the potential of enhancing our understanding of past climate across North America over the past 2000 years. The second phase of the Past Global Changes (PAGES) North America 2k project began in 2014, with a primary goal of assembling these disparate paleoclimate records into a unified database. This effort is currently supported by the USGS Powell Center together with PAGES. Its success requires grassroots support from the community of researchers developing and interpreting paleoclimatic evidence relevant to the past 2000 years. Most likely, fewer than half of the published records appropriate for this database are publicly archived, and far fewer include the data needed to quantify geochronologic uncertainty, or to concisely describe how best to interpret the data in context of a large-scale paleoclimatic synthesis. The current version of the database includes records that (1) have been published in a peer-reviewed journal (including evidence of the record's relationship to climate), (2) cover a substantial portion of the past 2000 yr (>300 yr for annual records, >500 yr for lower frequency records) at relatively high resolution (<50 yr/observation), and (3) have reasonably small and quantifiable age uncertainty. Presently, the database includes records from boreholes, ice cores, lake and marine sediments, speleothems, and tree rings. This poster presentation will display the site locations and basic metadata of the records currently in the database. We invite anyone with interest in
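The three screening criteria listed for the North America 2k database (peer-reviewed publication, minimum span and resolution thresholds, quantifiable age uncertainty) can be expressed as a simple record filter. The field names below are illustrative assumptions about how such records might be encoded:

```python
def meets_criteria(record: dict) -> bool:
    """Apply the database's stated screening rules to one candidate.

    Assumed fields: peer_reviewed (bool), resolution_yr (years per
    observation), span_yr (years covered), annual (True for annual
    records), age_uncertainty_quantified (bool).
    """
    if not record["peer_reviewed"]:
        return False
    if record["resolution_yr"] >= 50:            # must be <50 yr/observation
        return False
    min_span = 300 if record["annual"] else 500  # >300 yr annual, >500 yr otherwise
    if record["span_yr"] <= min_span:
        return False
    return record["age_uncertainty_quantified"]

tree_ring = dict(peer_reviewed=True, resolution_yr=1, span_yr=800,
                 annual=True, age_uncertainty_quantified=True)
print(meets_criteria(tree_ring))  # True
```

A filter like this makes the inclusion rules auditable: each rejected record fails a named, documented test rather than an ad hoc judgment.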
Friend, Margaret; Smolak, Erin; Liu, Yushuang; Poulin-Dubois, Diane; Zesiger, Pascal
Recent studies demonstrate that emerging literacy depends on earlier language achievement. Importantly, most extant work focuses on parent-reported production prior to 30 months of age. Of interest is whether and how directly assessed vocabulary comprehension in the 2nd year of life supports vocabulary and kindergarten readiness in the 4th year. We first contrasted orthogonal indices of parent-reported production and directly assessed vocabulary comprehension and found that comprehension was a stronger predictor of child outcomes. We then assessed prediction from vocabulary comprehension controlling for maternal education, preschool attendance, and child sex. In 3 studies early, decontextualized vocabulary comprehension emerged as a significant predictor of 4th year language and kindergarten readiness accounting for unique variance above demographic control variables. Further we found that the effect of early vocabulary on 4th year kindergarten readiness was not mediated by 4th year vocabulary. This pattern of results emerged in English monolingual children (N = 48) and replicated in French monolingual (N = 58) and French-English bilingual children (N = 34). Our findings suggest that early, decontextualized vocabulary may provide a platform for the establishment of a conceptual system that supports both later vocabulary and kindergarten readiness, including the acquisition of a wide range of concepts including print and number. Differences between parent-reported and directly assessed vocabulary and the mechanisms by which decontextualized vocabulary may contribute to conceptual development are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Mewes, H W; Frishman, D; Güldener, U; Mannhaupt, G; Mayer, K; Mokrejs, M; Morgenstern, B; Münsterkötter, M; Rudd, S; Weil, B
The Munich Information Center for Protein Sequences (MIPS-GSF, Neuherberg, Germany) continues to provide genome-related information in a systematic way. MIPS supports both national and European sequencing and functional analysis projects, develops and maintains automatically generated and manually annotated genome-specific databases, develops systematic classification schemes for the functional annotation of protein sequences, and provides tools for the comprehensive analysis of protein sequences. This report updates the information on the yeast genome (CYGD), the Neurospora crassa genome (MNCDB), the databases for the comprehensive set of genomes (PEDANT genomes), the database of annotated human EST clusters (HIB), the database of complete cDNAs from the DHGP (German Human Genome Project), as well as the project-specific databases for the GABI (Genome Analysis in Plants) and HNB (Helmholtz-Netzwerk Bioinformatik) networks. The Arabidopsis thaliana database (MATDB), the database of mitochondrial proteins (MITOP) and our contribution to the PIR International Protein Sequence Database have been described elsewhere [Schoof et al. (2002) Nucleic Acids Res., 30, 91-93; Scharfe et al. (2000) Nucleic Acids Res., 28, 155-158; Barker et al. (2001) Nucleic Acids Res., 29, 29-32]. All databases described, the protein analysis tools provided and the detailed descriptions of our projects can be accessed through the MIPS World Wide Web server (http://mips.gsf.de).
The present book's subject is multidimensional data models and data modeling concepts as they are applied in real data warehouses. The book aims to present the most important concepts within this subject in a precise and understandable manner. The book's coverage of fundamental concepts includes data cubes and their elements, such as dimensions, facts, and measures, and their representation in a relational setting; it includes architecture-related concepts; and it includes the querying of multidimensional databases. The book also covers advanced multidimensional concepts that are considered to b
U.S. Environmental Protection Agency — The Toxicity Reference Database (ToxRefDB) contains approximately 30 years and $2 billion worth of animal studies. ToxRefDB allows scientists and the interested...
U.S. Department of Health & Human Services — For a drug product that does not have a dissolution test method in the United States Pharmacopeia (USP), the FDA Dissolution Methods Database provides information on...
US Agency for International Development — OTI's worldwide activity database is a simple and effective information system that serves as a program management, tracking, and reporting tool. In each country,...
Calm, J.M. [Calm (James M.), Great Falls, VA (United States)
The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.
The purpose of this project was to take the data gathered for the Maritime Claims chart and create a Maritime Jurisdictions digital database suitable for use with oceanographic mission planning objectives...
U.S. Department of Health & Human Services — The Environmental Scanning and Program Characteristic (ESPC) Database is in a Microsoft (MS) Access format and contains Medicaid and CHIP data, for the 50 states and...
US Agency for International Development — The Records Management Database is tool created in Microsoft Access specifically for USAID use. It contains metadata in order to access and retrieve the information...
U.S. Environmental Protection Agency — The Reach Address Database (RAD) stores the reach address of each Water Program feature that has been linked to the underlying surface water features (streams,...
U.S. Department of Health & Human Services — The Mouse Phenome Database (MPD) has characterizations of hundreds of strains of laboratory mice to facilitate translational discoveries and to assist in selection...
U.S. Environmental Protection Agency — The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use...
U.S. Environmental Protection Agency — THIS DATA ASSET NO LONGER ACTIVE: This is metadata documentation for the Region 7 Drycleaner Database (R7DryClnDB) which tracks all Region7 drycleaners who notify...
U.S. Environmental Protection Agency — The National Assessment Database stores and tracks state water quality assessment decisions, Total Maximum Daily Loads (TMDLs) and other watershed plans designed to...
National Oceanic and Atmospheric Administration, Department of Commerce — This database contains trip-level reports submitted by vessels participating in Research Set-Aside projects with IVR reporting requirements.
U.S. Department of Health & Human Services — The Rat Genome Database (RGD) is a collaborative effort between leading research institutions involved in rat genetic and genomic research to collect, consolidate,...
Nielsen, Thomas Lund; Abildskov, Jens; Harper, Peter Mathias
The Computer-Aided Process Engineering Center (CAPEC) database of measured data was established with the aim to promote greater data exchange in the chemical engineering community. The target properties are pure component properties, mixture properties, and special drug solubility data. The database divides pure component properties into primary, secondary, and functional properties. Mixture properties are categorized in terms of the number of components in the mixture and the number of phases present. The compounds in the database have been classified on the basis of the functional groups in the compound. This classification makes the CAPEC database a very useful tool, for example, in the development of new property models, since properties of chemically similar compounds are easily obtained. A program with efficient search and retrieval functions of properties has been developed.
Hansen, Ulla Darling; Gradel, Kim Oren; Larsen, Michael Due
The Danish Urogynaecological Database is established in order to ensure high quality of treatment for patients undergoing urogynecological surgery. The database contains details of all women in Denmark undergoing incontinence surgery or pelvic organ prolapse surgery, amounting to ~5,200 procedures, with complications if relevant, implants used if relevant, and 3-6-month postoperative recording of symptoms, if any. A set of clinical quality indicators is being maintained by the steering committee for the database and is published in an annual report which also contains extensive descriptive statistics. The database has a completeness of over 90% of all urogynecological surgeries performed in Denmark. Some of the main variables have been validated using medical records as gold standard. The positive predictive value was above 90%. The data are used as a quality monitoring tool by the hospitals and in a number...
Guldberg, Rikke; Brostrøm, Søren; Hansen, Jesper Kjær
INTRODUCTION AND HYPOTHESIS: The Danish Urogynaecological Database (DugaBase) is a nationwide clinical database established in 2006 to monitor, ensure and improve the quality of urogynaecological surgery. We aimed to describe its establishment and completeness and to validate selected variables. This is the first study based on data from the DugaBase. METHODS: The database completeness was calculated as a comparison between urogynaecological procedures reported to the Danish National Patient Registry and to the DugaBase. Validity was assessed for selected variables from a random sample of 200 women in the DugaBase from 1 January 2009 to 31 October 2010, using medical records as a reference. RESULTS: A total of 16,509 urogynaecological procedures were registered in the DugaBase by 31 December 2010. The database completeness has increased by calendar time, from 38.2 % in 2007 to 93.2 % in 2010 for public...
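Completeness as used in registry validation studies like this one is the share of procedures in the reference registry that also appear in the clinical database. A minimal sketch with invented counts (not the study's actual annual figures):

```python
def completeness(in_database: int, in_national_registry: int) -> float:
    """Percent of procedures reported to the national patient registry
    that were also registered in the clinical database."""
    if in_national_registry == 0:
        raise ValueError("no registry procedures to compare against")
    return 100.0 * in_database / in_national_registry

# Illustrative numbers only.
print(round(completeness(932, 1000), 1))  # 93.2
```

Tracking this ratio per calendar year is what lets the study report completeness rising from 38.2 % to 93.2 % over time.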
Fristrup, Claus; Detlefsen, Sönke; Palnæs Hansen, Carsten
AIM OF DATABASE: The Danish Pancreatic Cancer Database aims to prospectively register the epidemiology, diagnostic workup, diagnosis, treatment, and outcome of patients with pancreatic cancer in Denmark at an institutional and national level. STUDY POPULATION: Since May 1, 2011, all patients with microscopically verified ductal adenocarcinoma of the pancreas have been registered in the database. As of June 30, 2014, the total number of patients registered was 2,217. All data are cross-referenced with the Danish Pathology Registry and the Danish Patient Registry to ensure the completeness of registrations. Death is monitored using data from the Danish Civil Registry. This registry monitors the survival status of the Danish population, and the registration is virtually complete. All data in the database are audited by all participating institutions, with respect to baseline characteristics, key indicators...
National Oceanic and Atmospheric Administration, Department of Commerce — The NEFSC Food Habits Database has two major sources of data. The first, and most extensive, is the standard NEFSC Bottom Trawl Surveys Program. During these...
U.S. Environmental Protection Agency — National Land Cover Database 2011 (NLCD 2011) is the most recent national land cover product created by the Multi-Resolution Land Characteristics (MRLC) Consortium....
U.S. Department of Health & Human Services — The Medicare Coverage Database (MCD) contains all National Coverage Determinations (NCDs) and Local Coverage Determinations (LCDs), local articles, and proposed NCD...
U.S. Department of Health & Human Services — This database links over 4,000 consumer brands to health effects from Material Safety Data Sheets (MSDS) provided by the manufacturers and allows scientists and...
National Oceanic and Atmospheric Administration, Department of Commerce — NGDC maintains a database of over 1,500 volcano locations obtained from the Smithsonian Institution Global Volcanism Program, Volcanoes of the World publication. The...
National Oceanic and Atmospheric Administration, Department of Commerce — The 1988 Spitak Earthquake database is an extensive collection of geophysical and geological data, maps, charts, images and descriptive text pertaining to the...
U.S. Environmental Protection Agency — A GIS compiled locational database in Microsoft Access of ~15,000 mines with uranium occurrence or production, primarily in the western United States. The metadata...
INIST is a CNRS (Centre National de la Recherche Scientifique) laboratory devoted to the treatment of scientific and technical information and to the management of this information compiled in a database. A reorientation of the database content was proposed in 1994 to increase the transfer of research towards enterprises and services, to develop more automated access to the information, and to create a quality assurance plan. The catalog of publications comprises 5,800 periodical titles (1,300 for fundamental research and 4,500 for applied research). A multi-thematic science and technology database will be created in 1995 for the retrieval of applied and technical information. ''Grey literature'' (reports, theses, proceedings..) and human and social sciences data will be added to the base through information selected from the existing GRISELI and Francis databases. Strong modifications are also planned in the thematic coverage of Earth sciences and will considerably reduce the geological information content. (J.S.). 1 tab
Kansas Data Access and Support Center — The Kansas Cartographic Database (KCD) is an exact digital representation of selected features from the USGS 7.5 minute topographic map series. Features that are...
Under the support of the National Digital Archive Program (NDAP), basic species information about most Taiwanese fishes, including their morphology, ecology, distribution, specimens with photos, and literature, has been compiled into the "Fish Database of Taiwan" (http://fishdb.sinica.edu.tw). We expect that the databank of all Taiwanese fish species (RSD), with 2,800+ species, and the digital "Fish Fauna of Taiwan" will be completed in 2007. Underwater ecological photos and video images for all 2,800+ fishes are quite difficult to achieve but will be collected continuously in the future. In the last year of NDAP, we successfully integrated all fish specimen data deposited at 7 different institutes in Taiwan, as well as their collection maps, on Google Map and Google Earth. Further, the database also provides the pronunciation of Latin scientific names and the transliteration of Chinese common names by referring to the Romanization system for all Taiwanese fishes (2,902 species in 292 families so far). The Taiwanese fish species checklist with Chinese common/vernacular names and specimen data has been updated periodically and provided to the global FishBase as well as the Global Biodiversity Information Facility (GBIF) through the national portal of the Taiwan Biodiversity Information Facility (TaiBIF). Thus, Taiwanese fish data can be queried and browsed on the WWW. As contributions to the "Barcode of Life" and "All Fishes" international projects, alcohol-preserved specimens of more than 1,800 species and cryobanked tissues of 800 species have been accumulated at RCBAS in the past two years. Through this close collaboration between local and global databases, "The Fish Database of Taiwan" now attracts more than 250,000 visitors and achieves 5 million hits per month. We believe that this local database is becoming an important resource for education, research, conservation, and sustainable use of fish in Taiwan.
Full text: There are a number of databases available to the diffraction community. Two of the more important of these are the Powder Diffraction File (PDF), maintained by the International Centre for Diffraction Data (ICDD), and the Inorganic Crystal Structure Database (ICSD), maintained by Fachinformationszentrum (FIZ) Karlsruhe. In application, the PDF has been used as an indispensable tool in phase identification and identification of unknowns. The ICSD database has extensive and explicit reference to the structures of compounds: atomic coordinates, space group and even thermal vibration parameters. A similar database, but for organic compounds, is maintained by the Cambridge Crystallographic Data Centre. These databases are often used as independent sources of information; however, little thought has been given to how to exploit their combined properties. A recently completed agreement between the ICDD and FIZ, plus the ICDD and Cambridge, provides a first step in complementary use of the PDF and the ICSD databases. The focus of this paper is to examine ways of exploiting the combined properties of both databases. In 1996, there were approximately 76,000 entries in the PDF and approximately 43,000 entries in the ICSD database. The ICSD database has now been used to calculate entries in the PDF. Thus, deriving d-spacing and peak intensity data requires the synthesis of full diffraction patterns, i.e., we use the structural data in the ICSD database and then add instrumental resolution information. The combined data from the PDF and ICSD can be effectively used in many ways. For example, we can calculate PDF data for an ideally random crystal distribution and also in the absence of preferred orientation. Again, we can use systematic studies of intermediate members in solid solution series to help produce reliable quantitative phase analyses. In some cases, we can study how solid solution properties vary with composition and
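Deriving d-spacings from structural data, as described for generating PDF entries from the ICSD, starts from the lattice geometry. For the simple cubic case the textbook relation d = a / sqrt(h² + k² + l²) applies; the cell edge below is an arbitrary illustrative value, not a specific database entry:

```python
import math

def d_spacing_cubic(a: float, h: int, k: int, l: int) -> float:
    """d-spacing of the (hkl) plane in a cubic lattice with cell edge a.

    Only the cubic case is shown; full pattern synthesis also needs
    atomic coordinates (for intensities) and instrumental resolution.
    """
    return a / math.sqrt(h * h + k * k + l * l)

a = 4.05  # cell edge in angstrom, illustrative value
for hkl in [(1, 1, 1), (2, 0, 0), (2, 2, 0)]:
    print(hkl, round(d_spacing_cubic(a, *hkl), 3))
# (1, 1, 1) 2.338
# (2, 0, 0) 2.025
# (2, 2, 0) 1.432
```

Peak intensities, by contrast, require the structure factor computed from the atomic coordinates, which is exactly the extra information the ICSD contributes beyond what a d-spacing list holds.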
This report describes the design of a Replication Framework that facilitates the implementation and comparison of database replication techniques. Furthermore, it discusses the implementation of a Database Replication Prototype and compares the performance measurements of two replication techniques based on the Atomic Broadcast communication primitive: pessimistic active replication and optimistic active replication. The main contributions of this report can be split into four parts....
Højstrup, J.; Ejsing Jørgensen, Hans; Lundtang Petersen, Erik
This report describes the work and results of the project Database on Wind Characteristics, which was sponsored partly by the European Commission within the framework of the JOULE III program under contract JOR3-CT95-0061.
This paper presents some security issues, namely database system-level security, data-level security, user-level security, user management, resource management and password management. Security is a constant concern in database design and development. Usually, the question is not whether security should exist, but rather how extensive it should be. A typical DBMS has several levels of security, in addition to those offered by the operating system or network. Typically, a DBMS has user a...
Day, C.T.; Loken, S.; MacFarlane, J.F.; May, E.; Lifka, D.; Lusk, E.; Price, L.E.; Baden, A.
The major SSC experiments are expected to produce up to 1 Petabyte of data per year each. Once the primary reconstruction is completed by farms of inexpensive processors, I/O becomes a major factor in further analysis of the data. We believe that the application of database techniques can significantly reduce the I/O performed in these analyses. We present examples of such I/O reductions in prototypes based on relational and object-oriented databases of CDF data samples
Trypanosomes Database, update history of this database. 2014/05/07: The contact information is corrected; the features and manner of utilization of the database are corrected. 2014/02/04: The Trypanosomes Database English archive site is opened. 2011/04/04: The Trypanosomes Database (http://www.tanpaku.org/tdb/) is opened.
Gradel, Kim Oren; Schønheyder, Henrik Carl; Arpi, Magnus
The Danish Collaborative Bacteraemia Network (DACOBAN) research database includes microbiological data obtained from positive blood cultures from a geographically and demographically well-defined population serviced by three clinical microbiology departments (1.7 million residents, 32% of the Danish population). The database also includes data on comorbidity from the Danish National Patient Registry, vital status from the Danish Civil Registration System, and clinical data on 31% of nonselected records in the database. Use of the unique civil registration number given to all Danish residents enables linkage to additional registries for specific research projects. The DACOBAN database is continuously updated, and it currently comprises 39,292 patients with 49,951 bacteremic episodes from 2000 through 2011. The database is part of an international network of population-based bacteremia...
Lee H. Pratt
The rapidly increasing rate at which biological data is being produced requires a corresponding growth in relational databases and associated tools that can help laboratories contend with that data. With this need in mind, we describe here a Modular Approach to a Genomic, Integrated and Comprehensive (MAGIC) Database. This Oracle 9i database derives from an initial focus in our laboratory on gene discovery via production and analysis of expressed sequence tags (ESTs), and subsequently on gene expression as assessed by both EST clustering and microarrays. The MAGIC Gene Discovery portion of the database focuses on information derived from DNA sequences and on its biological relevance. In addition to MAGIC SEQ-LIMS, which is designed to support activities in the laboratory, it contains several additional subschemas. The latter include MAGIC Admin for database administration, MAGIC Sequence for sequence processing as well as sequence and clone attributes, MAGIC Cluster for the results of EST clustering, MAGIC Polymorphism in support of microsatellite and single-nucleotide-polymorphism discovery, and MAGIC Annotation for electronic annotation by BLAST and BLAT. The MAGIC Microarray portion is a MIAME-compliant database with two components at present. These are MAGIC Array-LIMS, which makes possible remote entry of all information into the database, and MAGIC Array Analysis, which provides data mining and visualization. Because all aspects of interaction with the MAGIC Database are via a web browser, it is ideally suited not only for individual research laboratories but also for core facilities that serve clients at any distance.
Ikusawa, Yoshihisa; Ozawa, Takayuki
We developed the MOX Fuel Database, which includes valuable data from several irradiation tests in the FUGEN and Halden reactors, to support LWR MOX use. This database includes fabrication and irradiation data and the results of post-irradiation examinations for seven fuel assemblies, i.e. P06, P2R, E03, E06, E07, E08 and E09, irradiated in FUGEN. The highest pellet peak burn-up reached ∼48 GWd/t in MOX fuels, of which the maximum plutonium content was ∼6 wt%, irradiated in the E09 fuel assembly without any failure. The database also includes data from the instrumented MOX fuels irradiated in the HBWR to study the irradiation behavior of BWR MOX fuels under steady-state conditions (IFA-514/565 and IFA-529), under load-follow operation (IFA-554/555) and under transient conditions (IFA-591). The highest assembly burn-up reached ∼56 GWd/t in the IFA-565 steady-state irradiation test, and the maximum linear power of MOX fuel rods was 58.3-68.4 kW/m without any failure in the IFA-591 ramp test. In addition, valuable instrument data, i.e. cladding elongation, fuel stack elongation, fuel center temperature and rod inner pressure, were obtained from the IFA-554/555 load-follow test. (author)
The aim of the IDEAS project was to develop General Guidelines for the Assessment of Internal Dose from Monitoring Data. The project was divided into 5 Work Packages for the major tasks. Work Package 1, entitled Collection of incorporation cases, was devoted to the collection of data by means of bibliographic research (survey of the open literature), contacting and collecting data from specific organisations, and using information from existing databases on incorporation cases. To ensure that the guidelines would be applicable to a wide range of practical situations, a database of cases of internal contamination including monitoring data suitable for dose assessment was compiled. The IDEAS Bibliography database and the IDEAS Internal Contamination database were prepared, and some reference cases were selected for use in Work Package 3. The other Work Packages of the IDEAS Project (WP-2 Preparation of evaluation software, WP-3 Evaluation of incorporation cases, WP-4 Development of the general guidelines and WP-5 Practical testing of general guidelines) have been described in detail elsewhere and can be found on the IDEAS website. A search of the open literature for references containing information on cases of internal contamination, from which intakes and committed doses could be assessed, was compiled into a database. The IDEAS Bibliography Database includes references to papers which might (but were not certain to) contain such information, or which included references to papers which contained such information. This database contains the usual bibliographic information: authors' name(s), year of publication, title of publication and the journal or report number. To date, a comprehensive Bibliography Database containing 563 references has been compiled. Not surprisingly, more than half of the references are from the Health Physics and Radiation Protection Dosimetry journals. The next step was for the partners of the IDEAS project to obtain the references
G. E. Bodeker
A new database of trace gases and aerosols with global coverage, derived from high vertical resolution profile measurements, has been assembled as a collection of binary data files, hereafter referred to as the "Binary DataBase of Profiles" (BDBP). Version 1.0 of the BDBP, described here, includes measurements from different satellite-based (HALOE, POAM II and III, SAGE I and II) and ground-based (ozonesonde) measurement systems. In addition to the primary product of ozone, secondary measurements of other trace gases, aerosol extinction, and temperature are included. All data are subjected to very strict quality control, and for every measurement a percentage error on the measurement is included. To facilitate analyses, each measurement is added to 3 different instances (3 different grids) of the database, where measurements are indexed by: (1) geographic latitude, longitude, altitude (in 1 km steps) and time; (2) geographic latitude, longitude, pressure (at levels ~1 km apart) and time; (3) equivalent latitude, potential temperature (8 levels from 300 K to 650 K) and time.
In contrast to existing zonal mean databases, by including a wider range of measurement sources (both satellite and ozonesonde), the BDBP is sufficiently dense to permit calculation of changes in ozone by latitude, longitude and altitude. In addition, by including other trace gases such as water vapour, this database can be used for comprehensive radiative transfer calculations. By providing the original measurements rather than derived monthly means, the BDBP is applicable to a wider range of applications than databases containing only monthly mean data. Monthly mean zonal mean ozone concentrations calculated from the BDBP are compared with the database of Randel and Wu, which has been used in many earlier analyses. As opposed to that database, which is generated from regression model fits, the BDBP uses the original (quality controlled) measurements with no smoothing applied in any
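The first of the three BDBP grids indexes each measurement by geographic latitude, longitude, altitude in 1 km steps, and time. A minimal sketch of that binning step is shown below; the 1 km altitude step follows the description above, while the 5° latitude and 10° longitude bin widths are illustrative assumptions, not the BDBP's actual horizontal resolution.

```python
import math

def grid_index(lat_deg, lon_deg, altitude_km, lat_step=5.0, lon_step=10.0):
    """Map one profile measurement to an integer (lat, lon, altitude) cell.

    Altitude is binned in 1 km steps as in the BDBP description; the
    horizontal bin widths are hypothetical defaults for illustration.
    """
    i_lat = int(math.floor((lat_deg + 90.0) / lat_step))   # 0 at the south pole
    i_lon = int(math.floor((lon_deg % 360.0) / lon_step))  # wrap longitude to [0, 360)
    i_alt = int(math.floor(altitude_km))                   # 1 km altitude steps
    return (i_lat, i_lon, i_alt)
```

For example, an ozonesonde reading at 45° S, 170° E and 23.7 km altitude lands in cell `grid_index(-45.0, 170.0, 23.7)`; a time index would be appended in the same fashion.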
Salman Sadullah Usmani
THPdb (http://crdd.osdd.net/raghava/thpdb/) is a manually curated repository of Food and Drug Administration (FDA) approved therapeutic peptides and proteins. The information in THPdb has been compiled from 985 research publications, 70 patents and other resources like DrugBank. The current version of the database holds a total of 852 entries, providing comprehensive information on 239 US-FDA approved therapeutic peptides and proteins and their 380 drug variants. The information on each peptide and protein includes their sequences, chemical properties, composition, disease area, mode of activity, physical appearance, category or pharmacological class, pharmacodynamics, route of administration, toxicity, target of activity, etc. In addition, we have annotated the structure of most of the proteins and peptides. A number of user-friendly tools have been integrated to facilitate easy browsing and data analysis. To assist the scientific community, a web interface and mobile app have also been developed.
Dong, Xiuwen Sue; Largay, Julie A; Wang, Xuanwen; Cain, Chris Trahan; Romano, Nancy
The National Institute for Occupational Safety and Health (NIOSH) has published reports detailing the results of investigations on selected work-related fatalities through the Fatality Assessment and Control Evaluation (FACE) program since 1982. Information from construction-related FACE reports was coded into the Construction FACE Database (CFD). Use of the CFD was illustrated by analyzing major CFD variables. A total of 768 construction fatalities were included in the CFD. Information on decedents, safety training, use of PPE, and FACE recommendations were coded. Analysis shows that one in five decedents in the CFD died within the first two months on the job; 75% and 43% of reports recommended having safety training or installing protection equipment, respectively. Comprehensive research using FACE reports may improve understanding of work-related fatalities and provide much-needed information on injury prevention. The CFD allows researchers to analyze the FACE reports quantitatively and efficiently. Copyright © 2017 Elsevier Ltd and National Safety Council. All rights reserved.
...sidue (or mutant) in a protein. The experimental data are collected from the literature both by searching th...the sequence database, UniProt, structural database, PDB, and literature database
Quotients and comprehension are fundamental mathematical constructions that can be described via adjunctions in categorical logic. This paper reveals that quotients and comprehension are related to measurement, not only in quantum logic, but also in probabilistic and classical logic. This relation is presented by a long series of examples, some of them easy, and some also highly non-trivial (esp. for von Neumann algebras). We have not yet identified a unifying theory. Nevertheless, the paper contributes towards such a theory by introducing the new quotient-and-comprehension perspective on measurement instruments, and by describing the examples on which such a theory should be built.
Kruse, Marie; Hochstrasser, Stefan; Zwisler, Ann-Dorthe O
OBJECTIVES: The costs of comprehensive cardiac rehabilitation are established and compared to the corresponding costs of usual care. The effect on health-related quality of life is analyzed. METHODS: An unprecedented and very detailed cost assessment was carried out, as no guidelines existed...... and may be as high as euro 1.877. CONCLUSIONS: Comprehensive cardiac rehabilitation is more costly than usual care, and the higher costs are not outweighed by a quality of life gain. Comprehensive cardiac rehabilitation is, therefore, not cost-effective....
U.S. Department of Health & Human Services — A peer-reviewed and fully referenced database of drugs to which breastfeeding mothers may be exposed. Among the data included are maternal and infant levels of...
Sengupta, Manajit; Habte, Aron; Lopez, Anthony; Xie, Yu; Molling, Christine; Gueymard, Christian
This presentation provides a high-level overview of the National Solar Radiation Database (NSRDB), including sensing, measurement and forecasting, and discusses observations that are needed for research and product development.
U.S. Environmental Protection Agency — An Excel database on NRC and Agreement State licensed mills providing status, locational/operational/restoration data, maps, and environmental reports including...
National Oceanic and Atmospheric Administration, Department of Commerce — The database contains multiple spreadsheets that hold data collected during each small-boat survey project conducted by the PIFSC CRP. This includes a summary of the...
Rezaei Mahdiraji, Alireza; Baumann, Peter Peter
Unstructured meshes are used in several application domains such as earth sciences (e.g., seismology), medicine, oceanography, climate modeling, and GIS as approximate representations of physical objects. Meshes subdivide a domain into smaller geometric elements (called cells) which are glued together by incidence relationships. The subdivision of a domain allows computational manipulation of complicated physical structures. For instance, seismologists model earthquakes using elastic wave propagation solvers on hexahedral meshes. Such a hexahedral mesh contains several hundred million grid points and millions of hexahedral cells, and each vertex node stores a multitude of data fields. To run simulations on such meshes, one needs to iterate over all the cells, iterate over the cells incident to a given cell, retrieve coordinates of cells, assign data values to cells, etc. Although meshes are used in many application domains, to the best of our knowledge there is no database vendor that supports unstructured mesh features. Currently, the main tools for querying and manipulating unstructured meshes are mesh libraries, e.g., CGAL and GRAL. Mesh libraries are dedicated libraries which include mesh algorithms and can be run on mesh representations. The libraries do not scale with dataset size, do not have a declarative query language, and need deep C++ knowledge for query implementations. Furthermore, due to high coupling between the implementations and the input file structure, the implementations are less reusable and costly to maintain. A dedicated mesh database offers the following advantages: 1) declarative querying, 2) ease of maintenance, 3) hiding the mesh storage structure from applications, and 4) transparent query optimization. To design a mesh database, the first challenge is to define a suitable generic data model for unstructured meshes. We proposed the ImG-Complexes data model as a generic topological mesh data model which extends the incidence graph model to multi
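The core operations listed above (iterating over cells, finding cells incident to a given cell, retrieving coordinates, attaching data fields to vertices) can be illustrated with a toy incidence structure. This is a minimal sketch for illustration only, not the ImG-Complexes model or the CGAL/GRAL representation; here two cells are treated as incident when they share at least one vertex.

```python
from collections import defaultdict

class Mesh:
    """Toy unstructured mesh: cells are tuples of vertex ids, and an
    inverse vertex-to-cell map answers incidence queries."""

    def __init__(self):
        self.coords = {}                          # vertex id -> (x, y, z)
        self.fields = {}                          # vertex id -> data fields
        self.cells = {}                           # cell id -> tuple of vertex ids
        self._vertex_to_cells = defaultdict(set)  # inverse incidence map

    def add_vertex(self, vid, xyz, **fields):
        self.coords[vid] = xyz
        self.fields[vid] = fields

    def add_cell(self, cid, vertex_ids):
        self.cells[cid] = tuple(vertex_ids)
        for v in vertex_ids:
            self._vertex_to_cells[v].add(cid)

    def incident_cells(self, cid):
        """All cells sharing at least one vertex with `cid`, excluding itself."""
        out = set()
        for v in self.cells[cid]:
            out |= self._vertex_to_cells[v]
        out.discard(cid)
        return out
```

A mesh library answers such queries in memory; the point of a mesh database is to expose the same incidence queries declaratively, with storage and optimization hidden behind the query interface.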
Database Description — General information of database. Database name: RPSD. Alternative name: Rice Protein Structure Database. DOI: 10.18908/lsdba.nbdc00749-000. Creator: Toshimasa Yamazaki, National Institute of Agrobiological Sciences, Ibaraki 305-8602, Japan. E-mail: ... Database classification: Structure Databases - Protein structure. Organism Taxonomy Name: Or... Creator name(s): ... Journal: ... External Links: ... Original website information: Database maintenance site: National Institu
Toffer, H.; Erickson, D.G.; Samuel, T.J.; Pearson, J.S.
A computerized, knowledge-screened, comprehensive database of the nuclear criticality safety documentation has been assembled as part of the Nuclear Criticality Technology and Safety (NCTS) Project. The database is focused on nuclear criticality parameter studies. The database has been computerized using dBASE III Plus and can be used on a personal computer or a workstation. More than 1300 documents have been reviewed by nuclear criticality specialists over the last 5 years to produce over 800 database entries. Nuclear criticality specialists will be able to access the database and retrieve information about topical parameter studies, authors, and chronology. The database places the accumulated knowledge in the nuclear criticality area over the last 50 years at the fingertips of a criticality analyst.
Theoretical and applied environmental sounds research is gaining prominence but progress has been hampered by the lack of a comprehensive, high quality, accessible database of environmental sounds. An ongoing project to develop such a resource is described, which is based upon experimental evidence as to the way we listen to sounds in the world. The database will include a large number of sounds produced by different sound sources, with a thorough background for each sound file, including experimentally obtained perceptual data. In this way DESRA can contain a wide variety of acoustic, contextual, semantic, and behavioral information related to an individual sound. It will be accessible on the Internet and will be useful to researchers, engineers, sound designers, and musicians.
One of the tasks of WPEC Subgroup 38 (SG38) is to design a database structure for storing the particle information needed for nuclear reaction databases and transport codes. Since the same particle may appear many times in a reaction database (produced by many different reactions on different targets), one of the long-term goals for SG38 is to move towards a central database of particle information to reduce redundancy and ensure consistency among evaluations. The database structure must be general enough to describe all relevant particles and their properties, including mass, charge, spin and parity, half-life, decay properties, and so on. Furthermore, it must be broad enough to handle not only excited nuclear states but also excited atomic states that can de-excite through atomic relaxation. Databases built with this hierarchy will serve as central repositories for particle information that can be linked to from codes and other databases. It is hoped that the final product is general enough for use in other projects besides SG38. While this is called a 'particle database', the definition of a particle (as described in Section 2) is very broad. The database must describe nucleons, nuclei, excited nuclear states (and possibly atomic states) in addition to fundamental particles like photons, electrons, muons, etc. Under this definition the list of possible particles becomes quite large. To help organize them the database will need a way of grouping related particles (e.g., all the isotopes of an element, or all the excited levels of an isotope) together into particle 'groups'. The database will also need a way to classify particles that belong to the same 'family' (such as 'leptons', 'baryons', etc.). Each family of particles may have special requirements as to what properties are required. One important function of the particle database will be to provide an easy way for codes and external databases to look up any particle stored inside. In order to make access as
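The record describes a hierarchy in which each particle carries properties (mass, charge, spin and parity, half-life), belongs to a family (lepton, baryon, etc.), and can be collected into groups such as the isotopes of an element or the excited levels of an isotope, with a unique key for lookup by codes and external databases. A minimal sketch of such a schema follows; the field names and the `ParticleDB` interface are hypothetical illustrations, not the actual SG38 design.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Optional

@dataclass
class Particle:
    """One entry in a hypothetical SG38-style particle database."""
    pid: str                             # unique key, e.g. "Fe56" or "Fe56_e1"
    mass_amu: float
    charge: int
    spin: Optional[float] = None         # None where not yet evaluated
    parity: Optional[int] = None
    half_life_s: Optional[float] = None  # None => stable
    family: str = "nucleus"              # e.g. "lepton", "baryon", "nucleus"
    group: Optional[str] = None          # e.g. all excited levels of one isotope

class ParticleDB:
    """Central store giving codes and other databases one lookup point."""

    def __init__(self):
        self._by_id = {}
        self._groups = defaultdict(list)

    def add(self, p):
        if p.pid in self._by_id:
            # a central repository must reject duplicates to stay consistent
            raise ValueError(f"duplicate particle id {p.pid}")
        self._by_id[p.pid] = p
        if p.group:
            self._groups[p.group].append(p.pid)

    def lookup(self, pid):
        return self._by_id[pid]

    def group(self, name):
        return [self._by_id[i] for i in self._groups[name]]
```

Storing the ground state and each excited level as separate entries in one group mirrors the grouping requirement described above, and the rejected-duplicate check reflects the goal of reducing redundancy across evaluations.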
Jung, Sook; Jesudurai, Christopher; Staton, Margaret; Du, Zhidian; Ficklin, Stephen; Cho, Ilhyung; Abbott, Albert; Tomkins, Jeffrey; Main, Dorrie
Peach is being developed as a model organism for Rosaceae, an economically important family that includes fruits and ornamental plants such as apple, pear, strawberry, cherry, almond and rose. The genomics and genetics data of peach can play a significant role in the gene discovery and the genetic understanding of related species. The effective utilization of these peach resources, however, requires the development of an integrated and centralized database with associated analysis tools. The Genome Database for Rosaceae (GDR) is a curated and integrated web-based relational database. GDR contains comprehensive data of the genetically anchored peach physical map, an annotated peach EST database, Rosaceae maps and markers and all publicly available Rosaceae sequences. Annotations of ESTs include contig assembly, putative function, simple sequence repeats, and anchored position to the peach physical map where applicable. Our integrated map viewer provides graphical interface to the genetic, transcriptome and physical mapping information. ESTs, BACs and markers can be queried by various categories and the search result sites are linked to the integrated map viewer or to the WebFPC physical map sites. In addition to browsing and querying the database, users can compare their sequences with the annotated GDR sequences via a dedicated sequence similarity server running either the BLAST or FASTA algorithm. To demonstrate the utility of the integrated and fully annotated database and analysis tools, we describe a case study where we anchored Rosaceae sequences to the peach physical and genetic map by sequence similarity. The GDR has been initiated to meet the major deficiency in Rosaceae genomics and genetics research, namely a centralized web database and bioinformatics tools for data storage, analysis and exchange. GDR can be accessed at http://www.genome.clemson.edu/gdr/.