WorldWideScience

Sample records for manually curated database

  1. The curation paradigm and application tool used for manual curation of the scientific literature at the Comparative Toxicogenomics Database

    Science.gov (United States)

    Davis, Allan Peter; Wiegers, Thomas C.; Murphy, Cynthia G.; Mattingly, Carolyn J.

    2011-01-01

    The Comparative Toxicogenomics Database (CTD) is a public resource that promotes understanding about the effects of environmental chemicals on human health. CTD biocurators read the scientific literature and convert free-text information into a structured format using official nomenclature, integrating third-party controlled vocabularies for chemicals, genes, diseases and organisms, and a novel controlled vocabulary for molecular interactions. Manual curation produces a robust, richly annotated dataset of highly accurate and detailed information. Currently, CTD describes over 349 000 molecular interactions between 6800 chemicals, 20 900 genes (for 330 organisms) and 4300 diseases that have been manually curated from over 25 400 peer-reviewed articles. These manually curated data are further integrated with other third-party data (e.g. Gene Ontology, KEGG and Reactome annotations) to generate a wealth of toxicogenomic relationships. Here, we describe our approach to manual curation that uses a powerful and efficient paradigm involving mnemonic codes. This strategy allows biocurators to quickly capture detailed information from articles by generating simple statements using codes to represent the relationships between data types. The paradigm is versatile, expandable, and able to accommodate new data challenges that arise. We have incorporated this strategy into a web-based curation tool to further increase efficiency and productivity, implement quality control in real time and accommodate biocurators working remotely. Database URL: http://ctd.mdibl.org PMID:21933848
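
    The mnemonic-code idea lends itself to a compact illustration. Below is a minimal sketch, assuming an invented three-token grammar and toy vocabularies; CTD's actual code set and interaction grammar are described in the paper and are not reproduced here.

    ```python
    # Hypothetical sketch of mnemonic-code curation. The codes, grammar and
    # vocabularies below are invented stand-ins, not CTD's actual code set.

    # Toy controlled vocabularies (stand-ins for curated chemical/gene terms).
    CHEMICALS = {"Cd": "Cadmium", "As": "Arsenic"}
    GENES = {"TP53": "tumor protein p53", "CAT": "catalase"}
    ACTIONS = {"exp+": "increases expression of", "exp-": "decreases expression of"}

    def parse_statement(stmt: str) -> dict:
        """Expand a coded statement like 'Cd exp+ TP53' into a structured record."""
        chem, action, gene = stmt.split()
        return {
            "chemical": CHEMICALS[chem],
            "action": ACTIONS[action],
            "gene": GENES[gene],
        }

    # A biocurator types one short coded line per interaction found in an article.
    for line in ["Cd exp+ TP53", "As exp- CAT"]:
        print(parse_statement(line))
    ```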

  2. DAMPD: A manually curated antimicrobial peptide database

    KAUST Repository

    Seshadri Sundararajan, Vijayaraghava

    2011-11-21

    The demand for antimicrobial peptides (AMPs) is rising because of the increased occurrence of pathogens that are tolerant or resistant to conventional antibiotics. Since naturally occurring AMPs could serve as templates for the development of new anti-infectious agents to which pathogens are not resistant, a resource that contains relevant information on AMPs is of great interest. To that end, we developed the Dragon Antimicrobial Peptide Database (DAMPD, http://apps.sanbi.ac.za/dampd), which contains 1232 manually curated AMPs. DAMPD is an update and a replacement of the ANTIMIC database. In DAMPD, an integrated interface allows simple querying based on taxonomy, species, AMP family, citation, keywords and a combination of search terms and fields (Advanced Search). A number of tools such as BLAST, ClustalW, HMMER, Hydrocalculator, SignalP and an AMP predictor, as well as a number of other resources that provide additional information about the results, are also integrated into DAMPD to augment biological analysis of AMPs. © The Author(s) 2011. Published by Oxford University Press.

  3. DAMPD: A manually curated antimicrobial peptide database

    KAUST Repository

    Seshadri Sundararajan, Vijayaraghava; Gabere, Musa Nur; Pretorius, Ashley; Adam, Saleem; Christoffels, Alan; Lehvaslaiho, Minna; Archer, John A.C.; Bajic, Vladimir B.

    2011-01-01

    The demand for antimicrobial peptides (AMPs) is rising because of the increased occurrence of pathogens that are tolerant or resistant to conventional antibiotics. Since naturally occurring AMPs could serve as templates for the development of new anti-infectious agents to which pathogens are not resistant, a resource that contains relevant information on AMPs is of great interest. To that end, we developed the Dragon Antimicrobial Peptide Database (DAMPD, http://apps.sanbi.ac.za/dampd), which contains 1232 manually curated AMPs. DAMPD is an update and a replacement of the ANTIMIC database. In DAMPD, an integrated interface allows simple querying based on taxonomy, species, AMP family, citation, keywords and a combination of search terms and fields (Advanced Search). A number of tools such as BLAST, ClustalW, HMMER, Hydrocalculator, SignalP and an AMP predictor, as well as a number of other resources that provide additional information about the results, are also integrated into DAMPD to augment biological analysis of AMPs. © The Author(s) 2011. Published by Oxford University Press.

  4. SolCyc: a database hub at the Sol Genomics Network (SGN) for the manual curation of metabolic networks in Solanum and Nicotiana specific databases

    Science.gov (United States)

    Foerster, Hartmut; Bombarely, Aureliano; Battey, James N D; Sierro, Nicolas; Ivanov, Nikolai V; Mueller, Lukas A

    2018-01-01

    SolCyc is the entry portal to pathway/genome databases (PGDBs) for major species of the Solanaceae family hosted at the Sol Genomics Network. Currently, SolCyc comprises six organism-specific PGDBs for tomato, potato, pepper, petunia, tobacco and one Rubiaceae, coffee. The metabolic networks of those PGDBs have been computationally predicted by the PathoLogic component of the Pathway Tools software, using the manually curated multi-domain database MetaCyc (http://www.metacyc.org/) as reference. SolCyc has recently been extended by taxon-specific databases, i.e. the family-specific SolanaCyc database, containing only curated data pertinent to species of the nightshade family, and NicotianaCyc, a genus-specific database that stores all relevant metabolic data of the Nicotiana genus. Through manual curation of the published literature, new metabolic pathways have been created in those databases, which are complemented by the continuously updated, relevant species-specific pathways from MetaCyc. At present, SolanaCyc comprises 199 pathways and 29 superpathways and NicotianaCyc accounts for 72 pathways and 13 superpathways. Curator-maintained, taxon-specific databases such as SolanaCyc and NicotianaCyc are characterized by an enrichment of data specific to these taxa and free of falsely predicted pathways. Both databases have been used to update recently created Nicotiana-specific databases for Nicotiana tabacum, Nicotiana benthamiana, Nicotiana sylvestris and Nicotiana tomentosiformis by propagating verifiable data into those PGDBs. In addition, in-depth curation of the pathways in N. tabacum has been carried out, which resulted in the elimination of 156 pathways from the 569 pathways predicted by Pathway Tools. Together, in-depth curation of the predicted pathway network and supplementation with curated data from taxon-specific databases have substantially improved the curation status of the species-specific N. tabacum PGDB. The implementation of this

  5. MortalityPredictors.org: a manually-curated database of published biomarkers of human all-cause mortality.

    Science.gov (United States)

    Peto, Maximus V; De la Guardia, Carlos; Winslow, Ksenia; Ho, Andrew; Fortney, Kristen; Morgen, Eric

    2017-08-31

    Biomarkers of all-cause mortality are of tremendous clinical and research interest. Because of the long potential duration of prospective human lifespan studies, such biomarkers can play a key role in quantifying human aging and quickly evaluating any potential therapies. Decades of research into mortality biomarkers have resulted in numerous associations documented across hundreds of publications. Here, we present MortalityPredictors.org, a manually curated, publicly accessible database housing published, statistically significant relationships between biomarkers and all-cause mortality in population-based or generally healthy samples. To gather the information for this database, we searched PubMed for appropriate research papers and then manually curated relevant data from each paper. We manually curated 1,576 biomarker associations, involving 471 distinct biomarkers. Biomarkers ranged in type from hematologic (red blood cell distribution width) to molecular (DNA methylation changes) to physical (grip strength). Via the web interface, the resulting data can be easily browsed, searched, and downloaded for further analysis. MortalityPredictors.org provides comprehensive results on published biomarkers of human all-cause mortality that can be used to compare biomarkers, facilitate meta-analysis, assist with the experimental design of aging studies, and serve as a central resource for analysis. We hope that it will facilitate future research into human mortality and aging.

  6. miRSponge: a manually curated database for experimentally supported miRNA sponges and ceRNAs.

    Science.gov (United States)

    Wang, Peng; Zhi, Hui; Zhang, Yunpeng; Liu, Yue; Zhang, Jizhou; Gao, Yue; Guo, Maoni; Ning, Shangwei; Li, Xia

    2015-01-01

    In this study, we describe miRSponge, a manually curated database that aims to provide an experimentally supported resource for microRNA (miRNA) sponges. Recent evidence suggests that miRNAs are themselves regulated by competing endogenous RNAs (ceRNAs) or 'miRNA sponges' that contain miRNA binding sites. These competitive molecules can sequester miRNAs, preventing them from interacting with their natural targets, and play critical roles in various biological and pathological processes. It has become increasingly important to develop a high-quality database to record and store ceRNA data to support future studies. To this end, we have established the experimentally supported miRSponge database, which contains data on 599 miRNA-sponge interactions and 463 ceRNA relationships from 11 species, manually curated from nearly 1200 published articles. Database classes include endogenously generated molecules (coding genes, pseudogenes, long non-coding RNAs and circular RNAs) along with exogenously introduced molecules (viral RNAs and artificially engineered sponges). Approximately 70% of the interactions were identified experimentally in disease states. miRSponge provides a user-friendly interface for convenient browsing, retrieval and downloading of the dataset. A submission page is also included to allow researchers to submit newly validated miRNA sponge data. Database URL: http://www.bio-bigdata.net/miRSponge. © The Author(s) 2015. Published by Oxford University Press.

  7. MSDD: a manually curated database of experimentally supported associations among miRNAs, SNPs and human diseases

    OpenAIRE

    Yue, Ming; Zhou, Dianshuang; Zhi, Hui; Wang, Peng; Zhang, Yan; Gao, Yue; Guo, Maoni; Li, Xin; Wang, Yanxia; Zhang, Yunpeng; Ning, Shangwei; Li, Xia

    2017-01-01

    The MiRNA SNP Disease Database (MSDD, http://www.bio-bigdata.com/msdd/) is a manually curated database that provides comprehensive experimentally supported associations among microRNAs (miRNAs), single nucleotide polymorphisms (SNPs) and human diseases. SNPs in miRNA-related functional regions such as mature miRNAs, promoter regions, pri-miRNAs, pre-miRNAs and target gene 3′-UTRs, collectively called ‘miRSNPs’, represent a novel category of functional molecules. miRSNPs can lead to m...

  8. Lnc2Cancer: a manually curated database of experimentally supported lncRNAs associated with various human cancers.

    Science.gov (United States)

    Ning, Shangwei; Zhang, Jizhou; Wang, Peng; Zhi, Hui; Wang, Jianjian; Liu, Yue; Gao, Yue; Guo, Maoni; Yue, Ming; Wang, Lihua; Li, Xia

    2016-01-04

    Lnc2Cancer (http://www.bio-bigdata.net/lnc2cancer) is a manually curated database of cancer-associated long non-coding RNAs (lncRNAs) with experimental support that aims to provide a high-quality and integrated resource for exploring lncRNA deregulation in various human cancers. LncRNAs represent a large category of functional RNA molecules that play a significant role in human cancers. A curated collection and summary of deregulated lncRNAs in cancer is essential to thoroughly understand the mechanisms and functions of lncRNAs. Here, we developed the Lnc2Cancer database, which contains 1057 manually curated associations between 531 lncRNAs and 86 human cancers. Each association includes lncRNA and cancer name, the lncRNA expression pattern, experimental techniques, a brief functional description, the original reference and additional annotation information. Lnc2Cancer provides a user-friendly interface to conveniently browse, retrieve and download data. Lnc2Cancer also offers a submission page for researchers to submit newly validated lncRNA-cancer associations. With the rapidly increasing interest in lncRNAs, Lnc2Cancer will significantly improve our understanding of lncRNA deregulation in cancer and has the potential to be a timely and valuable resource. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  9. NSDNA: a manually curated database of experimentally supported ncRNAs associated with nervous system diseases.

    Science.gov (United States)

    Wang, Jianjian; Cao, Yuze; Zhang, Huixue; Wang, Tianfeng; Tian, Qinghua; Lu, Xiaoyu; Lu, Xiaoyan; Kong, Xiaotong; Liu, Zhaojun; Wang, Ning; Zhang, Shuai; Ma, Heping; Ning, Shangwei; Wang, Lihua

    2017-01-04

    The Nervous System Disease NcRNAome Atlas (NSDNA) (http://www.bio-bigdata.net/nsdna/) is a manually curated database that provides comprehensive experimentally supported associations between nervous system diseases (NSDs) and noncoding RNAs (ncRNAs). NSDs represent a common group of disorders, some of which are characterized by high morbidity and disability. The pathogenesis of NSDs at the molecular level remains poorly understood. ncRNAs are a large family of functionally important RNA molecules. Increasing evidence shows that diverse ncRNAs play critical roles in various NSDs. Mining and summarizing NSD-ncRNA association data can help researchers discover useful information. Hence, we developed the NSDNA database, which documents 24 713 associations between 142 NSDs and 8593 ncRNAs in 11 species, curated from more than 1300 articles. This database provides a user-friendly interface for browsing and searching and allows flexible data downloading. In addition, NSDNA offers a submission page for researchers to submit novel NSD-ncRNA associations. It represents an extremely useful and valuable resource for researchers who seek to understand the functions and molecular mechanisms of ncRNAs involved in NSDs. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  10. CCDB: a curated database of genes involved in cervix cancer.

    Science.gov (United States)

    Agarwal, Subhash M; Raghav, Dhwani; Singh, Harinder; Raghava, G P S

    2011-01-01

    The Cervical Cancer gene DataBase (CCDB, http://crdd.osdd.net/raghava/ccdb) is a manually curated catalog of experimentally validated genes that are thought, or known, to be involved in the different stages of cervical carcinogenesis. Despite the large number of women affected by this malignancy, no database previously existed that catalogs information on genes associated with cervical cancer. We have therefore compiled 537 genes in CCDB that are linked with cervical cancer causation processes such as methylation, gene amplification, mutation, polymorphism and change in expression level, as evident from the published literature. Each record contains gene details such as architecture (exon-intron structure), location, function, sequences (mRNA/CDS/protein), ontology, interacting partners, homology to other eukaryotic genomes, structure and links to other public databases, thus augmenting CCDB with external data. Manually curated literature references are also provided to support the inclusion of each gene in the database and establish its association with cervical cancer. In addition, CCDB provides information on microRNAs altered in cervical cancer, as well as a search facility for querying, several browse options and an online tool for sequence similarity search, thereby providing researchers with easy access to the latest information on genes involved in cervical cancer.

  11. A computational platform to maintain and migrate manual functional annotations for BioCyc databases.

    Science.gov (United States)

    Walsh, Jesse R; Sen, Taner Z; Dickerson, Julie A

    2014-10-12

    BioCyc databases are an important resource for information on biological pathways and genomic data. Such databases represent the accumulation of biological data, some of which has been manually curated from the literature. An essential feature of these databases is continuing data integration as new knowledge is discovered. As functional annotations are improved, scalable methods are needed for curators to manage annotations without detailed knowledge of the specific design of the BioCyc database. We have developed CycTools, a software tool that allows curators to maintain functional annotations in a model organism database. This tool builds on existing software to simplify the import of user-provided annotation data into BioCyc databases. Additionally, CycTools automatically resolves synonyms and alternate identifiers contained within the database into the appropriate internal identifiers. Automating steps in the manual data entry process can improve curation efforts for major biological databases. The functionality of CycTools is demonstrated by transferring GO term annotations from MaizeCyc to matching proteins in CornCyc, both maize metabolic pathway databases available at MaizeGDB, and by creating strain-specific databases for metabolic engineering.
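
    The synonym-resolution step mentioned above can be pictured as an inverted lookup over each object's known names. A minimal sketch follows; the frame identifiers and synonyms are invented for illustration and are not actual BioCyc content.

    ```python
    # Sketch of resolving user-supplied names to internal database identifiers.
    # All identifiers and synonyms below are invented for illustration.
    INTERNAL_IDS = {
        "G-0001": {"catalase", "cat1", "zm.cat1"},
        "G-0002": {"superoxide dismutase", "sod2"},
    }

    # Invert the synonym sets into a case-insensitive lookup table.
    LOOKUP = {syn.lower(): frame_id
              for frame_id, synonyms in INTERNAL_IDS.items()
              for syn in synonyms}

    def resolve(name):
        """Map a synonym or alternate identifier to its internal ID, if known."""
        return LOOKUP.get(name.lower())

    print(resolve("CAT1"))          # -> G-0001
    print(resolve("unknown gene"))  # -> None (flag for manual review)
    ```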

  12. The MIntAct project—IntAct as a common curation platform for 11 molecular interaction databases

    Science.gov (United States)

    Orchard, Sandra; Ammari, Mais; Aranda, Bruno; Breuza, Lionel; Briganti, Leonardo; Broackes-Carter, Fiona; Campbell, Nancy H.; Chavali, Gayatri; Chen, Carol; del-Toro, Noemi; Duesbury, Margaret; Dumousseau, Marine; Galeota, Eugenia; Hinz, Ursula; Iannuccelli, Marta; Jagannathan, Sruthi; Jimenez, Rafael; Khadake, Jyoti; Lagreid, Astrid; Licata, Luana; Lovering, Ruth C.; Meldal, Birgit; Melidoni, Anna N.; Milagros, Mila; Peluso, Daniele; Perfetto, Livia; Porras, Pablo; Raghunath, Arathi; Ricard-Blum, Sylvie; Roechert, Bernd; Stutz, Andre; Tognolli, Michael; van Roey, Kim; Cesareni, Gianni; Hermjakob, Henning

    2014-01-01

    IntAct (freely available at http://www.ebi.ac.uk/intact) is an open-source, open data molecular interaction database populated by data either curated from the literature or from direct data depositions. IntAct has developed a sophisticated web-based curation tool, capable of supporting both IMEx- and MIMIx-level curation. This tool is now utilized by multiple additional curation teams, all of whom annotate data directly into the IntAct database. Members of the IntAct team supply appropriate levels of training, perform quality control on entries and take responsibility for long-term data maintenance. Recently, the MINT and IntAct databases decided to merge their separate efforts to make optimal use of limited developer resources and maximize the curation output. All data manually curated by the MINT curators have been moved into the IntAct database at EMBL-EBI and are merged with the existing IntAct dataset. Both IntAct and MINT are active contributors to the IMEx consortium (http://www.imexconsortium.org). PMID:24234451

  13. MSDD: a manually curated database of experimentally supported associations among miRNAs, SNPs and human diseases.

    Science.gov (United States)

    Yue, Ming; Zhou, Dianshuang; Zhi, Hui; Wang, Peng; Zhang, Yan; Gao, Yue; Guo, Maoni; Li, Xin; Wang, Yanxia; Zhang, Yunpeng; Ning, Shangwei; Li, Xia

    2018-01-04

    The MiRNA SNP Disease Database (MSDD, http://www.bio-bigdata.com/msdd/) is a manually curated database that provides comprehensive experimentally supported associations among microRNAs (miRNAs), single nucleotide polymorphisms (SNPs) and human diseases. SNPs in miRNA-related functional regions such as mature miRNAs, promoter regions, pri-miRNAs, pre-miRNAs and target gene 3'-UTRs, collectively called 'miRSNPs', represent a novel category of functional molecules. miRSNPs can lead to dysregulation of miRNAs and their target genes, resulting in susceptibility to or onset of human diseases. A curated collection and summary of miRSNP-associated diseases is essential for a thorough understanding of the mechanisms and functions of miRSNPs. Here, we describe MSDD, which currently documents 525 associations among 182 human miRNAs, 197 SNPs, 153 genes and 164 human diseases through a review of more than 2000 published papers. Each association incorporates information on the miRNAs, SNPs, miRNA target genes and disease names, SNP locations and alleles, the miRNA dysfunctional pattern, experimental techniques, a brief functional description, the original reference and additional annotation. MSDD provides a user-friendly interface to conveniently browse, retrieve, download and submit novel data. MSDD will significantly improve our understanding of miRNA dysfunction in disease, and thus, MSDD has the potential to serve as a timely and valuable resource. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  14. Automatic vs. manual curation of a multi-source chemical dictionary: the impact on text mining

    Science.gov (United States)

    2010-01-01

    Background Previously, we developed a combined dictionary dubbed Chemlist for the identification of small molecules and drugs in text, based on a number of publicly available databases, and tested it on an annotated corpus. To achieve an acceptable recall and precision we used a number of automatic and semi-automatic processing steps together with disambiguation rules. However, it remained to be investigated what impact an extensive manual curation of a multi-source chemical dictionary would have on chemical term identification in text. ChemSpider is a chemical database that has undergone extensive manual curation aimed at establishing valid chemical name-to-structure relationships. Results We acquired the component of ChemSpider containing only manually curated names and synonyms. Rule-based term filtering, semi-automatic manual curation, and disambiguation rules were applied. We tested the dictionary from ChemSpider on an annotated corpus and compared the results with those for the Chemlist dictionary. The ChemSpider dictionary of ca. 80 k names was only 1/3 to 1/4 the size of Chemlist, at around 300 k names. The ChemSpider dictionary had a precision of 0.43 and a recall of 0.19 before the application of filtering and disambiguation, and a precision of 0.87 and a recall of 0.19 after filtering and disambiguation. The Chemlist dictionary had a precision of 0.20 and a recall of 0.47 before the application of filtering and disambiguation, and a precision of 0.67 and a recall of 0.40 after filtering and disambiguation. Conclusions We conclude the following: (1) the ChemSpider dictionary achieved the best precision, but the Chemlist dictionary had a higher recall and the best F-score; (2) rule-based filtering and disambiguation is necessary to achieve a high precision for both the automatically generated and the manually curated dictionary. ChemSpider is available as a web service at http://www.chemspider.com/ and the Chemlist dictionary is freely available as an XML file in
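
    The F-score conclusion can be checked directly from the precision and recall figures quoted above, using the standard harmonic mean:

    ```python
    # Check of the reported figures: F1 is the harmonic mean of precision and recall.
    def f1(precision: float, recall: float) -> float:
        return 2 * precision * recall / (precision + recall)

    # Values quoted above, after filtering and disambiguation.
    print(f"ChemSpider F1: {f1(0.87, 0.19):.2f}")  # ~0.31
    print(f"Chemlist   F1: {f1(0.67, 0.40):.2f}")  # ~0.50, the best F-score
    ```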

  15. MS_HistoneDB, a manually curated resource for proteomic analysis of human and mouse histones.

    Science.gov (United States)

    El Kennani, Sara; Adrait, Annie; Shaytan, Alexey K; Khochbin, Saadi; Bruley, Christophe; Panchenko, Anna R; Landsman, David; Pflieger, Delphine; Govin, Jérôme

    2017-01-01

    Histones and histone variants are essential components of the nuclear chromatin. While mass spectrometry has opened a large window to their characterization and functional studies, their identification from proteomic data remains challenging. Indeed, the current interpretation of mass spectrometry data relies on public databases which are either not exhaustive (Swiss-Prot) or contain many redundant entries (UniProtKB or NCBI). Currently, no protein database is ideally suited for the analysis of histones and the complex array of mammalian histone variants. We propose two proteomics-oriented manually curated databases for mouse and human histone variants. We manually curated >1700 gene, transcript and protein entries to produce a non-redundant list of 83 mouse and 85 human histones. These entries were annotated in accordance with the current nomenclature and unified with the "HistoneDB2.0 with Variants" database. This resource is provided in a format that can be directly read by programs used for mass spectrometry data interpretation. In addition, it was used to interpret mass spectrometry data acquired on histones extracted from mouse testis. Several histone variants, which had so far only been inferred by homology or detected at the RNA level, were detected by mass spectrometry, confirming the existence of their protein form. Mouse and human histone entries were collected from different databases and subsequently curated to produce a non-redundant protein-centric resource, MS_HistoneDB. It is dedicated to the proteomic study of histones in mouse and human and will hopefully facilitate the identification and functional study of histone variants.
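
    The redundancy-removal step described above can be pictured as collapsing entries that share an identical protein sequence, keeping one accession per sequence. The records below are invented placeholders, not MS_HistoneDB entries.

    ```python
    # Sketch of building a non-redundant list from redundant database entries.
    # Names and sequences are illustrative placeholders.
    entries = [
        ("H3-demo-uniprot", "ARTKQTARKSTGGKAPRKQLA"),
        ("H3-demo-ncbi",    "ARTKQTARKSTGGKAPRKQLA"),  # same sequence, other source
        ("H3-variant-demo", "ARTKQTARKSTGGKAPRKQLT"),
    ]

    non_redundant = {}
    for name, seq in entries:
        non_redundant.setdefault(seq, name)  # keep the first entry per sequence

    print(f"{len(entries)} entries -> {len(non_redundant)} non-redundant sequences")
    ```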

  16. SABIO-RK: an updated resource for manually curated biochemical reaction kinetics

    Science.gov (United States)

    Rey, Maja; Weidemann, Andreas; Kania, Renate; Müller, Wolfgang

    2018-01-01

    SABIO-RK (http://sabiork.h-its.org/) is a manually curated database containing data about biochemical reactions and their reaction kinetics. The data are primarily extracted from the scientific literature and stored in a relational database. The content comprises both naturally occurring and alternatively measured biochemical reactions and is not restricted to any organism class. The data are made available to the public through a web-based search interface and through web services for programmatic access. In this update we describe major improvements and extensions of SABIO-RK since our last publication in the database issue of Nucleic Acids Research (2012): (i) the website has been completely revised and (ii) now also allows free-text search of kinetics data; (iii) additional interlinkages with other databases in our field have been established, enabling users to gain comprehensive knowledge about the properties of enzymes and kinetics beyond SABIO-RK; (iv) vice versa, direct access to SABIO-RK data has been implemented in several systems biology tools and workflows; (v) at the request of our experimental users, the data can now additionally be exported in spreadsheet formats; (vi) the newly established SABIO-RK Curation Service allows us to respond to specific data requirements. PMID:29092055

  17. IMPPAT: A curated database of Indian Medicinal Plants, Phytochemistry And Therapeutics.

    Science.gov (United States)

    Mohanraj, Karthikeyan; Karthikeyan, Bagavathy Shanmugam; Vivek-Ananth, R P; Chand, R P Bharath; Aparna, S R; Mangalapandi, Pattulingam; Samal, Areejit

    2018-03-12

    Phytochemicals of medicinal plants encompass a diverse chemical space for drug discovery. India is rich with a flora of indigenous medicinal plants that have been used for centuries in traditional Indian medicine to treat human maladies. A comprehensive online database on the phytochemistry of Indian medicinal plants will enable computational approaches towards natural product based drug discovery. In this direction, we present IMPPAT, a manually curated database of 1742 Indian Medicinal Plants, 9596 Phytochemicals, And 1124 Therapeutic uses, spanning 27074 plant-phytochemical associations and 11514 plant-therapeutic associations. Notably, the curation effort led to a non-redundant in silico library of 9596 phytochemicals with standard chemical identifiers and structure information. Using cheminformatic approaches, we have computed the physicochemical, ADMET (absorption, distribution, metabolism, excretion, toxicity) and drug-likeness properties of the IMPPAT phytochemicals. We show that the stereochemical complexity and shape complexity of IMPPAT phytochemicals differ from libraries of commercial compounds or diversity-oriented synthesis compounds while being similar to other libraries of natural products. Within IMPPAT, we have filtered a subset of 960 potentially druggable phytochemicals, the majority of which have no significant similarity to existing FDA-approved drugs, rendering them good candidates for prospective drugs. The IMPPAT database is openly accessible at https://cb.imsc.res.in/imppat.
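
    Descriptor profiling of this kind can be approximated with standard open-source cheminformatics tooling. The sketch below uses RDKit with an illustrative descriptor set; the abstract does not specify IMPPAT's exact pipeline, so this is only an assumption-laden approximation.

    ```python
    # Sketch of computing physicochemical descriptors for a phytochemical library.
    # Requires RDKit; the descriptor set here is illustrative, not IMPPAT's own.
    from rdkit import Chem
    from rdkit.Chem import Descriptors

    def profile(smiles: str) -> dict:
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            raise ValueError(f"unparseable SMILES: {smiles}")
        return {
            "mol_weight": Descriptors.MolWt(mol),
            "logp": Descriptors.MolLogP(mol),
            "h_donors": Descriptors.NumHDonors(mol),
            "h_acceptors": Descriptors.NumHAcceptors(mol),
        }

    # Caffeine, a well-known plant alkaloid, as a worked example.
    print(profile("CN1C=NC2=C1C(=O)N(C(=O)N2C)C"))
    ```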

  18. Text mining effectively scores and ranks the literature for improving chemical-gene-disease curation at the comparative toxicogenomics database.

    Directory of Open Access Journals (Sweden)

    Allan Peter Davis

    The Comparative Toxicogenomics Database (CTD; http://ctdbase.org/) is a public resource that curates interactions between environmental chemicals and gene products, and their relationships to diseases, as a means of understanding the effects of environmental chemicals on human health. CTD provides a triad of core information in the form of chemical-gene, chemical-disease, and gene-disease interactions that are manually curated from scientific articles. To increase the efficiency, productivity, and data coverage of manual curation, we have leveraged text mining to help rank and prioritize the triaged literature. Here, we describe our text-mining process that computes and assigns each article a document relevancy score (DRS), wherein a high DRS suggests that an article is more likely to be relevant for curation at CTD. We evaluated our process by first text mining a corpus of 14,904 articles triaged for seven heavy metals (cadmium, cobalt, copper, lead, manganese, mercury, and nickel). Based upon initial analysis, a representative subset corpus of 3,583 articles was then selected from the 14,904 articles and sent to five CTD biocurators for review. The resulting curation of these 3,583 articles was analyzed for a variety of parameters, including article relevancy, novel data content, interaction yield rate, mean average precision, and biological and toxicological interpretability. We show that for all measured parameters, the DRS is an effective indicator for scoring and improving the ranking of literature for the curation of chemical-gene-disease information at CTD. Here, we demonstrate how fully incorporating text mining-based DRS scoring into our curation pipeline enhances manual curation by prioritizing more relevant articles, thereby increasing data content, productivity, and efficiency.

  19. Text Mining Effectively Scores and Ranks the Literature for Improving Chemical-Gene-Disease Curation at the Comparative Toxicogenomics Database

    Science.gov (United States)

    Johnson, Robin J.; Lay, Jean M.; Lennon-Hopkins, Kelley; Saraceni-Richards, Cynthia; Sciaky, Daniela; Murphy, Cynthia Grondin; Mattingly, Carolyn J.

    2013-01-01

    The Comparative Toxicogenomics Database (CTD; http://ctdbase.org/) is a public resource that curates interactions between environmental chemicals and gene products, and their relationships to diseases, as a means of understanding the effects of environmental chemicals on human health. CTD provides a triad of core information in the form of chemical-gene, chemical-disease, and gene-disease interactions that are manually curated from scientific articles. To increase the efficiency, productivity, and data coverage of manual curation, we have leveraged text mining to help rank and prioritize the triaged literature. Here, we describe our text-mining process that computes and assigns each article a document relevancy score (DRS), wherein a high DRS suggests that an article is more likely to be relevant for curation at CTD. We evaluated our process by first text mining a corpus of 14,904 articles triaged for seven heavy metals (cadmium, cobalt, copper, lead, manganese, mercury, and nickel). Based upon initial analysis, a representative subset corpus of 3,583 articles was then selected from the 14,904 articles and sent to five CTD biocurators for review. The resulting curation of these 3,583 articles was analyzed for a variety of parameters, including article relevancy, novel data content, interaction yield rate, mean average precision, and biological and toxicological interpretability. We show that for all measured parameters, the DRS is an effective indicator for scoring and improving the ranking of literature for the curation of chemical-gene-disease information at CTD. Here, we demonstrate how fully incorporating text mining-based DRS scoring into our curation pipeline enhances manual curation by prioritizing more relevant articles, thereby increasing data content, productivity, and efficiency. PMID:23613709
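
    The triage step can be sketched as scoring and sorting: compute a relevancy score per article, then send the highest-scoring articles to biocurators first. The keyword-weight scoring below is a placeholder, since the abstract does not detail how the DRS itself is computed.

    ```python
    # Sketch of triaging a corpus by a document relevancy score (DRS).
    # The keyword-weight scoring is a stand-in for CTD's actual text-mining model.
    ARTICLES = [
        {"pmid": "111", "text": "cadmium increases TP53 expression in rat liver"},
        {"pmid": "222", "text": "a survey of museum specimen storage practices"},
        {"pmid": "333", "text": "nickel exposure alters gene expression profiles"},
    ]

    KEYWORD_WEIGHTS = {"cadmium": 2.0, "nickel": 2.0, "expression": 1.0, "gene": 0.5}

    def drs(text: str) -> float:
        return sum(KEYWORD_WEIGHTS.get(token, 0.0) for token in text.lower().split())

    # High-DRS articles are prioritized for manual curation.
    for article in sorted(ARTICLES, key=lambda a: drs(a["text"]), reverse=True):
        print(f"{drs(article['text']):4.1f}  PMID:{article['pmid']}  {article['text']}")
    ```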

  20. A curated gluten protein sequence database to support development of proteomics methods for determination of gluten in gluten-free foods.

    Science.gov (United States)

    Bromilow, Sophie; Gethings, Lee A; Buckley, Mike; Bromley, Mike; Shewry, Peter R; Langridge, James I; Clare Mills, E N

    2017-06-23

    The unique physiochemical properties of wheat gluten enable a diverse range of food products to be manufactured. However, gluten triggers coeliac disease, a condition which is treated using a gluten-free diet. Analytical methods are required to confirm if foods are gluten-free, but current immunoassay-based methods can be unreliable; proteomic methods offer an alternative but require comprehensive, well-annotated sequence databases, which are lacking for gluten. A manually curated database of gluten proteins (GluPro V1.0), comprising 630 discrete unique full-length protein sequences, has been compiled. It is representative of the different types of gliadin and glutenin components found in gluten. An in silico comparison of their coeliac toxicity was undertaken by analysing the distribution of coeliac toxic motifs. This demonstrated that whilst the α-gliadin proteins contained more toxic motifs, these were distributed across all gluten protein sub-types. Comparison of annotations observed using a discovery proteomics dataset acquired using ion mobility MS/MS showed that more reliable identifications were obtained using the GluPro V1.0 database compared to the complete reviewed Viridiplantae database. This highlights the value of a curated sequence database specifically designed to support proteomic workflows and the development of methods to detect and quantify gluten. We have constructed the first manually curated open-source wheat gluten protein sequence database (GluPro V1.0) in a FASTA format to support the application of proteomic methods for gluten protein detection and quantification. We have also analysed the manually verified sequences to give the first comprehensive overview of the distribution of sequences able to elicit a reaction in coeliac disease, the prevalent form of gluten intolerance. Provision of this database will improve the reliability of gluten protein identification by proteomic analysis, and aid the development of targeted mass
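
    The in silico toxicity comparison reduces to counting motif occurrences per sequence. A minimal sketch follows, assuming placeholder motifs and demo sequences (neither is taken from GluPro or the study):

    ```python
    # Sketch of scanning protein sequences for coeliac-toxic motifs and counting
    # occurrences. Motifs and sequences are illustrative placeholders only.
    TOXIC_MOTIFS = ["QQPFP", "PQPQLPY"]  # hypothetical stand-ins

    SEQUENCES = {
        "alpha_gliadin_demo": "VRVPVPQLQPQNPSQQQPQEQVPLVQQQQFPGQQQPFPPQQPYPQPQLPY",
        "lmw_glutenin_demo": "SHIPGLERPSQQQPLPPQQQPFPQQPYPQQQPLQQQQPFSQQQQQPLPQ",
    }

    for name, seq in SEQUENCES.items():
        counts = {motif: seq.count(motif) for motif in TOXIC_MOTIFS}
        print(name, counts)
    ```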

  1. MIPS: curated databases and comprehensive secondary data resources in 2010.

    Science.gov (United States)

    Mewes, H Werner; Ruepp, Andreas; Theis, Fabian; Rattei, Thomas; Walter, Mathias; Frishman, Dmitrij; Suhre, Karsten; Spannagl, Manuel; Mayer, Klaus F X; Stümpflen, Volker; Antonov, Alexey

    2011-01-01

    The Munich Information Center for Protein Sequences (MIPS at the Helmholtz Center for Environmental Health, Neuherberg, Germany) has many years of experience in providing annotated collections of biological data. Selected data sets of high relevance, such as model genomes, are subjected to careful manual curation, while the bulk of high-throughput data is annotated by automatic means. High-quality reference resources developed in the past and still actively maintained include Saccharomyces cerevisiae, Neurospora crassa and Arabidopsis thaliana genome databases as well as several protein interaction data sets (MPACT, MPPI and CORUM). More recent projects are PhenomiR, the database on microRNA-related phenotypes, and MIPS PlantsDB for integrative and comparative plant genome research. The interlinked resources SIMAP and PEDANT provide homology relationships as well as up-to-date and consistent annotation for 38,000,000 protein sequences. PPLIPS and CCancer are versatile tools for proteomics and functional genomics interfacing to a database of compilations from gene lists extracted from literature. A novel literature-mining tool, EXCERBT, gives access to structured information on classified relations between genes, proteins, phenotypes and diseases extracted from Medline abstracts by semantic analysis. All databases described here, as well as the detailed descriptions of our projects can be accessed through the MIPS WWW server (http://mips.helmholtz-muenchen.de).

  2. BioM2MetDisease: a manually curated database for associations between microRNAs, metabolites, small molecules and metabolic diseases.

    Science.gov (United States)

    Xu, Yanjun; Yang, Haixiu; Wu, Tan; Dong, Qun; Sun, Zeguo; Shang, Desi; Li, Feng; Xu, Yingqi; Su, Fei; Liu, Siyao; Zhang, Yunpeng; Li, Xia

    2017-01-01

    BioM2MetDisease is a manually curated database that aims to provide a comprehensive and experimentally supported resource of associations between metabolic diseases and various biomolecules. Recently, metabolic diseases such as diabetes have become one of the leading threats to people’s health. Metabolic diseases are associated with alterations in multiple types of biomolecules, such as miRNAs and metabolites. An integrated, high-quality data source that collects metabolic disease-associated biomolecules is essential for exploring the underlying molecular mechanisms and discovering novel therapeutics. Here, we developed the BioM2MetDisease database, which currently documents 2681 entries of relationships between 1147 biomolecules (miRNAs, metabolites and small molecules/drugs) and 78 metabolic diseases across 14 species. Each entry includes biomolecule category, species, biomolecule name, disease name, dysregulation pattern, experimental technique, a brief description of the metabolic disease-biomolecule relationship, the reference and additional annotation information. BioM2MetDisease provides a user-friendly interface to explore and retrieve all data conveniently. A submission page is also offered for researchers to submit new associations between biomolecules and metabolic diseases. BioM2MetDisease provides a comprehensive resource for studying the biomolecules that act in metabolic diseases, and it is helpful for understanding the molecular mechanisms of and developing novel therapeutics for metabolic diseases. http://www.bio-bigdata.com/BioM2MetDisease/. © The Author(s) 2017. Published by Oxford University Press.

  3. EVLncRNAs: a manually curated database for long non-coding RNAs validated by low-throughput experiments

    Science.gov (United States)

    Zhao, Huiying; Yu, Jiafeng; Guo, Chengang; Dou, Xianghua; Song, Feng; Hu, Guodong; Cao, Zanxia; Qu, Yuanxu

    2018-01-01

    Long non-coding RNAs (lncRNAs) play important functional roles in various biological processes. Early databases were utilized to deposit all lncRNA candidates produced by high-throughput experimental and/or computational techniques to facilitate classification, assessment and validation. As more lncRNAs are validated by low-throughput experiments, several databases have been established for experimentally validated lncRNAs. However, these databases are small in scale (with a few hundred lncRNAs only) and specific in their focuses (plants, diseases or interactions). Thus, it is highly desirable to have a comprehensive dataset for experimentally validated lncRNAs as a central repository for all of their structures, functions and phenotypes. Here, we established EVLncRNAs by curating lncRNAs validated by low-throughput experiments (up to 1 May 2016) and integrating specific databases (lncRNAdb, LncRNADisease, Lnc2Cancer and PLNlncRbase) with additional functional and disease-specific information not covered previously. The current version of EVLncRNAs contains 1543 lncRNAs from 77 species, which is 2.9 times larger than the current largest database for experimentally validated lncRNAs. Seventy-four percent of lncRNA entries are partially or completely new compared with all existing experimentally validated databases. The established database allows users to browse, search and download as well as to submit experimentally validated lncRNAs. The database is available at http://biophy.dzu.edu.cn/EVLncRNAs. PMID:28985416

  4. CPAD, Curated Protein Aggregation Database: A Repository of Manually Curated Experimental Data on Protein and Peptide Aggregation.

    Science.gov (United States)

    Thangakani, A Mary; Nagarajan, R; Kumar, Sandeep; Sakthivel, R; Velmurugan, D; Gromiha, M Michael

    2016-01-01

    Accurate distinction between peptide sequences that can form amyloid fibrils or amorphous β-aggregates, identification of potential aggregation-prone regions in proteins, and prediction of the change in aggregation rate of a protein upon mutation(s) are critical to research on protein misfolding diseases, such as Alzheimer's and Parkinson's, as well as to the biotechnological production of protein-based therapeutics. We have developed the Curated Protein Aggregation Database (CPAD), which collects results from experimental studies performed by the scientific community aimed at understanding protein/peptide aggregation. CPAD contains more than 2300 experimentally observed aggregation rates upon mutations in known amyloidogenic proteins. Each entry includes numerical values for the following parameters: change in rate of aggregation as measured by fluorescence intensity or turbidity, name and source of the protein, UniProt and Protein Data Bank codes, single-point as well as multiple mutations, and literature citation. The data in CPAD have been supplemented with five different types of additional information: (i) amyloid fibril forming hexapeptides, (ii) amorphous β-aggregating hexapeptides, (iii) amyloid fibril forming peptides of different lengths, (iv) amyloid fibril forming hexapeptides whose crystal structures are available in the Protein Data Bank (PDB) and (v) experimentally validated aggregation-prone regions found in amyloidogenic proteins. Furthermore, CPAD is linked to other related databases and resources, such as UniProt, the Protein Data Bank, PubMed, GAP, TANGO, WALTZ etc. We have set up a web interface with different search and display options so that users have the ability to get the data in multiple ways. CPAD is freely available at http://www.iitm.ac.in/bioinfo/CPAD/. The potential applications of CPAD are also discussed.

  5. Curation of complex, context-dependent immunological data

    Directory of Open Access Journals (Sweden)

    Sidney John

    2006-07-01

    Background The Immune Epitope Database and Analysis Resource (IEDB) is dedicated to capturing, housing and analyzing complex immune epitope related data (http://www.immuneepitope.org). Description To identify and extract relevant data from the scientific literature in an efficient and accurate manner, novel processes were developed for manual and semi-automated annotation. Conclusion Formalized curation strategies enable the processing of a large volume of context-dependent data, which are now available to the scientific community in an accessible and transparent format. The experiences described herein are applicable to other databases housing complex biological data and requiring a high level of curation expertise.

  6. Text Mining Genotype-Phenotype Relationships from Biomedical Literature for Database Curation and Precision Medicine.

    Science.gov (United States)

    Singhal, Ayush; Simmons, Michael; Lu, Zhiyong

    2016-11-01

    The practice of precision medicine will ultimately require databases of genes and mutations for healthcare providers to reference in order to understand the clinical implications of each patient's genetic makeup. Although the highest quality databases require manual curation, text mining tools can facilitate the curation process, increasing accuracy, coverage, and productivity. However, to date there are no available text mining tools that offer high-accuracy performance for extracting disease-gene-variant triplets from biomedical literature. In this paper we propose a high-performance machine learning approach to automate the extraction of disease-gene-variant triplets from biomedical literature. Our approach is unique because we identify the genes and protein products associated with each mutation from not just the local text content, but from a global context as well (from the Internet and from all literature in PubMed). Our approach also incorporates protein sequence validation and disease association using a novel text-mining-based machine learning approach. We extract disease-gene-variant triplets from all abstracts in PubMed related to a set of ten important diseases (breast cancer, prostate cancer, pancreatic cancer, lung cancer, acute myeloid leukemia, Alzheimer's disease, hemochromatosis, age-related macular degeneration (AMD), diabetes mellitus, and cystic fibrosis). We then evaluate our approach in two ways: (1) a direct comparison with the state of the art using benchmark datasets; (2) a validation study comparing the results of our approach with entries in a popular human-curated database (UniProt) for each of the previously mentioned diseases. In the benchmark comparison, our full approach achieves a 28% improvement in F1-measure (from 0.62 to 0.79) over the state-of-the-art results. For the validation study with UniProt Knowledgebase (KB), we present a thorough analysis of the results and errors. Across all diseases, our approach returned 272 triplets (disease

  7. Argo: an integrative, interactive, text mining-based workbench supporting curation

    Science.gov (United States)

    Rak, Rafal; Rowley, Andrew; Black, William; Ananiadou, Sophia

    2012-01-01

    Curation of biomedical literature is often supported by the automatic analysis of textual content that generally involves a sequence of individual processing components. Text mining (TM) has been used to enhance the process of manual biocuration, but has been focused on specific databases and tasks rather than an environment integrating TM tools into the curation pipeline, catering for a variety of tasks, types of information and applications. Processing components usually come from different sources and often lack interoperability. The well established Unstructured Information Management Architecture is a framework that addresses interoperability by defining common data structures and interfaces. However, most of the efforts are targeted towards software developers and are not suitable for curators, or are otherwise inconvenient to use on a higher level of abstraction. To overcome these issues we introduce Argo, an interoperable, integrative, interactive and collaborative system for text analysis with a convenient graphic user interface to ease the development of processing workflows and boost productivity in labour-intensive manual curation. Robust, scalable text analytics follow a modular approach, adopting component modules for distinct levels of text analysis. The user interface is available entirely through a web browser that saves the user from going through often complicated and platform-dependent installation procedures. Argo comes with a predefined set of processing components commonly used in text analysis, while giving the users the ability to deposit their own components. The system accommodates various areas and levels of user expertise, from TM and computational linguistics to ontology-based curation. One of the key functionalities of Argo is its ability to seamlessly incorporate user-interactive components, such as manual annotation editors, into otherwise completely automatic pipelines. As a use case, we demonstrate the functionality of an in

  8. miRandola 2017: a curated knowledge base of non-invasive biomarkers

    DEFF Research Database (Denmark)

    Russo, Francesco; Di Bella, Sebastiano; Vannini, Federica

    2018-01-01

    databases. Data are manually curated from 314 articles that describe miRNAs, long non-coding RNAs and circular RNAs. Fourteen organisms are now included in the database, along with associations of ncRNAs with 25 drugs, 47 sample types and 197 diseases. miRandola also classifies extracellular RNAs based

  9. The MIntAct project--IntAct as a common curation platform for 11 molecular interaction databases

    OpenAIRE

    Orchard, S; Ammari, M; Aranda, B; Breuza, L; Briganti, L; Broackes-Carter, F; Campbell, N; Chavali, G; Chen, C; del-Toro, N; Duesbury, M; Dumousseau, M; Galeota, E; Hinz, U; Iannuccelli, M

    2014-01-01

    IntAct (freely available at http://www.ebi.ac.uk/intact) is an open-source, open data molecular interaction database populated by data either curated from the literature or from direct data depositions. IntAct has developed a sophisticated web-based curation tool, capable of supporting both IMEx- and MIMIx-level curation. This tool is now utilized by multiple additional curation teams, all of whom annotate data directly into the IntAct database. Members of the IntAct team supply appropriate l...

  10. Text Mining Genotype-Phenotype Relationships from Biomedical Literature for Database Curation and Precision Medicine.

    Directory of Open Access Journals (Sweden)

    Ayush Singhal

    2016-11-01

    The practice of precision medicine will ultimately require databases of genes and mutations for healthcare providers to reference in order to understand the clinical implications of each patient's genetic makeup. Although the highest quality databases require manual curation, text mining tools can facilitate the curation process, increasing accuracy, coverage, and productivity. However, to date there are no available text mining tools that offer high-accuracy performance for extracting disease-gene-variant triplets from biomedical literature. In this paper we propose a high-performance machine learning approach to automate the extraction of disease-gene-variant triplets from biomedical literature. Our approach is unique because we identify the genes and protein products associated with each mutation from not just the local text content, but from a global context as well (from the Internet and from all literature in PubMed). Our approach also incorporates protein sequence validation and disease association using a novel text-mining-based machine learning approach. We extract disease-gene-variant triplets from all abstracts in PubMed related to a set of ten important diseases (breast cancer, prostate cancer, pancreatic cancer, lung cancer, acute myeloid leukemia, Alzheimer's disease, hemochromatosis, age-related macular degeneration (AMD), diabetes mellitus, and cystic fibrosis). We then evaluate our approach in two ways: (1) a direct comparison with the state of the art using benchmark datasets; (2) a validation study comparing the results of our approach with entries in a popular human-curated database (UniProt) for each of the previously mentioned diseases. In the benchmark comparison, our full approach achieves a 28% improvement in F1-measure (from 0.62 to 0.79) over the state-of-the-art results. For the validation study with the UniProt Knowledgebase (KB), we present a thorough analysis of the results and errors. Across all diseases, our approach returned 272 triplets

  11. BioSharing: curated and crowd-sourced metadata standards, databases and data policies in the life sciences

    CERN Document Server

    McQuilton, Peter; Rocca-Serra, Philippe; Thurston, Milo; Lister, Allyson; Maguire, Eamonn; Sansone, Susanna-Assunta

    2016-01-01

    BioSharing (http://www.biosharing.org) is a manually curated, searchable portal of three linked registries. These resources cover standards (terminologies, formats and models, and reporting guidelines), databases, and data policies in the life sciences, broadly encompassing the biological, environmental and biomedical sciences. Launched in 2011 and built by the same core team as the successful MIBBI portal, BioSharing harnesses community curation to collate and cross-reference resources across the life sciences from around the world. BioSharing makes these resources findable and accessible (the core of the FAIR principle). Every record is designed to be interlinked, providing a detailed description not only of the resource itself but also of its relations with other life science infrastructures. Serving a variety of stakeholders, BioSharing cultivates a growing community, to which it offers diverse benefits. It is a resource for funding bodies and journal publishers to navigate the metadata landscape of the ...

  12. Estimating the annotation error rate of curated GO database sequence annotations

    Directory of Open Access Journals (Sweden)

    Brown Alfred L

    2007-05-01

    Background Annotations that describe the function of sequences are enormously important to researchers during laboratory investigations and when making computational inferences. However, there has been little investigation into the data quality of sequence function annotations. Here we have developed a new method of estimating the error rate of curated sequence annotations, and applied it to the Gene Ontology (GO) sequence database (GOSeqLite). This method involved artificially adding errors to sequence annotations at known rates, and used regression to model the impact on the precision of annotations based on BLAST-matched sequences. Results We estimated the error rate of curated GO sequence annotations in the GOSeqLite database (March 2006) at between 28% and 30%. Annotations made without use of sequence similarity based methods (non-ISS) had an estimated error rate of between 13% and 18%. Annotations made with the use of sequence similarity methodology (ISS) had an estimated error rate of 49%. Conclusion While the overall error rate is reasonably low, it would be prudent to treat all ISS annotations with caution. Electronic annotators that use ISS annotations as the basis of predictions are likely to have higher false prediction rates; designers of such systems should consider avoiding ISS annotations where possible, and predictions based on ISS annotations should be viewed sceptically. We recommend that curators thoroughly review ISS annotations before accepting them as valid. Overall, users of curated sequence annotations from the GO database should feel assured that they are using a comparatively high quality source of information.
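
    The estimation strategy (inject errors at known rates, regress the precision of similarity-transferred annotations against the injected rate, then invert the fit at the observed precision) can be sketched as follows. All numbers and the precision model are synthetic stand-ins, not the study's data.

    ```python
    # Sketch of error-rate estimation by artificial error injection plus regression.
    # The simulated precision model and all numbers are synthetic, for illustration.
    import random
    random.seed(0)

    def simulated_precision(injected_error_rate: float) -> float:
        """Toy stand-in for measuring BLAST-transfer precision on a corrupted set."""
        base = 0.75  # synthetic precision at zero injected error
        return base * (1 - injected_error_rate) + random.gauss(0, 0.005)

    rates = [0.0, 0.1, 0.2, 0.3, 0.4]
    precisions = [simulated_precision(r) for r in rates]

    # Least-squares fit: precision = a * rate + b.
    n = len(rates)
    mean_r, mean_p = sum(rates) / n, sum(precisions) / n
    a = (sum((r - mean_r) * (p - mean_p) for r, p in zip(rates, precisions))
         / sum((r - mean_r) ** 2 for r in rates))
    b = mean_p - a * mean_r

    observed = 0.55  # hypothetical precision measured on the real annotations
    print(f"implied pre-existing error rate: {(observed - b) / a:.2f}")
    ```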

  13. Text mining facilitates database curation - extraction of mutation-disease associations from Bio-medical literature.

    Science.gov (United States)

    Ravikumar, Komandur Elayavilli; Wagholikar, Kavishwar B; Li, Dingcheng; Kocher, Jean-Pierre; Liu, Hongfang

    2015-06-06

    Advances in next-generation sequencing technology have accelerated the pace of individualized medicine (IM), which aims to incorporate genetic/genomic information into medicine. One immediate need in interpreting sequencing data is the assembly of information about genetic variants and their corresponding associations with other entities (e.g., diseases or medications). Even with dedicated effort to capture such information in biological databases, much of it remains 'locked' in the unstructured text of biomedical publications. There is a substantial lag between publication and the subsequent abstraction of such information into databases. Multiple text mining systems have been developed, but most of them focus on sentence-level association extraction with performance evaluation based on gold standard text annotations specifically prepared for text mining systems. We developed and evaluated a text mining system, MutD, which extracts protein mutation-disease associations from MEDLINE abstracts by incorporating discourse-level analysis, using a benchmark data set extracted from curated database records. MutD achieves an F-measure of 64.3% for reconstructing protein mutation disease associations in curated database records. The discourse-level analysis component of MutD contributed a gain of more than 10% in F-measure when compared against sentence-level association extraction. Our error analysis indicates that 23 of the 64 precision errors are true associations that were not captured by database curators and 68 of the 113 recall errors are caused by the absence of associated disease entities in the abstract. After adjusting for the defects in the curated database, the revised F-measure of MutD in association detection reaches 81.5%. Our quantitative analysis reveals that MutD can effectively extract protein mutation disease associations when benchmarking based on curated database records. The analysis also demonstrates that incorporating
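
    The adjusted figure can be roughly reconstructed from the counts reported above. In the sketch below the true-positive count is back-solved from the reported F-measure, and the adjustment assumes the 23 validated 'precision errors' become true positives while the 68 unextractable 'recall errors' are excluded, so small rounding drift from the published 81.5% is expected.

    ```python
    # Worked check of the adjusted F-measure, reconstructed from the error counts.
    # TP is back-solved from F = 2*TP / (2*TP + FP + FN); expect rounding drift.
    FP, FN, F = 64, 113, 0.643
    TP = round(F * (FP + FN) / (2 * (1 - F)))  # ~159 true positives

    # Adjustment: 23 "precision errors" were real associations missing from the
    # database, and 68 "recall errors" lacked the disease entity in the abstract.
    TP_adj, FP_adj, FN_adj = TP + 23, FP - 23, FN - 68
    F_adj = 2 * TP_adj / (2 * TP_adj + FP_adj + FN_adj)
    print(f"adjusted F ~ {F_adj:.3f}")  # ~0.809, close to the reported 81.5%
    ```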

  14. TreeFam: a curated database of phylogenetic trees of animal gene families

    DEFF Research Database (Denmark)

    Li, Heng; Coghlan, Avril; Ruan, Jue

    2006-01-01

    TreeFam is a database of phylogenetic trees of gene families found in animals. It aims to develop a curated resource that presents the accurate evolutionary history of all animal gene families, as well as reliable ortholog and paralog assignments. Curated families are being added progressively, based on seed alignments and trees in a similar fashion to Pfam. Release 1.1 of TreeFam contains curated trees for 690 families and automatically generated trees for another 11 646 families. These represent over 128 000 genes from nine fully sequenced animal genomes and over 45 000 other animal proteins...

  15. The curation of genetic variants: difficulties and possible solutions.

    Science.gov (United States)

    Pandey, Kapil Raj; Maden, Narendra; Poudel, Barsha; Pradhananga, Sailendra; Sharma, Amit Kumar

    2012-12-01

    The curation of genetic variants from biomedical articles is required for various clinical and research purposes. Nowadays, the establishment of variant databases that include overall information about variants is becoming quite popular. These databases have immense utility, serving as a user-friendly information storehouse of variants for information seekers. While manual curation is the gold-standard method for curation of variants, it can turn out to be time-consuming on a large scale, thus necessitating automation. Curation of variants described in biomedical literature may not be straightforward, mainly due to various nomenclature and expression issues. Though current papers on variants increasingly follow standard nomenclature so that variants can easily be retrieved, the literature holds a massive store of variants recorded under non-standard names, and the predominantly used online search engines may not be capable of finding them. For effective curation of variants, knowledge about the overall process of curation, the nature and types of difficulties in curation, and ways to tackle those difficulties during the task is crucial. Only by effective curation can variants be correctly interpreted. This paper presents the process and difficulties of curation of genetic variants with possible solutions and suggestions from our work experience in the field, including literature support. The paper also highlights aspects of interpretation of genetic variants and the importance of writing papers on variants following standard and retrievable methods. Copyright © 2012. Published by Elsevier Ltd.
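    As an illustration of the nomenclature problem, the same substitution can appear in HGVS cDNA form, HGVS protein form, one-letter shorthand, or free text, and a curator's search must anticipate all of them. The patterns below are simplified examples, not a complete variant grammar:

```python
# Simplified (and deliberately overlapping) patterns for variant mentions.
import re

AA3 = "Ala|Arg|Asn|Asp|Cys|Gln|Glu|Gly|His|Ile|Leu|Lys|Met|Phe|Pro|Ser|Thr|Trp|Tyr|Val"
patterns = [
    re.compile(r"\bc\.\d+[ACGT]>[ACGT]\b"),                   # HGVS cDNA, e.g. c.1799T>A
    re.compile(rf"\bp\.(?:{AA3})\d+(?:{AA3})\b"),             # HGVS protein, e.g. p.Val600Glu
    re.compile(r"\b[ACDEFGHIKLMNPQRSTVWY]\d+[ACDEFGHIKLMNPQRSTVWY]\b"),  # shorthand V600E
    re.compile(rf"\b(?:{AA3})\d+(?:{AA3})\b"),                # free text Val600Glu
]

text = ("The BRAF c.1799T>A mutation, also reported as p.Val600Glu or V600E, "
        "was detected; one paper writes it as a Val600Glu substitution.")
for p in patterns:
    print(p.pattern, "->", p.findall(text))
```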

  16. Directly e-mailing authors of newly published papers encourages community curation

    Science.gov (United States)

    Bunt, Stephanie M.; Grumbling, Gary B.; Field, Helen I.; Marygold, Steven J.; Brown, Nicholas H.; Millburn, Gillian H.

    2012-01-01

    Much of the data within Model Organism Databases (MODs) comes from manual curation of the primary research literature. Given limited funding and an increasing density of published material, a significant challenge facing all MODs is how to efficiently and effectively prioritize the most relevant research papers for detailed curation. Here, we report recent improvements to the triaging process used by FlyBase. We describe an automated method to directly e-mail corresponding authors of new papers, requesting that they list the genes studied and indicate (‘flag’) the types of data described in the paper using an online tool. Based on the author-assigned flags, papers are then prioritized for detailed curation and channelled to appropriate curator teams for full data extraction. The overall response rate has been 44% and the flagging of data types by authors is sufficiently accurate for effective prioritization of papers. In summary, we have established a sustainable community curation program, with the result that FlyBase curators now spend less time triaging and can devote more effort to the specialized task of detailed data extraction. Database URL: http://flybase.org/ PMID:22554788

  17. The SIB Swiss Institute of Bioinformatics' resources: focus on curated databases

    OpenAIRE

    Bultet, Lisandra Aguilar; Aguilar Rodriguez, Jose; Ahrens, Christian H; Ahrne, Erik Lennart; Ai, Ni; Aimo, Lucila; Akalin, Altuna; Aleksiev, Tyanko; Alocci, Davide; Altenhoff, Adrian; Alves, Isabel; Ambrosini, Giovanna; Pedone, Pascale Anderle; Angelina, Paolo; Anisimova, Maria

    2016-01-01

    The SIB Swiss Institute of Bioinformatics (www.isb-sib.ch) provides world-class bioinformatics databases, software tools, services and training to the international life science community in academia and industry. These solutions allow life scientists to turn the exponentially growing amount of data into knowledge. Here, we provide an overview of SIB's resources and competence areas, with a strong focus on curated databases and SIB's most popular and widely used resources. In particular, SIB'...

  18. A hybrid human and machine resource curation pipeline for the Neuroscience Information Framework.

    Science.gov (United States)

    Bandrowski, A E; Cachat, J; Li, Y; Müller, H M; Sternberg, P W; Ciccarese, P; Clark, T; Marenco, L; Wang, R; Astakhov, V; Grethe, J S; Martone, M E

    2012-01-01

    The breadth of information resources available to researchers on the Internet continues to expand, particularly in light of recently implemented data-sharing policies required by funding agencies. However, the nature of dense, multifaceted neuroscience data and the design of contemporary search engine systems make efficient, reliable and relevant discovery of such information a significant challenge. This challenge is specifically pertinent for online databases, whose dynamic content is 'hidden' from search engines. The Neuroscience Information Framework (NIF; http://www.neuinfo.org) was funded by the NIH Blueprint for Neuroscience Research to address the problem of finding and utilizing neuroscience-relevant resources such as software tools, data sets, experimental animals and antibodies across the Internet. From the outset, NIF sought to provide an accounting of available resources, while developing technical solutions to finding, accessing and utilizing them. The curators, therefore, are tasked with identifying and registering resources, examining data, writing configuration files to index and display data and keeping the contents current. In the initial phases of the project, all aspects of the registration and curation processes were manual. However, as the number of resources grew, manual curation became impractical. This report describes our experiences and successes with developing automated resource discovery and semiautomated type characterization with text-mining scripts that facilitate curation team efforts to discover, integrate and display new content. We also describe the DISCO framework, a suite of automated web services that significantly reduce manual curation efforts to periodically check for resource updates. Lastly, we discuss DOMEO, a semi-automated annotation tool that improves the discovery and curation of resources that are not necessarily website-based (i.e. reagents, software tools). Although the ultimate goal of automation was to

  19. Lnc2Meth: a manually curated database of regulatory relationships between long non-coding RNAs and DNA methylation associated with human disease.

    Science.gov (United States)

    Zhi, Hui; Li, Xin; Wang, Peng; Gao, Yue; Gao, Baoqing; Zhou, Dianshuang; Zhang, Yan; Guo, Maoni; Yue, Ming; Shen, Weitao; Ning, Shangwei; Jin, Lianhong; Li, Xia

    2018-01-04

    Lnc2Meth (http://www.bio-bigdata.com/Lnc2Meth/), an interactive resource to identify regulatory relationships between human long non-coding RNAs (lncRNAs) and DNA methylation, is not only a manually curated collection and annotation of experimentally supported lncRNA-DNA methylation associations but also a platform that effectively integrates tools for calculating and identifying the differentially methylated lncRNAs and protein-coding genes (PCGs) in diverse human diseases. The resource provides: (i) advanced search possibilities, e.g. retrieval of the database by searching the lncRNA symbol of interest, DNA methylation patterns, regulatory mechanisms and disease types; (ii) abundant computationally calculated DNA methylation array profiles for the lncRNAs and PCGs; (iii) the prognostic values for each hit transcript calculated from the patients' clinical data; (iv) a genome browser to display the DNA methylation landscape of the lncRNA transcripts for a specific type of disease; (v) tools to re-annotate probes to lncRNA loci and identify the differential methylation patterns for lncRNAs and PCGs with user-supplied external datasets; (vi) an R package (LncDM) to perform differentially methylated lncRNA identification and visualization on local computers. Lnc2Meth provides a timely and valuable resource that can be applied to significantly expand our understanding of the regulatory relationships between lncRNAs and DNA methylation in various human diseases. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. A curated database of cyanobacterial strains relevant for modern taxonomy and phylogenetic studies

    OpenAIRE

    Ramos, Vitor; Morais, João; Vasconcelos, Vitor M.

    2017-01-01

    The dataset herein described lays the groundwork for an online database of relevant cyanobacterial strains, named CyanoType (http://lege.ciimar.up.pt/cyanotype). It is a database that includes categorized cyanobacterial strains useful for taxonomic, phylogenetic or genomic purposes, with associated information obtained by means of a literature-based curation. The dataset lists 371 strains and represents the first version of the database (CyanoType v.1). Information for each strain includes st...

  1. Geroprotectors.org: a new, structured and curated database of current therapeutic interventions in aging and age-related disease

    Science.gov (United States)

    Moskalev, Alexey; Chernyagina, Elizaveta; de Magalhães, João Pedro; Barardo, Diogo; Thoppil, Harikrishnan; Shaposhnikov, Mikhail; Budovsky, Arie; Fraifeld, Vadim E.; Garazha, Andrew; Tsvetkov, Vasily; Bronovitsky, Evgeny; Bogomolov, Vladislav; Scerbacov, Alexei; Kuryan, Oleg; Gurinovich, Roman; Jellen, Leslie C.; Kennedy, Brian; Mamoshina, Polina; Dobrovolskaya, Evgeniya; Aliper, Alex; Kaminsky, Dmitry; Zhavoronkov, Alex

    2015-01-01

    As the level of interest in aging research increases, there is a growing number of geroprotectors, or therapeutic interventions that aim to extend the healthy lifespan and repair or reduce aging-related damage in model organisms and, eventually, in humans. There is a clear need for a manually-curated database of geroprotectors to compile and index their effects on aging and age-related diseases and link these effects to relevant studies and multiple biochemical and drug databases. Here, we introduce the first such resource, Geroprotectors (http://geroprotectors.org). Geroprotectors is a public, rapidly explorable database that catalogs over 250 experiments involving over 200 known or candidate geroprotectors that extend lifespan in model organisms. Each compound has a comprehensive profile complete with biochemistry, mechanisms, and lifespan effects in various model organisms, along with information ranging from chemical structure, side effects, and toxicity to FDA drug status. These are presented in a visually intuitive, efficient framework fit for casual browsing or in-depth research alike. Data are linked to the source studies or databases, providing quick and convenient access to original data. The Geroprotectors database facilitates cross-study, cross-organism, and cross-discipline analysis and saves countless hours of inefficient literature and web searching. Geroprotectors is a one-stop, knowledge-sharing, time-saving resource for researchers seeking healthy aging solutions. PMID:26342919

  2. Dynamic delivery of the National Transit Database Sampling Manual.

    Science.gov (United States)

    2013-02-01

    This project improves the National Transit Database (NTD) Sampling Manual and develops an Internet-based, WordPress-powered interactive Web tool to deliver the new NTD Sampling Manual dynamically. The new manual adds guidance and a tool for transit a...

  3. AtlasT4SS: a curated database for type IV secretion systems.

    Science.gov (United States)

    Souza, Rangel C; del Rosario Quispe Saji, Guadalupe; Costa, Maiana O C; Netto, Diogo S; Lima, Nicholas C B; Klein, Cecília C; Vasconcelos, Ana Tereza R; Nicolás, Marisa F

    2012-08-09

    The type IV secretion system (T4SS) can be classified as a large family of macromolecule transporter systems, divided into three recognized sub-families according to their well-known functions. The major sub-family is the conjugation system, which allows transfer of genetic material, such as a nucleoprotein, via cell contact among bacteria. The conjugation system can also transfer genetic material from bacteria to eukaryotic cells, as is the case with the T-DNA transfer of Agrobacterium tumefaciens to host plant cells. The system of effector protein transport constitutes the second sub-family, and the third one corresponds to the DNA uptake/release system. Genome analyses have revealed numerous T4SS in Bacteria and Archaea. The purpose of this work was to organize, classify, and integrate the T4SS data into a single database, called AtlasT4SS - the first public database devoted exclusively to this prokaryotic secretion system. AtlasT4SS is a manually curated database that describes a large number of proteins related to the type IV secretion system reported so far in Gram-negative and Gram-positive bacteria, as well as in Archaea. The database was created using the RDBMS MySQL and the Catalyst web framework, written in the Perl programming language and following the Model-View-Controller (MVC) design pattern. The current version holds a comprehensive collection of 1,617 T4SS proteins from 58 Bacteria (49 Gram-negative and 9 Gram-positive), one archaeon and 11 plasmids. By applying the bi-directional best hit (BBH) relationship in pairwise genome comparison, it was possible to obtain a core set of 134 clusters of orthologous genes encoding T4SS proteins. In our database we present one way of classifying orthologous groups of T4SSs in a hierarchical classification scheme with three levels. The first level comprises four classes that are based on the organization of genetic determinants, shared homologies, and evolutionary relationships: (i) F-T4SS, (ii) P-T4SS, (iii
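    The bi-directional best hit (BBH) criterion mentioned above can be sketched in a few lines, assuming best-scoring BLAST hits are already available in both directions (gene names and scores below are invented):

```python
# BBH sketch: a pair is orthologous if each member is the other's best hit.
def best_hits(hits):
    """hits: iterable of (query, subject, score) -> {query: best subject}."""
    best = {}
    for query, subject, score in hits:
        if query not in best or score > best[query][1]:
            best[query] = (subject, score)
    return {q: s for q, (s, _) in best.items()}

a_vs_b = [("virB4_A", "traB_B", 310.0), ("virB4_A", "traC_B", 120.0),
          ("virD4_A", "traG_B", 255.0)]
b_vs_a = [("traB_B", "virB4_A", 305.0), ("traG_B", "virD4_A", 250.0),
          ("traC_B", "virB4_A", 118.0)]

fwd, rev = best_hits(a_vs_b), best_hits(b_vs_a)
orthologs = [(a, b) for a, b in fwd.items() if rev.get(b) == a]
print(orthologs)   # pairs that are each other's best hit in both directions
```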

  4. Imitating manual curation of text-mined facts in biomedicine.

    Directory of Open Access Journals (Sweden)

    Raul Rodriguez-Esteban

    2006-09-01

    Text-mining algorithms make mistakes in extracting facts from natural-language texts. In biomedical applications, which rely on use of text-mined data, it is critical to assess the quality (the probability that the message is correctly extracted) of individual facts--to resolve data conflicts and inconsistencies. Using a large set of almost 100,000 manually produced evaluations (most facts were independently reviewed more than once, producing independent evaluations), we implemented and tested a collection of algorithms that mimic human evaluation of facts provided by an automated information-extraction system. The performance of our best automated classifiers closely approached that of our human evaluators (ROC score close to 0.95). Our hypothesis is that, were we to use a larger number of human experts to evaluate any given sentence, we could implement an artificial-intelligence curator that would perform the classification job at least as accurately as an average individual human evaluator. We illustrated our analysis by visualizing the predicted accuracy of the text-mined relations involving the term cocaine.
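    A generic sketch of the approach (not the paper's actual system): train a classifier on human evaluations of extracted facts and check how close its ROC score gets. The three features here are invented stand-ins for extraction-quality signals:

```python
# Toy "imitate the human evaluator" classifier with an ROC check.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))                     # stand-in features per fact
logit = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5     # hidden "fact quality" signal
y = rng.random(n) < 1 / (1 + np.exp(-logit))    # human judgment: fact correct?

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"ROC AUC on held-out human evaluations: {auc:.3f}")
```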

  5. TRENDS: A flight test relational database user's guide and reference manual

    Science.gov (United States)

    Bondi, M. J.; Bjorkman, W. S.; Cross, J. L.

    1994-01-01

    This report is designed to be a user's guide and reference manual for users intending to access rotorcraft test data via TRENDS, the relational database system which was developed as a tool for the aeronautical engineer with no programming background. It has been written to assist novice and experienced TRENDS users. TRENDS is a complete system for retrieving, searching, and analyzing both numerical and narrative data, and for displaying time history and statistical data in graphical and numerical formats. This manual provides a 'guided tour' and a 'user's guide' for the new and intermediate-skilled users. Examples for the use of each menu item within TRENDS are provided in the Menu Reference section of the manual, including full coverage for TIMEHIST, one of the key tools. This manual is written around the XV-15 Tilt Rotor database, but does include an appendix on the UH-60 Blackhawk database. This user's guide and reference manual establishes a referable source for the research community and augments NASA TM-101025, TRENDS: The Aeronautical Post-Test Database Management System, Jan. 1990, written by the same authors.

  6. Integration of curated databases to identify genotype-phenotype associations

    Directory of Open Access Journals (Sweden)

    Li Jianrong

    2006-10-01

    Abstract Background: The ability to rapidly characterize an unknown microorganism is critical in both responding to infectious disease and biodefense. To do this, we need some way of anticipating an organism's phenotype based on the molecules encoded by its genome. However, the link between molecular composition (i.e. genotype) and phenotype for microbes is not obvious. While there have been several studies that address this challenge, none have yet proposed a large-scale method integrating curated biological information. Here we utilize a systematic approach to discover genotype-phenotype associations that combines phenotypic information from a biomedical informatics database, GIDEON, with the molecular information contained in the National Center for Biotechnology Information's Clusters of Orthologous Groups database (NCBI COGs). Results: Integrating the information in the two databases, we are able to correlate the presence or absence of a given protein in a microbe with its phenotype as measured by certain morphological characteristics or survival in a particular growth medium. With a 0.8 correlation score threshold, 66% of the associations found were confirmed by the literature, and at a 0.9 correlation threshold, 86% were positively verified. Conclusion: Our results suggest possible phenotypic manifestations for proteins biochemically associated with sugar metabolism and electron transport. Moreover, we believe our approach can be extended to linking pathogenic phenotypes with functionally related proteins.
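    The association step can be sketched as correlating a binary protein presence/absence matrix against a binary phenotype vector and keeping pairs above a threshold, mirroring the 0.8/0.9 cutoffs above (toy data, not GIDEON/COG content):

```python
# Genotype-phenotype association by correlation over presence/absence data.
import numpy as np

rng = np.random.default_rng(1)
n_org = 60
genotype = rng.integers(0, 2, size=(n_org, 5))           # organisms x proteins
phenotype = genotype[:, 2] ^ (rng.random(n_org) < 0.05)  # phenotype tracks protein 2

for j in range(genotype.shape[1]):
    r = np.corrcoef(genotype[:, j], phenotype)[0, 1]
    if abs(r) >= 0.8:                                    # correlation score threshold
        print(f"protein {j} associated with phenotype (r = {r:.2f})")
```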

  7. Mycobacteriophage genome database.

    Science.gov (United States)

    Joseph, Jerrine; Rajendran, Vasanthi; Hassan, Sameer; Kumar, Vanaja

    2011-01-01

    Mycobacteriophage genome database (MGDB) is an exclusive repository of the 64 completely sequenced mycobacteriophages with annotated information. It is a comprehensive compilation of various gene parameters captured from several databases, pooled together to empower mycobacteriophage researchers. The MGDB (Version No. 1.0) comprises 6086 genes from 64 mycobacteriophages classified into 72 families based on the ACLAME database. Manual curation was aided by information available from public databases, which was enriched further by analysis. Its web interface allows browsing as well as querying the classification. The main objective is to collect and organize the complexity inherent to mycobacteriophage protein classification in a rational way. The other objective is to browse the existing and new genomes and describe their functional annotation. The database is available for free at http://mpgdb.ibioinformatics.org/mpgdb.php.

  8. A curated database of cyanobacterial strains relevant for modern taxonomy and phylogenetic studies.

    Science.gov (United States)

    Ramos, Vitor; Morais, João; Vasconcelos, Vitor M

    2017-04-25

    The dataset herein described lays the groundwork for an online database of relevant cyanobacterial strains, named CyanoType (http://lege.ciimar.up.pt/cyanotype). It is a database that includes categorized cyanobacterial strains useful for taxonomic, phylogenetic or genomic purposes, with associated information obtained by means of a literature-based curation. The dataset lists 371 strains and represents the first version of the database (CyanoType v.1). Information for each strain includes strain synonymy and/or co-identity, strain categorization, habitat, accession numbers for molecular data, taxonomy and nomenclature notes according to three different classification schemes, hierarchical automatic classification, phylogenetic placement according to a selection of relevant studies (including this), and important bibliographic references. The database will be updated periodically, namely by adding new strains meeting the criteria for inclusion and by revising and adding up-to-date metadata for strains already listed. A global 16S rDNA-based phylogeny is provided in order to assist users when choosing the appropriate strains for their studies.

  9. Can we replace curation with information extraction software?

    Science.gov (United States)

    Karp, Peter D

    2016-01-01

    Can we use programs for automated or semi-automated information extraction from scientific texts as practical alternatives to professional curation? I show that error rates of current information extraction programs are too high to replace professional curation today. Furthermore, current IEP programs extract single narrow slivers of information, such as individual protein interactions; they cannot extract the large breadth of information extracted by professional curators for databases such as EcoCyc. They also cannot arbitrate among conflicting statements in the literature as curators can. Therefore, funding agencies should not hobble the curation efforts of existing databases on the assumption that a problem that has stymied Artificial Intelligence researchers for more than 60 years will be solved tomorrow. Semi-automated extraction techniques appear to have significantly more potential based on a review of recent tools that enhance curator productivity. But a full cost-benefit analysis for these tools is lacking. Without such analysis it is possible to expend significant effort developing information-extraction tools that automate small parts of the overall curation workflow without achieving a significant decrease in curation costs. © The Author(s) 2016. Published by Oxford University Press.

  10. Annotation of phenotypic diversity: decoupling data curation and ontology curation using Phenex.

    Science.gov (United States)

    Balhoff, James P; Dahdul, Wasila M; Dececchi, T Alexander; Lapp, Hilmar; Mabee, Paula M; Vision, Todd J

    2014-01-01

    Phenex (http://phenex.phenoscape.org/) is a desktop application for semantically annotating the phenotypic character matrix datasets common in evolutionary biology. Since its initial publication, we have added new features that address several major bottlenecks in the efficiency of the phenotype curation process: allowing curators during the data curation phase to provisionally request terms that are not yet available from a relevant ontology; supporting quality control against annotation guidelines to reduce later manual review and revision; and enabling the sharing of files for collaboration among curators. We decoupled data annotation from ontology development by creating an Ontology Request Broker (ORB) within Phenex. Curators can use the ORB to request a provisional term for use in data annotation; the provisional term can be automatically replaced with a permanent identifier once the term is added to an ontology. We added a set of annotation consistency checks to prevent common curation errors, reducing the need for later correction. We facilitated collaborative editing by improving the reliability of Phenex when used with online folder sharing services, via file change monitoring and continual autosave. With the addition of these new features, and in particular the Ontology Request Broker, Phenex users have been able to focus more effectively on data annotation. Phenoscape curators using Phenex have reported a smoother annotation workflow, with much reduced interruptions from ontology maintenance and file management issues.

  11. The Princeton Protein Orthology Database (P-POD): a comparative genomics analysis tool for biologists.

    OpenAIRE

    Sven Heinicke; Michael S Livstone; Charles Lu; Rose Oughtred; Fan Kang; Samuel V Angiuoli; Owen White; David Botstein; Kara Dolinski

    2007-01-01

    Many biological databases that provide comparative genomics information and tools are now available on the internet. While certainly quite useful, to our knowledge none of the existing databases combine results from multiple comparative genomics methods with manually curated information from the literature. Here we describe the Princeton Protein Orthology Database (P-POD, http://ortholog.princeton.edu), a user-friendly database system that allows users to find and visualize the phylogenetic r...

  12. Qrator: A web-based curation tool for glycan structures

    Science.gov (United States)

    Eavenson, Matthew; Kochut, Krys J; Miller, John A; Ranzinger, René; Tiemeyer, Michael; Aoki, Kazuhiro; York, William S

    2015-01-01

    Most currently available glycan structure databases use their own proprietary structure representation schema and contain numerous annotation errors. These cause problems when glycan databases are used for the annotation or mining of data generated in the laboratory. Due to the complexity of glycan structures, curating these databases is often a tedious and labor-intensive process. However, rigorously validating glycan structures can be made easier with a curation workflow that incorporates a structure-matching algorithm that compares candidate glycans to a canonical tree that embodies structural features consistent with established mechanisms for the biosynthesis of a particular class of glycans. To this end, we have implemented Qrator, a web-based application that uses a combination of external literature and database references, user annotations and canonical trees to assist and guide researchers in making informed decisions while curating glycans. Using this application, we have started the curation of large numbers of N-glycans, O-glycans and glycosphingolipids. Our curation workflow allows creating and extending canonical trees for these classes of glycans, which have subsequently been used to improve the curation workflow. PMID:25165068
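    The structure-matching idea can be illustrated with a toy recursive check that a candidate glycan, represented as a nested residue tree, fits inside a canonical tree (residue names and topology are illustrative only, not Qrator's actual algorithm):

```python
# Toy canonical-tree subsumption check for glycan-like residue trees.
def subsumed(candidate, canonical):
    """candidate/canonical: (residue, [child trees]). True if candidate fits."""
    res_c, children_c = candidate
    res_k, children_k = canonical
    if res_c != res_k:
        return False
    remaining = list(children_k)
    for child in children_c:            # each candidate branch must match some
        match = next((k for k in remaining if subsumed(child, k)), None)
        if match is None:
            return False
        remaining.remove(match)         # a canonical branch is used only once
    return True

canonical = ("GlcNAc", [("GlcNAc", [("Man", [("Man", []), ("Man", [])])])])
candidate = ("GlcNAc", [("GlcNAc", [("Man", [("Man", [])])])])
print(subsumed(candidate, canonical))   # True: candidate is a pruned core
```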

  13. Toward an interactive article: integrating journals and biological databases

    Directory of Open Access Journals (Sweden)

    Marygold Steven J

    2011-05-01

    Abstract Background: Journal articles and databases are two major modes of communication in the biological sciences, and thus integrating these critical resources is of urgent importance to increase the pace of discovery. Projects focused on bridging the gap between journals and databases have been on the rise over the last five years and have resulted in the development of automated tools that can recognize entities within a document and link those entities to a relevant database. Unfortunately, automated tools cannot resolve ambiguities that arise from one term being used to signify entities that are quite distinct from one another. Instead, resolving these ambiguities requires some manual oversight. Finding the right balance between the speed and portability of automation and the accuracy and flexibility of manual effort is a crucial goal to making text markup a successful venture. Results: We have established a journal article mark-up pipeline that links GENETICS journal articles and the model organism database (MOD) WormBase. This pipeline uses a lexicon built with entities from the database as a first step. The entity markup pipeline results in links from over nine classes of objects including genes, proteins, alleles, phenotypes and anatomical terms. New entities and ambiguities are discovered and resolved by a database curator through a manual quality control (QC) step, along with help from authors via a web form that is provided to them by the journal. New entities discovered through this pipeline are immediately sent to an appropriate curator at the database. Ambiguous entities that do not automatically resolve to one link are resolved by hand, ensuring an accurate link. This pipeline has been extended to other databases, namely the Saccharomyces Genome Database (SGD) and FlyBase, and has been implemented in marking up a paper with links to multiple databases. Conclusions: Our semi-automated pipeline hyperlinks articles published in GENETICS to WormBase.
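    A much-simplified sketch of the lexicon-first markup step: unambiguous terms are linked automatically, while terms naming more than one database object are queued for the curator QC step described above. The lexicon entries and identifiers below are illustrative, not actual WormBase content:

```python
# Lexicon-based entity markup with an ambiguity queue for manual QC.
lexicon = {
    "unc-22": [("gene", "WBGene0000001")],                        # invented ID
    "dumpy": [("phenotype", "WBPhenotype:0000001"),               # invented IDs
              ("gene", "WBGene0000002")],
}

def markup(text):
    links, ambiguous = [], []
    for token in text.replace(",", " ").split():
        entries = lexicon.get(token, [])
        if len(entries) == 1:
            links.append((token, *entries[0]))      # auto-link, one meaning
        elif len(entries) > 1:
            ambiguous.append((token, entries))      # -> curator QC queue
    return links, ambiguous

links, queue = markup("Mutations in unc-22 produce a dumpy phenotype")
print("auto-linked:", links)
print("needs curator review:", queue)
```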

  14. Using the Pathogen-Host Interactions database (PHI-base) to investigate plant pathogen genomes and genes implicated in virulence

    Directory of Open Access Journals (Sweden)

    Martin eUrban

    2015-08-01

    New pathogen-host interaction mechanisms can be revealed by integrating mutant phenotype data with genetic information. PHI-base is a multi-species, manually curated database combining peer-reviewed published phenotype data from plant and animal pathogens with gene/protein information in a single resource.

  15. Scaling drug indication curation through crowdsourcing.

    Science.gov (United States)

    Khare, Ritu; Burger, John D; Aberdeen, John S; Tresner-Kirsch, David W; Corrales, Theodore J; Hirschman, Lynette; Lu, Zhiyong

    2015-01-01

    Motivated by the high cost of human curation of biological databases, there is an increasing interest in using computational approaches to assist human curators and accelerate the manual curation process. Towards the goal of cataloging drug indications from FDA drug labels, we recently developed LabeledIn, a human-curated drug indication resource for 250 clinical drugs. Its development required over 40 h of human effort across 20 weeks, despite using well-defined annotation guidelines. In this study, we aim to investigate the feasibility of scaling drug indication annotation through a crowdsourcing technique where an unknown network of workers can be recruited through the technical environment of Amazon Mechanical Turk (MTurk). To translate the expert-curation task of cataloging indications into human intelligence tasks (HITs) suitable for the average workers on MTurk, we first simplify the complex task such that each HIT only involves a worker making a binary judgment of whether a highlighted disease, in context of a given drug label, is an indication. In addition, this study is novel in the crowdsourcing interface design where the annotation guidelines are encoded into user options. For evaluation, we assess the ability of our proposed method to achieve high-quality annotations in a time-efficient and cost-effective manner. We posted over 3000 HITs drawn from 706 drug labels on MTurk. Within 8 h of posting, we collected 18 775 judgments from 74 workers, and achieved an aggregated accuracy of 96% on 450 control HITs (where gold-standard answers are known), at a cost of $1.75 per drug label. On the basis of these results, we conclude that our crowdsourcing approach not only results in significant cost and time saving, but also leads to accuracy comparable to that of domain experts. Published by Oxford University Press 2015. This work is written by US Government employees and is in the public domain in the US.
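    The aggregation of worker judgments can be sketched as a majority vote per HIT, with accuracy measured on control HITs whose gold-standard answers are known (counts below are invented):

```python
# Majority-vote aggregation over crowdsourced binary judgments.
from collections import Counter

def majority(votes):
    return Counter(votes).most_common(1)[0][0]

control_hits = {                       # hit_id -> (worker votes, gold answer)
    "hit1": (["yes", "yes", "no"], "yes"),
    "hit2": (["no", "no", "no"], "no"),
    "hit3": (["yes", "no", "yes"], "yes"),
}

correct = sum(majority(v) == gold for v, gold in control_hits.values())
print(f"accuracy on control HITs: {correct / len(control_hits):.0%}")
```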

  16. Database on wind characteristics. Users manual

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, G.C.; Hansen, K.S.

    2001-11-01

    The main objective of IEA R and D Wind Annex XVII - Database on Wind Characteristics - is to provide wind energy planners and designers, as well as the international wind engineering community in general, with easy access to quality-controlled measured wind field time series observed in a wide range of environments. The project partners are Sweden, Norway, U.S.A., The Netherlands, Japan and Denmark, with Denmark as the Operating Agent. The reporting of IEA R and D Annex XVII falls into three separate parts. Part one deals with the overall structure and philosophy behind the database (including the applied data quality control procedures), part two accounts in detail for the available data in the established database bank, and part three is the Users Manual describing the various ways to access and analyse the data. The present report constitutes part three of the Annex XVII reporting and contains a thorough description of the available online facilities for identifying, selecting, downloading and handling measured wind field time series and resource data from 'Database on Wind Characteristics'.

  17. Using random forests for assistance in the curation of G-protein coupled receptor databases.

    Science.gov (United States)

    Shkurin, Aleksei; Vellido, Alfredo

    2017-08-18

    Biology is experiencing a gradual but fast transformation from a laboratory-centred science towards a data-centred one. As such, it requires robust data engineering and the use of quantitative data analysis methods as part of database curation. This paper focuses on G protein-coupled receptors, a large and heterogeneous super-family of cell membrane proteins of interest to biology in general. One of its families, Class C, is of particular interest to pharmacology and drug design. This family is quite heterogeneous on its own, and the discrimination of its several sub-families is a challenging problem. In the absence of known crystal structure, such discrimination must rely on their primary amino acid sequences. We are interested not so much in achieving maximum sub-family discrimination accuracy using quantitative methods as in exploring sequence misclassification behavior. Specifically, we are interested in isolating those sequences showing consistent misclassification, that is, sequences that are very often misclassified and almost always to the same wrong sub-family. Random forests are used for this analysis due to their ensemble nature, which makes them naturally suited to gauge the consistency of misclassification. This consistency is here defined through the voting scheme of their base tree classifiers. Detailed consistency results for the random forest ensemble classification were obtained for all receptors and for all data transformations of their unaligned primary sequences. Shortlists of the most consistently misclassified receptors for each subfamily and transformation, as well as an overall shortlist including those cases that were consistently misclassified across transformations, were obtained. The latter should be referred to experts for further investigation as a data curation task. The automatic discrimination of the Class C sub-families of G protein-coupled receptors from their unaligned primary sequences shows clear limits. This study has
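    A sketch of the voting-consistency idea using scikit-learn (toy features stand in for transformed sequences, and a few labels are deliberately corrupted so that consistent tree-level disagreement can expose them, mimicking curation errors):

```python
# Flag items where almost all trees consistently vote against the curated label.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_classes=3, n_informative=6,
                           random_state=0)
rng = np.random.default_rng(0)
noisy = y.copy()
bad = rng.choice(len(y), 10, replace=False)
noisy[bad] = (noisy[bad] + 1) % 3                # inject wrong "curated" labels

# Shallow trees so the ensemble cannot simply memorize the label noise.
forest = RandomForestClassifier(n_estimators=200, max_depth=5,
                                random_state=0).fit(X, noisy)
votes = np.stack([t.predict(X) for t in forest.estimators_]).astype(int)

for i in range(len(y)):
    counts = np.bincount(votes[:, i], minlength=3) / votes.shape[0]
    top = int(counts.argmax())
    if top != noisy[i] and counts[top] >= 0.7:   # strong, consistent disagreement
        print(f"item {i}: curated label {noisy[i]}, "
              f"{counts[top]:.0%} of trees vote {top} -> review")
```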

  18. The Neotoma Paleoecology Database: An International Community-Curated Resource for Paleoecological and Paleoenvironmental Data

    Science.gov (United States)

    Williams, J. W.; Grimm, E. C.; Ashworth, A. C.; Blois, J.; Charles, D. F.; Crawford, S.; Davis, E.; Goring, S. J.; Graham, R. W.; Miller, D. A.; Smith, A. J.; Stryker, M.; Uhen, M. D.

    2017-12-01

    The Neotoma Paleoecology Database supports global change research at the intersection of geology and ecology by providing a high-quality, community-curated data repository for paleoecological data. These data are widely used to study biological responses and feedbacks to past environmental change at local to global scales. The Neotoma data model is flexible and can store multiple kinds of fossil, biogeochemical, or physical variables measured from sedimentary archives. Data additions to Neotoma are growing and include >3.5 million observations, >16,000 datasets, and >8,500 sites. Dataset types include fossil pollen, vertebrates, diatoms, ostracodes, macroinvertebrates, plant macrofossils, insects, testate amoebae, geochronological data, and the recently added organic biomarkers, stable isotopes, and specimen-level data. Neotoma data can be found and retrieved in multiple ways, including the Explorer map-based interface, a RESTful Application Programming Interface, the neotoma R package, and digital object identifiers. Neotoma has partnered with the Paleobiology Database to produce a common data portal for paleobiological data, called the Earth Life Consortium. A new embargo management system is designed to allow investigators to put their data into Neotoma and then make use of Neotoma's value-added services. Neotoma's distributed scientific governance model is flexible and scalable, with many open pathways for welcoming new members, data contributors, stewards, and research communities. As the volume and variety of scientific data grow, community-curated data resources such as Neotoma have become foundational infrastructure for big data science.
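    Data retrieval can be scripted against the public REST API mentioned above; the base URL, the /v2.0/data/sites endpoint, and the response fields below reflect the API documentation at the time of writing and should be treated as assumptions to verify:

```python
# Hedged sketch of querying the Neotoma REST API (endpoint is an assumption).
import requests

resp = requests.get("https://api.neotomadb.org/v2.0/data/sites",
                    params={"sitename": "%lake%", "limit": 5}, timeout=30)
resp.raise_for_status()
for site in resp.json().get("data", []):
    print(site.get("siteid"), site.get("sitename"))
```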

  19. The Structure-Function Linkage Database.

    Science.gov (United States)

    Akiva, Eyal; Brown, Shoshana; Almonacid, Daniel E; Barber, Alan E; Custer, Ashley F; Hicks, Michael A; Huang, Conrad C; Lauck, Florian; Mashiyama, Susan T; Meng, Elaine C; Mischel, David; Morris, John H; Ojha, Sunil; Schnoes, Alexandra M; Stryke, Doug; Yunes, Jeffrey M; Ferrin, Thomas E; Holliday, Gemma L; Babbitt, Patricia C

    2014-01-01

    The Structure-Function Linkage Database (SFLD, http://sfld.rbvi.ucsf.edu/) is a manually curated classification resource describing structure-function relationships for functionally diverse enzyme superfamilies. Members of such superfamilies are diverse in their overall reactions yet share a common ancestor and some conserved active site features associated with conserved functional attributes such as a partial reaction. Thus, despite their different functions, members of these superfamilies 'look alike', making them easy to misannotate. To address this complexity and enable rational transfer of functional features to unknowns only for those members for which we have sufficient functional information, we subdivide superfamily members into subgroups using sequence information, and lastly into families, sets of enzymes known to catalyze the same reaction using the same mechanistic strategy. Browsing and searching options in the SFLD provide access to all of these levels. The SFLD offers manually curated as well as automatically classified superfamily sets, both accompanied by search and download options for all hierarchical levels. Additional information includes multiple sequence alignments, tab-separated files of functional and other attributes, and sequence similarity networks. The latter provide a new and intuitively powerful way to visualize functional trends mapped to the context of sequence similarity.

  20. Prioritizing PubMed articles for the Comparative Toxicogenomic Database utilizing semantic information.

    Science.gov (United States)

    Kim, Sun; Kim, Won; Wei, Chih-Hsuan; Lu, Zhiyong; Wilbur, W John

    2012-01-01

    The Comparative Toxicogenomics Database (CTD) contains manually curated literature that describes chemical-gene interactions, chemical-disease relationships and gene-disease relationships. Finding articles containing this information is the first and an important step to assist manual curation efficiency. However, the complex nature of named entities and their relationships make it challenging to choose relevant articles. In this article, we introduce a machine learning framework for prioritizing CTD-relevant articles based on our prior system for the protein-protein interaction article classification task in BioCreative III. To address new challenges in the CTD task, we explore a new entity identification method for genes, chemicals and diseases. In addition, latent topics are analyzed and used as a feature type to overcome the small size of the training set. Applied to the BioCreative 2012 Triage dataset, our method achieved 0.8030 mean average precision (MAP) in the official runs, resulting in the top MAP system among participants. Integrated with PubTator, a Web interface for annotating biomedical literature, the proposed system also received a positive review from the CTD curation team.
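    Mean average precision (MAP), the metric quoted above, averages per-query precision at each relevant document's rank; a small helper makes the computation concrete (rankings below are invented):

```python
# MAP over ranked article lists, as used to score triage systems.
def average_precision(ranked, relevant):
    hits, total = 0, 0.0
    for k, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            total += hits / k          # precision at each relevant hit
    return total / len(relevant) if relevant else 0.0

queries = [
    (["a", "b", "c", "d"], {"a", "c"}),    # AP = (1/1 + 2/3) / 2 = 0.833
    (["x", "y", "z"], {"z"}),              # AP = 1/3
]
mean_ap = sum(average_precision(r, rel) for r, rel in queries) / len(queries)
print(f"MAP = {mean_ap:.3f}")
```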

  1. Managing expectations: assessment of chemistry databases generated by automated extraction of chemical structures from patents.

    Science.gov (United States)

    Senger, Stefan; Bartek, Luca; Papadatos, George; Gaulton, Anna

    2015-12-01

    First public disclosure of new chemical entities often takes place in patents, which makes them an important source of information. However, with an ever-increasing number of patent applications, manual processing and curation on such a large scale becomes even more challenging. An alternative approach better suited to this large corpus of documents is the automated extraction of chemical structures. A number of patent chemistry databases generated by the latter approach are now available, but little is known that can help to manage expectations when using them. This study aims to address this by comparing two such freely available sources, SureChEMBL and IBM SIIP (IBM Strategic Intellectual Property Insight Platform), with manually curated commercial databases. When looking at the percentage of chemical structures successfully extracted from a set of patents, using SciFinder as our reference, 59% and 51% were also found in SureChEMBL and IBM SIIP, respectively. When performing this comparison with compounds as the starting point, i.e. establishing whether, for a list of compounds, the databases provide the links between chemical structures and the patents they appear in, we obtained similar results. SureChEMBL and IBM SIIP found 62% and 59%, respectively, of the compound-patent pairs obtained from Reaxys. In our comparison of automatically generated vs. manually curated patent chemistry databases, the former successfully provided approximately 60% of links between chemical structure and patents. It needs to be stressed that only a very limited number of patents and compound-patent pairs were used for our comparison. Nevertheless, our results will hopefully help to manage expectations of users of patent chemistry databases of this type and provide a useful framework for more studies like ours, as well as guide future developments of the workflows used for the automated extraction of chemical structures from patents. The challenges we have encountered

  2. CAZymes Analysis Toolkit (CAT): web service for searching and analyzing carbohydrate-active enzymes in a newly sequenced organism using CAZy database.

    Science.gov (United States)

    Park, Byung H; Karpinets, Tatiana V; Syed, Mustafa H; Leuze, Michael R; Uberbacher, Edward C

    2010-12-01

    The Carbohydrate-Active Enzyme (CAZy) database provides a rich set of manually annotated enzymes that degrade, modify, or create glycosidic bonds. Despite rich and invaluable information stored in the database, software tools utilizing this information for annotation of newly sequenced genomes by CAZy families are limited. We have employed two annotation approaches to fill the gap between manually curated high-quality protein sequences collected in the CAZy database and the growing number of other protein sequences produced by genome or metagenome sequencing projects. The first approach is based on a similarity search against the entire nonredundant sequences of the CAZy database. The second approach performs annotation using links or correspondences between the CAZy families and protein family domains. The links were discovered using the association rule learning algorithm applied to sequences from the CAZy database. The approaches complement each other and in combination achieved high specificity and sensitivity when cross-evaluated with the manually curated genomes of Clostridium thermocellum ATCC 27405 and Saccharophagus degradans 2-40. The capability of the proposed framework to predict the function of unknown protein domains and of hypothetical proteins in the genome of Neurospora crassa is demonstrated. The framework is implemented as a Web service, the CAZymes Analysis Toolkit, and is available at http://cricket.ornl.gov/cgi-bin/cat.cgi.
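    The association-rule idea, linking protein family domains to CAZy families when they co-occur with high confidence, can be sketched with simple counting (the proteins and the confidence threshold below are illustrative, not the CAT training data):

```python
# Domain -> CAZy family rules kept when confidence exceeds a threshold.
from collections import defaultdict

proteins = [                        # (domains, CAZy families) per sequence
    ({"PF00150"}, {"GH5"}),
    ({"PF00150", "PF00734"}, {"GH5", "CBM1"}),
    ({"PF00734"}, {"CBM1"}),
    ({"PF00232"}, {"GH1"}),
    ({"PF00232"}, {"GH1"}),
]

pair_counts, domain_counts = defaultdict(int), defaultdict(int)
for domains, families in proteins:
    for d in domains:
        domain_counts[d] += 1
        for f in families:
            pair_counts[(d, f)] += 1

MIN_CONF = 0.9
rules = {(d, f): n / domain_counts[d]
         for (d, f), n in pair_counts.items() if n / domain_counts[d] >= MIN_CONF}
print(rules)   # e.g. ('PF00150', 'GH5') with confidence 1.0
```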

  3. CORE: a phylogenetically-curated 16S rDNA database of the core oral microbiome.

    Directory of Open Access Journals (Sweden)

    Ann L Griffen

    2011-04-01

    Comparing bacterial 16S rDNA sequences to GenBank and other large public databases via BLAST often provides results of little use for identification and taxonomic assignment of the organisms of interest. The human microbiome, and in particular the oral microbiome, includes many taxa, and accurate identification of sequence data is essential for studies of these communities. For this purpose, a phylogenetically curated 16S rDNA database of the core oral microbiome, CORE, was developed. The goal was to include a comprehensive and minimally redundant representation of the bacteria that regularly reside in the human oral cavity with computationally robust classification at the level of species and genus. Clades of cultivated and uncultivated taxa were formed based on sequence analyses using multiple criteria, including maximum-likelihood-based topology and bootstrap support, genetic distance, and previous naming. A number of classification inconsistencies for previously named species, especially at the level of genus, were resolved. The performance of the CORE database for identifying clinical sequences was compared to that of three publicly available databases, GenBank nr/nt, RDP and HOMD, using a set of sequencing reads that had not been used in creation of the database. CORE offered improved performance compared to other public databases for identification of human oral bacterial 16S sequences by a number of criteria. In addition, the CORE database and phylogenetic tree provide a framework for measures of community divergence, and the focused size of the database offers advantages of efficiency for BLAST searching of large datasets. The CORE database is available as a searchable interface and for download at http://microbiome.osu.edu.

  4. Improving the Discoverability and Availability of Sample Data and Imagery in NASA's Astromaterials Curation Digital Repository Using a New Common Architecture for Sample Databases

    Science.gov (United States)

    Todd, N. S.; Evans, C.

    2015-01-01

    The Astromaterials Acquisition and Curation Office at NASA's Johnson Space Center (JSC) is the designated facility for curating all of NASA's extraterrestrial samples. The suite of collections includes the lunar samples from the Apollo missions, cosmic dust particles falling into the Earth's atmosphere, meteorites collected in Antarctica, comet and interstellar dust particles from the Stardust mission, asteroid particles from the Japanese Hayabusa mission, and solar wind atoms collected during the Genesis mission. To support planetary science research on these samples, NASA's Astromaterials Curation Office hosts the Astromaterials Curation Digital Repository, which provides descriptions of the missions and collections, and critical information about each individual sample. Our office is implementing several informatics initiatives with the goal of better serving the planetary research community. One of these initiatives aims to increase the availability and discoverability of sample data and images through the use of a newly designed common architecture for Astromaterials Curation databases.

  5. A user's manual for managing database system of tensile property

    International Nuclear Information System (INIS)

    Ryu, Woo Seok; Park, S. J.; Kim, D. H.; Jun, I.

    2003-06-01

    This manual is written for the management and maintenance of the tensile database system, which manages tensile property test data. The database, built from data produced by tensile property tests, can increase the application of test results. It also makes it easy to retrieve basic data when preparing a new experiment, and to produce better results by comparing them with previous data. Developing the database required careful analysis and design of the application so that it could meet the various requirements of its users with the best quality. The tensile database system was developed as a web application using Java, PL/SQL and JSP (Java Server Pages).

  6. MicroScope—an integrated microbial resource for the curation and comparative analysis of genomic and metabolic data

    Science.gov (United States)

    Vallenet, David; Belda, Eugeni; Calteau, Alexandra; Cruveiller, Stéphane; Engelen, Stefan; Lajus, Aurélie; Le Fèvre, François; Longin, Cyrille; Mornico, Damien; Roche, David; Rouy, Zoé; Salvignol, Gregory; Scarpelli, Claude; Thil Smith, Adam Alexander; Weiman, Marion; Médigue, Claudine

    2013-01-01

    MicroScope is an integrated platform dedicated to both the methodical updating of microbial genome annotation and to comparative analysis. The resource provides data from completed and ongoing genome projects (automatic and expert annotations), together with data sources from post-genomic experiments (i.e. transcriptomics, mutant collections) allowing users to perfect and improve the understanding of gene functions. MicroScope (http://www.genoscope.cns.fr/agc/microscope) combines tools and graphical interfaces to analyse genomes and to perform the manual curation of gene annotations in a comparative context. Since its first publication in January 2006, the system (previously named MaGe for Magnifying Genomes) has been continuously extended both in terms of data content and analysis tools. The last update of MicroScope was published in 2009 in the Database journal. Today, the resource contains data for >1600 microbial genomes, of which ∼300 are manually curated and maintained by biologists (1200 personal accounts today). Expert annotations are continuously gathered in the MicroScope database (∼50 000 a year), contributing to the improvement of the quality of microbial genomes annotations. Improved data browsing and searching tools have been added, original tools useful in the context of expert annotation have been developed and integrated and the website has been significantly redesigned to be more user-friendly. Furthermore, in the context of the European project Microme (Framework Program 7 Collaborative Project), MicroScope is becoming a resource providing for the curation and analysis of both genomic and metabolic data. An increasing number of projects are related to the study of environmental bacterial (meta)genomes that are able to metabolize a large variety of chemical compounds that may be of high industrial interest. PMID:23193269

  7. National Solar Radiation Database 1991-2010 Update: User's Manual

    Energy Technology Data Exchange (ETDEWEB)

    Wilcox, S. M.

    2012-08-01

    This user's manual provides information on the updated 1991-2010 National Solar Radiation Database. Included are data format descriptions, data sources, production processes, and information about data uncertainty.

  8. Biocuration at the Saccharomyces genome database.

    Science.gov (United States)

    Skrzypek, Marek S; Nash, Robert S

    2015-08-01

    Saccharomyces Genome Database is an online resource dedicated to managing information about the biology and genetics of the model organism, yeast (Saccharomyces cerevisiae). This information is derived primarily from scientific publications through a process of human curation that involves manual extraction of data and their organization into a comprehensive system of knowledge. This system provides a foundation for further analysis of experimental data coming from research on yeast as well as other organisms. In this review we will demonstrate how biocuration and biocurators add a key component, the biological context, to our understanding of how genes, proteins, genomes and cells function and interact. We will explain the role biocurators play in sifting through the wealth of biological data to incorporate and connect key information. We will also discuss the many ways we assist researchers with their various research needs. We hope to convince the reader that manual curation is vital in converting the flood of data into organized and interconnected knowledge, and that biocurators play an essential role in the integration of scientific information into a coherent model of the cell. © 2015 Wiley Periodicals, Inc.

  9. National Solar Radiation Database 1991-2005 Update: User's Manual

    Energy Technology Data Exchange (ETDEWEB)

    Wilcox, S.

    2007-04-01

    This manual describes how to obtain and interpret the data products from the updated 1991-2005 National Solar Radiation Database (NSRDB). This is an update of the original 1961-1990 NSRDB released in 1992.

  10. The BioC-BioGRID corpus: full text articles annotated for curation of protein–protein and genetic interactions

    Science.gov (United States)

    Kim, Sun; Chatr-aryamontri, Andrew; Chang, Christie S.; Oughtred, Rose; Rust, Jennifer; Wilbur, W. John; Comeau, Donald C.; Dolinski, Kara; Tyers, Mike

    2017-01-01

    A great deal of information on the molecular genetics and biochemistry of model organisms has been reported in the scientific literature. However, this data is typically described in free text form and is not readily amenable to computational analyses. To this end, the BioGRID database systematically curates the biomedical literature for genetic and protein interaction data. This data is provided in a standardized computationally tractable format and includes structured annotation of experimental evidence. BioGRID curation necessarily involves substantial human effort by expert curators who must read each publication to extract the relevant information. Computational text-mining methods offer the potential to augment and accelerate manual curation. To facilitate the development of practical text-mining strategies, a new challenge was organized in BioCreative V for the BioC task, the collaborative Biocurator Assistant Task. This was a non-competitive, cooperative task in which the participants worked together to build BioC-compatible modules into an integrated pipeline to assist BioGRID curators. As an integral part of this task, a test collection of full text articles was developed that contained both biological entity annotations (gene/protein and organism/species) and molecular interaction annotations (protein–protein and genetic interactions (PPIs and GIs)). This collection, which we call the BioC-BioGRID corpus, was annotated by four BioGRID curators over three rounds of annotation and contains 120 full text articles curated in a dataset representing two major model organisms, namely budding yeast and human. The BioC-BioGRID corpus contains annotations for 6409 mentions of genes and their Entrez Gene IDs, 186 mentions of organism names and their NCBI Taxonomy IDs, 1867 mentions of PPIs and 701 annotations of PPI experimental evidence statements, 856 mentions of GIs and 399 annotations of GI evidence statements. The purpose, characteristics and possible future

  12. A user's manual for the database management system of impact property

    International Nuclear Information System (INIS)

    Ryu, Woo Seok; Park, S. J.; Kong, W. S.; Jun, I.

    2003-06-01

    This manual is written for the management and maintenance of the impact property database system, which manages impact property test data. Building a database of the data produced by impact property tests increases the usefulness of the test results: basic data can easily be retrieved from the database when a new experiment is being prepared, and better results can be produced by comparison with previous data. Developing the database requires careful analysis and design of the application; only then can the varied requirements of its users be met with high quality. The impact database system was developed as a web application using JSP (Java Server Pages).

  13. Current Challenges in Development of a Database of Three-Dimensional Chemical Structures

    Science.gov (United States)

    Maeda, Miki H.

    2015-01-01

    We are developing a database named 3DMET, a three-dimensional structure database of natural metabolites. There are two major impediments to the creation of 3D chemical structures from a set of planar structure drawings: the limited accuracy of computer programs and insufficient human resources for manual curation. We have tested several 2D–3D converters on 2D structure files from external databases. These automatic conversion processes yielded an excessive number of improper conversions. To ascertain the quality of the conversions, we compared IUPAC International Chemical Identifier (InChI) and canonical SMILES notations before and after conversion. Structures whose notations matched before and after conversion were regarded as correctly converted in the present work. We found chiral inversion to be the most serious source of improper conversions. In the current stage of our database construction, published books and articles have been the resources for additions to our database. Chemicals are usually drawn as pictures on paper, so to save human resources, an optical structure reader was introduced. The program was quite useful, but particular errors were observed during our operation. We hope our trials in producing correct 3D structures will help other developers of chemical programs and curators of chemical databases. PMID:26075200
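
    As an illustration of the identifier round-trip check described above, the following is a minimal sketch in Python using RDKit; the toolchain and the input file name are assumptions for illustration, not the converters actually used by 3DMET.

        # Flag improper 2D-to-3D conversions by comparing identifiers before
        # and after 3D embedding (chiral inversion shows up as a mismatch).
        from rdkit import Chem
        from rdkit.Chem import AllChem

        def conversion_is_correct(mol_2d):
            inchi_before = Chem.MolToInchi(mol_2d)
            smiles_before = Chem.MolToSmiles(mol_2d)  # canonical by default

            mol_3d = Chem.AddHs(mol_2d)
            if AllChem.EmbedMolecule(mol_3d, randomSeed=42) != 0:
                return False  # 3D embedding failed outright
            mol_3d = Chem.RemoveHs(mol_3d)

            return (Chem.MolToInchi(mol_3d) == inchi_before and
                    Chem.MolToSmiles(mol_3d) == smiles_before)

        mol = Chem.MolFromMolFile("planar_structure.mol")  # hypothetical input
        print(conversion_is_correct(mol))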

  14. The art of curation at a biological database: Principles and application

    Directory of Open Access Journals (Sweden)

    Sarah G. Odell

    2017-09-01

    Full Text Available The variety and quantity of data being produced by biological research has grown dramatically in recent years, resulting in an expansion of our understanding of biological systems. However, this abundance of data has brought new challenges, especially in curation. The role of biocurators is in part to filter research outcomes as they are generated, not only so that information is formatted and consolidated in locations that can provide long-term data sustainability, but also to ensure that the captured data are reliable, reusable, and accessible. In many ways, biocuration lies somewhere between an art and a science. At GrainGenes (https://wheat.pw.usda.gov; https://graingenes.org), a long-time, stably funded centralized repository for data about wheat, barley, rye, oat, and other small grains, curators have implemented a workflow for locating, parsing, and uploading new data so that the most important, peer-reviewed, high-quality research is available to users as quickly as possible, with rich links to past research outcomes. In this report, we illustrate the principles and practical considerations of curation that we follow at GrainGenes with three case studies for curating a gene, a quantitative trait locus (QTL), and genomic elements. These examples demonstrate how our work allows users, i.e., small grains geneticists and breeders, to harness high-quality small grains data at GrainGenes to help them develop plants with enhanced agronomic traits.

  15. Text Mining to Support Gene Ontology Curation and Vice Versa.

    Science.gov (United States)

    Ruch, Patrick

    2017-01-01

    In this chapter, we explain how text mining can support the curation of molecular biology databases dealing with protein functions. We also show how curated data can play a disruptive role in the development of text mining methods. We review a decade of efforts to improve the automatic assignment of Gene Ontology (GO) descriptors, the reference ontology for the characterization of genes and gene products. To illustrate the high potential of this approach, we report the performance of an automatic text categorizer, showing a large improvement of +225% in both precision and recall on benchmarked data. We argue that automatic text categorization functions can ultimately be embedded into a Question-Answering (QA) system to answer questions related to protein functions. Because GO descriptors can be relatively long and specific, traditional QA systems cannot answer such questions. A new type of QA system, so-called Deep QA, which uses machine learning methods trained with curated contents, is thus emerging. Finally, future advances in text mining instruments depend directly on the availability of high-quality annotated contents at every curation step. Database workflows must start recording explicitly all the data they curate and ideally also some of the data they do not curate.

  16. The BioC-BioGRID corpus: full text articles annotated for curation of protein-protein and genetic interactions.

    Science.gov (United States)

    Islamaj Dogan, Rezarta; Kim, Sun; Chatr-Aryamontri, Andrew; Chang, Christie S; Oughtred, Rose; Rust, Jennifer; Wilbur, W John; Comeau, Donald C; Dolinski, Kara; Tyers, Mike

    2017-01-01

    A great deal of information on the molecular genetics and biochemistry of model organisms has been reported in the scientific literature. However, this data is typically described in free text form and is not readily amenable to computational analyses. To this end, the BioGRID database systematically curates the biomedical literature for genetic and protein interaction data. This data is provided in a standardized computationally tractable format and includes structured annotation of experimental evidence. BioGRID curation necessarily involves substantial human effort by expert curators who must read each publication to extract the relevant information. Computational text-mining methods offer the potential to augment and accelerate manual curation. To facilitate the development of practical text-mining strategies, a new challenge was organized in BioCreative V for the BioC task, the collaborative Biocurator Assistant Task. This was a non-competitive, cooperative task in which the participants worked together to build BioC-compatible modules into an integrated pipeline to assist BioGRID curators. As an integral part of this task, a test collection of full text articles was developed that contained both biological entity annotations (gene/protein and organism/species) and molecular interaction annotations (protein-protein and genetic interactions (PPIs and GIs)). This collection, which we call the BioC-BioGRID corpus, was annotated by four BioGRID curators over three rounds of annotation and contains 120 full text articles curated in a dataset representing two major model organisms, namely budding yeast and human. The BioC-BioGRID corpus contains annotations for 6409 mentions of genes and their Entrez Gene IDs, 186 mentions of organism names and their NCBI Taxonomy IDs, 1867 mentions of PPIs and 701 annotations of PPI experimental evidence statements, 856 mentions of GIs and 399 annotations of GI evidence statements. The purpose, characteristics and possible future

  17. Comprehensive curation and analysis of global interaction networks in Saccharomyces cerevisiae

    Science.gov (United States)

    Reguly, Teresa; Breitkreutz, Ashton; Boucher, Lorrie; Breitkreutz, Bobby-Joe; Hon, Gary C; Myers, Chad L; Parsons, Ainslie; Friesen, Helena; Oughtred, Rose; Tong, Amy; Stark, Chris; Ho, Yuen; Botstein, David; Andrews, Brenda; Boone, Charles; Troyanskya, Olga G; Ideker, Trey; Dolinski, Kara; Batada, Nizar N; Tyers, Mike

    2006-01-01

    Background The study of complex biological networks and prediction of gene function has been enabled by high-throughput (HTP) methods for detection of genetic and protein interactions. Sparse coverage in HTP datasets may, however, distort network properties and confound predictions. Although a vast number of well-substantiated interactions are recorded in the scientific literature, these data have not yet been distilled into networks that enable system-level inference. Results We describe here a comprehensive database of genetic and protein interactions, and associated experimental evidence, for the budding yeast Saccharomyces cerevisiae, as manually curated from 31,793 abstracts and online publications. This literature-curated (LC) dataset contains 33,311 interactions, on the order of all extant HTP datasets combined. Surprisingly, HTP protein-interaction datasets currently achieve only around 14% coverage of the interactions in the literature. The LC network nevertheless shares attributes with HTP networks, including scale-free connectivity and correlations between interactions, abundance, localization, and expression. We find that essential genes or proteins are enriched for interactions with other essential genes or proteins, suggesting that the global network may be functionally unified. This interconnectivity is supported by a substantial overlap of protein and genetic interactions in the LC dataset. We show that the LC dataset considerably improves the predictive power of network-analysis approaches. The full LC dataset is available at the BioGRID and SGD databases. Conclusion Comprehensive datasets of biological interactions derived from the primary literature provide critical benchmarks for HTP methods, augment functional prediction, and reveal system-level attributes of biological networks. PMID:16762047

  18. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available Arabidopsis Phenome Database. Contact: Hiroshi Masuya, BioResource Center. Database classification: Plant databases - Arabidopsis thaliana. Organism taxonomy: Name: Arabidopsis thaliana; Taxonomy ID: 3702. Database description: To support research on the Arabidopsis thaliana phenome and its effective application, we developed the new Arabidopsis Phenome Database integrating two novel databases: one offers useful materials for experimental research; the other is the "Database of Curated Plant Phenome".

  19. INIS: Manual for online retrieval from the INIS Database on the Internet

    International Nuclear Information System (INIS)

    2000-01-01

    This manual demonstrates the different Search Forms available to retrieve relevant records using the INIS Database online retrieval system. Information on how to search, how to store, refine and retrieve searches, and how to update a literature search is given.

  20. INIS: Manual for online retrieval from the INIS Database on the Internet

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-10-01

    This manual demonstrates the different Search Forms available to retrieve relevant records using the INIS Database online retrieval system. Information on how to search, how to store, refine and retrieve searches, and how to update a literature search is given.

  1. HSC-explorer: a curated database for hematopoietic stem cells.

    Science.gov (United States)

    Montrone, Corinna; Kokkaliaris, Konstantinos D; Loeffler, Dirk; Lechner, Martin; Kastenmüller, Gabi; Schroeder, Timm; Ruepp, Andreas

    2013-01-01

    HSC-Explorer (http://mips.helmholtz-muenchen.de/HSC/) is a publicly available, integrative database containing detailed information about the early steps of hematopoiesis. The resource aims at providing fast and easy access to relevant information, in particular to the complex network of interacting cell types and molecules, from the wealth of publications in the field through visualization interfaces. It provides structured information on more than 7000 experimentally validated interactions between molecules, bioprocesses and environmental factors. Information is manually derived by critical reading of the scientific literature by expert annotators. Hematopoiesis-relevant interactions are accompanied by context information, such as model organisms and experimental methods, enabling assessment of the reliability and relevance of experimental results. Usage of established vocabularies facilitates downstream bioinformatics applications and conversion of the results into complex networks. Several predefined datasets (Selected topics) offer insights into stem cell behavior, the stem cell niche and signaling processes supporting hematopoietic stem cell maintenance. HSC-Explorer provides a versatile web-based resource for scientists entering the field of hematopoiesis, enabling users to inspect the associated biological processes through interactive graphical presentation.

  2. HSC-explorer: a curated database for hematopoietic stem cells.

    Directory of Open Access Journals (Sweden)

    Corinna Montrone

    Full Text Available HSC-Explorer (http://mips.helmholtz-muenchen.de/HSC/) is a publicly available, integrative database containing detailed information about the early steps of hematopoiesis. The resource aims at providing fast and easy access to relevant information, in particular to the complex network of interacting cell types and molecules, from the wealth of publications in the field through visualization interfaces. It provides structured information on more than 7000 experimentally validated interactions between molecules, bioprocesses and environmental factors. Information is manually derived by critical reading of the scientific literature by expert annotators. Hematopoiesis-relevant interactions are accompanied by context information, such as model organisms and experimental methods, enabling assessment of the reliability and relevance of experimental results. Usage of established vocabularies facilitates downstream bioinformatics applications and conversion of the results into complex networks. Several predefined datasets (Selected topics) offer insights into stem cell behavior, the stem cell niche and signaling processes supporting hematopoietic stem cell maintenance. HSC-Explorer provides a versatile web-based resource for scientists entering the field of hematopoiesis, enabling users to inspect the associated biological processes through interactive graphical presentation.

  3. The MAR databases: development and implementation of databases specific for marine metagenomics.

    Science.gov (United States)

    Klemetsen, Terje; Raknes, Inge A; Fu, Juan; Agafonov, Alexander; Balasundaram, Sudhagar V; Tartari, Giacomo; Robertsen, Espen; Willassen, Nils P

    2018-01-04

    We introduce the marine databases MarRef, MarDB and MarCat (https://mmp.sfb.uit.no/databases/), which are publicly available resources that promote marine research and innovation. These data resources, which have been implemented in the Marine Metagenomics Portal (MMP) (https://mmp.sfb.uit.no/), are collections of richly annotated and manually curated contextual (metadata) and sequence databases representing three tiers of accuracy. While MarRef is a database of completely sequenced marine prokaryotic genomes, serving as a marine prokaryote reference genome database, MarDB includes all incompletely sequenced marine prokaryotic genomes regardless of their level of completeness. The third database, MarCat, represents a gene (protein) catalog of uncultivable (and cultivable) marine genes and proteins derived from marine metagenomics samples. The first versions of MarRef and MarDB contain 612 and 3726 records, respectively. Each record is built up of 106 metadata fields, including attributes for sampling, sequencing, assembly and annotation in addition to organism and taxonomic information. Currently, MarCat contains 1227 records with 55 metadata fields. Ontologies and controlled vocabularies are used in the contextual databases to enhance consistency. The user-friendly web interface lets visitors browse, filter and search the contextual databases and perform BLAST searches against the corresponding sequence databases. All contextual and sequence databases are freely accessible and downloadable from https://s1.sfb.uit.no/public/mar/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  4. The Genomes OnLine Database (GOLD) v.5: a metadata management system based on a four level (meta)genome project classification

    Science.gov (United States)

    Reddy, T.B.K.; Thomas, Alex D.; Stamatis, Dimitri; Bertsch, Jon; Isbandi, Michelle; Jansson, Jakob; Mallajosyula, Jyothi; Pagani, Ioanna; Lobos, Elizabeth A.; Kyrpides, Nikos C.

    2015-01-01

    The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Here we report version 5 (v.5) of the database. The newly designed database schema and web user interface support several new features, including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards. PMID:25348402

  5. The Genomes OnLine Database (GOLD) v.5: a metadata management system based on a four level (meta)genome project classification

    Energy Technology Data Exchange (ETDEWEB)

    Reddy, Tatiparthi B. K. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Thomas, Alex D. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Stamatis, Dimitri [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Bertsch, Jon [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Isbandi, Michelle [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Jansson, Jakob [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Mallajosyula, Jyothi [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Pagani, Ioanna [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Lobos, Elizabeth A. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Kyrpides, Nikos C. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); King Abdulaziz Univ., Jeddah (Saudi Arabia)

    2014-10-27

    The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Within this paper, we report version 5 (v.5) of the database. The newly designed database schema and web user interface support several new features, including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. Lastly, GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards.

  6. BGDB: a database of bivalent genes.

    Science.gov (United States)

    Li, Qingyan; Lian, Shuabin; Dai, Zhiming; Xiang, Qian; Dai, Xianhua

    2013-01-01

    A bivalent gene is a gene marked with both H3K4me3 and H3K27me3 epigenetic modifications in the same region, and such genes are proposed to play a pivotal role in pluripotency of embryonic stem (ES) cells. Identification of these bivalent genes and understanding of their functions are important for further research on lineage specification and embryo development. So far, many genome-wide histone modification datasets have been generated in mouse and human ES cells. These valuable data make it possible to identify bivalent genes, but no comprehensive data repository or analysis tool for bivalent genes is currently available. In this work, we develop BGDB, the database of bivalent genes. The database contains 6897 bivalent genes in human and mouse ES cells, which were manually collected from the scientific literature. Each entry contains curated information, including genomic context, sequences, gene ontology and other relevant information. The web services of the BGDB database were implemented with PHP + MySQL + JavaScript, and provide diverse query functions. Database URL: http://dailab.sysu.edu.cn/bgdb/

  7. Benchmarking of the 2010 BioCreative Challenge III text-mining competition by the BioGRID and MINT interaction databases

    Directory of Open Access Journals (Sweden)

    Cesareni Gianni

    2011-10-01

    Full Text Available Background The vast amount of data published in the primary biomedical literature represents a challenge for the automated extraction and codification of individual data elements. Biological databases that rely solely on manual extraction by expert curators are unable to comprehensively annotate the information dispersed across the entire biomedical literature. The development of efficient tools based on natural language processing (NLP) systems is essential for the selection of relevant publications, identification of data attributes and partially automated annotation. One of the tasks of the BioCreative 2010 Challenge III was devoted to the evaluation of NLP systems developed to identify articles for curation and extraction of protein-protein interaction (PPI) data. Results The BioCreative 2010 competition addressed three tasks: gene normalization, article classification and interaction method identification. The BioGRID and MINT protein interaction databases both participated in the generation of the test publication set for gene normalization, annotated the development and test sets for article classification, and curated the test set for interaction method classification. These test datasets served as a gold standard for the evaluation of data extraction algorithms. Conclusion The development of efficient tools for extraction of PPI data is a necessary step to achieve full curation of the biomedical literature. NLP systems can in the first instance facilitate expert curation by refining the list of candidate publications that contain PPI data; more ambitiously, NLP approaches may be able to directly extract relevant information from full-text articles for rapid inspection by expert curators. Close collaboration between biological databases and NLP systems developers will continue to facilitate the long-term objectives of both disciplines.

  8. Supervised Learning for Detection of Duplicates in Genomic Sequence Databases.

    Directory of Open Access Journals (Sweden)

    Qingyu Chen

    Full Text Available First identified as an issue in 1996, duplication in biological databases introduces redundancy and even leads to inconsistency when contradictory information appears. The amount of data makes purely manual de-duplication impractical, and existing automatic systems cannot detect duplicates as precisely as can experts. Supervised learning has the potential to address such problems by building automatic systems that learn from expert curation to detect duplicates precisely and efficiently. While machine learning is a mature approach in other duplicate detection contexts, it has seen only preliminary application in genomic sequence databases. We developed and evaluated a supervised duplicate detection method based on an expert curated dataset of duplicates, containing over one million pairs across five organisms derived from genomic sequence databases. We selected 22 features to represent distinct attributes of the database records, and developed a binary model and a multi-class model. Both models achieve promising performance; under cross-validation, the binary model had over 90% accuracy in each of the five organisms, while the multi-class model maintains high accuracy and is more robust in generalisation. We performed an ablation study to quantify the impact of different sequence record features, finding that features derived from meta-data, sequence identity, and alignment quality impact performance most strongly. The study demonstrates machine learning can be an effective additional tool for de-duplication of genomic sequence databases. All data are available as described in the supplementary material.

  9. Supervised Learning for Detection of Duplicates in Genomic Sequence Databases.

    Science.gov (United States)

    Chen, Qingyu; Zobel, Justin; Zhang, Xiuzhen; Verspoor, Karin

    2016-01-01

    First identified as an issue in 1996, duplication in biological databases introduces redundancy and even leads to inconsistency when contradictory information appears. The amount of data makes purely manual de-duplication impractical, and existing automatic systems cannot detect duplicates as precisely as can experts. Supervised learning has the potential to address such problems by building automatic systems that learn from expert curation to detect duplicates precisely and efficiently. While machine learning is a mature approach in other duplicate detection contexts, it has seen only preliminary application in genomic sequence databases. We developed and evaluated a supervised duplicate detection method based on an expert curated dataset of duplicates, containing over one million pairs across five organisms derived from genomic sequence databases. We selected 22 features to represent distinct attributes of the database records, and developed a binary model and a multi-class model. Both models achieve promising performance; under cross-validation, the binary model had over 90% accuracy in each of the five organisms, while the multi-class model maintains high accuracy and is more robust in generalisation. We performed an ablation study to quantify the impact of different sequence record features, finding that features derived from meta-data, sequence identity, and alignment quality impact performance most strongly. The study demonstrates machine learning can be an effective additional tool for de-duplication of genomic sequence databases. All data are available as described in the supplementary material.
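
    To make the supervised set-up concrete, here is a minimal sketch with scikit-learn; the three features and the toy record pairs are invented for illustration and are not the 22 features used in the study.

        # Binary duplicate-vs-distinct classifier over candidate record pairs.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        # One row per candidate pair; hypothetical features:
        # [metadata similarity, sequence identity, alignment quality]
        X = np.array([
            [0.95, 0.99, 0.97],  # near-identical pair
            [0.20, 0.35, 0.30],  # clearly distinct pair
            [0.90, 0.97, 0.95],
            [0.15, 0.40, 0.25],
            [0.88, 0.96, 0.93],
            [0.30, 0.20, 0.35],
        ])
        y = np.array([1, 0, 1, 0, 1, 0])  # 1 = duplicate, 0 = not a duplicate

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        scores = cross_val_score(clf, X, y, cv=3)  # cross-validated accuracy
        print("mean accuracy: %.2f" % scores.mean())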

  10. Xylella fastidiosa comparative genomic database is an information resource to explore the annotation, genomic features, and biology of different strains

    Directory of Open Access Journals (Sweden)

    Alessandro M. Varani

    2012-01-01

    Full Text Available The Xylella fastidiosa comparative genomic database is a scientific resource that aims to provide a user-friendly interface for accessing high-quality manually curated genomic annotation and comparative sequence analysis, as well as for identifying and mapping prophage-like elements, a marked feature of Xylella genomes. Here we describe the database and tools for exploring the biology of this important plant pathogen. The hallmarks of this database are the high-quality genomic annotation, the functional and comparative genomic analysis, and the identification and mapping of prophage-like elements. It is available at http://www.xylella.lncc.br.

  11. ChimerDB 3.0: an enhanced database for fusion genes from cancer transcriptome and literature data mining.

    Science.gov (United States)

    Lee, Myunggyo; Lee, Kyubum; Yu, Namhee; Jang, Insu; Choi, Ikjung; Kim, Pora; Jang, Ye Eun; Kim, Byounggun; Kim, Sunkyu; Lee, Byungwook; Kang, Jaewoo; Lee, Sanghyuk

    2017-01-04

    Fusion genes are an important class of therapeutic targets and prognostic markers in cancer. ChimerDB is a comprehensive database of fusion genes encompassing analysis of deep sequencing data and manual curation. In this update, the database coverage was enhanced considerably by adding two new modules for The Cancer Genome Atlas (TCGA) RNA-Seq analysis and PubMed abstract mining. ChimerDB 3.0 is composed of three modules: ChimerKB, ChimerPub and ChimerSeq. ChimerKB is a manually curated knowledgebase of 1066 fusion genes compiled from public resources, with experimental evidence. ChimerPub includes 2767 fusion genes obtained from text mining of PubMed abstracts. The ChimerSeq module is designed to archive fusion candidates from deep sequencing data. Importantly, we have analyzed RNA-Seq data from the TCGA project covering 4569 patients in 23 cancer types using two reliable programs, FusionScan and TopHat-Fusion. The new user interface supports diverse search options and graphic representation of fusion gene structure. ChimerDB 3.0 is available at http://ercsb.ewha.ac.kr/fusiongene/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Building a genome database using an object-oriented approach.

    Science.gov (United States)

    Barbasiewicz, Anna; Liu, Lin; Lang, B Franz; Burger, Gertraud

    2002-01-01

    GOBASE is a relational database that integrates data associated with mitochondria and chloroplasts. The most important data in GOBASE, i.e., molecular sequences and taxonomic information, are obtained from the public sequence data repository at the National Center for Biotechnology Information (NCBI), and are validated by our experts. Maintaining a curated genomic database comes with a towering labor cost, due to the sheer volume of available genomic sequences and the plethora of annotation errors and omissions in records retrieved from public repositories. Here we describe our approach to increasing automation of the database population process, thereby reducing manual intervention. As a first step, we used the Unified Modeling Language (UML) to construct a list of potential errors. Each case was evaluated independently, an expert solution was devised, and that solution was represented as a diagram. Subsequently, the UML diagrams were used as templates for writing object-oriented automation programs in the Java programming language.
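
    The abstract notes that the automation programs were written in Java from UML templates; purely as an illustration of the rule-based validation idea (in Python, for consistency with the other sketches here), a single automated check might look as follows. The rule and field names are hypothetical, not GOBASE's actual schema.

        def check_record(record):
            """Return a list of curation problems found in a record."""
            problems = []
            if not record.get("taxonomy_id"):
                problems.append("missing taxonomy identifier")
            if record.get("organelle") not in ("mitochondrion", "chloroplast"):
                problems.append("organelle outside GOBASE scope")
            if set(record.get("sequence", "").upper()) - set("ACGTN"):
                problems.append("sequence contains non-nucleotide characters")
            return problems

        record = {"taxonomy_id": "3702", "organelle": "mitochondrion",
                  "sequence": "ACGTACGTXN"}
        print(check_record(record))
        # -> ['sequence contains non-nucleotide characters']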

  13. dinoref: A curated dinoflagellate (Dinophyceae) reference database for the 18S rRNA gene.

    Science.gov (United States)

    Mordret, Solenn; Piredda, Roberta; Vaulot, Daniel; Montresor, Marina; Kooistra, Wiebe H C F; Sarno, Diana

    2018-03-30

    Dinoflagellates are a heterogeneous group of protists present in all aquatic ecosystems, where they occupy various ecological niches. They play a major role as primary producers, but many species are mixotrophic or heterotrophic. Environmental metabarcoding based on high-throughput sequencing is increasingly applied to assess the diversity and abundance of planktonic organisms, and reference databases are needed to taxonomically assign the huge number of resulting sequences. We provide an updated 18S rRNA reference database of dinoflagellates: dinoref. Sequences were downloaded from GenBank and filtered based on stringent quality criteria. All sequences were taxonomically curated, classified taking into account classical morphotaxonomic studies and molecular phylogenies, and linked to a series of metadata. dinoref includes 1,671 sequences representing 149 genera and 422 species. The taxonomic assignation of 468 sequences was revised. The largest number of sequences belongs to the Gonyaulacales and Suessiales, which include toxic and symbiotic species. dinoref provides an opportunity to test the level of taxonomic resolution of different 18S barcode markers based on a large number of sequences and species. As an example, when only the V4 region is considered, 374 of the 422 species included in dinoref can still be unambiguously identified. Clustering the V4 sequences at 98% similarity, a threshold that is commonly applied in metabarcoding studies, resulted in a considerable underestimation of species diversity. © 2018 John Wiley & Sons Ltd.

  14. Consensus coding sequence (CCDS) database: a standardized set of human and mouse protein-coding regions supported by expert curation.

    Science.gov (United States)

    Pujar, Shashikant; O'Leary, Nuala A; Farrell, Catherine M; Loveland, Jane E; Mudge, Jonathan M; Wallin, Craig; Girón, Carlos G; Diekhans, Mark; Barnes, If; Bennett, Ruth; Berry, Andrew E; Cox, Eric; Davidson, Claire; Goldfarb, Tamara; Gonzalez, Jose M; Hunt, Toby; Jackson, John; Joardar, Vinita; Kay, Mike P; Kodali, Vamsi K; Martin, Fergal J; McAndrews, Monica; McGarvey, Kelly M; Murphy, Michael; Rajput, Bhanu; Rangwala, Sanjida H; Riddick, Lillian D; Seal, Ruth L; Suner, Marie-Marthe; Webb, David; Zhu, Sophia; Aken, Bronwen L; Bruford, Elspeth A; Bult, Carol J; Frankish, Adam; Murphy, Terence; Pruitt, Kim D

    2018-01-04

    The Consensus Coding Sequence (CCDS) project provides a dataset of protein-coding regions that are identically annotated on the human and mouse reference genome assembly in genome annotations produced independently by NCBI and the Ensembl group at EMBL-EBI. This dataset is the product of an international collaboration that includes NCBI, Ensembl, HUGO Gene Nomenclature Committee, Mouse Genome Informatics and University of California, Santa Cruz. Identically annotated coding regions, which are generated using an automated pipeline and pass multiple quality assurance checks, are assigned a stable and tracked identifier (CCDS ID). Additionally, coordinated manual review by expert curators from the CCDS collaboration helps in maintaining the integrity and high quality of the dataset. The CCDS data are available through an interactive web page (https://www.ncbi.nlm.nih.gov/CCDS/CcdsBrowse.cgi) and an FTP site (ftp://ftp.ncbi.nlm.nih.gov/pub/CCDS/). In this paper, we outline the ongoing work, growth and stability of the CCDS dataset and provide updates on new collaboration members and new features added to the CCDS user interface. We also present expert curation scenarios, with specific examples highlighting the importance of an accurate reference genome assembly and the crucial role played by input from the research community. Published by Oxford University Press on behalf of Nucleic Acids Research 2017.

  15. Human Ageing Genomic Resources: new and updated databases

    Science.gov (United States)

    Tacutu, Robi; Thornton, Daniel; Johnson, Emily; Budovsky, Arie; Barardo, Diogo; Craig, Thomas; Diana, Eugene; Lehmann, Gilad; Toren, Dmitri; Wang, Jingwei; Fraifeld, Vadim E

    2018-01-01

    In spite of a growing body of research and data, human ageing remains a poorly understood process. Over 10 years ago we developed the Human Ageing Genomic Resources (HAGR), a collection of databases and tools for studying the biology and genetics of ageing. Here, we present HAGR’s main functionalities, highlighting new additions and improvements. HAGR consists of six core databases: (i) the GenAge database of ageing-related genes, in turn composed of a dataset of >300 human ageing-related genes and a dataset with >2000 genes associated with ageing or longevity in model organisms; (ii) the AnAge database of animal ageing and longevity, featuring >4000 species; (iii) the GenDR database with >200 genes associated with the life-extending effects of dietary restriction; (iv) the LongevityMap database of human genetic association studies of longevity with >500 entries; (v) the DrugAge database with >400 ageing or longevity-associated drugs or compounds; (vi) the CellAge database with >200 genes associated with cell senescence. All our databases are manually curated by experts and regularly updated to ensure high-quality data. Cross-links across our databases and to external resources help researchers locate and integrate relevant information. HAGR is freely available online (http://genomics.senescence.info/). PMID:29121237

  16. INIS: Database manual

    International Nuclear Information System (INIS)

    2003-01-01

    This document is one in a series of publications known as the INIS Reference Series. It is intended for users of INIS (International Nuclear Information System) output data on various media (FTP file, CD-ROM, e-mail file, earlier magnetic tape, cartridge, etc.). This manual provides a description of each data element, including information on contents, structure and usage, as well as a historical overview of the additions, deletions and changes of data elements and their contents that have taken place over the years. Each record contains certain control data fields (001-009), one, two or three bibliographic levels, a set of descriptors, and zero, one or more abstracts, one in English and optionally one or more in another language. In order to facilitate the description of the system, the sequence of data elements is based on the input or, as it is internally called, worksheet format, which differs from the exchange format described in the manual IAEA-INIS-7. A separate section is devoted to each data element, and deviations from the exchange format are indicated whenever present. As the Record Leader and the Directory are sufficiently explained in Chapter 3.1 of IAEA-INIS-7, the contents of this manual are limited to control fields and data fields; the detailed explanations are intended to supplement the basic information given in Chapter 3.2 of IAEA-INIS-7. Bibliographic levels are used to identify component parts of a publication, i.e. chapters in a book, articles in a journal issue, or conference papers in a proceedings volume. All bibliographic levels contained in a record are given in a control data field. Each bibliographic level identifier appears in the subdirectory with a pointer to its position in the record.

  17. Kalium: a database of potassium channel toxins from scorpion venom.

    Science.gov (United States)

    Kuzmenkov, Alexey I; Krylov, Nikolay A; Chugunov, Anton O; Grishin, Eugene V; Vassilevski, Alexander A

    2016-01-01

    Kalium (http://kaliumdb.org/) is a manually curated database that accumulates data on potassium channel toxins purified from scorpion venom (KTx). This database is an open-access resource and provides easy access to pages of other databases of interest, such as UniProt, PDB, NCBI Taxonomy Browser, and PubMed. Key achievements of Kalium include strict yet straightforward regulation of KTx classification based on the unified nomenclature supported by researchers in the field; removal of peptides with partial sequences and of entries supported by transcriptomic information only; classification of the β-family toxins; and addition of a novel λ-family. Molecules presented in the database can be processed by the Clustal Omega server using a one-click option. Molecular masses of mature peptides are calculated, and available activity data are compiled for all KTx. We believe that Kalium is not only of high interest to professional toxinologists, but also of general utility to the scientific community. Database URL: http://kaliumdb.org/. © The Author(s) 2016. Published by Oxford University Press.
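
    For the molecular-mass calculation mentioned above, a minimal sketch using Biopython follows; Biopython is an assumption about tooling, and the toxin sequence is a made-up placeholder, not a real Kalium entry.

        from Bio.SeqUtils.ProtParam import ProteinAnalysis

        mature_peptide = "GVEINVKCSGSPQCLKPCKDAGMRFGKCMNRKCHCTPK"  # placeholder
        analysis = ProteinAnalysis(mature_peptide)
        # Average mass of the linear peptide; note that disulfide bridge
        # formation (loss of 2 H per bridge) is not accounted for here.
        print("molecular mass: %.2f Da" % analysis.molecular_weight())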

  18. LncRNAWiki: harnessing community knowledge in collaborative curation of human long non-coding RNAs

    KAUST Repository

    Ma, L.

    2014-11-15

    Long non-coding RNAs (lncRNAs) perform a diversity of functions in numerous important biological processes and are implicated in many human diseases. In this report we present lncRNAWiki (http://lncrna.big.ac.cn), a wiki-based platform that is open-content and publicly editable and is aimed at community-based curation and collection of information on human lncRNAs. Current related databases depend primarily on curation by experts, making it laborious to annotate the exponentially accumulating information on lncRNAs, which inevitably requires collective efforts in community-based curation. Unlike existing databases, lncRNAWiki features comprehensive integration of information on human lncRNAs obtained from multiple different resources and allows not only existing lncRNAs to be edited, updated and curated by different users but also the addition of newly identified lncRNAs by any user. It harnesses the community's collective knowledge in collecting, editing and annotating human lncRNAs and rewards community-curated efforts by providing explicit authorship based on quantified contributions. LncRNAWiki relies on the underlying knowledge of the scientific community for collective and collaborative curation of human lncRNAs and thus has the potential to serve as an up-to-date and comprehensive knowledgebase for human lncRNAs.

  19. HIM-herbal ingredients in-vivo metabolism database.

    Science.gov (United States)

    Kang, Hong; Tang, Kailin; Liu, Qi; Sun, Yi; Huang, Qi; Zhu, Ruixin; Gao, Jun; Zhang, Duanfeng; Huang, Chenggang; Cao, Zhiwei

    2013-05-31

    Herbal medicine has long been viewed as a valuable asset for potential new drug discovery, and herbal ingredients' metabolites, especially their in vivo metabolites, are often found to have better pharmacological, pharmacokinetic and even safety profiles than their parent compounds. However, this herbal metabolite information is still scattered and waiting to be collected. The HIM database manually collects the most comprehensive in-vivo metabolism information available for herbal active ingredients, as well as their corresponding bioactivity, organ and/or tissue distribution, toxicity, ADME and clinical research profiles. Currently HIM contains 361 ingredients and 1104 corresponding in-vivo metabolites from 673 reputable herbs. Tools for structural similarity search, substructure search and Lipinski's Rule of Five assessment are also provided. Various links were made to PubChem, PubMed, TCM-ID (Traditional Chinese Medicine Information Database) and HIT (Herbal Ingredients' Targets database). In summary, the curated database HIM compiles the in vivo metabolite information of the active ingredients of Chinese herbs, together with their corresponding bioactivity, toxicity and ADME profiles. HIM is freely accessible to academic researchers at http://www.bioinformatics.org.cn/.
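
    The Rule of Five itself is standard (molecular weight <= 500 Da, logP <= 5, at most 5 hydrogen-bond donors and 10 acceptors); a minimal sketch of such a filter using RDKit is shown below. RDKit is an assumption about tooling, not HIM's actual implementation.

        from rdkit import Chem
        from rdkit.Chem import Crippen, Descriptors, Lipinski

        def passes_rule_of_five(smiles):
            mol = Chem.MolFromSmiles(smiles)
            if mol is None:
                raise ValueError("unparsable SMILES: " + smiles)
            violations = sum([
                Descriptors.MolWt(mol) > 500,
                Crippen.MolLogP(mol) > 5,
                Lipinski.NumHDonors(mol) > 5,
                Lipinski.NumHAcceptors(mol) > 10,
            ])
            return violations <= 1  # one violation is commonly tolerated

        print(passes_rule_of_five("CN1C=NC2=C1C(=O)N(C)C(=O)N2C"))  # caffeine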

  20. Nuclear power plant control room crew task analysis database: SEEK system. Users manual

    International Nuclear Information System (INIS)

    Burgy, D.; Schroeder, L.

    1984-05-01

    The Crew Task Analysis SEEK Users Manual was prepared for the Office of Nuclear Regulatory Research of the US Nuclear Regulatory Commission. It is designed for use with the existing computerized Control Room Crew Task Analysis Database. The SEEK system consists of a PR1ME computer with its associated peripherals and software augmented by General Physics Corporation SEEK database management software. The SEEK software programs provide the Crew Task Database user with rapid access to any number of records desired. The software uses English-like sentences to allow the user to construct logical sorts and outputs of the task data. Given the multiple-associative nature of the database, users can directly access the data at the plant, operating sequence, task or element level - or any combination of these levels. A complete description of the crew task data contained in the database is presented in NUREG/CR-3371, Task Analysis of Nuclear Power Plant Control Room Crews (Volumes 1 and 2).

  1. PFR²: a curated database of planktonic foraminifera 18S ribosomal DNA as a resource for studies of plankton ecology, biogeography and evolution.

    Science.gov (United States)

    Morard, Raphaël; Darling, Kate F; Mahé, Frédéric; Audic, Stéphane; Ujiié, Yurika; Weiner, Agnes K M; André, Aurore; Seears, Heidi A; Wade, Christopher M; Quillévéré, Frédéric; Douady, Christophe J; Escarguel, Gilles; de Garidel-Thoron, Thibault; Siccha, Michael; Kucera, Michal; de Vargas, Colomban

    2015-11-01

    Planktonic foraminifera (Rhizaria) are ubiquitous marine pelagic protists producing calcareous shells with conspicuous morphology. They play an important role in the marine carbon cycle, and their exceptional fossil record serves as the basis for biochronostratigraphy and past climate reconstructions. A major worldwide sampling effort over the last two decades has resulted in the establishment of multiple large collections of cryopreserved individual planktonic foraminifera samples. Thousands of 18S rDNA partial sequences have been generated, representing all major known morphological taxa across their worldwide oceanic range. This comprehensive data coverage provides an opportunity to assess patterns of molecular ecology and evolution in a holistic way for an entire group of planktonic protists. We combined all available published and unpublished genetic data to build PFR², the Planktonic foraminifera Ribosomal Reference database. The first version of the database includes 3322 reference 18S rDNA sequences belonging to 32 of the 47 known morphospecies of extant planktonic foraminifera, collected from 460 oceanic stations. All sequences have been rigorously taxonomically curated using a six-rank annotation system fully resolved to the morphological species level and linked to a series of metadata. The PFR² website, available at http://pfr2.sb-roscoff.fr, allows downloading of the entire database or specific sections, as well as the identification of new planktonic foraminiferal sequences. Its novel, fully documented curation process integrates advances in morphological and molecular taxonomy, allows its taxonomic resolution to increase over time, and maintains integrity through complete tracking of annotations, ensuring that they remain internally consistent. © 2015 John Wiley & Sons Ltd.

  2. The Developmental Brain Disorders Database (DBDB): a curated neurogenetics knowledge base with clinical and research applications.

    Science.gov (United States)

    Mirzaa, Ghayda M; Millen, Kathleen J; Barkovich, A James; Dobyns, William B; Paciorkowski, Alex R

    2014-06-01

    The number of single genes associated with neurodevelopmental disorders has increased dramatically over the past decade. The identification of causative genes for these disorders is important to clinical outcome as it allows for accurate assessment of prognosis, genetic counseling, delineation of natural history, inclusion in clinical trials, and in some cases determines therapy. Clinicians face the challenge of correctly identifying neurodevelopmental phenotypes, recognizing syndromes, and prioritizing the best candidate genes for testing. However, there is no central repository of definitions for many phenotypes, leading to errors of diagnosis. Additionally, there is no system of levels of evidence linking genes to phenotypes, making it difficult for clinicians to know which genes are most strongly associated with a given condition. We have developed the Developmental Brain Disorders Database (DBDB: https://www.dbdb.urmc.rochester.edu/home), a publicly available, online-curated repository of genes, phenotypes, and syndromes associated with neurodevelopmental disorders. DBDB contains the first referenced ontology of developmental brain phenotypes, and uses a novel system of levels of evidence for gene-phenotype associations. It is intended to assist clinicians in arriving at the correct diagnosis, selecting the most appropriate genetic test for that phenotype, and improving the care of patients with developmental brain disorders. For researchers interested in the discovery of novel genes for developmental brain disorders, DBDB provides a well-curated source of important genes against which research sequencing results can be compared. Finally, DBDB allows novel observations about the landscape of the neurogenetics knowledge base. © 2014 Wiley Periodicals, Inc.

  3. Reduce manual curation by combining gene predictions from multiple annotation engines, a case study of start codon prediction.

    Directory of Open Access Journals (Sweden)

    Thomas H A Ederveen

    Full Text Available Nowadays, prokaryotic genomes are sequenced faster than gene annotations can be manually curated. Automated genome annotation engines (AGEs) provide users with a straightforward and complete solution for predicting ORF coordinates and function. For many labs, the use of AGEs is therefore essential to decrease the time needed to annotate a given prokaryotic genome. However, it is not uncommon for AGEs to provide different and sometimes conflicting predictions. Combining multiple AGEs might allow for more accurate predictions. Here we analyzed the ab initio open reading frame (ORF) calling performance of different AGEs based on curated genome annotations of eight strains from different bacterial species with GC content ranging from 35% to 52%. We present a case study which demonstrates a novel way of comparative genome annotation, using combinations of AGEs in a pre-defined order (or path) to predict ORF start codons. The order of AGE combinations runs from high to low specificity, where the specificity is based on the eight genome annotations. For each AGE combination we derive a so-called projected confidence value, which is the average specificity of ORF start codon prediction based on the eight genomes. The projected confidence makes it possible to estimate the likelihood that a particular AGE combination correctly predicts a particular ORF start codon, pinpointing ORFs whose start codons are notoriously difficult to predict. We correctly predict start codons for 90.5±4.8% of the genes in a genome (based on the eight genomes) with an accuracy of 81.1±7.6%. Our consensus-path methodology allows a marked improvement over majority voting (9.7±4.4%) and, with an optimal path, ORF start prediction sensitivity is gained while a high specificity is maintained.
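
    A minimal sketch of the consensus-path idea follows: AGE combinations are tried in a pre-defined order, and the first combination whose members agree supplies the start codon together with its projected confidence. Engine names, coordinates and confidence values are all invented for illustration.

        predictions = {        # start coordinate predicted by each engine
            "engineA": 1201,
            "engineB": 1201,
            "engineC": 1246,
        }

        # Path ordered from high to low projected confidence (values made up).
        consensus_path = [
            (("engineA", "engineB", "engineC"), 0.95),
            (("engineA", "engineB"), 0.90),
            (("engineA",), 0.81),
        ]

        def predict_start(predictions, path):
            for engines, confidence in path:
                starts = {predictions[e] for e in engines if e in predictions}
                if len(starts) == 1:   # all engines in the combination agree
                    return starts.pop(), confidence
            return None, 0.0           # no combination reached consensus

        print(predict_start(predictions, consensus_path))  # (1201, 0.90)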

  4. Curating NASA's Past, Present, and Future Astromaterial Sample Collections

    Science.gov (United States)

    Zeigler, R. A.; Allton, J. H.; Evans, C. A.; Fries, M. D.; McCubbin, F. M.; Nakamura-Messenger, K.; Righter, K.; Zolensky, M.; Stansbery, E. K.

    2016-01-01

    The Astromaterials Acquisition and Curation Office at NASA Johnson Space Center (hereafter JSC curation) is responsible for curating all of NASA's extraterrestrial samples. JSC presently curates 9 different astromaterials collections in seven different clean-room suites: (1) Apollo Samples (ISO (International Standards Organization) class 6 + 7); (2) Antarctic Meteorites (ISO 6 + 7); (3) Cosmic Dust Particles (ISO 5); (4) Microparticle Impact Collection (ISO 7; formerly called Space-Exposed Hardware); (5) Genesis Solar Wind Atoms (ISO 4); (6) Stardust Comet Particles (ISO 5); (7) Stardust Interstellar Particles (ISO 5); (8) Hayabusa Asteroid Particles (ISO 5); (9) OSIRIS-REx Spacecraft Coupons and Witness Plates (ISO 7). Additional cleanrooms are currently being planned to house samples from two new collections, Hayabusa 2 (2021) and OSIRIS-REx (2023). In addition to the labs that house the samples, we maintain a wide variety of infrastructure facilities required to support the clean rooms: HEPA-filtered air-handling systems, ultrapure dry gaseous nitrogen systems, an ultrapure water system, and cleaning facilities to provide clean tools and equipment for the labs. We also have sample preparation facilities for making thin sections, microtome sections, and even focused ion-beam sections. We routinely monitor the cleanliness of our clean rooms and infrastructure systems, including measurements of inorganic or organic contamination, weekly airborne particle counts, compositional and isotopic monitoring of liquid N2 deliveries, and daily UPW system monitoring. In addition to the physical maintenance of the samples, we track within our databases the current and ever-changing characteristics (weight, location, etc.) of more than 250,000 individually numbered samples across our various collections, as well as more than 100,000 images, and countless "analog" records that record the processing history of each individual sample. JSC Curation is co-located with JSC

  5. HEROD: a human ethnic and regional specific omics database.

    Science.gov (United States)

    Zeng, Xian; Tao, Lin; Zhang, Peng; Qin, Chu; Chen, Shangying; He, Weidong; Tan, Ying; Xia Liu, Hong; Yang, Sheng Yong; Chen, Zhe; Jiang, Yu Yang; Chen, Yu Zong

    2017-10-15

    Genetic and gene expression variations within and between populations and across geographical regions have substantial effects on biological phenotypes, diseases, and therapeutic response. The development of precision medicines can be facilitated by OMICS studies of patients of specific ethnicity and geographic region. However, facilities for broadly and conveniently accessing ethnic- and region-specific OMICS data have been inadequate. Here, we introduce a new free database, HEROD, a human ethnic and regional specific OMICS database. Its first version contains the gene expression data of 53 070 patients of 169 diseases in seven ethnic populations from 193 cities/regions in 49 nations, curated from the Gene Expression Omnibus (GEO), the ArrayExpress Archive of Functional Genomics Data (ArrayExpress), The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC). Geographic region information for curated patients was mainly extracted manually from the referenced publications of each original study. These data can be accessed and downloaded via keyword search, World map search, and menu-bar search of disease name, international classification of disease code, geographical region, location of sample collection, ethnic population, gender, age, sample source organ, patient type (patient or healthy), sample type (disease or normal tissue) and assay type on the web interface. The HEROD database is freely accessible at http://bidd2.nus.edu.sg/herod/index.php. The database and web interface are implemented in MySQL, PHP and HTML, with all major browsers supported. Contact: phacyz@nus.edu.sg. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  6. H2DB: a heritability database across multiple species by annotating trait-associated genomic loci.

    Science.gov (United States)

    Kaminuma, Eli; Fujisawa, Takatomo; Tanizawa, Yasuhiro; Sakamoto, Naoko; Kurata, Nori; Shimizu, Tokurou; Nakamura, Yasukazu

    2013-01-01

    H2DB (http://tga.nig.ac.jp/h2db/), an annotation database of genetic heritability estimates for humans and other species, has been developed as a knowledge database connecting heritability estimates to trait-associated genomic loci. Heritability estimates have been investigated for individual species, particularly in human twin studies and plant/animal breeding studies. However, there appears to be no comprehensive heritability database covering both humans and other species. Here, we introduce an annotation database of the genetic heritabilities of various species, annotated by manually curating online public resources in PubMed abstracts and journal contents. The database contains attribute information for trait descriptions, experimental conditions, trait-associated genomic loci and broad- and narrow-sense heritability specifications. The annotated trait-associated genomic loci, most of which are single-nucleotide polymorphisms derived from genome-wide association studies, may be valuable resources for experimental scientists. In addition, we assigned phenotype ontologies to the annotated traits to allow heritability distributions to be discussed in terms of phenotypic classifications.
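
    For reference, the two heritability specifications that the database distinguishes are the standard quantitative-genetics quantities (general background, not stated in the abstract above): broad-sense heritability H^2 = V_G / V_P, the fraction of the total phenotypic variance V_P explained by all genetic variance V_G, and narrow-sense heritability h^2 = V_A / V_P, which counts only the additive genetic variance V_A.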

  7. PhytoREF: a reference database of the plastidial 16S rRNA gene of photosynthetic eukaryotes with curated taxonomy.

    Science.gov (United States)

    Decelle, Johan; Romac, Sarah; Stern, Rowena F; Bendif, El Mahdi; Zingone, Adriana; Audic, Stéphane; Guiry, Michael D; Guillou, Laure; Tessier, Désiré; Le Gall, Florence; Gourvil, Priscillia; Dos Santos, Adriana L; Probert, Ian; Vaulot, Daniel; de Vargas, Colomban; Christen, Richard

    2015-11-01

    Photosynthetic eukaryotes have a critical role as the main producers in most ecosystems of the biosphere. The ongoing environmental metabarcoding revolution opens the perspective for holistic, ecosystem-level biological studies of these organisms, in particular the unicellular microalgae that often lack distinctive morphological characters and have complex life cycles. To interpret environmental sequences, metabarcoding necessarily relies on taxonomically curated databases containing reference sequences of the targeted gene (or barcode) from identified organisms. To date, no such reference framework exists for photosynthetic eukaryotes. In this study, we built the PhytoREF database, which contains 6490 plastidial 16S rDNA reference sequences originating from a large diversity of eukaryotes representing all known major photosynthetic lineages. We compiled 3333 amplicon sequences available from public databases and 879 sequences extracted from plastidial genomes, and generated 411 novel sequences from cultured marine microalgal strains belonging to different eukaryotic lineages. A total of 1867 environmental Sanger 16S rDNA sequences were also included in the database. Stringent quality filtering and a phylogeny-based taxonomic classification were applied to each 16S rDNA sequence. The database mainly focuses on marine microalgae, but sequences from land plants (representing half of the PhytoREF sequences) and freshwater taxa were also included to broaden the applicability of PhytoREF to different aquatic and terrestrial habitats. PhytoREF, accessible via a web interface (http://phytoref.fr), is a new resource in molecular ecology to foster the discovery, assessment and monitoring of the diversity of photosynthetic eukaryotes using high-throughput sequencing. © 2015 John Wiley & Sons Ltd.
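    The intended use of such a reference database can be pictured with a toy classifier: an environmental read is assigned the taxonomy of its most similar reference. Real pipelines use BLAST or dedicated assignment tools against the curated PhytoREF sequences; the k-mer Jaccard similarity and the reference entries below are illustrative only.

        def kmers(seq: str, k: int = 8) -> set:
            return {seq[i:i + k] for i in range(len(seq) - k + 1)}

        def assign(read: str, references: dict, k: int = 8):
            """Assign the read the taxonomy of the most k-mer-similar reference."""
            read_k = kmers(read, k)
            best = max(references,
                       key=lambda tax: len(read_k & kmers(references[tax], k))
                                       / len(read_k | kmers(references[tax], k)))
            ref_k = kmers(references[best], k)
            return best, len(read_k & ref_k) / len(read_k | ref_k)

        references = {  # hypothetical curated entries: taxonomy -> 16S rDNA fragment
            "Bacillariophyta;Thalassiosira": "ATGCGTACGTTAGCGGATCCGTACGATCGTACG",
            "Haptophyta;Emiliania":          "ATGCGAACGTTAGCGGTTCCGTACGATGGTACG",
        }
        print(assign("ATGCGTACGTTAGCGGATCCGTACGATCGTACG", references))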

  8. CEBS: a comprehensive annotated database of toxicological data

    Science.gov (United States)

    Lea, Isabel A.; Gong, Hui; Paleja, Anand; Rashid, Asif; Fostel, Jennifer

    2017-01-01

    The Chemical Effects in Biological Systems database (CEBS) is a comprehensive and unique toxicology resource that compiles individual and summary animal data from the National Toxicology Program (NTP) testing program and other depositors into a single electronic repository. CEBS has undergone significant updates in recent years and currently contains over 11 000 test articles (exposure agents) and over 8000 studies including all available NTP carcinogenicity, short-term toxicity and genetic toxicity studies. Study data provided to CEBS are manually curated, accessioned and subject to quality assurance review prior to release to ensure high quality. The CEBS database has two main components: data collection and data delivery. To accommodate the breadth of data produced by NTP, the CEBS data collection component is an integrated relational design that has so far been flexible enough to capture any type of electronic data. The data delivery component of the database comprises a series of dedicated user interface tables containing pre-processed data that support each component of the user interface. The user interface has been updated to include a series of nine Guided Search tools that allow access to NTP summary and conclusion data and larger non-NTP datasets. The CEBS database can be accessed online at http://www.niehs.nih.gov/research/resources/databases/cebs/. PMID:27899660

  9. PHI-base: a new interface and further additions for the multi-species pathogen–host interactions database

    Science.gov (United States)

    Urban, Martin; Cuzick, Alayne; Rutherford, Kim; Irvine, Alistair; Pedro, Helder; Pant, Rashmi; Sadanadan, Vidyendra; Khamari, Lokanath; Billal, Santoshkumar; Mohanty, Sagar; Hammond-Kosack, Kim E.

    2017-01-01

    The pathogen–host interactions database (PHI-base) is available at www.phi-base.org. PHI-base contains expertly curated molecular and biological information on genes proven to affect the outcome of pathogen–host interactions reported in peer-reviewed research articles. In addition, literature that reports specific gene alterations that did not affect the disease interaction phenotype is curated to provide complete datasets for comparative purposes. Viruses are not included. Here we describe a revised PHI-base Version 4 data platform with improved search, filtering and extended data display functions. A PHIB-BLAST search function is provided and a link to PHI-Canto, a tool for authors to directly curate their own published data into PHI-base. The new release of PHI-base Version 4.2 (October 2016) has an increased data content containing information from 2219 manually curated references. The data provide information on 4460 genes from 264 pathogens tested on 176 hosts in 8046 interactions. Prokaryotic and eukaryotic pathogens are represented in almost equal numbers. Host species are ∼70% plants and 30% other species of medical and/or environmental importance. Additional data types included in PHI-base 4 are the direct targets of pathogen effector proteins in experimental and natural host organisms. The curation problems encountered and the future directions of the PHI-base project are briefly discussed. PMID:27915230

  10. LeishCyc: a biochemical pathways database for Leishmania major

    Directory of Open Access Journals (Sweden)

    Doyle Maria A

    2009-06-01

    Full Text Available Abstract Background Leishmania spp. are sandfly-transmitted protozoan parasites that cause a spectrum of diseases in more than 12 million people worldwide. Much research is now focusing on how these parasites adapt to the distinct nutrient environments they encounter in the digestive tract of the sandfly vector and the phagolysosome compartment of mammalian macrophages. While data mining and annotation of the genomes of three Leishmania species has provided an initial inventory of predicted metabolic components and associated pathways, resources for integrating this information into metabolic networks and incorporating data from transcript, protein, and metabolite profiling studies are currently lacking. The development of a reliable, expertly curated, and widely available model of Leishmania metabolic networks is required to facilitate systems analysis, as well as discovery and prioritization of new drug targets for this important human pathogen. Description The LeishCyc database was initially built from the genome sequence of Leishmania major (v5.2), based on the annotation published by the Wellcome Trust Sanger Institute. LeishCyc was manually curated to remove errors, correct automated predictions, and add information from the literature. The ongoing curation is based on public sources, literature searches, and our own experimental and bioinformatics studies. In a number of instances we have improved on the original genome annotation, and, in some ambiguous cases, collected relevant information from the literature in order to help clarify gene or protein annotation in the future. All genes in LeishCyc are linked to the corresponding entry in GeneDB (Wellcome Trust Sanger Institute). Conclusion The LeishCyc database describes Leishmania major genes, gene products, metabolites, their relationships and biochemical organization into metabolic pathways. LeishCyc provides a systematic approach to organizing the evolving information about Leishmania.

  11. The immune epitope database: a historical retrospective of the first decade.

    Science.gov (United States)

    Salimi, Nima; Fleri, Ward; Peters, Bjoern; Sette, Alessandro

    2012-10-01

    As the amount of biomedical information available in the literature continues to increase, databases that aggregate this information continue to grow in importance and scope. The population of databases can occur either through fully automated text-mining approaches or through manual curation by human subject experts. We here report our experiences in populating the National Institute of Allergy and Infectious Diseases sponsored Immune Epitope Database and Analysis Resource (IEDB, http://iedb.org), which was created in 2003, and as of 2012 captures the epitope information from approximately 99% of all papers published to date that describe immune epitopes (with the exception of cancer and HIV data). This was achieved using a hybrid model based on automated document categorization and extensive human expert involvement. This task required automated scanning of over 22 million PubMed abstracts followed by classification and curation of over 13 000 references, including over 7000 infectious disease-related manuscripts, over 1000 allergy-related manuscripts, roughly 4000 related to autoimmunity, and 1000 transplant/alloantigen-related manuscripts. The IEDB curation involves an unprecedented level of detail, capturing for each paper the actual experiments performed for each different epitope structure. Key to enabling this process was the extensive use of ontologies to ensure rigorous and consistent data representation as well as interoperability with other bioinformatics resources, including the Protein Data Bank, Chemical Entities of Biological Interest, and the NIAID Bioinformatics Resource Centers. A growing fraction of the IEDB data derives from direct submissions by research groups engaged in epitope discovery, and is being facilitated by the implementation of novel data submission tools. The present explosion of information contained in biological databases demands effective query and display capabilities to optimize the user experience.
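    The document-categorization step described above can be pictured with a toy triage filter: score each abstract by epitope-related terms and queue high scorers for manual curation. The IEDB used trained classifiers; the term list and cutoff here are simplified stand-ins.

        EPITOPE_TERMS = {"epitope", "mhc", "hla", "tetramer", "elispot",
                         "antibody binding", "t cell response"}

        def triage_score(abstract: str) -> int:
            text = abstract.lower()
            return sum(term in text for term in EPITOPE_TERMS)

        def needs_curation(abstract: str, cutoff: int = 2) -> bool:
            """Queue the paper for manual curation if enough terms match."""
            return triage_score(abstract) >= cutoff

        print(needs_curation("We mapped a novel HLA-A2 restricted epitope by ELISPOT."))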

  12. HIPdb: a database of experimentally validated HIV inhibiting peptides.

    Science.gov (United States)

    Qureshi, Abid; Thakur, Nishant; Kumar, Manoj

    2013-01-01

    Besides antiretroviral drugs, peptides have also demonstrated potential to inhibit the Human immunodeficiency virus (HIV). For example, T20 has been discovered to effectively block HIV entry and was approved by the FDA as a novel anti-HIV peptide (AHP). We have collated all experimental information on AHPs at a single platform. HIPdb is a manually curated database of experimentally verified HIV-inhibiting peptides targeting various steps or proteins involved in the life cycle of HIV, e.g. fusion, integration and reverse transcription. This database provides experimental information on 981 peptides. These are of varying length, obtained from natural as well as synthetic sources and tested on different cell lines. Important fields included are peptide sequence, length, source, target, cell line, inhibition/IC50, assay and reference. The database provides user-friendly browse, search, sort and filter options. It also contains useful services like BLAST and 'Map' for alignment with user-provided sequences. In addition, predicted structure and physicochemical properties of the peptides are also included. HIPdb is freely available at http://crdd.osdd.net/servers/hipdb. Comprehensive information in this database will be helpful in selecting/designing effective anti-HIV peptides. Thus it may prove a useful resource to researchers for peptide-based therapeutics development.
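    The fields listed above suggest how exported records could be filtered programmatically, for example to shortlist potent entry inhibitors. The records in this sketch are hypothetical placeholders, not actual HIPdb entries.

        from dataclasses import dataclass

        @dataclass
        class AntiHIVPeptide:
            sequence: str
            source: str        # natural or synthetic
            target: str        # e.g. "fusion", "integration", "reverse transcription"
            cell_line: str
            ic50_nM: float

        records = [
            AntiHIVPeptide("YTSLIHSLIEESQNQ", "synthetic", "fusion", "MT-2", 4.0),
            AntiHIVPeptide("ALKEPVHGV", "natural", "reverse transcription", "PBMC", 850.0),
        ]

        # Select potent fusion inhibitors, mirroring a search-plus-filter on the website
        hits = [r for r in records if r.target == "fusion" and r.ic50_nM < 100]
        print([r.sequence for r in hits])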

  13. How should the completeness and quality of curated nanomaterial data be evaluated?

    Science.gov (United States)

    Marchese Robinson, Richard L.; Lynch, Iseult; Peijnenburg, Willie; Rumble, John; Klaessig, Fred; Marquardt, Clarissa; Rauscher, Hubert; Puzyn, Tomasz; Purian, Ronit; Åberg, Christoffer; Karcher, Sandra; Vriens, Hanne; Hoet, Peter; Hoover, Mark D.; Hendren, Christine Ogilvie; Harper, Stacey L.

    2016-05-01

    Nanotechnology is of increasing significance. Curation of nanomaterial data into electronic databases offers opportunities to better understand and predict nanomaterials' behaviour. This supports innovation in, and regulation of, nanotechnology. It is commonly understood that curated data need to be sufficiently complete and of sufficient quality to serve their intended purpose. However, assessing data completeness and quality is non-trivial in general and is arguably especially difficult in the nanoscience area, given its highly multidisciplinary nature. The current article, part of the Nanomaterial Data Curation Initiative series, addresses how to assess the completeness and quality of (curated) nanomaterial data. In order to address this key challenge, a variety of related issues are discussed: the meaning and importance of data completeness and quality, existing approaches to their assessment and the key challenges associated with evaluating the completeness and quality of curated nanomaterial data. Considerations which are specific to the nanoscience area and lessons which can be learned from other relevant scientific disciplines are considered. Hence, the scope of this discussion ranges from physicochemical characterisation requirements for nanomaterials and interference of nanomaterials with nanotoxicology assays to broader issues such as minimum information checklists, toxicology data quality schemes and computational approaches that facilitate evaluation of the completeness and quality of (curated) data. This discussion is informed by a literature review and a survey of key nanomaterial data curation stakeholders. Finally, drawing upon this discussion, recommendations are presented concerning the central question: how should the completeness and quality of curated nanomaterial data be evaluated?
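    One way to make the completeness half of that question concrete is to score each curated record against a minimum information checklist, as sketched below; the checklist fields are illustrative, not an endorsed community standard.

        CHECKLIST = ["composition", "size_nm", "shape", "surface_charge_mV",
                     "coating", "assay", "dose", "endpoint"]

        def completeness(record: dict) -> float:
            """Fraction of checklist fields that carry a non-empty value."""
            filled = sum(1 for f in CHECKLIST if record.get(f) not in (None, ""))
            return filled / len(CHECKLIST)

        record = {"composition": "SiO2", "size_nm": 50, "shape": "sphere",
                  "assay": "MTT", "dose": "10 ug/mL"}
        print(f"completeness = {completeness(record):.2f}")  # 5 of 8 fields -> 0.62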

  14. Database Description - tRNADB-CE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available [Archived database-description page; site navigation and page widgets removed.] Database name: tRNADB-CE. License: CC BY-SA. Background and funding: MEXT Integrated Database Project. Reference articles: one published 2009 Jan;37(Database issue):D163-8, and 'tRNADB-CE 2011: tRNA gene database curat...' (titles truncated in the archived record).

  15. Enhanced annotations and features for comparing thousands of Pseudomonas genomes in the Pseudomonas genome database.

    Science.gov (United States)

    Winsor, Geoffrey L; Griffiths, Emma J; Lo, Raymond; Dhillon, Bhavjinder K; Shay, Julie A; Brinkman, Fiona S L

    2016-01-04

    The Pseudomonas Genome Database (http://www.pseudomonas.com) is well known for the application of community-based annotation approaches for producing a high-quality Pseudomonas aeruginosa PAO1 genome annotation, and facilitating whole-genome comparative analyses with other Pseudomonas strains. To aid analysis of potentially thousands of complete and draft genome assemblies, this database and analysis platform was upgraded to integrate curated genome annotations and isolate metadata with enhanced tools for larger scale comparative analysis and visualization. Manually curated gene annotations are supplemented with improved computational analyses that help identify putative drug targets and vaccine candidates or assist with evolutionary studies by identifying orthologs, pathogen-associated genes and genomic islands. The database schema has been updated to integrate isolate metadata that will facilitate more powerful analysis of genomes across datasets in the future. We continue to place an emphasis on providing high-quality updates to gene annotations through regular review of the scientific literature and using community-based approaches including a major new Pseudomonas community initiative for the assignment of high-quality gene ontology terms to genes. As we further expand from thousands of genomes, we plan to provide enhancements that will aid data visualization and analysis arising from whole-genome comparative studies including more pan-genome and population-based approaches. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. Correcting Inconsistencies and Errors in Bacterial Genome Metadata Using an Automated Curation Tool in Excel (AutoCurE).

    Science.gov (United States)

    Schmedes, Sarah E; King, Jonathan L; Budowle, Bruce

    2015-01-01

    Whole-genome data are invaluable for large-scale comparative genomic studies. Current sequencing technologies have made it feasible to sequence entire bacterial genomes with relative ease and speed, at a substantially reduced cost per nucleotide, hence cost per genome. More than 3,000 bacterial genomes have been sequenced and are available at the finished status. Publicly available genomes can be readily downloaded; however, there are challenges to verify the specific supporting data contained within the download and to identify errors and inconsistencies that may be present within the organizational data content and metadata. AutoCurE, an automated tool for bacterial genome database curation in Excel, was developed to facilitate local database curation of supporting data that accompany genomes downloaded from the National Center for Biotechnology Information. AutoCurE provides an automated approach to curating local genomic databases by flagging inconsistencies or errors, comparing the downloaded supporting data to the genome reports to verify genome name, RefSeq accession numbers, the presence of archaea, BioProject/UIDs, and sequence file descriptions. Flags are generated for nine metadata fields if there are inconsistencies between the downloaded genomes and genome reports and if erroneous or missing data are evident. AutoCurE is an easy-to-use tool for local database curation of large-scale genome data prior to downstream analyses.
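    The flagging logic described above reduces to a field-by-field comparison between the downloaded supporting data and the genome report. A minimal sketch follows; the field names and records are illustrative and do not reproduce the nine fields AutoCurE actually checks.

        def flag_inconsistencies(downloaded: dict, report: dict, fields=None) -> list:
            """Return human-readable flags for missing or mismatched metadata."""
            fields = fields or ["genome_name", "refseq_accession", "bioproject_uid"]
            flags = []
            for f in fields:
                d, r = downloaded.get(f), report.get(f)
                if d in (None, ""):
                    flags.append(f"{f}: missing in download")
                elif d != r:
                    flags.append(f"{f}: download '{d}' != report '{r}'")
            return flags

        downloaded = {"genome_name": "Bacillus anthracis str. Ames",
                      "refseq_accession": "NC_003997.3", "bioproject_uid": ""}
        report = {"genome_name": "Bacillus anthracis str. Ames",
                  "refseq_accession": "NC_003997.3", "bioproject_uid": "57909"}
        print(flag_inconsistencies(downloaded, report))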

  17. GEAR: A database of Genomic Elements Associated with drug Resistance

    Science.gov (United States)

    Wang, Yin-Ying; Chen, Wei-Hua; Xiao, Pei-Pei; Xie, Wen-Bin; Luo, Qibin; Bork, Peer; Zhao, Xing-Ming

    2017-01-01

    Drug resistance is becoming a serious problem that leads to the failure of standard treatments, and it generally develops because of genetic mutations of certain molecules. Here, we present GEAR (a database of Genomic Elements Associated with drug Resistance) that aims to provide comprehensive information about genomic elements (including genes, single-nucleotide polymorphisms and microRNAs) that are responsible for drug resistance. Currently, GEAR contains 1631 associations between 201 human drugs and 758 genes, 106 associations between 29 human drugs and 66 miRNAs, and 44 associations between 17 human drugs and 22 SNPs. These relationships are first extracted from primary literature with text mining and then manually curated. The drug resistome deposited in GEAR provides insights into the genetic factors underlying drug resistance. In addition, new indications and potential drug combinations can be identified based on the resistome. The GEAR database can be freely accessed through http://gear.comp-sysbio.org. PMID:28294141

  18. Planform: an application and database of graph-encoded planarian regenerative experiments.

    Science.gov (United States)

    Lobo, Daniel; Malone, Taylor J; Levin, Michael

    2013-04-15

    Understanding the mechanisms governing the regeneration capabilities of many organisms is a fundamental interest in biology and medicine. An ever-increasing number of manipulation and molecular experiments are attempting to discover a comprehensive model for regeneration, with the planarian flatworm being one of the most important model species. Despite much effort, no comprehensive, constructive, mechanistic models exist yet, and it is now clear that computational tools are needed to mine this huge dataset. However, until now there has been no database of regenerative experiments, and the current genotype-phenotype ontologies and databases are based on textual descriptions, which are not understandable by computers. To overcome these difficulties, we present here Planform (Planarian formalization), a manually curated database and software tool for planarian regenerative experiments, based on a mathematical graph formalism. The database contains more than a thousand experiments from the main publications in the planarian literature. The software tool provides the user with a graphical interface to easily interact with and mine the database. The presented system is a valuable resource for the regeneration community and, more importantly, will pave the way for the application of novel artificial intelligence tools to extract knowledge from this dataset. The database and software tool are freely available at http://planform.daniel-lobo.com.
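    The graph-formalism idea can be illustrated with a toy version: a morphology becomes a labelled graph of body regions, so phenotypes from different experiments are directly machine-comparable. This is a deliberate simplification of the published formalism, and the comparison rule below is intentionally crude.

        wild_type = {
            "nodes": {1: "head", 2: "trunk", 3: "tail"},
            "edges": [(1, 2), (2, 3)],
        }
        two_headed = {  # e.g. a regenerated phenotype after an experimental manipulation
            "nodes": {1: "head", 2: "trunk", 3: "head"},
            "edges": [(1, 2), (2, 3)],
        }

        def same_phenotype(a: dict, b: dict) -> bool:
            """Crude check: same multiset of region labels and same edge count."""
            return (sorted(a["nodes"].values()) == sorted(b["nodes"].values())
                    and len(a["edges"]) == len(b["edges"]))

        print(same_phenotype(wild_type, two_headed))  # False: a tail became a head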

  19. PHI-base: a new interface and further additions for the multi-species pathogen-host interactions database.

    Science.gov (United States)

    Urban, Martin; Cuzick, Alayne; Rutherford, Kim; Irvine, Alistair; Pedro, Helder; Pant, Rashmi; Sadanadan, Vidyendra; Khamari, Lokanath; Billal, Santoshkumar; Mohanty, Sagar; Hammond-Kosack, Kim E

    2017-01-04

    The pathogen-host interactions database (PHI-base) is available at www.phi-base.org. PHI-base contains expertly curated molecular and biological information on genes proven to affect the outcome of pathogen-host interactions reported in peer-reviewed research articles. In addition, literature that reports specific gene alterations that did not affect the disease interaction phenotype is curated to provide complete datasets for comparative purposes. Viruses are not included. Here we describe a revised PHI-base Version 4 data platform with improved search, filtering and extended data display functions. A PHIB-BLAST search function is provided and a link to PHI-Canto, a tool for authors to directly curate their own published data into PHI-base. The new release of PHI-base Version 4.2 (October 2016) has an increased data content containing information from 2219 manually curated references. The data provide information on 4460 genes from 264 pathogens tested on 176 hosts in 8046 interactions. Prokaryotic and eukaryotic pathogens are represented in almost equal numbers. Host species are ∼70% plants and 30% other species of medical and/or environmental importance. Additional data types included in PHI-base 4 are the direct targets of pathogen effector proteins in experimental and natural host organisms. The curation problems encountered and the future directions of the PHI-base project are briefly discussed. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. JASPAR 2014: an extensively expanded and updated open-access database of transcription factor binding profiles.

    Science.gov (United States)

    Mathelier, Anthony; Zhao, Xiaobei; Zhang, Allen W; Parcy, François; Worsley-Hunt, Rebecca; Arenillas, David J; Buchman, Sorana; Chen, Chih-yu; Chou, Alice; Ienasescu, Hans; Lim, Jonathan; Shyr, Casper; Tan, Ge; Zhou, Michelle; Lenhard, Boris; Sandelin, Albin; Wasserman, Wyeth W

    2014-01-01

    JASPAR (http://jaspar.genereg.net) is the largest open-access database of matrix-based nucleotide profiles describing the binding preference of transcription factors from multiple species. The fifth major release greatly expands the heart of JASPAR, the JASPAR CORE subcollection, which contains curated, non-redundant profiles, with 135 new curated profiles (74 in vertebrates, 8 in Drosophila melanogaster, 10 in Caenorhabditis elegans and 43 in Arabidopsis thaliana; a 30% increase in total) and 43 older updated profiles (36 in vertebrates, 3 in D. melanogaster and 4 in A. thaliana; a 9% update in total). The new and updated profiles are mainly derived from published chromatin immunoprecipitation-seq experimental datasets. In addition, the web interface has been enhanced with advanced capabilities in browsing, searching and subsetting. Finally, the new JASPAR release is accompanied by a new BioPython package, a new R tool package and a new R/Bioconductor data package to facilitate access for both manual and automated methods.
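    A matrix profile of this kind is typically used by converting the position frequency matrix (PFM) into a log-odds position weight matrix (PWM) and scanning a sequence with it, as sketched below. The counts are invented; real profiles come from the JASPAR CORE and are also accessible through the accompanying BioPython and R packages.

        import math

        pfm = {  # position frequency matrix: base -> counts per column (invented)
            "A": [20,  2,  0, 18],
            "C": [ 2, 16,  1,  1],
            "G": [ 1,  1, 19,  0],
            "T": [ 2,  6,  5,  6],
        }

        def pfm_to_pwm(pfm, background=0.25, pseudo=0.8):
            """Log2-odds weights with a small pseudocount against a flat background."""
            n = len(next(iter(pfm.values())))
            totals = [sum(pfm[b][i] for b in pfm) for i in range(n)]
            return {b: [math.log2(((pfm[b][i] + pseudo) / (totals[i] + 4 * pseudo))
                                  / background) for i in range(n)]
                    for b in pfm}

        def best_site(seq, pwm):
            w = len(next(iter(pwm.values())))
            scores = [(i, sum(pwm[seq[i + j]][j] for j in range(w)))
                      for i in range(len(seq) - w + 1)]
            return max(scores, key=lambda t: t[1])

        pwm = pfm_to_pwm(pfm)
        print(best_site("TTACGATTGACGT", pwm))  # highest-scoring window and its score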

  1. User's manual (UM) for the enhanced logistics intratheater support tool (ELIST) database utility segment version 8.1.0.0 for Solaris 7

    International Nuclear Information System (INIS)

    Dritz, K.

    2002-01-01

    This document is the User's Manual (UM) for the Enhanced Logistics Intratheater Support Tool (ELIST) Database Utility Segment. It describes how to use its features to administer ELIST database user accounts.

  2. BioModels Database: a repository of mathematical models of biological processes.

    Science.gov (United States)

    Chelliah, Vijayalakshmi; Laibe, Camille; Le Novère, Nicolas

    2013-01-01

    BioModels Database is a public online resource that allows storing and sharing of published, peer-reviewed quantitative, dynamic models of biological processes. The model components and behaviour are thoroughly checked to correspond to the original publication and manually curated to ensure reliability. Furthermore, the model elements are annotated with terms from controlled vocabularies as well as linked to relevant external data resources. This greatly helps in model interpretation and reuse. Models are accepted in SBML and CellML formats and stored in SBML, and they are available for download in various other common formats such as BioPAX, Octave, SciLab, VCML, XPP and PDF, in addition to SBML. The reaction network diagram of the models is also available in several formats. BioModels Database features a search engine, which provides simple and more advanced searches. Features such as online simulation and creation of smaller models (submodels) from selected model elements of a larger one are provided. BioModels Database can be accessed both via a web interface and programmatically via web services. New models are added to BioModels Database in regular releases, about every 4 months.
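    A downloaded SBML file can be inspected with the python-libsbml bindings. A minimal sketch, assuming a model file has already been fetched locally (the file name is a placeholder); the calls shown are standard libsbml, but APIs vary between versions, so consult the library documentation.

        import libsbml

        doc = libsbml.readSBML("BIOMD0000000012.xml")  # hypothetical local download
        if doc.getNumErrors() > 0:
            doc.printErrors()                          # report parse/validation issues
        else:
            model = doc.getModel()
            print("model id:", model.getId())
            print("species:", model.getNumSpecies())
            print("reactions:", model.getNumReactions())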

  3. NPACT: Naturally Occurring Plant-based Anti-cancer Compound-Activity-Target database.

    Science.gov (United States)

    Mangal, Manu; Sagar, Parul; Singh, Harinder; Raghava, Gajendra P S; Agarwal, Subhash M

    2013-01-01

    Plant-derived molecules have been highly valued by biomedical researchers and pharmaceutical companies for developing drugs, as they are thought to be optimized during evolution. Therefore, we have collected and compiled a central resource, the Naturally Occurring Plant-based Anti-cancer Compound-Activity-Target database (NPACT, http://crdd.osdd.net/raghava/npact/), that gathers information related to experimentally validated plant-derived natural compounds exhibiting anti-cancerous activity (in vitro and in vivo), to complement the other databases. It currently contains 1574 compound entries, and each record provides information on their structure, manually curated published data on in vitro and in vivo experiments along with references for users' referral, inhibitory values (IC50/ED50/EC50/GI50), properties (physical, elemental and topological), cancer types, cell lines, protein targets, commercial suppliers and drug likeness of compounds. NPACT can easily be browsed or queried using various options, and an online similarity tool has also been made available. Further, to facilitate retrieval of existing data, each record is hyperlinked to similar databases like SuperNatural, Herbal Ingredients' Targets, Comparative Toxicogenomics Database, PubChem and NCI-60 GI50 data.

  4. NPACT: Naturally Occurring Plant-based Anti-cancer Compound-Activity-Target database

    Science.gov (United States)

    Mangal, Manu; Sagar, Parul; Singh, Harinder; Raghava, Gajendra P. S.; Agarwal, Subhash M.

    2013-01-01

    Plant-derived molecules have been highly valued by biomedical researchers and pharmaceutical companies for developing drugs, as they are thought to be optimized during evolution. Therefore, we have collected and compiled a central resource, the Naturally Occurring Plant-based Anti-cancer Compound-Activity-Target database (NPACT, http://crdd.osdd.net/raghava/npact/), that gathers information related to experimentally validated plant-derived natural compounds exhibiting anti-cancerous activity (in vitro and in vivo), to complement the other databases. It currently contains 1574 compound entries, and each record provides information on their structure, manually curated published data on in vitro and in vivo experiments along with references for users' referral, inhibitory values (IC50/ED50/EC50/GI50), properties (physical, elemental and topological), cancer types, cell lines, protein targets, commercial suppliers and drug likeness of compounds. NPACT can easily be browsed or queried using various options, and an online similarity tool has also been made available. Further, to facilitate retrieval of existing data, each record is hyperlinked to similar databases like SuperNatural, Herbal Ingredients’ Targets, Comparative Toxicogenomics Database, PubChem and NCI-60 GI50 data. PMID:23203877

  5. Case Study III: The Construction of a Nanotoxicity Database - The MOD-ENP-TOX Experience.

    Science.gov (United States)

    Vriens, Hanne; Mertens, Dominik; Regret, Renaud; Lin, Pinpin; Locquet, Jean-Pierre; Hoet, Peter

    2017-01-01

    The number of experimental studies on the toxicity of nanomaterials is growing fast. Interpretation and comparison of these studies is a complex issue due to the large number of variables that may determine the toxicity of nanomaterials. Qualitative databases providing a structured combination, integration and quality evaluation of the existing data could reveal insights that cannot be seen from individual studies alone. A few database initiatives are under development, but in practice very little data is publicly available, and collaboration between physicists, toxicologists, computer scientists and modellers is needed to further develop databases, standards and analysis tools. In this case study the process of building a database on the in vitro toxicity of amorphous silica nanoparticles (NPs) is described in detail. Experimental data were systematically collected from peer-reviewed papers, manually curated and stored in a standardised format. The result is a database in ISA-Tab-Nano including 68 peer-reviewed papers on the toxicity of 148 amorphous silica NPs. Both the physicochemical characterization of the particles and their biological effects (described in 230 in vitro assays) were stored in the database. A scoring system was elaborated in order to evaluate the reliability of the stored data.

  6. HCVpro: Hepatitis C virus protein interaction database

    KAUST Repository

    Kwofie, Samuel K.

    2011-12-01

    It is essential to catalog characterized hepatitis C virus (HCV) protein-protein interaction (PPI) data and the associated plethora of vital functional information to augment the search for therapies, vaccines and diagnostic biomarkers. In furtherance of these goals, we have developed the hepatitis C virus protein interaction database (HCVpro) by integrating manually verified hepatitis C virus-virus and virus-human protein interactions curated from literature and databases. HCVpro is a comprehensive and integrated HCV-specific knowledgebase housing consolidated information on PPIs, functional genomics and molecular data obtained from a variety of virus databases (VirHostNet, VirusMint, HCVdb and euHCVdb), and from BIND and other relevant biology repositories. HCVpro is further populated with information on hepatocellular carcinoma (HCC) related genes that are mapped onto their encoded cellular proteins. Incorporated proteins have been mapped onto Gene Ontologies, canonical pathways, Online Mendelian Inheritance in Man (OMIM) and extensively cross-referenced to other essential annotations. The database is enriched with exhaustive reviews on structure and functions of HCV proteins, current state of drug and vaccine development and links to recommended journal articles. Users can query the database using specific protein identifiers (IDs), chromosomal locations of a gene, interaction detection methods, indexed PubMed sources as well as HCVpro, BIND and VirusMint IDs. The use of HCVpro is free and the resource can be accessed via http://apps.sanbi.ac.za/hcvpro/ or http://cbrc.kaust.edu.sa/hcvpro/. © 2011 Elsevier B.V.

  7. Manual of plant producers and services in environmental protection. Database in the field of environmental protection

    International Nuclear Information System (INIS)

    Serve, C.

    1992-01-01

    On the basis of an enquiry, the Stuttgart Chamber of Industry and Commerce produced a database of the services offered by regional and supraregional companies in the field of environmental protection. The data are presented in this manual, classified as follows: noise protection systems; sanitation systems and services; other systems and services. (orig.)

  8. From field to database: a user-oriented approach to promote cyber-curating of scientific drilling cores

    Science.gov (United States)

    Pignol, C.; Arnaud, F.; Godinho, E.; Galabertier, B.; Caillo, A.; Billy, I.; Augustin, L.; Calzas, M.; Rousseau, D. D.; Crosta, X.

    2016-12-01

    Managing scientific data is probably one of the most challenging issues in modern science. In paleosciences the question is made even more sensitive by the need to preserve and manage high-value, fragile geological samples: cores. Large international scientific programs, such as IODP or ICDP, have led intense efforts to solve this problem and proposed detailed, high-standard workflows and dataflows for core handling and curating. However, many paleoscience results derive from small-scale research programs in which data and sample management is too often handled only locally, when it is managed at all. In this paper we present a national effort led in France to develop an integrated system to curate ice and sediment cores. Under the umbrella of the national excellence equipment program CLIMCOR, we launched a reflection on core curating and the management of associated fieldwork data. Our aim was to conserve all fieldwork data in an integrated cyber-environment which will evolve toward laboratory-acquired data storage in the near future. To do so, we worked in close relationship with field operators as well as laboratory core curators in order to propose user-oriented solutions. The national core-curating initiative proposes a single web portal in which all teams can store their fieldwork data. This portal is used as a national hub to attribute IGSNs. For legacy samples, this requires the establishment of a dedicated core list with associated metadata. For forthcoming core data, however, we developed a mobile application to capture technical and scientific data directly in the field. This application is linked to a unique coring-tool library and is adapted to most coring devices (gravity, drilling, percussion etc.) including multiple-section and multiple-hole coring operations. Those field data can be uploaded automatically to the national portal, and also referenced through international standards (IGSN and INSPIRE).

  9. MET network in PubMed: a text-mined network visualization and curation system.

    Science.gov (United States)

    Dai, Hong-Jie; Su, Chu-Hsien; Lai, Po-Ting; Huang, Ming-Siang; Jonnagaddala, Jitendra; Rose Jue, Toni; Rao, Shruti; Chou, Hui-Jou; Milacic, Marija; Singh, Onkar; Syed-Abdul, Shabbir; Hsu, Wen-Lian

    2016-01-01

    Metastasis is the dissemination of a cancer/tumor from one organ to another, and it is the most dangerous stage during cancer progression, causing more than 90% of cancer deaths. Improving the understanding of the complicated cellular mechanisms underlying metastasis requires investigations of the signaling pathways. To this end, we developed a METastasis (MET) network visualization and curation tool to assist metastasis researchers in retrieving network information of interest while browsing through the large volume of studies in PubMed. MET can recognize relations among genes, cancers, tissues and organs of metastasis mentioned in the literature through text-mining techniques, and then produce a visualization of all mined relations in a metastasis network. To facilitate the curation process, MET is developed as a browser extension that allows curators to review and edit concepts and relations related to metastasis directly in PubMed. PubMed users can also view the metastatic networks integrated from the large collection of research papers directly through MET. For the BioCreative 2015 interactive track (IAT), a curation task was proposed to curate metastatic networks among PubMed abstracts. Six curators participated in the proposed task and a post-IAT task, curating 963 unique metastatic relations from 174 PubMed abstracts using MET. Database URL: http://btm.tmu.edu.tw/metastasisway. © The Author(s) 2016. Published by Oxford University Press.
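    Curator-approved relations of this kind assemble naturally into a network. A minimal sketch with the networkx library follows; the gene-cancer-organ triples are invented examples, not MET output.

        import networkx as nx

        relations = [  # (gene, cancer, destination organ) as a curator might approve them
            ("TWIST1", "breast cancer", "lung"),
            ("CDH1", "breast cancer", "liver"),
            ("TWIST1", "prostate cancer", "bone"),
        ]

        G = nx.Graph()
        for gene, cancer, organ in relations:
            G.add_edge(gene, cancer, kind="gene-cancer")
            G.add_edge(cancer, organ, kind="metastasis-site")

        print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
        print("TWIST1 neighbours:", list(G.neighbors("TWIST1")))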

  10. Data Curation

    Science.gov (United States)

    Mallon, Melissa, Ed.

    2012-01-01

    In their Top Trends of 2012, the Association of College and Research Libraries (ACRL) named data curation as one of the issues to watch in academic libraries in the near future (ACRL, 2012, p. 312). Data curation can be summarized as "the active and ongoing management of data through its life cycle of interest and usefulness to scholarship,…

  11. The Candidate Cancer Gene Database: a database of cancer driver genes from forward genetic screens in mice.

    Science.gov (United States)

    Abbott, Kenneth L; Nyre, Erik T; Abrahante, Juan; Ho, Yen-Yi; Isaksson Vogel, Rachel; Starr, Timothy K

    2015-01-01

    Identification of cancer driver gene mutations is crucial for advancing cancer therapeutics. Due to the overwhelming number of passenger mutations in the human tumor genome, it is difficult to pinpoint causative driver genes. Using transposon mutagenesis in mice, many laboratories have conducted forward genetic screens and identified thousands of candidate driver genes that are highly relevant to human cancer. Unfortunately, this information is difficult to access and utilize because it is scattered across multiple publications using different mouse genome builds and strength metrics. To improve access to these findings and facilitate meta-analyses, we developed the Candidate Cancer Gene Database (CCGD, http://ccgd-starrlab.oit.umn.edu/). The CCGD is a manually curated database containing a unified description of all identified candidate driver genes and the genomic location of transposon common insertion sites (CISs) from all currently published transposon-based screens. To demonstrate relevance to human cancer, we performed a modified gene set enrichment analysis using KEGG pathways and show that human cancer pathways are highly enriched in the database. We also used hierarchical clustering to identify pathways enriched in blood cancers compared to solid cancers. The CCGD is a novel resource available to scientists interested in the identification of genetic drivers of cancer. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
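    The enrichment analysis mentioned above can be pictured with a basic over-representation test: do candidate driver genes hit a pathway more often than chance? The sketch uses a hypergeometric test with made-up counts; the CCGD analysis itself was a modified gene set enrichment analysis over KEGG pathways.

        from scipy.stats import hypergeom

        M = 20000   # genes in the background genome
        n = 150     # genes annotated to the pathway
        N = 500     # candidate driver genes from the screens
        k = 18      # candidates that fall in the pathway

        # P(X >= k) under the hypergeometric null of random gene draws
        p_value = hypergeom.sf(k - 1, M, n, N)
        print(f"enrichment p = {p_value:.2e}")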

  12. DB-PABP: a database of polyanion-binding proteins.

    Science.gov (United States)

    Fang, Jianwen; Dong, Yinghua; Salamat-Miller, Nazila; Middaugh, C Russell

    2008-01-01

    The interactions between polyanions (PAs) and polyanion-binding proteins (PABPs) have been found to play significant roles in many essential biological processes including intracellular organization, transport and protein folding. Furthermore, many neurodegenerative disease-related proteins are PABPs. Thus, a better understanding of PA/PABP interactions may not only enhance our understanding of biological systems but also provide new clues to these deadly diseases. The literature in this field is widely scattered, suggesting the need for a comprehensive and searchable database of PABPs. The DB-PABP is a comprehensive, manually curated and searchable database of experimentally characterized PABPs. It is freely available and can be accessed online at http://pabp.bcf.ku.edu/DB_PABP/. The DB-PABP was implemented as a MySQL relational database. An interactive web interface was created using Java Server Pages (JSP). The search page of the database is organized into a main search form and a section for utilities. The main search form enables custom searches via four menus: protein names, polyanion names, the source species of the proteins and the methods used to discover the interactions. Available utilities include a commonality matrix, a function for listing PABPs by the number of interacting polyanions and a string search for author surnames. The DB-PABP is maintained at the University of Kansas. We encourage users to provide feedback and submit new data and references.
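    The commonality matrix utility mentioned above can be sketched as counting, for every pair of PABPs, the polyanions both are reported to bind; the interaction lists below are invented placeholders.

        from itertools import combinations

        binds = {  # protein -> set of polyanions it is reported to bind (invented)
            "ProteinA": {"heparin", "DNA", "RNA"},
            "ProteinB": {"heparin", "RNA"},
            "ProteinC": {"actin", "DNA"},
        }

        matrix = {(p, q): len(binds[p] & binds[q])
                  for p, q in combinations(sorted(binds), 2)}
        for (p, q), shared in matrix.items():
            print(f"{p} x {q}: {shared} shared polyanion(s)")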

  13. Kin-Driver: a database of driver mutations in protein kinases.

    Science.gov (United States)

    Simonetti, Franco L; Tornador, Cristian; Nabau-Moretó, Nuria; Molina-Vila, Miguel A; Marino-Buslje, Cristina

    2014-01-01

    Somatic mutations in protein kinases (PKs) are frequent driver events in many human tumors, while germ-line mutations are associated with hereditary diseases. Here we present Kin-driver, the first database that compiles driver mutations in PKs with experimental evidence demonstrating their functional role. Kin-driver is a manually expert-curated database that pays special attention to activating mutations (AMs) and can serve as a validation set to develop new-generation tools focused on the prediction of gain-of-function driver mutations. It also offers an easy and intuitive environment to facilitate the visualization and analysis of mutations in PKs. Because all mutations are mapped onto a multiple sequence alignment, analogous positions between kinases can be identified and tentative new mutations can be proposed for study by annotation transfer. Finally, our database can also be of use to clinical and translational laboratories, helping them to identify uncommon AMs that can correlate with response to new antitumor drugs. The website was developed using PHP and JavaScript, which are supported by all major browsers; the database was built using a MySQL server. Kin-driver is available at: http://kin-driver.leloir.org.ar/ © The Author(s) 2014. Published by Oxford University Press.
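    The alignment-based annotation transfer described above can be sketched as follows: a known mutated position in one kinase is mapped to its alignment column and from there to the analogous residue of another kinase. The aligned sequences and mutated position are invented.

        def col_to_residue(aligned: str) -> dict:
            """Map alignment column -> ungapped residue index (gaps omitted)."""
            mapping, idx = {}, 0
            for col, ch in enumerate(aligned):
                if ch != "-":
                    mapping[col] = idx
                    idx += 1
            return mapping

        def residue_to_col(aligned: str, res_idx: int) -> int:
            return {v: k for k, v in col_to_residue(aligned).items()}[res_idx]

        kinase1 = "MK-LVDEGH"   # known activating mutation at residue 5 (0-based, = E)
        kinase2 = "MKALV-EGH"

        col = residue_to_col(kinase1, 5)
        analog = col_to_residue(kinase2).get(col)  # None if kinase2 is gapped there
        print(f"alignment column {col} -> kinase2 residue {analog}")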

  14. A framework for organizing cancer-related variations from existing databases, publications and NGS data using a High-performance Integrated Virtual Environment (HIVE).

    Science.gov (United States)

    Wu, Tsung-Jung; Shamsaddini, Amirhossein; Pan, Yang; Smith, Krista; Crichton, Daniel J; Simonyan, Vahan; Mazumder, Raja

    2014-01-01

    Years of sequence feature curation by UniProtKB/Swiss-Prot, PIR-PSD, NCBI-CDD, RefSeq and other database biocurators have led to a rich repository of information on functional sites of genes and proteins. This information, along with variation-related annotation, can be used to scan human short sequence reads from next-generation sequencing (NGS) pipelines for the presence of non-synonymous single-nucleotide variations (nsSNVs) that affect functional sites. This and similar workflows are becoming more important because thousands of NGS data sets are being made available through projects such as The Cancer Genome Atlas (TCGA), and researchers want to evaluate their biomarkers in genomic data. BioMuta, an integrated sequence feature database, provides a framework for automated and manual curation and integration of cancer-related sequence features so that they can be used in NGS analysis pipelines. Sequence feature information in BioMuta is collected from the Catalogue of Somatic Mutations in Cancer (COSMIC), ClinVar, UniProtKB and through biocuration of information available from publications. Additionally, nsSNVs identified through automated analysis of NGS data from TCGA are also included in the database. Because of the petabytes of data and information present in NGS primary repositories, a platform, HIVE (High-performance Integrated Virtual Environment), for storing, analyzing, computing and curating NGS data and associated metadata has been developed. Using HIVE, 31 979 nsSNVs were identified in TCGA-derived NGS data from breast cancer patients. All variations identified through this process are stored in a Curated Short Read archive, and the nsSNVs from the tumor samples are included in BioMuta. Currently, BioMuta has 26 cancer types with 13 896 small-scale and 308 986 large-scale study-derived variations. Integration of variation data allows identification of novel or common nsSNVs that can be prioritized in validation studies. Database URL: BioMuta: http
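    At its core, the scanning workflow described above is an interval check: does a variant's protein position fall inside a curated functional site? The coordinates and site list in this sketch are invented placeholders.

        functional_sites = [  # (start, end, description), 1-based protein coordinates
            (25, 32, "ATP-binding motif"),
            (118, 118, "active-site residue"),
        ]

        def sites_hit(position: int, sites=functional_sites) -> list:
            """Return descriptions of curated sites overlapping the variant position."""
            return [desc for start, end, desc in sites if start <= position <= end]

        for pos in (30, 118, 200):
            hits = sites_hit(pos)
            print(pos, "->", hits if hits else "no annotated site")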

  15. The DCC Curation Lifecycle Model

    Directory of Open Access Journals (Sweden)

    Sarah Higgins

    2008-08-01

    Full Text Available Lifecycle management of digital materials is necessary to ensure their continuity. The DCC Curation Lifecycle Model has been developed as a generic, curation-specific, tool which can be used, in conjunction with relevant standards, to plan curation and preservation activities to different levels of granularity. The DCC will use the model: as a training tool for data creators, data curators and data users; to organise and plan their resources; and to help organisations identify risks to their digital assets and plan management strategies for their successful curation.

  16. Biocuration workflows and text mining: overview of the BioCreative 2012 Workshop Track II.

    Science.gov (United States)

    Lu, Zhiyong; Hirschman, Lynette

    2012-01-01

    Manual curation of data from the biomedical literature is a rate-limiting factor for many expert curated databases. Despite the continuing advances in biomedical text mining and the pressing needs of biocurators for better tools, few existing text-mining tools have been successfully integrated into production literature curation systems such as those used by the expert curated databases. To close this gap and better understand all aspects of literature curation, we invited submissions of written descriptions of curation workflows from expert curated databases for the BioCreative 2012 Workshop Track II. We received seven qualified contributions, primarily from model organism databases. Based on these descriptions, we identified commonalities and differences across the workflows, the common ontologies and controlled vocabularies used and the current and desired uses of text mining for biocuration. Compared to a survey done in 2009, our 2012 results show that many more databases are now using text mining in parts of their curation workflows. In addition, the workshop participants identified text-mining aids for finding gene names and symbols (gene indexing), prioritization of documents for curation (document triage) and ontology concept assignment as those most desired by the biocurators. DATABASE URL: http://www.biocreative.org/tasks/bc-workshop-2012/workflow/.

  17. MetaboLights: An Open-Access Database Repository for Metabolomics Data.

    Science.gov (United States)

    Kale, Namrata S; Haug, Kenneth; Conesa, Pablo; Jayseelan, Kalaivani; Moreno, Pablo; Rocca-Serra, Philippe; Nainala, Venkata Chandrasekhar; Spicer, Rachel A; Williams, Mark; Li, Xuefei; Salek, Reza M; Griffin, Julian L; Steinbeck, Christoph

    2016-03-24

    MetaboLights is the first general purpose, open-access database repository for cross-platform and cross-species metabolomics research at the European Bioinformatics Institute (EMBL-EBI). Based upon the open-source ISA framework, MetaboLights provides Metabolomics Standard Initiative (MSI) compliant metadata and raw experimental data associated with metabolomics experiments. Users can upload their study datasets into the MetaboLights Repository. These studies are then automatically assigned a stable and unique identifier (e.g., MTBLS1) that can be used for publication reference. The MetaboLights Reference Layer associates metabolites with metabolomics studies in the archive and is extensively annotated with data fields such as structural and chemical information, NMR and MS spectra, target species, metabolic pathways, and reactions. The database is manually curated with no specific release schedules. MetaboLights is also recommended by journals for metabolomics data deposition. This unit provides a guide to using MetaboLights, downloading experimental data, and depositing metabolomics datasets using user-friendly submission tools. Copyright © 2016 John Wiley & Sons, Inc.

  18. TIPdb: A Database of Anticancer, Antiplatelet, and Antituberculosis Phytochemicals from Indigenous Plants in Taiwan

    Directory of Open Access Journals (Sweden)

    Ying-Chi Lin

    2013-01-01

    Full Text Available The rich indigenous and endemic plant species in Taiwan are attributed to the island's unique geographic features. These plants serve as a resourceful bank of biologically active phytochemicals. Given that these plant-derived chemicals are prototypes of potential drugs for diseases, databases connecting the chemical structures and pharmacological activities may facilitate drug development. To enhance the utility of the data, it is desirable to develop a database of chemical compounds and corresponding activities from indigenous plants in Taiwan. A database of anticancer, antiplatelet, and antituberculosis phytochemicals from indigenous plants in Taiwan was therefore constructed. The database, TIPdb, is composed of a standardized format of published anticancer, antiplatelet, and antituberculosis phytochemicals from indigenous plants in Taiwan. A browse function was implemented for users to browse the database in a taxonomy-based manner. Search functions can be utilized to filter records of interest by botanical name, part, chemical class, or compound name. The structured and searchable database TIPdb was constructed to serve as a comprehensive and standardized resource for anticancer, antiplatelet, and antituberculosis compound searches. The manually curated chemical structures and activities provide a great opportunity to develop quantitative structure-activity relationship models for the high-throughput screening of potential anticancer, antiplatelet, and antituberculosis drugs.

  19. TIPdb: a database of anticancer, antiplatelet, and antituberculosis phytochemicals from indigenous plants in Taiwan.

    Science.gov (United States)

    Lin, Ying-Chi; Wang, Chia-Chi; Chen, Ih-Sheng; Jheng, Jhao-Liang; Li, Jih-Heng; Tung, Chun-Wei

    2013-01-01

    The rich indigenous and endemic plant species in Taiwan are attributed to the island's unique geographic features. These plants serve as a resourceful bank of biologically active phytochemicals. Given that these plant-derived chemicals are prototypes of potential drugs for diseases, databases connecting the chemical structures and pharmacological activities may facilitate drug development. To enhance the utility of the data, it is desirable to develop a database of chemical compounds and corresponding activities from indigenous plants in Taiwan. A database of anticancer, antiplatelet, and antituberculosis phytochemicals from indigenous plants in Taiwan was therefore constructed. The database, TIPdb, is composed of a standardized format of published anticancer, antiplatelet, and antituberculosis phytochemicals from indigenous plants in Taiwan. A browse function was implemented for users to browse the database in a taxonomy-based manner. Search functions can be utilized to filter records of interest by botanical name, part, chemical class, or compound name. The structured and searchable database TIPdb was constructed to serve as a comprehensive and standardized resource for anticancer, antiplatelet, and antituberculosis compound searches. The manually curated chemical structures and activities provide a great opportunity to develop quantitative structure-activity relationship models for the high-throughput screening of potential anticancer, antiplatelet, and antituberculosis drugs.
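    The QSAR use case mentioned in both records can be sketched with scikit-learn: fit a classifier relating molecular descriptors to activity labels. The descriptor columns, values and labels here are invented; a real model would use descriptors computed from the curated TIPdb structures.

        from sklearn.ensemble import RandomForestClassifier

        # columns: molecular weight, logP, H-bond donors (hypothetical descriptors)
        X = [[300.4, 2.1, 1], [452.6, 4.3, 2], [180.2, 0.9, 3],
             [512.7, 5.0, 1], [275.3, 1.8, 2], [398.4, 3.6, 0]]
        y = [1, 1, 0, 1, 0, 0]  # 1 = active in an anticancer assay, 0 = inactive

        model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        print(model.predict([[310.0, 2.4, 1]]))  # screen a new compound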

  20. METHODS OF CONTENTS CURATOR

    Directory of Open Access Journals (Sweden)

    V. Kukharenko

    2013-03-01

    Full Text Available Content curation is a new activity (started in 2008) in which qualified network users process large amounts of information and present it to social network users. To prepare content curators, a seven-week distance course was developed that examines the functions, methods and tools of the curator. The course showed a significant dependence of learning success on the availability of an advanced personal learning environment and the ability to process and analyze information.

  1. Textpresso Central: a customizable platform for searching, text mining, viewing, and curating biomedical literature.

    Science.gov (United States)

    Müller, H-M; Van Auken, K M; Li, Y; Sternberg, P W

    2018-03-09

    The biomedical literature continues to grow at a rapid pace, making the challenge of knowledge retrieval and extraction ever greater. Tools that provide a means to search and mine the full text of literature thus represent an important way by which the efficiency of these processes can be improved. We describe the next generation of the Textpresso information retrieval system, Textpresso Central (TPC). TPC builds on the strengths of the original system by expanding the full text corpus to include the PubMed Central Open Access Subset (PMC OA), as well as the WormBase C. elegans bibliography. In addition, TPC allows users to create a customized corpus by uploading and processing documents of their choosing. TPC is UIMA compliant, to facilitate compatibility with external processing modules, and takes advantage of Lucene indexing and search technology for efficient handling of millions of full text documents. Like Textpresso, TPC searches can be performed using keywords and/or categories (semantically related groups of terms), but to provide better context for interpreting and validating queries, search results may now be viewed as highlighted passages in the context of full text. To facilitate biocuration efforts, TPC also allows users to select text spans from the full text and annotate them, create customized curation forms for any data type, and send resulting annotations to external curation databases. As an example of such a curation form, we describe integration of TPC with the Noctua curation tool developed by the Gene Ontology (GO) Consortium. Textpresso Central is an online literature search and curation platform that enables biocurators and biomedical researchers to search and mine the full text of literature by integrating keyword and category searches with viewing search results in the context of the full text. It also allows users to create customized curation interfaces, use those interfaces to make annotations linked to supporting evidence statements.
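    The keyword-plus-category search idea can be pictured with a toy matcher: a category is a set of semantically related terms, and a query requires a literal keyword together with any category member in the same sentence. The category contents and corpus are invented examples; TPC itself builds on Lucene indexing rather than anything this naive.

        CATEGORIES = {"regulation": {"activates", "represses", "inhibits"}}

        def match(sentence: str, keyword: str, category: str) -> bool:
            """True if the sentence contains the keyword and any category term."""
            words = sentence.lower().split()
            return (keyword.lower() in words
                    and any(term in words for term in CATEGORIES[category]))

        corpus = ["daf-16 activates stress response genes .",
                  "daf-16 is expressed in the intestine ."]
        print([s for s in corpus if match(s, "daf-16", "regulation")])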

  2. ProCarDB: a database of bacterial carotenoids.

    Science.gov (United States)

    Nupur, L N U; Vats, Asheema; Dhanda, Sandeep Kumar; Raghava, Gajendra P S; Pinnaka, Anil Kumar; Kumar, Ashwani

    2016-05-26

    Carotenoids have important functions in bacteria, ranging from harvesting light energy to neutralizing oxidants and acting as virulence factors. However, information pertaining to carotenoids is scattered throughout the literature. Furthermore, information about the genes/proteins involved in the biosynthesis of carotenoids has increased tremendously in the post-genomic era. A web server providing information about microbial carotenoids in a structured manner is required and will be a valuable resource for the scientific community working with microbial carotenoids. Here, we have created a manually curated, open-access, comprehensive compilation of bacterial carotenoids named ProCarDB (Prokaryotic Carotenoid Database). ProCarDB includes 304 unique carotenoids arising from 50 biosynthetic pathways distributed among 611 prokaryotes. ProCarDB provides important information on carotenoids, such as 2D and 3D structures, molecular weight, molecular formula, SMILES, InChI, InChIKey, IUPAC name, KEGG Id, PubChem Id, and ChEBI Id. The database also provides NMR data, UV-vis absorption data, IR data, MS data and HPLC data that play key roles in the identification of carotenoids. An important feature of this database is the extension of biosynthetic pathways from the literature and through the presence of the genes/enzymes in different organisms. The information contained in the database was mined from published literature and databases such as KEGG, PubChem, ChEBI, LipidBank, LPSN, and Uniprot. The database integrates user-friendly browsing and searching with carotenoid analysis tools to help the user. We believe that this database will serve as a major information centre for researchers working on bacterial carotenoids.

  3. Data Albums: An Event Driven Search, Aggregation and Curation Tool for Earth Science

    Science.gov (United States)

    Ramachandran, Rahul; Kulkarni, Ajinkya; Maskey, Manil; Bakare, Rohan; Basyal, Sabin; Li, Xiang; Flynn, Shannon

    2014-01-01

    Approaches used in Earth science research such as case study analysis and climatology studies involve discovering and gathering diverse data sets and information to support the research goals. Gathering relevant data and information for case studies and climatology analyses is both tedious and time consuming. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. In cases where researchers are interested in studying a significant event, they have to manually assemble a variety of datasets relevant to it by searching the different distributed data systems. This paper presents a specialized search, aggregation and curation tool for Earth science to address these challenges. The search tool automatically creates curated 'Data Albums', aggregated collections of information related to a specific event, containing links to relevant data files [granules] from different instruments, tools and services for visualization and analysis, and information about the event contained in news reports, images or videos to supplement research analysis. Curation in the tool is driven by an ontology-based relevancy ranking algorithm that filters out non-relevant information and data.
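
    The abstract does not disclose the actual relevancy ranking algorithm, so the sketch below only illustrates the general idea of ontology-driven filtering: terms from an event ontology carry weights, and a candidate document enters the album only if its weighted score clears a threshold. The term list, weights and threshold are all invented.

        # Invented weight table for a "hurricane" event album; negative weights
        # penalize off-topic uses of otherwise matching terms.
        EVENT_ONTOLOGY = {
            "hurricane": 3.0, "landfall": 2.0, "precipitation": 1.5,
            "wind shear": 1.5, "stock market": -2.0,
        }

        def relevance(text, threshold=2.0):
            """Score a candidate document and decide whether it enters the album."""
            text = text.lower()
            score = sum(w for term, w in EVENT_ONTOLOGY.items() if term in text)
            return score, score >= threshold

        print(relevance("Hurricane Katrina landfall brought record precipitation"))
        print(relevance("Hurricane of selling hits the stock market"))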

  4. ALFRED: An Allele Frequency Database for Microevolutionary Studies

    Directory of Open Access Journals (Sweden)

    Kenneth K Kidd

    2005-01-01

    Many kinds of microevolutionary studies require data on multiple polymorphisms in multiple populations. Increasingly, and especially for human populations, multiple research groups collect relevant data, and those data are dispersed widely in the literature. ALFRED has been designed to hold data from many sources and make them available over the web. Data are assembled from multiple sources, curated, and entered into the database. Multiple links to other resources are also established by the curators. A variety of search options are available, and additional geographically based interfaces are being developed. The database can serve the human anthropological genetics community by identifying which loci are already typed on many populations, thereby helping to focus efforts on a common set of markers. The database can also serve as a model for databases handling similar DNA polymorphism data for other species.

  5. Protein-Protein Interaction Databases

    DEFF Research Database (Denmark)

    Szklarczyk, Damian; Jensen, Lars Juhl

    2015-01-01

    Years of meticulous curation of scientific literature and increasingly reliable computational predictions have resulted in creation of vast databases of protein interaction data. Over the years, these repositories have become a basic framework in which experiments are analyzed and new directions...

  6. HIVsirDB: a database of HIV inhibiting siRNAs.

    Directory of Open Access Journals (Sweden)

    Atul Tyagi

    Human immunodeficiency virus (HIV) is responsible for millions of deaths every year. The current treatment involves the use of multiple antiretroviral agents that may harm patients due to their toxic nature. RNA interference (RNAi), a potent candidate for the future treatment of HIV, uses short interfering RNA (siRNA/shRNA) to silence HIV genes. In this study, attempts have been made to create HIVsirDB, a database of siRNAs responsible for silencing HIV genes. HIVsirDB is a manually curated database of HIV-inhibiting siRNAs that provides comprehensive information about each siRNA or shRNA. Information was collected and compiled from the literature and public resources. This database contains around 750 siRNAs, including 75 partially complementary siRNAs differing by one or more bases from their target sites, and over 100 escape mutant sequences. The HIVsirDB structure contains sixteen fields, including siRNA sequence, HIV strain, targeted genome region, efficacy and conservation of target sequences. To facilitate users, many tools have been integrated into this database, including (i) siRNAmap, for mapping siRNAs on a target sequence; (ii) HIVsirblast, for BLAST searches against the database; and (iii) siRNAalign, for aligning siRNAs. HIVsirDB is a freely accessible database of siRNAs that can silence or degrade HIV genes. It covers 26 types of HIV strains and 28 cell types. This database will be very useful for developing models for predicting the efficacy of HIV-inhibiting siRNAs. In summary, this is a useful resource for researchers working in the field of siRNA-based HIV therapy. The HIVsirDB database is accessible at http://crdd.osdd.net/raghava/hivsir/.
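
    A tool such as siRNAmap must, at minimum, locate an siRNA guide strand on a target sequence via its reverse complement. The sketch below shows that core step in Python; the sequences are invented and the database's real implementation is not described in the abstract.

        # Map an siRNA guide strand onto a DNA target by finding the site whose
        # sequence the guide base-pairs with (its reverse complement).
        COMPLEMENT = str.maketrans("ACGTU", "TGCAA")

        def map_sirna(guide_rna, target_dna):
            """Return 0-based positions where the guide's reverse complement
            occurs in the target."""
            site = guide_rna.upper().translate(COMPLEMENT)[::-1]
            hits, start = [], target_dna.upper().find(site)
            while start != -1:
                hits.append(start)
                start = target_dna.upper().find(site, start + 1)
            return hits

        target = "ATGGGTGCGAGAGCGTCAGTATTAAGCGGG"   # invented fragment
        guide = "CCCGCUUAAUACUGACGCU"               # pairs with positions 11-29
        print(map_sirna(guide, target))             # -> [11]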

  7. The immune epitope database (IEDB) 3.0

    Science.gov (United States)

    Vita, Randi; Overton, James A.; Greenbaum, Jason A.; Ponomarenko, Julia; Clark, Jason D.; Cantrell, Jason R.; Wheeler, Daniel K.; Gabbard, Joseph L.; Hix, Deborah; Sette, Alessandro; Peters, Bjoern

    2015-01-01

    The IEDB, www.iedb.org, contains information on immune epitopes—the molecular targets of adaptive immune responses—curated from the published literature and submitted by National Institutes of Health funded epitope discovery efforts. Between 2004 and 2012, IEDB curation of journal articles published since 1960 caught up to the present day, with >95% of the relevant published literature manually curated, amounting to more than 15 000 journal articles and more than 704 000 experiments to date. The revised curation target since 2012 has been to make recent research findings quickly available in the IEDB and thereby ensure that it continues to be an up-to-date resource. Having gathered a comprehensive dataset in the IEDB, a complete redesign of the query and reporting interface has been performed in the IEDB 3.0 release to improve how end users can access this information in an intuitive and biologically accurate manner. Here we present this most recent release of the IEDB and describe the user testing procedures as well as the use of external ontologies that have enabled it. PMID:25300482

  8. Ambiguity of non-systematic chemical identifiers within and between small-molecule databases.

    Science.gov (United States)

    Akhondi, Saber A; Muresan, Sorel; Williams, Antony J; Kors, Jan A

    2015-01-01

    A wide range of chemical compound databases are currently available for pharmaceutical research. To retrieve compound information, including structures, researchers can query these chemical databases using non-systematic identifiers. These are source-dependent identifiers (e.g., brand names, generic names), which are usually assigned to the compound at the point of registration. The correctness of non-systematic identifiers (i.e., whether an identifier matches the associated structure) can only be assessed manually, which is cumbersome, but it is possible to automatically check their ambiguity (i.e., whether an identifier matches more than one structure). In this study we have quantified the ambiguity of non-systematic identifiers within and between eight widely used chemical databases. We also studied the effect of chemical structure standardization on reducing the ambiguity of non-systematic identifiers. The ambiguity of non-systematic identifiers within databases varied from 0.1 to 15.2 % (median 2.5 %). Standardization reduced the ambiguity only to a small extent for most databases. A wide range of ambiguity existed for non-systematic identifiers that are shared between databases (17.7-60.2 %, median of 40.3 %). Removing stereochemistry information provided the largest reduction in ambiguity across databases (median reduction 13.7 percentage points). Ambiguity of non-systematic identifiers within chemical databases is generally low, but ambiguity of non-systematic identifiers that are shared between databases is high. Chemical structure standardization reduces the ambiguity to a limited extent. Our findings can help to improve database integration, curation, and maintenance.
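
    The ambiguity measure used here is straightforward to compute once structures are reduced to a standardized key: an identifier is ambiguous if it maps to more than one distinct key. A minimal sketch, with invented records standing in for database rows:

        from collections import defaultdict

        # Invented (identifier, standardized-structure-key) pairs; the second
        # epinephrine key could arise, e.g., from a stereochemistry difference.
        records = [
            ("aspirin", "KEY-A"),
            ("aspirin", "KEY-A"),        # same structure: not ambiguous
            ("epinephrine", "KEY-B"),
            ("epinephrine", "KEY-C"),    # different structure: ambiguous
        ]

        structures = defaultdict(set)
        for name, key in records:
            structures[name].add(key)

        ambiguous = [n for n, keys in structures.items() if len(keys) > 1]
        pct = 100.0 * len(ambiguous) / len(structures)
        print(f"ambiguous identifiers: {ambiguous} ({pct:.1f}%)")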

  9. THPdb: Database of FDA-approved peptide and protein therapeutics.

    Directory of Open Access Journals (Sweden)

    Salman Sadullah Usmani

    THPdb (http://crdd.osdd.net/raghava/thpdb/) is a manually curated repository of Food and Drug Administration (FDA)-approved therapeutic peptides and proteins. The information in THPdb has been compiled from 985 research publications, 70 patents and other resources like DrugBank. The current version of the database holds a total of 852 entries, providing comprehensive information on 239 US-FDA-approved therapeutic peptides and proteins and their 380 drug variants. The information on each peptide and protein includes its sequence, chemical properties, composition, disease area, mode of activity, physical appearance, category or pharmacological class, pharmacodynamics, route of administration, toxicity, target of activity, etc. In addition, we have annotated the structures of most of the proteins and peptides. A number of user-friendly tools have been integrated to facilitate easy browsing and data analysis. To assist the scientific community, a web interface and a mobile app have also been developed.

  10. KALIMER design database development and operation manual

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Hahn, Do Hee; Lee, Yong Bum; Chang, Won Pyo

    2000-12-01

    The KALIMER Design Database was developed to support integrated management of Liquid Metal Reactor design technology development using web applications. The KALIMER Design Database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation System, and Reserved Documents. The Results Database holds research results from mid-term and long-term nuclear R&D. IOC is a linkage control system between subprojects for sharing and integrating research results for KALIMER. The 3D CAD Database gives a schematic design overview of KALIMER. The Team Cooperation System informs team members about research cooperation and meetings. Finally, KALIMER Reserved Documents was developed to manage data and documents collected over the course of the project.

  11. KALIMER design database development and operation manual

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Hahn, Do Hee; Lee, Yong Bum; Chang, Won Pyo

    2000-12-01

    The KALIMER Design Database was developed to support integrated management of Liquid Metal Reactor design technology development using web applications. The KALIMER Design Database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation System, and Reserved Documents. The Results Database holds research results from mid-term and long-term nuclear R&D. IOC is a linkage control system between subprojects for sharing and integrating research results for KALIMER. The 3D CAD Database gives a schematic design overview of KALIMER. The Team Cooperation System informs team members about research cooperation and meetings. Finally, KALIMER Reserved Documents was developed to manage data and documents collected over the course of the project.

  12. Gene Name Thesaurus - Gene Name Thesaurus | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    08/lsdba.nbdc00966-001 Description of data contents: Curators with expertise in biological research edit synonym information fields in various gene/genome databases.

  13. Annotation error in public databases: misannotation of molecular function in enzyme superfamilies.

    Directory of Open Access Journals (Sweden)

    Alexandra M Schnoes

    2009-12-01

    Due to the rapid release of new data from genome sequencing projects, the majority of protein sequences in public databases have not been experimentally characterized; rather, sequences are annotated using computational analysis. The level of misannotation and the types of misannotation in large public databases are currently unknown and have not been analyzed in depth. We have investigated the misannotation levels for molecular function in four public protein sequence databases (UniProtKB/Swiss-Prot, GenBank NR, UniProtKB/TrEMBL, and KEGG) for a model set of 37 enzyme families for which extensive experimental information is available. The manually curated database Swiss-Prot shows the lowest annotation error levels (close to 0% for most families); the two other protein sequence databases (GenBank NR and TrEMBL) and the protein sequences in the KEGG pathways database exhibit similar and surprisingly high levels of misannotation that average 5%-63% across the six superfamilies studied. For 10 of the 37 families examined, the level of misannotation in one or more of these databases is >80%. Examination of the NR database over time shows that misannotation has increased from 1993 to 2005. The types of misannotation that were found fall into several categories, most associated with "overprediction" of molecular function. These results suggest that misannotation in enzyme superfamilies containing multiple families that catalyze different reactions is a larger problem than has been recognized. Strategies are suggested for addressing some of the systematic problems contributing to these high levels of misannotation.

  14. Annotation error in public databases: misannotation of molecular function in enzyme superfamilies.

    Science.gov (United States)

    Schnoes, Alexandra M; Brown, Shoshana D; Dodevski, Igor; Babbitt, Patricia C

    2009-12-01

    Due to the rapid release of new data from genome sequencing projects, the majority of protein sequences in public databases have not been experimentally characterized; rather, sequences are annotated using computational analysis. The level of misannotation and the types of misannotation in large public databases are currently unknown and have not been analyzed in depth. We have investigated the misannotation levels for molecular function in four public protein sequence databases (UniProtKB/Swiss-Prot, GenBank NR, UniProtKB/TrEMBL, and KEGG) for a model set of 37 enzyme families for which extensive experimental information is available. The manually curated database Swiss-Prot shows the lowest annotation error levels (close to 0% for most families); the two other protein sequence databases (GenBank NR and TrEMBL) and the protein sequences in the KEGG pathways database exhibit similar and surprisingly high levels of misannotation that average 5%-63% across the six superfamilies studied. For 10 of the 37 families examined, the level of misannotation in one or more of these databases is >80%. Examination of the NR database over time shows that misannotation has increased from 1993 to 2005. The types of misannotation that were found fall into several categories, most associated with "overprediction" of molecular function. These results suggest that misannotation in enzyme superfamilies containing multiple families that catalyze different reactions is a larger problem than has been recognized. Strategies are suggested for addressing some of the systematic problems contributing to these high levels of misannotation.

  15. Curation and Computational Design of Bioenergy-Related Metabolic Pathways

    Energy Technology Data Exchange (ETDEWEB)

    Karp, Peter D. [SRI International, Menlo Park, CA (United States)

    2014-09-12

    Pathway Tools is a systems-biology software package written by SRI International (SRI) that produces Pathway/Genome Databases (PGDBs) for organisms with a sequenced genome. Pathway Tools also provides a wide range of capabilities for analyzing predicted metabolic networks and user-generated omics data. More than 5,000 academic, industrial, and government groups have licensed Pathway Tools. This user community includes researchers at all three DOE bioenergy centers, as well as academic and industrial metabolic engineering (ME) groups. An integral part of the Pathway Tools software is MetaCyc, a large, multiorganism database of metabolic pathways and enzymes that SRI and its academic collaborators manually curate. This project included two main goals: I. Enhance the MetaCyc content of bioenergy-related enzymes and pathways. II. Develop computational tools for engineering metabolic pathways that satisfy specified design goals, in particular for bioenergy-related pathways. In part I, SRI proposed to significantly expand the coverage of bioenergy-related metabolic information in MetaCyc, followed by the generation of organism-specific PGDBs for all energy-relevant organisms sequenced at the DOE Joint Genome Institute (JGI). Part I objectives included: 1: Expand the content of MetaCyc to include bioenergy-related enzymes and pathways. 2: Enhance the Pathway Tools software to enable display of complex polymer degradation processes. 3: Create new PGDBs for the energy-related organisms sequenced by JGI, update existing PGDBs with new MetaCyc content, and make these data available to JBEI via the BioCyc website. In part II, SRI proposed to develop an efficient computational tool for the engineering of metabolic pathways. Part II objectives included: 4: Develop computational tools for generating metabolic pathways that satisfy specified design goals, enabling users to specify parameters such as starting and ending compounds, and preferred or disallowed intermediate compounds.
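
    Objective 4 is essentially a constrained path search over a reaction network. The sketch below illustrates the idea with a breadth-first search that honors a set of disallowed intermediates; the toy network is invented and Pathway Tools' actual route-finding algorithms are far richer.

        from collections import deque

        # Invented toy reaction network: substrate -> products one step away.
        reactions = {
            "glucose": ["glucose-6-P"],
            "glucose-6-P": ["fructose-6-P", "6-P-gluconolactone"],
            "fructose-6-P": ["fructose-1,6-bisP"],
            "fructose-1,6-bisP": ["pyruvate"],
            "6-P-gluconolactone": ["pyruvate"],
        }

        def find_pathway(start, goal, disallowed=frozenset()):
            """Breadth-first search for a route avoiding disallowed compounds."""
            queue = deque([[start]])
            seen = {start}
            while queue:
                path = queue.popleft()
                if path[-1] == goal:
                    return path
                for nxt in reactions.get(path[-1], []):
                    if nxt not in seen and nxt not in disallowed:
                        seen.add(nxt)
                        queue.append(path + [nxt])
            return None

        print(find_pathway("glucose", "pyruvate",
                           disallowed={"6-P-gluconolactone"}))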

  16. Data Curation for the Exploitation of Large Earth Observation Products Databases - The MEA system

    Science.gov (United States)

    Mantovani, Simone; Natali, Stefano; Barboni, Damiano; Cavicchi, Mario; Della Vecchia, Andrea

    2014-05-01

    National Space Agencies under the umbrella of the European Space Agency are working intensively to handle, and provide solutions for, the management and exploitation of Big Data and related knowledge (metadata, software tools and services). The continuously increasing amount of long-term and historic data held in EO facilities in the form of online datasets and archives, the incoming satellite observation platforms that will generate an impressive amount of new data, and the new EU approach to data distribution policy make it necessary to address technologies for the long-term management of these data sets, including their consolidation, preservation, distribution, continuation and curation across multiple missions. The management of long EO data time series from continuing or historic missions - with more than 20 years of data available already today - requires technical solutions and technologies which differ considerably from the ones exploited by existing systems. Several tools, both open source and commercial, already provide technologies to handle data and metadata preparation, access and visualization via OGC standard interfaces. This study describes the Multi-sensor Evolution Analysis (MEA) system and the Data Curation concept as approached and implemented within the ASIM and EarthServer projects, funded by the European Space Agency and the European Commission, respectively.

  17. Sharing and community curation of mass spectrometry data with GNPS

    Science.gov (United States)

    Nguyen, Don Duy; Watrous, Jeramie; Kapono, Clifford A; Luzzatto-Knaan, Tal; Porto, Carla; Bouslimani, Amina; Melnik, Alexey V; Meehan, Michael J; Liu, Wei-Ting; Crüsemann, Max; Boudreau, Paul D; Esquenazi, Eduardo; Sandoval-Calderón, Mario; Kersten, Roland D; Pace, Laura A; Quinn, Robert A; Duncan, Katherine R; Hsu, Cheng-Chih; Floros, Dimitrios J; Gavilan, Ronnie G; Kleigrewe, Karin; Northen, Trent; Dutton, Rachel J; Parrot, Delphine; Carlson, Erin E; Aigle, Bertrand; Michelsen, Charlotte F; Jelsbak, Lars; Sohlenkamp, Christian; Pevzner, Pavel; Edlund, Anna; McLean, Jeffrey; Piel, Jörn; Murphy, Brian T; Gerwick, Lena; Liaw, Chih-Chuang; Yang, Yu-Liang; Humpf, Hans-Ulrich; Maansson, Maria; Keyzers, Robert A; Sims, Amy C; Johnson, Andrew R.; Sidebottom, Ashley M; Sedio, Brian E; Klitgaard, Andreas; Larson, Charles B; P., Cristopher A Boya; Torres-Mendoza, Daniel; Gonzalez, David J; Silva, Denise B; Marques, Lucas M; Demarque, Daniel P; Pociute, Egle; O'Neill, Ellis C; Briand, Enora; Helfrich, Eric J. N.; Granatosky, Eve A; Glukhov, Evgenia; Ryffel, Florian; Houson, Hailey; Mohimani, Hosein; Kharbush, Jenan J; Zeng, Yi; Vorholt, Julia A; Kurita, Kenji L; Charusanti, Pep; McPhail, Kerry L; Nielsen, Kristian Fog; Vuong, Lisa; Elfeki, Maryam; Traxler, Matthew F; Engene, Niclas; Koyama, Nobuhiro; Vining, Oliver B; Baric, Ralph; Silva, Ricardo R; Mascuch, Samantha J; Tomasi, Sophie; Jenkins, Stefan; Macherla, Venkat; Hoffman, Thomas; Agarwal, Vinayak; Williams, Philip G; Dai, Jingqui; Neupane, Ram; Gurr, Joshua; Rodríguez, Andrés M. C.; Lamsa, Anne; Zhang, Chen; Dorrestein, Kathleen; Duggan, Brendan M; Almaliti, Jehad; Allard, Pierre-Marie; Phapale, Prasad; Nothias, Louis-Felix; Alexandrov, Theodore; Litaudon, Marc; Wolfender, Jean-Luc; Kyle, Jennifer E; Metz, Thomas O; Peryea, Tyler; Nguyen, Dac-Trung; VanLeer, Danielle; Shinn, Paul; Jadhav, Ajit; Müller, Rolf; Waters, Katrina M; Shi, Wenyuan; Liu, Xueting; Zhang, Lixin; Knight, Rob; Jensen, Paul R; Palsson, Bernhard O; Pogliano, Kit; Linington, Roger G; Gutiérrez, Marcelino; Lopes, Norberto P; Gerwick, William H; Moore, Bradley S; Dorrestein, Pieter C; Bandeira, Nuno

    2017-01-01

    The potential of the diverse chemistries present in natural products (NP) for biotechnology and medicine remains untapped because NP databases are not searchable with raw data and the NP community has no way to share data other than in published papers. Although mass spectrometry techniques are well-suited to high-throughput characterization of natural products, there is a pressing need for an infrastructure to enable sharing and curation of data. We present Global Natural Products Social molecular networking (GNPS, http://gnps.ucsd.edu), an open-access knowledge base for community-wide organization and sharing of raw, processed or identified tandem mass (MS/MS) spectrometry data. In GNPS, crowdsourced curation of freely available community-wide reference MS libraries will underpin improved annotations. Data-driven social networking should facilitate identification of spectra and foster collaborations. We also introduce the concept of ‘living data’ through continuous reanalysis of deposited data. PMID:27504778

  18. The Degradome database: mammalian proteases and diseases of proteolysis.

    Science.gov (United States)

    Quesada, Víctor; Ordóñez, Gonzalo R; Sánchez, Luis M; Puente, Xose S; López-Otín, Carlos

    2009-01-01

    The degradome is defined as the complete set of proteases present in an organism. The recent availability of whole genomic sequences from multiple organisms has led us to predict the contents of the degradomes of several mammalian species. To ensure the fidelity of these predictions, our methods have included manual curation of individual sequences and, when necessary, direct cloning and sequencing experiments. The results of these studies in human, chimpanzee, mouse and rat have been incorporated into the Degradome database, which can be accessed through a web interface at http://degradome.uniovi.es. The annotations about each individual protease can be retrieved by browsing catalytic classes and families or by searching specific terms. This web site also provides detailed information about genetic diseases of proteolysis, a growing field of great importance for multiple users. Finally, the user can find additional information about protease structures, protease inhibitors, ancillary domains of proteases and differences between mammalian degradomes.

  19. Follicle Online: an integrated database of follicle assembly, development and ovulation.

    Science.gov (United States)

    Hua, Juan; Xu, Bo; Yang, Yifan; Ban, Rongjun; Iqbal, Furhan; Cooke, Howard J; Zhang, Yuanwei; Shi, Qinghua

    2015-01-01

    Folliculogenesis is an important part of ovarian function as it provides the oocytes for female reproductive life. Characterizing the genes/proteins involved in folliculogenesis is fundamental for understanding the mechanisms associated with this biological function and for treating the diseases associated with folliculogenesis. A large number of genes/proteins associated with folliculogenesis have been identified from different species. However, no dedicated public resource is currently available for folliculogenesis-related genes/proteins that are validated by experiments. Here, we report 'Follicle Online', a database that provides an experimentally validated gene/protein map of folliculogenesis in a number of species. Follicle Online is a web-based database system for storing and retrieving folliculogenesis-related experimental data. It provides detailed information for 580 genes/proteins (from 23 model organisms, including Homo sapiens, Mus musculus, Rattus norvegicus, Mesocricetus auratus, Bos taurus, Drosophila and Xenopus laevis) that have been reported to be involved in folliculogenesis, POF (premature ovarian failure) and PCOS (polycystic ovary syndrome). The literature was manually curated from more than 43,000 published articles (until 1 March 2014). The Follicle Online database is implemented in PHP + MySQL + JavaScript, and this user-friendly web application provides access to the stored data. In summary, we have developed a centralized database that provides users with comprehensive information about the genes/proteins involved in folliculogenesis. This database can be accessed freely and all the stored data can be viewed without any registration. Database URL: http://mcg.ustc.edu.cn/sdap1/follicle/index.php © The Author(s) 2015. Published by Oxford University Press.

  20. tagtog: interactive and text-mining-assisted annotation of gene mentions in PLOS full-text articles.

    Science.gov (United States)

    Cejuela, Juan Miguel; McQuilton, Peter; Ponting, Laura; Marygold, Steven J; Stefancsik, Raymund; Millburn, Gillian H; Rost, Burkhard

    2014-01-01

    The breadth and depth of biomedical literature are increasing year upon year. To keep abreast of these increases, FlyBase, a database for Drosophila genomic and genetic information, is constantly exploring new ways to mine the published literature to increase the efficiency and accuracy of manual curation and to automate some aspects, such as triaging and entity extraction. Toward this end, we present the 'tagtog' system, a web-based annotation framework that can be used to mark up biological entities (such as genes) and concepts (such as Gene Ontology terms) in full-text articles. tagtog leverages manual user annotation in combination with automatic machine-learned annotation to provide accurate identification of gene symbols and gene names. As part of the BioCreative IV Interactive Annotation Task, FlyBase has used tagtog to identify and extract mentions of Drosophila melanogaster gene symbols and names in full-text biomedical articles from the PLOS stable of journals. We show here the results of three experiments with different sized corpora and assess gene recognition performance and curation speed. We conclude that tagtog's named entity recognition improves with a larger corpus and that tagtog-assisted curation is quicker than manual curation. Database URL: www.tagtog.net, www.flybase.org.
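
    A much-simplified stand-in for the lookup component of such a system is shown below; the mini-lexicon is invented. It also illustrates why Drosophila gene symbols that double as common English words (e.g. 'white') make pure dictionary tagging error-prone and a machine-learned layer, as in tagtog, attractive.

        import re

        # Invented mini-lexicon of Drosophila gene symbols; FlyBase's real
        # lexicon is far larger, and tagtog layers a learned model on top.
        GENE_SYMBOLS = {"dpp", "wg", "hh", "white", "notch"}

        def tag_gene_mentions(text):
            """Return (start, end, token) spans whose token matches a symbol."""
            spans = []
            for m in re.finditer(r"[A-Za-z][A-Za-z0-9-]*", text):
                if m.group(0).lower() in GENE_SYMBOLS:
                    spans.append((m.start(), m.end(), m.group(0)))
            return spans

        text = "Expression of dpp and wg is lost in white mutant clones."
        print(tag_gene_mentions(text))
        # 'white' here is the gene, but plain lookup would also tag the colour
        # white in other sentences; disambiguating such cases is what the
        # machine-learned component is for.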

  1. Improving transcriptome construction in non-model organisms: integrating manual and automated gene definition in Emiliania huxleyi.

    Science.gov (United States)

    Feldmesser, Ester; Rosenwasser, Shilo; Vardi, Assaf; Ben-Dor, Shifra

    2014-02-22

    The advent of Next Generation Sequencing technologies and corresponding bioinformatics tools allows the definition of transcriptomes in non-model organisms. Non-model organisms are of great ecological and biotechnological significance, and consequently the understanding of their unique metabolic pathways is essential. Several methods that integrate de novo assembly with genome-based assembly have been proposed. Yet, there are many open challenges in defining genes, particularly where genomes are not available or incomplete. Despite the large numbers of transcriptome assemblies that have been performed, quality control of the transcript building process, particularly on the protein level, is rarely if ever performed. To test and improve the quality of the automated transcriptome reconstruction, we used manually defined and curated genes, several of them experimentally validated. Several approaches to transcript construction were utilized, based on the available data: a draft genome, high quality RNAseq reads, and ESTs. In order to maximize the contribution of the various data, we integrated methods including de novo and genome based assembly, as well as EST clustering. After each step a set of manually curated genes was used for quality assessment of the transcripts. The interplay between the automated pipeline and the quality control indicated which additional processes were required to improve the transcriptome reconstruction. We discovered that E. huxleyi has a very high percentage of non-canonical splice junctions, and relatively high rates of intron retention, which caused unique issues with the currently available tools. While individual tools missed genes and artificially joined overlapping transcripts, combining the results of several tools improved the completeness and quality considerably. The final collection, created from the integration of several quality control and improvement rounds, was compared to the manually defined set both on the DNA and protein levels.
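
    The quality-control step described above can be reduced to a simple question: how many manually curated sequences are recovered intact by the automated assembly? A minimal sketch with invented placeholder sequences:

        # Fraction of curated sequences contained verbatim in any assembled
        # transcript; real pipelines would align rather than string-match.
        curated = {
            "geneA": "ATGGCTTTGA",
            "geneB": "ATGCCCAAATAG",
        }
        assembled = [
            "TTTATGGCTTTGACCC",      # contains geneA
            "ATGCCCAAA",             # geneB truncated
        ]

        recovered = {g for g, seq in curated.items()
                     if any(seq in t for t in assembled)}
        print(f"recovered {len(recovered)}/{len(curated)}: {sorted(recovered)}")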

  2. Structuring osteosarcoma knowledge: an osteosarcoma-gene association database based on literature mining and manual annotation.

    Science.gov (United States)

    Poos, Kathrin; Smida, Jan; Nathrath, Michaela; Maugg, Doris; Baumhoer, Daniel; Neumann, Anna; Korsching, Eberhard

    2014-01-01

    Osteosarcoma (OS) is the most common primary bone cancer, exhibiting high genomic instability. This genomic instability affects multiple genes and microRNAs to a varying extent depending on patient and tumor subtype. Massive research is ongoing to identify genes, including their gene products, and microRNAs that correlate with disease progression and might be used as biomarkers for OS. However, the genomic complexity hampers the identification of reliable biomarkers. Up to now, clinico-pathological factors have been the key determinants guiding prognosis and therapeutic treatments. Each day, new studies about OS are published and complicate the acquisition of information to support biomarker discovery and therapeutic improvements. Thus, it is necessary to provide a structured and annotated view of the current OS knowledge that is quickly and easily accessible to researchers of the field. Therefore, we developed a publicly available database and Web interface that serves as a resource for OS-associated genes and microRNAs. Genes and microRNAs were collected using an automated dictionary-based gene recognition procedure followed by manual review and annotation by experts of the field. In total, 911 genes and 81 microRNAs related to 1331 PubMed abstracts were collected (last update: 29 October 2013). Users can evaluate genes and microRNAs according to their potential prognostic and therapeutic impact, the experimental procedures, the sample types, the biological contexts and microRNA target gene interactions. Additionally, a pathway enrichment analysis of the collected genes highlights different aspects of OS progression. OS requires pathways commonly deregulated in cancer but also features OS-specific alterations like deregulated osteoclast differentiation. To our knowledge, this is the first effort of an OS database containing manually reviewed and annotated up-to-date OS knowledge. It might be a useful resource especially for the bone tumor research community, as specific

  3. Human transporter database: comprehensive knowledge and discovery tools in the human transporter genes.

    Directory of Open Access Journals (Sweden)

    Adam Y Ye

    Transporters are essential in the homeostatic exchange of endogenous and exogenous substances at the systemic, organ, cellular and subcellular levels. Gene mutations of transporters are often related to pharmacogenetic traits. Recent developments in high-throughput technologies for genomics, transcriptomics and proteomics allow in-depth studies of transporter genes in normal cellular processes and diverse disease conditions. The flood of high-throughput data has resulted in an urgent need for an updated knowledgebase with curated, organized, and annotated human transporters in an easily accessible way. Using a pipeline combining automated keyword queries, sequence similarity searches and manual curation of transporters, we collected 1,555 human non-redundant transporter genes to develop the Human Transporter Database (HTD) (http://htd.cbi.pku.edu.cn). Based on the extensive annotations, global properties of the transporter genes were illustrated, such as expression patterns and polymorphisms in relationship with their ligands. We noted that the human transporters were enriched in many fundamental biological processes such as oxidative phosphorylation and cardiac muscle contraction, and were significantly associated with Mendelian and complex diseases such as epilepsy and sudden infant death syndrome. Overall, HTD provides a well-organized interface to facilitate research communities in searching detailed molecular and genetic information on transporters for the development of personalized medicine.

  4. Constructing Data Curation Profiles

    Directory of Open Access Journals (Sweden)

    Michael Witt

    2009-12-01

    This paper presents a brief literature review and then introduces the methods, design, and construction of the Data Curation Profile, an instrument that can be used to provide detailed information on particular data forms that might be curated by an academic library. These data forms are presented in the context of the related sub-disciplinary research area, and they provide the flow of the research process from which these data are generated. The profiles also represent the needs for data curation from the perspective of the data producers, using their own language. As such, they support the exploration of data curation across different research domains in real and practical terms. With the sponsorship of the Institute of Museum and Library Services, investigators from Purdue University and the University of Illinois interviewed 19 faculty subjects to identify needs for discovery, access, preservation, and reuse of their research data. For each subject, a profile was constructed that includes information about his or her general research, data forms and stages, value of data, data ingest, intellectual property, organization and description of data, tools, interoperability, impact and prestige, data management, and preservation. Each profile also presents a specific dataset supplied by the subject to serve as a concrete example. The Data Curation Profiles are being published to a public wiki for questions and discussion, and a blank template will be disseminated with guidelines for others to create and share their own profiles. This study was conducted primarily from the viewpoint of librarians interacting with faculty researchers; however, it is expected that these findings will complement a wide variety of data curation research and practice outside of librarianship and the university environment.

  5. Ginseng Genome Database: an open-access platform for genomics of Panax ginseng.

    Science.gov (United States)

    Jayakodi, Murukarthick; Choi, Beom-Soon; Lee, Sang-Choon; Kim, Nam-Hoon; Park, Jee Young; Jang, Woojong; Lakshmanan, Meiyappan; Mohan, Shobhana V G; Lee, Dong-Yup; Yang, Tae-Jin

    2018-04-12

    Ginseng (Panax ginseng C.A. Meyer) is a perennial herbaceous plant that has been used in traditional oriental medicine for thousands of years. Ginsenosides, which have significant pharmacological effects on human health, are the foremost bioactive constituents of this plant. Given the importance of this plant to humans, an integrated omics resource is indispensable for facilitating genomic research, molecular breeding and pharmacological study of this herb. The first draft genome sequences of the P. ginseng cultivar "Chunpoong" were reported recently. Here, using the draft genome, transcriptome, and functional annotation datasets of P. ginseng, we have constructed the Ginseng Genome Database (http://ginsengdb.snu.ac.kr/), the first open-access platform to provide comprehensive genomic resources for P. ginseng. The current version of this database provides the most up-to-date draft genome sequence (approximately 3000 Mbp of scaffold sequences) along with the structural and functional annotations for 59,352 genes and digital expression of genes based on transcriptome data from different tissues, growth stages and treatments. In addition, tools for visualization and the genomic data from various analyses are provided. All data in the database were manually curated and integrated within a user-friendly query page. This database provides valuable resources for a range of research fields related to P. ginseng and other species belonging to the order Apiales, as well as for plant research communities in general. The Ginseng Genome Database can be accessed at http://ginsengdb.snu.ac.kr/.

  6. Human Ageing Genomic Resources: Integrated databases and tools for the biology and genetics of ageing

    Science.gov (United States)

    Tacutu, Robi; Craig, Thomas; Budovsky, Arie; Wuttke, Daniel; Lehmann, Gilad; Taranukha, Dmitri; Costa, Joana; Fraifeld, Vadim E.; de Magalhães, João Pedro

    2013-01-01

    The Human Ageing Genomic Resources (HAGR, http://genomics.senescence.info) is a freely available online collection of research databases and tools for the biology and genetics of ageing. HAGR now features several databases with high-quality manually curated data: (i) GenAge, a database of genes associated with ageing in humans and model organisms; (ii) AnAge, an extensive collection of longevity records and complementary traits for >4000 vertebrate species; and (iii) GenDR, a newly incorporated database, containing both gene mutations that interfere with dietary restriction-mediated lifespan extension and consistent gene expression changes induced by dietary restriction. Since its creation about 10 years ago, major efforts have been undertaken to maintain the quality of data in HAGR, while further continuing to develop, improve and extend it. This article briefly describes the content of HAGR and details the major updates since its previous publications, in terms of both structure and content. The completely redesigned interface, more intuitive and more integrative of HAGR resources, is also presented. Altogether, we hope that through its improvements, the current version of HAGR will continue to provide users with the most comprehensive and accessible resources available today in the field of biogerontology. PMID:23193293

  7. DEPOT database: Reference manual and user's guide

    International Nuclear Information System (INIS)

    Clancey, P.; Logg, C.

    1991-03-01

    DEPOT has been developed to provide tracking for the Stanford Linear Collider (SLC) control system equipment. For each piece of equipment entered into the database, complete location, service, maintenance, modification, certification, and radiation exposure histories can be maintained. To facilitate data entry accuracy, efficiency, and consistency, barcoding technology has been used extensively. DEPOT has been an important tool in improving the reliability of the microsystems controlling SLC. This document describes the components of the DEPOT database, the elements in the database records, and the use of the supporting programs for entering data, searching the database, and producing reports from the information.

  8. The Pathogen-Host Interactions database (PHI-base): additions and future developments.

    Science.gov (United States)

    Urban, Martin; Pant, Rashmi; Raghunath, Arathi; Irvine, Alistair G; Pedro, Helder; Hammond-Kosack, Kim E

    2015-01-01

    Rapidly evolving pathogens cause a diverse array of diseases and epidemics that threaten crop yield, food security as well as human, animal and ecosystem health. To combat infection, greater comparative knowledge is required on the pathogenic process in multiple species. The Pathogen-Host Interactions database (PHI-base) catalogues experimentally verified pathogenicity, virulence and effector genes from bacterial, fungal and protist pathogens. Mutant phenotypes are associated with gene information. The included pathogens infect a wide range of hosts including humans, animals, plants, insects, fish and other fungi. The current version, PHI-base 3.6, available at http://www.phi-base.org, stores information on 2875 genes, 4102 interactions, 110 host species, 160 pathogenic species (103 plant-, 3 fungal- and 54 animal-infecting species) and 181 diseases drawn from 1243 references. Phenotypic and gene function information has been obtained by manual curation of the peer-reviewed literature. A controlled vocabulary consisting of nine high-level phenotype terms permits comparisons and data analysis across the taxonomic space. PHI-base phenotypes were mapped via their associated gene information to reference genomes available in Ensembl Genomes. Virulence genes and hotspots can be visualized directly in genome browsers. Future plans for PHI-base include development of tools facilitating community-led curation and inclusion of the corresponding host target(s). © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  9. Affiliation to the work market after curative treatment of head-and-neck cancer: a population-based study from the DAHANCA database.

    Science.gov (United States)

    Kjær, Trille; Bøje, Charlotte Rotbøl; Olsen, Maja Halgren; Overgaard, Jens; Johansen, Jørgen; Ibfelt, Else; Steding-Jessen, Marianne; Johansen, Christoffer; Dalton, Susanne O

    2013-02-01

    Survivors of squamous cell carcinoma of the head and neck (HNSCC) are more severely affected in regard to affiliation to the work market than other cancer survivors. Few studies have investigated associations between socioeconomic and disease-related factors and work market affiliation after curative treatment of HNSCC. We investigated the factors for early retirement pension due to disability and unemployment in patients who had been available for work one year before diagnosis. In a nationwide, population-based cohort study, data on 2436 HNSCC patients treated curatively in 1992-2008 were obtained from the Danish Head and Neck Cancer Group database and linked to Danish administrative population-based registries to obtain demographic and socioeconomic variables. We used multivariate logistic regression models to assess associations between socioeconomic factors (education, income and cohabitating status), cancer-specific variables such as tumour site and stage, comorbidity, early retirement pension and unemployment, with adjustment for age, gender and year of diagnosis. Short education [odds ratio (OR) 4.8; 95% confidence interval (CI) 2.2-10.4], low income (OR 3.2; 95% CI 1.8-5.8), living alone (OR 3.0; 95% CI 2.1-4.4) and having a Charlson comorbidity index score of 3 or more (OR 5.9; 95% CI 3.1-11) were significantly associated with early retirement overall and in all site groups. For the subgroup of patients who were employed before diagnosis, the risk pattern was similar. Tumour stage was not associated with early retirement or unemployment. Cancer-related factors were less strongly associated with early retirement and unemployment than socioeconomic factors and comorbidity. Clinicians treating HNSCC patients should be aware of the socioeconomic factors related to work market affiliation in order to provide more intensive social support or targeted rehabilitation for this patient group.

  10. Morbidity of curative cancer surgery and suicide risk.

    Science.gov (United States)

    Jayakrishnan, Thejus T; Sekigami, Yurie; Rajeev, Rahul; Gamblin, T Clark; Turaga, Kiran K

    2017-11-01

    Curative cancer operations lead to debility and loss of autonomy in a population vulnerable to suicide death. The extent to which operative intervention impacts suicide risk is not well studied. To examine the effects of morbidity of curative cancer surgeries and prognosis of disease on the risk of suicide in patients with solid tumors. Retrospective cohort study using Surveillance, Epidemiology, and End Results data from 2004 to 2011; multilevel systematic review. General US population. Participants were 482 781 patients diagnosed with malignant neoplasm between 2004 and 2011 who underwent curative cancer surgeries. Death by suicide or self-inflicted injury. Among 482 781 patients who underwent curative cancer surgery, 231 committed suicide (16.58/100 000 person-years [95% confidence interval, CI, 14.54-18.82]). Factors significantly associated with suicide risk included male sex (incidence rate [IR], 27.62; 95% CI, 23.82-31.86) and age >65 years (IR, 22.54; 95% CI, 18.84-26.76). When stratified by 30-day overall postoperative morbidity, a significantly higher incidence of suicide was found for high-morbidity surgeries (IR, 33.30; 95% CI, 26.50-41.33) vs moderate morbidity (IR, 24.27; 95% CI, 18.92-30.69) and low morbidity (IR, 9.81; 95% CI, 7.90-12.04). A unit increase in morbidity was significantly associated with death by suicide (odds ratio, 1.01; 95% CI, 1.00-1.03; P = .02) and decreased suicide-specific survival (hazard ratio, 1.02; 95% CI, 1.00-1.03; P = .01) in prognosis-adjusted models. In this sample of cancer patients in the Surveillance, Epidemiology, and End Results database, patients who underwent high-morbidity surgeries appeared most vulnerable to death by suicide. The identification of this high-risk cohort should motivate health care providers, and particularly surgeons, to adopt screening measures during the postoperative follow-up period for these patients. Copyright © 2016 John Wiley & Sons, Ltd.

  11. Using distant supervised learning to identify protein subcellular localizations from full-text scientific articles.

    Science.gov (United States)

    Zheng, Wu; Blake, Catherine

    2015-10-01

    Databases of curated biomedical knowledge, such as the protein-locations reflected in the UniProtKB database, provide an accurate and useful resource to researchers and decision makers. Our goal is to augment the manual efforts currently used to curate knowledge bases with automated approaches that leverage the increased availability of full-text scientific articles. This paper describes experiments that use distant supervised learning to identify protein subcellular localizations, which are important to understand protein function and to identify candidate drug targets. Experiments consider Swiss-Prot, the manually annotated subset of the UniProtKB protein knowledge base, and 43,000 full-text articles from the Journal of Biological Chemistry that contain just under 11.5 million sentences. The system achieves 0.81 precision and 0.49 recall at sentence level and an accuracy of 57% on held-out instances in a test set. Moreover, the approach identifies 8210 instances that are not in the UniProtKB knowledge base. Manual inspection of the 50 most likely relations showed that 41 (82%) were valid. These results have immediate benefit to researchers interested in protein function, and suggest that distant supervision should be explored to complement other manual data curation efforts. Copyright © 2015 Elsevier Inc. All rights reserved.
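
    The distant supervision step can be illustrated compactly: any sentence mentioning both a protein and its knowledge-base-annotated compartment is taken as a positive training example, with no manual sentence labeling. The knowledge-base pairs and sentences below are invented.

        # Distant supervision: label sentences using known (protein, location)
        # pairs from a curated knowledge base such as UniProtKB.
        KB = {("p53", "nucleus"), ("cytochrome c", "mitochondrion")}

        sentences = [
            "p53 accumulates in the nucleus after DNA damage.",
            "p53 levels were measured by western blot.",
            "Cytochrome c is released from the mitochondrion during apoptosis.",
        ]

        def distant_labels(sentences, kb):
            labeled = []
            for s in sentences:
                low = s.lower()
                hit = any(p in low and loc in low for p, loc in kb)
                labeled.append((s, 1 if hit else 0))
            return labeled

        for sentence, label in distant_labels(sentences, KB):
            print(label, sentence)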

  12. Curative effects of small incision cataract surgery versus phacoemulsification: a Meta-analysis

    Directory of Open Access Journals (Sweden)

    Chang-Jian Yang

    2013-08-01

    AIM: To evaluate the curative efficacy of small incision cataract surgery (SICS) versus phacoemulsification (Phaco). METHODS: A computerized literature search was carried out in the Chinese Biomedical Database (CBM), Wanfang Data, VIP and the Chinese National Knowledge Infrastructure (CNKI) to collect articles published between 1989 and 2013 concerning the curative efficacy of SICS versus Phaco. The studies were assessed in terms of clinical case-control criteria. Meta-analyses were performed to assess the visual acuity and the complication rates of SICS and Phaco 90 days after surgery. Treatment effects were measured as the risk difference (RD) between SICS and Phaco. Fixed and random effects models were employed to combine results after a heterogeneity test. RESULTS: A total of 8 studies were included in our meta-analysis. At 90 days after surgery, there were no significant differences between the two groups in visual acuity >0.5 (P=0.14), and no significant differences in the complication rates of corneal astigmatism, corneal edema, posterior capsular rupture and anterior iris reaction (P>0.05). CONCLUSION: These results suggest that there is no difference in the curative effects of SICS and Phaco for cataract.
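
    The pooled estimates behind such results are typically obtained by inverse-variance weighting of per-study risk differences (the fixed-effect model mentioned above). A minimal sketch with two invented studies, not data from this meta-analysis:

        def risk_difference(e1, n1, e2, n2):
            """RD between group 1 and group 2 with its (large-sample) variance."""
            p1, p2 = e1 / n1, e2 / n2
            rd = p1 - p2
            var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2
            return rd, var

        # Invented studies: (events in group 1, n1, events in group 2, n2).
        studies = [(12, 100, 10, 100), (8, 80, 9, 80)]

        weights, rds = [], []
        for e1, n1, e2, n2 in studies:
            rd, var = risk_difference(e1, n1, e2, n2)
            rds.append(rd)
            weights.append(1.0 / var)   # inverse-variance weight

        pooled = sum(w * rd for w, rd in zip(weights, rds)) / sum(weights)
        print(f"pooled RD = {pooled:+.3f}")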

  13. Digital Management and Curation of the National Rock and Ore Collections at NMNH, Smithsonian

    Science.gov (United States)

    Cottrell, E.; Andrews, B.; Sorensen, S. S.; Hale, L. J.

    2011-12-01

    The National Museum of Natural History, Smithsonian Institution, is home to the world's largest curated rock collection. The collection houses 160,680 physical rock and ore specimen lots ("samples"), all of which already have a digital record that can be accessed by the public through a searchable web interface (http://collections.mnh.si.edu/search/ms/). In addition, there are 66 accessions pending that, when catalogued, will add approximately 60,000 specimen lots. NMNH's collections are digitally managed on the KE EMu platform, which has emerged as the premier system for managing collections in natural history museums worldwide. In 2010 the Smithsonian released an ambitious 5-year Digitization Strategic Plan. In Mineral Sciences, new digitization efforts in the next five years will focus on integrating various digital resources for volcanic specimens. EMu sample records will link to the corresponding records for physical eruption information housed within the database of Smithsonian's Global Volcanism Program (GVP). Linkages are also planned between our digital records and geochemical databases (like EarthChem or PetDB) maintained by third parties. We anticipate that these linkages will increase the use of NMNH collections as well as engender new scholarly directions for research. Another large project the museum is currently undertaking involves the integration of the functionality of in-house designed Transaction Management software with the EMu database. This will allow access to the details (borrower, quantity, date, and purpose) of all loans of a given specimen through its catalogue record. We hope this will enable cross-referencing and fertilization of research ideas while avoiding duplicate efforts. While these digitization efforts are critical, we propose that the greatest challenge to sample curation is not posed by digitization and that a global sample registry alone will not ensure that samples are available for reuse. We suggest instead that the ability

  14. Curation Micro-Services: A Pipeline Metaphor for Repositories

    OpenAIRE

    Abrams, Stephen; Cruse, Patricia; Kunze, John; Minor, David

    2010-01-01

    The effective long-term curation of digital content requires expert analysis, policy setting, and decision making, and a robust technical infrastructure that can effect and enforce curation policies and implement appropriate curation activities. Since the number, size, and diversity of content under curation management will undoubtedly continue to grow over time, and the state of curation understanding and best practices relative to that content will undergo a similar constant evolution, one ...

  15. InverPep: A database of invertebrate antimicrobial peptides.

    Science.gov (United States)

    Gómez, Esteban A; Giraldo, Paula; Orduz, Sergio

    2017-03-01

    The aim of this work was to construct InverPep, a database specialised in experimentally validated antimicrobial peptides (AMPs) from invertebrates. AMP data contained in InverPep were manually curated from other databases and the scientific literature. MySQL was integrated with the development platform Laravel; this framework allows PHP code to be integrated with HTML and was used to design the InverPep web page's interface. InverPep contains 18 separate fields, including InverPep code, phylum and species source, peptide name, sequence, peptide length, secondary structure, molar mass, charge, isoelectric point, hydrophobicity, Boman index, aliphatic index and percentage of hydrophobic amino acids. CALCAMPI, an algorithm to calculate the physicochemical properties of multiple peptides simultaneously, was programmed in the Perl language. To date, InverPep contains 702 experimentally validated AMPs from invertebrate species. All of the peptides contain information associated with their source, physicochemical properties, secondary structure, biological activity and links to external literature. Most AMPs in InverPep have a length between 10 and 50 amino acids, a positive charge, a Boman index between 0 and 2 kcal/mol, and 30-50% hydrophobic amino acids. InverPep includes 33 AMPs not reported in other databases. In addition, CALCAMPI and a statistical analysis of the InverPep data are presented. The InverPep database is available in English and Spanish. InverPep is a useful database for studying invertebrate AMPs, and its information could be used for the design of new peptides. The user-friendly interface of InverPep and its information can be freely accessed via a web-based browser at http://ciencias.medellin.unal.edu.co/gruposdeinvestigacion/prospeccionydisenobiomoleculas/InverPep/public/home_en. Copyright © 2016 International Society for Chemotherapy of Infection and Cancer. Published by Elsevier Ltd. All rights reserved.
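
    A simplified stand-in for CALCAMPI-style property calculation is sketched below: peptide length, an approximate net charge at neutral pH (counting K/R as +1 and D/E as -1), and the percentage of hydrophobic residues. The real tool computes more properties and with more care; the test sequence is invented.

        # Crude physicochemical properties for a peptide sequence.
        HYDROPHOBIC = set("AILMFWVC")

        def peptide_properties(seq):
            seq = seq.upper()
            charge = (sum(seq.count(a) for a in "KR")
                      - sum(seq.count(a) for a in "DE"))
            hydro_pct = 100.0 * sum(1 for a in seq if a in HYDROPHOBIC) / len(seq)
            return {"length": len(seq), "net_charge": charge,
                    "hydrophobic_pct": round(hydro_pct, 1)}

        print(peptide_properties("KWKLFKKIGAVLKVL"))  # invented AMP-like sequence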

  16. Earth System Model Development and Analysis using FRE-Curator and Live Access Servers: On-demand analysis of climate model output with data provenance.

    Science.gov (United States)

    Radhakrishnan, A.; Balaji, V.; Schweitzer, R.; Nikonov, S.; O'Brien, K.; Vahlenkamp, H.; Burger, E. F.

    2016-12-01

    There are distinct phases in the development cycle of an Earth system model. During the model development phase, scientists make changes to code and parameters and require rapid access to results for evaluation. During the production phase, scientists may make an ensemble of runs with different settings, and produce large quantities of output that must be further analyzed and quality controlled for scientific papers and submission to international projects such as the Climate Model Intercomparison Project (CMIP). During this phase, provenance is a key concern: being able to track back from outputs to inputs. We will discuss one of the paths taken at GFDL in delivering tools across this lifecycle, offering on-demand analysis of data by integrating the use of GFDL's in-house FRE-Curator, Unidata's THREDDS and NOAA PMEL's Live Access Servers (LAS). Experience over this lifecycle suggests that a major difficulty in developing analysis capabilities lies only partially in the scientific content; much of the effort is devoted to answering the questions "where is the data?" and "how do I get to it?". "FRE-Curator" is the name of a database-centric paradigm used at NOAA GFDL to ingest information about the model runs into an RDBMS (Curator database). The components of FRE-Curator are integrated into the Flexible Runtime Environment workflow and can be invoked during climate model simulation. The front end to FRE-Curator, known as the Model Development Database Interface (MDBI), provides in-house web-based access to GFDL experiments: metadata, analysis output and more. In order to provide on-demand visualization, MDBI uses the Live Access Server, a highly configurable web server designed to provide flexible access to geo-referenced scientific data, which makes use of OPeNDAP. Model output saved in GFDL's tape archive, the size of the database and experiments, and continuous model development initiatives with more dynamic configurations add complexity and challenges in providing an on-demand analysis service.

  17. Reactome graph database: Efficient access to complex pathway data

    Science.gov (United States)

    Korninger, Florian; Viteri, Guilherme; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D’Eustachio, Peter

    2018-01-01

    Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types. PMID:29377902
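
    As a sketch of what programmatic access through the ContentService looks like, the snippet below fetches one entry by its stable identifier using only the Python standard library. The endpoint path (/data/query/{id}) and the returned field names reflect the public Reactome API as we understand it, and the identifier is just an assumed example; verify both against the current service documentation.

        import json
        import urllib.request

        STABLE_ID = "R-HSA-69278"  # assumed example stable identifier
        url = f"https://reactome.org/ContentService/data/query/{STABLE_ID}"

        with urllib.request.urlopen(url) as resp:
            entry = json.load(resp)  # one database object as JSON

        print(entry.get("displayName"), "-", entry.get("schemaClass"))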

  18. Reactome graph database: Efficient access to complex pathway data.

    Directory of Open Access Journals (Sweden)

    Antonio Fabregat

    2018-01-01

    Full Text Available Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types.

  19. Reactome graph database: Efficient access to complex pathway data.

    Science.gov (United States)

    Fabregat, Antonio; Korninger, Florian; Viteri, Guilherme; Sidiropoulos, Konstantinos; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D'Eustachio, Peter; Hermjakob, Henning

    2018-01-01

    Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types.
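
    Since this record highlights Cypher as the query language for the Neo4j data model, a complementary sketch is a direct graph query against a locally installed copy of the Reactome graph database via the official neo4j Python driver. The connection URI and credentials are placeholders, and although the node label and properties used here (Pathway, stId, displayName, speciesName) follow the published Reactome data model, they should be checked against your installation.

        from neo4j import GraphDatabase

        driver = GraphDatabase.driver("bolt://localhost:7687",
                                      auth=("neo4j", "neo4j"))  # placeholders
        query = """
        MATCH (p:Pathway {speciesName: 'Homo sapiens'})
        RETURN p.stId AS id, p.displayName AS name
        LIMIT 5
        """
        with driver.session() as session:
            for record in session.run(query):
                print(record["id"], record["name"])
        driver.close()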

  20. Saccharomyces genome database informs human biology

    OpenAIRE

    Skrzypek, Marek S; Nash, Robert S; Wong, Edith D; MacPherson, Kevin A; Hellerstedt, Sage T; Engel, Stacia R; Karra, Kalpana; Weng, Shuai; Sheppard, Travis K; Binkley, Gail; Simison, Matt; Miyasato, Stuart R; Cherry, J Michael

    2017-01-01

    Abstract The Saccharomyces Genome Database (SGD; http://www.yeastgenome.org) is an expertly curated database of literature-derived functional information for the model organism budding yeast, Saccharomyces cerevisiae. SGD constantly strives to synergize new types of experimental data and bioinformatics predictions with existing data, and to organize them into a comprehensive and up-to-date information resource. The primary mission of SGD is to facilitate research into the biology of yeast and...

  1. E3 Staff Database

    Data.gov (United States)

    US Agency for International Development — E3 Staff database is maintained by E3 PDMS (Professional Development & Management Services) office. The database is MySQL. It is manually updated by E3 staff as...

  2. Tools and Databases of the KOMICS Web Portal for Preprocessing, Mining, and Dissemination of Metabolomics Data

    Directory of Open Access Journals (Sweden)

    Nozomu Sakurai

    2014-01-01

    Full Text Available A metabolome—the collection of comprehensive quantitative data on metabolites in an organism—has been increasingly utilized for applications such as data-intensive systems biology, disease diagnostics, biomarker discovery, and assessment of food quality. A considerable number of tools and databases have been developed to date for the analysis of data generated by various combinations of chromatography and mass spectrometry. We report here a web portal named KOMICS (The Kazusa Metabolomics Portal), where the tools and databases that we developed are available for free to academic users. KOMICS includes the tools and databases for preprocessing, mining, visualization, and publication of metabolomics data. Improvements in the annotation of unknown metabolites and dissemination of comprehensive metabolomic data are the primary aims behind the development of this portal. For this purpose, PowerGet and FragmentAlign include a manual curation function for the results of metabolite feature alignments. A metadata-specific wiki-based database, Metabolonote, functions as a hub of web resources related to the submitters' work. This feature is expected to increase citation of the submitters' work, thereby promoting data publication. As an example of the practical use of KOMICS, a workflow for a study on Jatropha curcas is presented. The tools and databases available at KOMICS should contribute to enhanced production, interpretation, and utilization of metabolomic Big Data.

  3. Tools and databases of the KOMICS web portal for preprocessing, mining, and dissemination of metabolomics data.

    Science.gov (United States)

    Sakurai, Nozomu; Ara, Takeshi; Enomoto, Mitsuo; Motegi, Takeshi; Morishita, Yoshihiko; Kurabayashi, Atsushi; Iijima, Yoko; Ogata, Yoshiyuki; Nakajima, Daisuke; Suzuki, Hideyuki; Shibata, Daisuke

    2014-01-01

    A metabolome--the collection of comprehensive quantitative data on metabolites in an organism--has been increasingly utilized for applications such as data-intensive systems biology, disease diagnostics, biomarker discovery, and assessment of food quality. A considerable number of tools and databases have been developed to date for the analysis of data generated by various combinations of chromatography and mass spectrometry. We report here a web portal named KOMICS (The Kazusa Metabolomics Portal), where the tools and databases that we developed are available for free to academic users. KOMICS includes the tools and databases for preprocessing, mining, visualization, and publication of metabolomics data. Improvements in the annotation of unknown metabolites and dissemination of comprehensive metabolomic data are the primary aims behind the development of this portal. For this purpose, PowerGet and FragmentAlign include a manual curation function for the results of metabolite feature alignments. A metadata-specific wiki-based database, Metabolonote, functions as a hub of web resources related to the submitters' work. This feature is expected to increase citation of the submitters' work, thereby promoting data publication. As an example of the practical use of KOMICS, a workflow for a study on Jatropha curcas is presented. The tools and databases available at KOMICS should contribute to enhanced production, interpretation, and utilization of metabolomic Big Data.

  4. Agile Data Curation Case Studies Leading to the Identification and Development of Data Curation Design Patterns

    Science.gov (United States)

    Benedict, K. K.; Lenhardt, W. C.; Young, J. W.; Gordon, L. C.; Hughes, S.; Santhana Vannan, S. K.

    2017-12-01

    The planning for and development of efficient workflows for the creation, reuse, sharing, documentation, publication and preservation of research data is a general challenge that research teams of all sizes face. In response to (1) requirements from funding agencies for full-lifecycle data management plans that will result in well documented, preserved, and shared research data products, (2) increasing requirements from publishers for shared data in conjunction with submitted papers, (3) interdisciplinary research teams' needs for efficient data sharing within projects, and (4) increasing reuse of research data for replication and new, unanticipated research, policy development, and public use, alternative strategies to traditional data life cycle approaches must be developed and shared that enable research teams to meet these requirements while meeting the core science objectives of their projects within the available resources. In support of achieving these goals, the concept of Agile Data Curation has been developed, in which there have been parallel activities in support of (1) identifying a set of shared values and principles that underlie the objectives of agile data curation, (2) soliciting case studies from the Earth science and other research communities that illustrate aspects of what the contributors consider agile data curation methods and practices, and (3) identifying or developing design patterns that are high-level abstractions from successful data curation practice that are related to common data curation problems for which common solution strategies may be employed. This paper provides a collection of case studies that have been contributed by the Earth science community, and an initial analysis of those case studies to map them to emerging shared data curation problems and their potential solutions. Following the initial analysis of these problems and potential solutions, existing design patterns from software engineering and related disciplines are identified as a

  5. Somatic cancer variant curation and harmonization through consensus minimum variant level data

    Directory of Open Access Journals (Sweden)

    Deborah I. Ritter

    2016-11-01

    Full Text Available Abstract Background To truly achieve personalized medicine in oncology, it is critical to catalog and curate cancer sequence variants for their clinical relevance. The Somatic Working Group (WG) of the Clinical Genome Resource (ClinGen), in cooperation with ClinVar and multiple cancer variant curation stakeholders, has developed a consensus set of minimal variant level data (MVLD). MVLD is a framework of standardized data elements to curate cancer variants for clinical utility. With implementation of MVLD standards, and in a working partnership with ClinVar, we aim to streamline the somatic variant curation efforts in the community and reduce redundancy and time burden for the interpretation of cancer variants in clinical practice. Methods We developed MVLD through a consensus approach by (i) reviewing clinical actionability interpretations from institutions participating in the WG, (ii) conducting an extensive literature search of clinical somatic interpretation schemas, and (iii) surveying cancer variant web portals. A forthcoming guideline on cancer variant interpretation, from the Association for Molecular Pathology (AMP), can be incorporated into MVLD. Results Along with harmonizing standardized terminology for allele interpretive and descriptive fields that are collected by many databases, the MVLD includes unique fields for cancer variants such as Biomarker Class, Therapeutic Context and Effect. In addition, MVLD includes recommendations for controlled semantics and ontologies. The Somatic WG is collaborating with ClinVar to evaluate MVLD use for somatic variant submissions. ClinVar is an open and centralized repository where sequencing laboratories can report summary-level variant data with clinical significance, and ClinVar accepts cancer variant data. Conclusions We expect the use of the MVLD to streamline clinical interpretation of cancer variants, enhance interoperability among multiple redundant curation efforts, and increase submission of
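
    The named MVLD fields (Biomarker Class, Therapeutic Context, Effect) suggest a simple structured record. The dataclass below is only an illustrative guess at how a submitter might represent one variant before submission, not an official ClinGen or ClinVar schema; the BRAF V600E/vemurafenib entry is a textbook example rather than data from the paper.

        from dataclasses import dataclass

        @dataclass
        class SomaticVariantMVLD:          # hypothetical container, not a ClinGen schema
            gene: str
            variant: str                   # protein-level description
            biomarker_class: str           # MVLD field, e.g. "predictive"
            therapeutic_context: str       # MVLD field, e.g. a drug name
            effect: str                    # MVLD field, e.g. "responsive"/"resistant"
            clinical_significance: str

        record = SomaticVariantMVLD(
            gene="BRAF", variant="V600E", biomarker_class="predictive",
            therapeutic_context="vemurafenib", effect="responsive",
            clinical_significance="Tier I")
        print(record)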

  6. ExtraTrain: a database of Extragenic regions and Transcriptional information in prokaryotic organisms

    Science.gov (United States)

    Pareja, Eduardo; Pareja-Tobes, Pablo; Manrique, Marina; Pareja-Tobes, Eduardo; Bonal, Javier; Tobes, Raquel

    2006-01-01

    Background Transcriptional regulation processes are the principal mechanisms of adaptation in prokaryotes. In these processes, the regulatory proteins and the regulatory DNA signals located in extragenic regions are the key elements involved. As all extragenic spaces are putative regulatory regions, ExtraTrain covers all extragenic regions of available genomes and regulatory proteins from bacteria and archaea included in the UniProt database. Description ExtraTrain provides integrated and easily manageable information for 679816 extragenic regions and for the genes delimiting each of them. In addition, ExtraTrain supplies a tool to explore extragenic regions, named Palinsight, designed to detect and search for palindromic patterns. This interactive visual tool is totally integrated in the database, allowing the search for regulatory signals in user-defined sets of extragenic regions. The 26046 regulatory proteins included in ExtraTrain belong to the families AraC/XylS, ArsR, AsnC, Cold shock domain, CRP-FNR, DeoR, GntR, IclR, LacI, LuxR, LysR, MarR, MerR, NtrC/Fis, OmpR and TetR. The database follows the InterPro criteria to define these families. The information about regulators includes manually curated sets of references specifically associated to regulator entries. In order to achieve a sustainable and maintainable knowledge database, ExtraTrain is a platform open to the contribution of knowledge by the scientific community, providing a system for the incorporation of textual knowledge. Conclusion ExtraTrain is a new database for exploring Extragenic regions and Transcriptional information in bacteria and archaea. ExtraTrain database is available at . PMID:16539733

  7. WikiPathways: a multifaceted pathway database bridging metabolomics to other omics research.

    Science.gov (United States)

    Slenter, Denise N; Kutmon, Martina; Hanspers, Kristina; Riutta, Anders; Windsor, Jacob; Nunes, Nuno; Mélius, Jonathan; Cirillo, Elisa; Coort, Susan L; Digles, Daniela; Ehrhart, Friederike; Giesbertz, Pieter; Kalafati, Marianthi; Martens, Marvin; Miller, Ryan; Nishida, Kozo; Rieswijk, Linda; Waagmeester, Andra; Eijssen, Lars M T; Evelo, Chris T; Pico, Alexander R; Willighagen, Egon L

    2018-01-04

    WikiPathways (wikipathways.org) captures the collective knowledge represented in biological pathways. By providing this knowledge in a curated, machine-readable database, it enables omics data analysis and visualization. WikiPathways and other pathway databases are used to analyze experimental data by research groups in many fields. Due to the open and collaborative nature of the WikiPathways platform, our content keeps growing and is getting more accurate, making WikiPathways a reliable and rich pathway database. Previously, however, the focus was primarily on genes and proteins, leaving many metabolites with only limited annotation. Recent curation efforts focused on improving the annotation of metabolism and metabolic pathways by associating unmapped metabolites with database identifiers and providing more detailed interaction knowledge. Here, we report the outcomes of the continued growth and curation efforts, such as a doubling of the number of annotated metabolite nodes in WikiPathways. Furthermore, we introduce an OpenAPI documentation of our web services and the FAIR (Findable, Accessible, Interoperable and Reusable) annotation of resources to increase the interoperability of the knowledge encoded in these pathways and experimental omics data. New search options, monthly downloads, more links to metabolite databases, and new portals make pathway knowledge more effortlessly accessible to individual researchers and research communities. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
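
    The snippet below sketches a call to the WikiPathways web services mentioned above, as we recall their legacy text-search interface; the host, parameters and response keys may have changed, so treat all of them as assumptions and consult the current OpenAPI documentation.

        import json
        import urllib.request
        from urllib.parse import urlencode

        params = urlencode({"query": "cholesterol", "format": "json"})
        url = f"https://webservice.wikipathways.org/findPathwaysByText?{params}"

        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)

        for hit in data.get("result", [])[:5]:   # response keys are assumptions
            print(hit.get("id"), hit.get("name"), hit.get("species"))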

  8. Curating NASA's Past, Present, and Future Extraterrestrial Sample Collections

    Science.gov (United States)

    McCubbin, F. M.; Allton, J. H.; Evans, C. A.; Fries, M. D.; Nakamura-Messenger, K.; Righter, K.; Zeigler, R. A.; Zolensky, M.; Stansbery, E. K.

    2016-01-01

    The Astromaterials Acquisition and Curation Office (henceforth referred to as the NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "...curation of all extra-terrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "...documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the past, present, and future activities of the NASA Curation Office.

  9. The Danish Bladder Cancer Database

    Directory of Open Access Journals (Sweden)

    Hansen E

    2016-10-01

    Full Text Available Erik Hansen,1–3 Heidi Larsson,4 Mette Nørgaard,4 Peter Thind,3,5 Jørgen Bjerggaard Jensen1–3 1Department of Urology, Hospital of West Jutland-Holstebro, Holstebro, 2Department of Urology, Aarhus University Hospital, Aarhus, 3The Danish Bladder Cancer Database Group, 4Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, 5Department of Urology, Copenhagen University Hospital, Copenhagen, Denmark Aim of database: The aim of the Danish Bladder Cancer Database (DaBlaCa-data) is to monitor the treatment of all patients diagnosed with invasive bladder cancer (BC) in Denmark. Study population: All patients diagnosed with BC in Denmark from 2012 onward were included in the study. Results presented in this paper are predominantly from the 2013 population. Main variables: In 2013, 970 patients were diagnosed with BC in Denmark and were included in a preliminary report from the database. A total of 458 (47%) patients were diagnosed with non-muscle-invasive BC (non-MIBC) and 512 (53%) were diagnosed with muscle-invasive BC (MIBC). A total of 300 (31%) patients underwent cystectomy. Among the 135 patients diagnosed with MIBC, who were 75 years of age or younger, 67 (50%) received neoadjuvant chemotherapy prior to cystectomy. In 2013, a total of 147 patients were treated with curative-intended radiation therapy. Descriptive data: One-year mortality was 28% (95% confidence interval [CI]: 15–21). One-year cancer-specific mortality was 25% (95% CI: 22–27%). One-year mortality after cystectomy was 14% (95% CI: 10–18). Ninety-day mortality after cystectomy was 3% (95% CI: 1–5) in 2013. One-year mortality following curative-intended radiation therapy was 32% (95% CI: 24–39) and 1-year cancer-specific mortality was 23% (95% CI: 16–31) in 2013. Conclusion: This preliminary DaBlaCa-data report showed that the treatment of MIBC in Denmark overall meets high international academic standards. The database is able to identify Danish BC patients and

  10. Research Problems in Data Curation: Outcomes from the Data Curation Education in Research Centers Program

    Science.gov (United States)

    Palmer, C. L.; Mayernik, M. S.; Weber, N.; Baker, K. S.; Kelly, K.; Marlino, M. R.; Thompson, C. A.

    2013-12-01

    The need for data curation is being recognized in numerous institutional settings as national research funding agencies extend data archiving mandates to cover more types of research grants. Data curation, however, is not only a practical challenge. It presents many conceptual and theoretical challenges that must be investigated to design appropriate technical systems, social practices and institutions, policies, and services. This presentation reports on outcomes from an investigation of research problems in data curation conducted as part of the Data Curation Education in Research Centers (DCERC) program. DCERC is developing a new model for educating data professionals to contribute to scientific research. The program is organized around foundational courses and field experiences in research and data centers for both master's and doctoral students. The initiative is led by the Graduate School of Library and Information Science at the University of Illinois at Urbana-Champaign, in collaboration with the School of Information Sciences at the University of Tennessee, and library and data professionals at the National Center for Atmospheric Research (NCAR). At the doctoral level DCERC is educating future faculty and researchers in data curation and establishing a research agenda to advance the field. The doctoral seminar, Research Problems in Data Curation, was developed and taught in 2012 by the DCERC principal investigator and two doctoral fellows at the University of Illinois. It was designed to define the problem space of data curation, examine relevant concepts and theories related to both technical and social perspectives, and articulate research questions that are either unexplored or under theorized in the current literature. There was a particular emphasis on the Earth and environmental sciences, with guest speakers brought in from NCAR, National Snow and Ice Data Center (NSIDC), and Rensselaer Polytechnic Institute. Through the assignments, students

  11. Improving Access to NASA Earth Science Data through Collaborative Metadata Curation

    Science.gov (United States)

    Sisco, A. W.; Bugbee, K.; Shum, D.; Baynes, K.; Dixon, V.; Ramachandran, R.

    2017-12-01

    The NASA-developed Common Metadata Repository (CMR) is a high-performance metadata system that currently catalogs over 375 million Earth science metadata records. It serves as the authoritative metadata management system of NASA's Earth Observing System Data and Information System (EOSDIS), enabling NASA Earth science data to be discovered and accessed by a worldwide user community. The size of the EOSDIS data archive is steadily increasing, and the ability to manage and query this archive depends on the input of high quality metadata to the CMR. Metadata that does not provide adequate descriptive information diminishes the CMR's ability to effectively find and serve data to users. To address this issue, an innovative and collaborative review process is underway to systematically improve the completeness, consistency, and accuracy of metadata for approximately 7,000 data sets archived by NASA's twelve EOSDIS data centers, or Distributed Active Archive Centers (DAACs). The process involves automated and manual metadata assessment of both collection and granule records by a team of Earth science data specialists at NASA Marshall Space Flight Center. The team communicates results to DAAC personnel, who then make revisions and reingest improved metadata into the CMR. Implementation of this process relies on a network of interdisciplinary collaborators leveraging a variety of communication platforms and long-range planning strategies. Curating metadata at this scale and resolving metadata issues through community consensus improves the CMR's ability to serve current and future users and also introduces best practices for stewarding the next generation of Earth Observing System data. This presentation will detail the metadata curation process, its outcomes thus far, and also share the status of ongoing curation activities.

  12. DDEC: Dragon database of genes implicated in esophageal cancer

    International Nuclear Information System (INIS)

    Essack, Magbubah; Radovanovic, Aleksandar; Schaefer, Ulf; Schmeier, Sebastian; Seshadri, Sundararajan V; Christoffels, Alan; Kaur, Mandeep; Bajic, Vladimir B

    2009-01-01

    Esophageal cancer ranks eighth in order of cancer occurrence. Its lethality primarily stems from inability to detect the disease during the early organ-confined stage and the lack of effective therapies for advanced-stage disease. Moreover, the understanding of molecular processes involved in esophageal cancer is not complete, hampering the development of efficient diagnostics and therapy. Efforts made by the scientific community to improve the survival rate of esophageal cancer have resulted in a wealth of scattered information that is difficult to find and not easily amenable to data-mining. To reduce this gap and to complement available cancer related bioinformatic resources, we have developed a comprehensive database (Dragon Database of Genes Implicated in Esophageal Cancer) with esophageal cancer related information, as an integrated knowledge database aimed at representing a gateway to esophageal cancer related data. The database contains 529 manually curated genes differentially expressed in esophageal cancer (EC). We extracted and analyzed the promoter regions of these genes and complemented gene-related information with transcription factors that potentially control them. We further precompiled text-mined and data-mined reports about each of these genes to allow for easy exploration of information about associations of EC-implicated genes with other human genes and proteins, metabolites and enzymes, toxins, chemicals with pharmacological effects, disease concepts and human anatomy. The resulting database, DDEC, has a useful feature to display potential associations that are rarely reported and thus difficult to identify. Moreover, DDEC enables inspection of potentially new 'association hypotheses' generated based on the precompiled reports. We hope that this resource will serve as a useful complement to the existing public resources and as a good starting point for researchers and physicians interested in EC genetics. DDEC is freely accessible to academic

  13. TBC2health: a database of experimentally validated health-beneficial effects of tea bioactive compounds.

    Science.gov (United States)

    Zhang, Shihua; Xuan, Hongdong; Zhang, Liang; Fu, Sicong; Wang, Yijun; Yang, Hua; Tai, Yuling; Song, Youhong; Zhang, Jinsong; Ho, Chi-Tang; Li, Shaowen; Wan, Xiaochun

    2017-09-01

    Tea is one of the most consumed beverages in the world. Numerous studies show the exceptional health benefits (e.g. antioxidation, cancer prevention) of tea owing to its various bioactive components. However, data from these extensive publications had not been made available in a central database. To lay a foundation for improving the understanding of the health functions of tea, we established a TBC2health database that currently documents 1338 relationships between 497 tea bioactive compounds and 206 diseases (or phenotypes) manually culled from over 300 published articles. Each entry in TBC2health contains comprehensive information about a bioactive relationship that can be accessed in three aspects: (i) compound information, (ii) disease (or phenotype) information and (iii) evidence and reference. Using the curated bioactive relationships, a bipartite network was reconstructed and the corresponding network (or sub-network) visualization and topological analyses are provided for users. This database has a user-friendly interface for entry browse, search and download. In addition, TBC2health provides a submission page and several useful tools (e.g. BLAST, molecular docking) to facilitate use of the database. Consequently, TBC2health can serve as a valuable bioinformatics platform for the exploration of beneficial effects of tea on human health. TBC2health is freely available at http://camellia.ahau.edu.cn/TBC2health. © The Author 2016. Published by Oxford University Press.
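
    The bipartite compound-disease network described above is straightforward to reconstruct from curated relationships; here is a minimal networkx sketch of the idea, with a hand-made edge list (real TBC2health entries would come from the database's download interface).

        import networkx as nx

        # (compound, disease/phenotype) pairs; placeholder examples only
        edges = [("EGCG", "colorectal cancer"),
                 ("EGCG", "oxidative stress"),
                 ("theaflavin", "oxidative stress")]

        B = nx.Graph()
        B.add_nodes_from({c for c, _ in edges}, bipartite="compound")
        B.add_nodes_from({d for _, d in edges}, bipartite="disease")
        B.add_edges_from(edges)

        # a simple topological analysis like the one the database offers:
        for node, degree in sorted(B.degree, key=lambda x: -x[1]):
            print(node, degree)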

  14. Curating research data a handbook of current practice

    CERN Document Server

    Johnston, Lisa R

    2017-01-01

    Curating Research Data, Volume Two: A Handbook of Current Practice guides you across the data lifecycle through the practical strategies and techniques for curating research data in a digital repository setting. The data curation steps for receiving, appraising, selecting, ingesting, transforming, describing, contextualizing, disseminating, and preserving digital research data are each explored, and then supplemented with detailed case studies written by more than forty international practitioners from national, disciplinary, and institutional data repositories. The steps in this volume detail the sequential actions that you might take to curate a data set from receiving the data (Step 1) to eventual reuse (Step 8). Data curators, archivists, research data management specialists, subject librarians, institutional repository managers, and digital library staff will benefit from these current and practical approaches to data curation.

  15. Digital curation: a proposal of a semi-automatic digital object selection-based model for digital curation in Big Data environments

    Directory of Open Access Journals (Sweden)

    Moisés Lima Dutra

    2016-08-01

    Full Text Available Introduction: This work presents a new approach to Digital Curation from a Big Data perspective. Objective: The objective is to propose techniques for Digital Curation for selecting and evaluating digital objects that take into account the volume, velocity, variety, veracity, and value of the data collected from multiple knowledge domains. Methodology: This is an exploratory research of an applied nature, which addresses the research problem in a qualitative way. Heuristics allow this semi-automatic process to be carried out either by human curators or by software agents. Results: As a result, a model was proposed for searching, processing, evaluating and selecting digital objects to be processed by Digital Curation. Conclusions: It is possible to use Big Data environments as a source of information resources for Digital Curation; besides, Big Data techniques and tools can support the search and selection process of information resources by Digital Curation.

  16. Investigating Astromaterials Curation Applications for Dexterous Robotic Arms

    Science.gov (United States)

    Snead, C. J.; Jang, J. H.; Cowden, T. R.; McCubbin, F. M.

    2018-01-01

    The Astromaterials Acquisition and Curation office at NASA Johnson Space Center is currently investigating tools and methods that will enable the curation of future astromaterials collections. Size and temperature constraints for astromaterials to be collected by current and future proposed missions will require the development of new robotic sample and tool handling capabilities. NASA Curation has investigated the application of robot arms in the past, and robotic 3-axis micromanipulators are currently in use for small particle curation in the Stardust and Cosmic Dust laboratories. While 3-axis micromanipulators have been extremely successful for activities involving the transfer of isolated particles in the 5-20 micron range (e.g. from microscope slide to epoxy bullet tip, beryllium SEM disk), their limited ranges of motion and lack of yaw, pitch, and roll degrees of freedom restrict their utility in other applications. For instance, curators removing particles from cosmic dust collectors by hand often employ scooping and rotating motions to successfully free trapped particles from the silicone oil coatings. Similar scooping and rotating motions are also employed when isolating a specific particle of interest from an aliquot of crushed meteorite. While cosmic dust curators have been remarkably successful with these kinds of particle manipulations using handheld tools, operator fatigue limits the number of particles that can be removed during a given extraction session. The challenges for curation of small particles will be exacerbated by mission requirements that samples be processed in N2 sample cabinets (i.e. gloveboxes). We have been investigating the use of compact robot arms to facilitate sample handling within gloveboxes. Six-axis robot arms potentially have applications beyond small particle manipulation. For instance, future sample return missions may involve biologically sensitive astromaterials that can be easily compromised by physical interaction with

  17. tRNA sequence data, annotation data and curation data - tRNADB-CE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available tRNA sequence data, annotation data and curation data - tRNADB-CE | LSDB Archive ...

  18. EpiGeNet: A Graph Database of Interdependencies Between Genetic and Epigenetic Events in Colorectal Cancer.

    Science.gov (United States)

    Balaur, Irina; Saqi, Mansoor; Barat, Ana; Lysenko, Artem; Mazein, Alexander; Rawlings, Christopher J; Ruskin, Heather J; Auffray, Charles

    2017-10-01

    The development of colorectal cancer (CRC), the third most common cancer type, has been associated with deregulations of cellular mechanisms stimulated by both genetic and epigenetic events. StatEpigen is a manually curated and annotated database, containing information on interdependencies between genetic and epigenetic signals, and currently specialized for CRC research. Although StatEpigen provides a well-developed graphical user interface for information retrieval, advanced queries involving associations between multiple concepts can benefit from a more detailed graph representation of the integrated data. This can be achieved by using a graph database (NoSQL) approach. Data were extracted from StatEpigen and imported into our newly developed EpiGeNet, a graph database for storage and querying of conditional relationships between molecular (genetic and epigenetic) events observed at different stages of colorectal oncogenesis. We illustrate the enhanced capability of EpiGeNet for exploration of different queries related to colorectal tumor progression; specifically, we demonstrate the query process for (i) stage-specific molecular events, (ii) most frequently observed genetic and epigenetic interdependencies in colon adenoma, and (iii) paths connecting key genes reported in CRC and associated events. The EpiGeNet framework offers improved capability for management and visualization of data on molecular events specific to CRC initiation and progression.
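
    A hedged sketch of the kind of stage-specific query EpiGeNet supports, using the neo4j Python driver: the node label, relationship type and property names below are hypothetical stand-ins, since the actual EpiGeNet schema is described in the paper rather than reproduced here.

        from neo4j import GraphDatabase

        driver = GraphDatabase.driver("bolt://localhost:7687",
                                      auth=("neo4j", "password"))  # placeholders
        # hypothetical schema: MolecularEvent nodes linked by stage-annotated edges
        query = """
        MATCH (a:MolecularEvent)-[r:CO_OCCURS_WITH {stage: $stage}]->(b:MolecularEvent)
        RETURN a.name AS event_a, b.name AS event_b, r.frequency AS freq
        ORDER BY freq DESC LIMIT 10
        """
        with driver.session() as session:
            for rec in session.run(query, stage="colon adenoma"):
                print(rec["event_a"], "<->", rec["event_b"], rec["freq"])
        driver.close()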

  19. Self-Rerouting and Curative Interconnect Technology (SERCUIT)

    Science.gov (United States)

    2017-12-01

    Special Report RDMR-CS-17-01, Self-Rerouting and Curative Interconnect Technology (SERCUIT). Final report. Author: Shiv Joshi, Concepts to Systems, Inc. Contract number: W911W6-17-C-0029.

  20. The plant phenological online database (PPODB): an online database for long-term phenological data

    Science.gov (United States)

    Dierenbach, Jonas; Badeck, Franz-W.; Schaber, Jörg

    2013-09-01

    We present an online database that provides unrestricted and free access to over 16 million plant phenological observations from over 8,000 stations in Central Europe between the years 1880 and 2009. Unique features are (1) flexible and unrestricted access to a full-fledged database, allowing for a wide range of individual queries and data retrieval, (2) historical data for Germany before 1951 ranging back to 1880, and (3) more than 480 curated long-term time series covering more than 100 years for individual phenological phases and plants combined over Natural Regions in Germany. Time series for single stations or Natural Regions can be accessed through a user-friendly graphical geo-referenced interface. The joint databases made available with the plant phenological database PPODB render accessible an important data source for further analyses of long-term changes in phenology. The database can be accessed via www.ppodb.de.

  1. NEMiD: a web-based curated microbial diversity database with geo-based plotting.

    Science.gov (United States)

    Bhattacharjee, Kaushik; Joshi, Santa Ram

    2014-01-01

    The majority of the Earth's microbes remain unknown, and their potential utility cannot be exploited until they are discovered and characterized. They provide wide scope for the development of new strains as well as biotechnological uses. The documentation and bioprospection of microorganisms carry enormous significance considering their relevance to human welfare. This calls for an urgent need to develop a database with emphasis on the microbial diversity of the largest untapped reservoirs in the biosphere. The data annotated in the North-East India Microbial database (NEMiD) were obtained by the isolation and characterization of microbes from different parts of the Eastern Himalayan region. The database was constructed as a relational database management system (RDBMS) for data storage in MySQL in the back-end on a Linux server and implemented in an Apache/PHP environment. This database provides a base for understanding the soil microbial diversity pattern in this megabiodiversity hotspot and indicates the distribution patterns of various organisms along with identification. The NEMiD database is freely available at www.mblabnehu.info/nemid/.

  2. NEMiD: A Web-Based Curated Microbial Diversity Database with Geo-Based Plotting

    Science.gov (United States)

    Bhattacharjee, Kaushik; Joshi, Santa Ram

    2014-01-01

    The majority of the Earth's microbes remain unknown, and their potential utility cannot be exploited until they are discovered and characterized. They provide wide scope for the development of new strains as well as biotechnological uses. The documentation and bioprospection of microorganisms carry enormous significance considering their relevance to human welfare. This calls for an urgent need to develop a database with emphasis on the microbial diversity of the largest untapped reservoirs in the biosphere. The data annotated in the North-East India Microbial database (NEMiD) were obtained by the isolation and characterization of microbes from different parts of the Eastern Himalayan region. The database was constructed as a relational database management system (RDBMS) for data storage in MySQL in the back-end on a Linux server and implemented in an Apache/PHP environment. This database provides a base for understanding the soil microbial diversity pattern in this megabiodiversity hotspot and indicates the distribution patterns of various organisms along with identification. The NEMiD database is freely available at www.mblabnehu.info/nemid/. PMID:24714636

  3. 3DSwap: Curated knowledgebase of proteins involved in 3D domain swapping

    KAUST Repository

    Shameer, Khader

    2011-09-29

    Three-dimensional domain swapping is a unique protein structural phenomenon where two or more protein chains in a protein oligomer share a common structural segment between individual chains. This phenomenon is observed in an array of protein structures in oligomeric conformation. Protein structures in swapped conformations perform diverse functional roles and are also associated with deposition diseases in humans. We have performed in-depth literature curation and structural bioinformatics analyses to develop an integrated knowledgebase of proteins involved in 3D domain swapping. The hallmark of 3D domain swapping is the presence of distinct structural segments such as the hinge and swapped regions. We have curated the literature to delineate the boundaries of these regions. In addition, we have defined several new concepts like 'secondary major interface' to represent the interface properties arising as a result of 3D domain swapping, and a new quantitative measure for the 'extent of swapping' in structures. The catalog of proteins reported in the 3DSwap knowledgebase has been generated using an integrated structural bioinformatics workflow of database searches, literature curation, structure visualization and sequence-structure-function analyses. The current version of the 3DSwap knowledgebase reports 293 protein structures; the analysis of such a compendium of protein structures will further the understanding of the molecular factors driving 3D domain swapping. The Author(s) 2011.

  4. AtomPy: an open atomic-data curation environment

    Science.gov (United States)

    Bautista, Manuel; Mendoza, Claudio; Boswell, Josiah S; Ajoku, Chukwuemeka

    2014-06-01

    We present a cloud-computing environment for atomic data curation, networking among atomic data providers and users, teaching-and-learning, and interfacing with spectral modeling software. The system is based on Google-Drive Sheets, Pandas (Python Data Analysis Library) DataFrames, and IPython Notebooks for open community-driven curation of atomic data for scientific and technological applications. The atomic model for each ionic species is contained in a multi-sheet Google-Drive workbook, where the atomic parameters from all known public sources are progressively stored. Metadata (provenance, community discussion, etc.) accompanying every entry in the database are stored through Notebooks. Education tools on the physics of atomic processes as well as their relevance to plasma and spectral modeling are based on IPython Notebooks that integrate written material, images, videos, and active computer-tool workflows. Data processing workflows and collaborative software developments are encouraged and managed through the GitHub social network. Relevant issues this platform intends to address are: (i) data quality by allowing open access to both data producers and users in order to attain completeness, accuracy, consistency, provenance and currentness; (ii) comparisons of different datasets to facilitate accuracy assessment; (iii) downloading to local data structures (i.e. Pandas DataFrames) for further manipulation and analysis by prospective users; and (iv) data preservation by avoiding the discard of outdated sets.
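
    Since each AtomPy atomic model lives in a Google-Drive workbook and is manipulated through Pandas DataFrames, pulling a publicly shared sheet into a DataFrame can be as short as the sketch below. The sheet ID is a placeholder rather than a real AtomPy workbook, and the CSV-export URL pattern is an assumption about Google's current interface.

        import pandas as pd

        SHEET_ID = "YOUR_SHEET_ID"  # placeholder, not a real AtomPy workbook
        url = f"https://docs.google.com/spreadsheets/d/{SHEET_ID}/export?format=csv"

        df = pd.read_csv(url)   # atomic parameters as a Pandas DataFrame
        print(df.head())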

  5. Alfred Drury: The Artist as Curator

    Directory of Open Access Journals (Sweden)

    Ben Thomas

    2016-06-01

    Full Text Available This article presents a series of reflections on the experience of curating the exhibition ‘Alfred Drury and the New Sculpture’ in 2013. In particular, it charts the evolution of the design of the exhibition, notably its central tableau based on a photograph of the sculptor Alfred Drury’s studio in 1900. This photograph records a display of Drury’s works for visiting Australian patrons, and could be said to record evidence of the artist curating his own work. The legitimacy of deriving a curatorial approach from this photographic evidence is discussed, along with the broader problem of ‘historicizing’ approaches to curating.

  6. Advanced Curation Activities at NASA: Preparation for Upcoming Missions

    Science.gov (United States)

    Fries, M. D.; Evans, C. A.; McCubbin, F. M.; Harrington, A. D.; Regberg, A. B.; Snead, C. J.; Zeigler, R. A.

    2017-07-01

    NASA Curation cares for NASA's astromaterials and performs advanced curation so as to improve current practices and prepare for future collections. Cold curation, microbial monitoring, contamination control/knowledge and other aspects are reviewed.

  7. Stop the Bleeding: the Development of a Tool to Streamline NASA Earth Science Metadata Curation Efforts

    Science.gov (United States)

    le Roux, J.; Baker, A.; Caltagirone, S.; Bugbee, K.

    2017-12-01

    The Common Metadata Repository (CMR) is a high-performance, high-quality repository for Earth science metadata records, and serves as the primary way to search NASA's growing 17.5 petabytes of Earth science data holdings. Released in 2015, CMR has the capability to support several different metadata standards already being utilized by NASA's combined network of Earth science data providers, or Distributed Active Archive Centers (DAACs). The Analysis and Review of CMR (ARC) Team located at Marshall Space Flight Center is working to improve the quality of records already in CMR with the goal of making records optimal for search and discovery. This effort entails a combination of automated and manual review, where each NASA record in CMR is checked for completeness, accuracy, and consistency. This effort is highly collaborative in nature, requiring communication and transparency of findings amongst NASA personnel, DAACs, the CMR team and other metadata curation teams. Through the evolution of this project it has become apparent that there is a need to document and report findings, as well as track metadata improvements in a more efficient manner. The ARC team has collaborated with Element 84 in order to develop a metadata curation tool to meet these needs. In this presentation, we will provide an overview of this metadata curation tool and its current capabilities. Challenges and future plans for the tool will also be discussed.
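
    A toy version of the automated side of such a review, checking each metadata record for required fields, might look like the sketch below; the field names are illustrative and are not the actual CMR metadata schema.

        REQUIRED = ("title", "abstract", "temporal_extent", "spatial_extent", "doi")

        def completeness_report(record: dict) -> list:
            """Return the required fields missing from one metadata record."""
            return [f for f in REQUIRED if not record.get(f)]

        records = [  # placeholder records, not real CMR content
            {"title": "Sample dataset", "abstract": "", "doi": "10.0000/placeholder"},
        ]
        for r in records:
            missing = completeness_report(r)
            print(r["title"], "-> missing:", ", ".join(missing) or "none")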

  8. Automating data citation: the eagle-i experience.

    Science.gov (United States)

    Alawini, Abdussalam; Chen, Leshang; Davidson, Susan B; Da Silva, Natan Portilho; Silvello, Gianmaria

    2017-06-01

    Data citation is of growing concern for owners of curated databases, who wish to give credit to the contributors and curators responsible for portions of the dataset and enable the data retrieved by a query to be later examined. While several databases specify how data should be cited, they leave it to users to manually construct the citations and do not generate them automatically. We report our experiences in automating data citation for an RDF dataset called eagle-i, and discuss how to generalize this to a citation framework that can work across a variety of different types of databases (e.g. relational, XML, and RDF). We also describe how a database administrator would use this framework to automate citation for a particular dataset.
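
    The general idea, assembling a citation for the subset of data returned by a query from stored provenance fields, can be sketched as below; this is not the eagle-i implementation, and every field name is hypothetical.

        def cite(dataset: dict, query: str, retrieved: str) -> str:
            """Build a citation string for data retrieved by a query."""
            return (f"{', '.join(dataset['contributors'])} ({dataset['year']}). "
                    f"{dataset['title']} [subset from query: {query!r}]. "
                    f"{dataset['publisher']}. Retrieved {retrieved}.")

        print(cite(
            {"contributors": ["Curator A", "Curator B"], "year": 2017,
             "title": "Example curated dataset", "publisher": "Example Repository"},
            query="species = 'mouse'", retrieved="2017-06-01"))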

  9. The Intelligent Monitoring System: Generic Database Interface (GDI). User Manual. Revision

    Science.gov (United States)

    1994-01-03

    Summary of Locations (Name, Description, Directory Location): User Manual, FrameMaker source organized into a book named gdibk. FrameMaker is a document publishing tool from Frame Technology Corporation.

  10. DESTAF: A database of text-mined associations for reproductive toxins potentially affecting human fertility

    KAUST Repository

    Dawe, Adam Sean; Radovanovic, Aleksandar; Kaur, Mandeep; Sagar, Sunil; Seshadri, Sundararajan Vijayaraghava; Schaefer, Ulf; Kamau, Allan; Christoffels, Alan G.; Bajic, Vladimir B.

    2012-01-01

    The Dragon Exploration System for Toxicants and Fertility (DESTAF) is a publicly available resource which enables researchers to efficiently explore both known and potentially novel information and associations in the field of reproductive toxicology. To create DESTAF we used data from the literature (including over 10,500 PubMed abstracts), several publicly available biomedical repositories, and specialized, curated dictionaries. DESTAF has an interface designed to facilitate rapid assessment of the key associations between relevant concepts, allowing for a more in-depth exploration of information based on different gene/protein-, enzyme/metabolite-, toxin/chemical-, disease- or anatomically centric perspectives. As a special feature, DESTAF allows for the creation and initial testing of potentially new association hypotheses that suggest links between biological entities identified through the database. DESTAF, along with a PDF manual, can be found at http://cbrc.kaust.edu.sa/destaf. It is free to academic and non-commercial users and will be updated quarterly. © 2011 Elsevier Inc.

  11. DataShare: Empowering Researcher Data Curation

    Directory of Open Access Journals (Sweden)

    Stephen Abrams

    2014-07-01

    Full Text Available Researchers are increasingly being asked to ensure that all products of research activity – not just traditional publications – are preserved and made widely available for study and reuse as a precondition for publication or grant funding, or to conform to disciplinary best practices. In order to conform to these requirements, scholars need effective, easy-to-use tools and services for the long-term curation of their research data. The DataShare service, developed at the University of California, is being used by researchers to: (1) prepare for curation by reviewing best practice recommendations for the acquisition or creation of digital research data; (2) select datasets using intuitive file browsing and drag-and-drop interfaces; (3) describe their data for enhanced discoverability in terms of the DataCite metadata schema; (4) preserve their data by uploading to a public access collection in the UC3 Merritt curation repository; (5) cite their data in terms of persistent and globally-resolvable DOI identifiers; (6) expose their data through registration with well-known abstracting and indexing services and major internet search engines; (7) control the dissemination of their data through enforceable data use agreements; and (8) discover and retrieve datasets of interest through a faceted search and browse environment. Since the widespread adoption of effective data management practices is highly dependent on ease of use and integration into existing individual, institutional, and disciplinary workflows, the emphasis throughout the design and implementation of DataShare is to provide the highest level of curation service with the lowest possible technical barriers to entry by individual researchers. By enabling intuitive, self-service access to data curation functions, DataShare helps to contribute to more widespread adoption of good data curation practices that are critical to open scientific inquiry, discourse, and advancement.

  12. Curating research data practical strategies for your digital repository

    CERN Document Server

    Johnston, Lisa R

    2017-01-01

    Volume One of Curating Research Data explores the variety of reasons, motivations, and drivers for why data curation services are needed in the context of academic and disciplinary data repository efforts. Twelve chapters, divided into three parts, take an in-depth look at the complex practice of data curation as it emerges around us. Part I sets the stage for data curation by describing current policies, data sharing cultures, and collaborative efforts currently underway that impact potential services. Part II brings several key issues, such as cost recovery and marketing strategy, into focus for practitioners when considering how to put data curation services in action. Finally, Part III describes the full lifecycle of data by examining the ethical and practical reuse issues that data curation practitioners must consider as we strive to prepare data for the future.

  13. System administrator's manual (SAM) for the enhanced logistics intratheater support tool (ELIST) database segment version 8.1.0.0 for Solaris 7

    International Nuclear Information System (INIS)

    Dritz, K.

    2002-01-01

    This document is the System Administrator's Manual (SAM) for the Enhanced Logistics Intratheater Support Tool (ELIST) Database Segment. It covers errors that can arise during the segment's installation and deinstallation, and it outlines appropriate recovery actions. It also tells how to extend the database storage available to Oracle if a datastore becomes filled during the use of ELIST. The latter subject builds on some of the actions that must be performed when installing this segment, as documented in the Installation Procedures (IP) for the Enhanced Logistics Intratheater Support Tool (ELIST) Global Data Segment, Database Instance Segment, Database Fill Segment, Database Segment, Database Utility Segment, Software Segment, and Reference Data Segment (referred to in portions of this document as the ELIST IP). The information in this document is expected to be of use only rarely. Other than errors arising from the failure to follow instructions, difficulties are not expected to be encountered during the installation or deinstallation of the segment. The need to extend database storage likewise typically arises infrequently. Most administrators will only need to be aware of the help that is provided in this document and will probably not actually need to read and make use of it

  14. The Coral Trait Database, a curated database of trait information for coral species from the global oceans

    Science.gov (United States)

    Madin, Joshua S.; Anderson, Kristen D.; Andreasen, Magnus Heide; Bridge, Tom C. L.; Cairns, Stephen D.; Connolly, Sean R.; Darling, Emily S.; Diaz, Marcela; Falster, Daniel S.; Franklin, Erik C.; Gates, Ruth D.; Hoogenboom, Mia O.; Huang, Danwei; Keith, Sally A.; Kosnik, Matthew A.; Kuo, Chao-Yang; Lough, Janice M.; Lovelock, Catherine E.; Luiz, Osmar; Martinelli, Julieta; Mizerek, Toni; Pandolfi, John M.; Pochon, Xavier; Pratchett, Morgan S.; Putnam, Hollie M.; Roberts, T. Edward; Stat, Michael; Wallace, Carden C.; Widman, Elizabeth; Baird, Andrew H.

    2016-03-01

    Trait-based approaches advance ecological and evolutionary research because traits provide a strong link to an organism’s function and fitness. Trait-based research might lead to a deeper understanding of the functions of, and services provided by, ecosystems, thereby improving management, which is vital in the current era of rapid environmental change. Coral reef scientists have long collected trait data for corals; however, these are difficult to access and often under-utilized in addressing large-scale questions. We present the Coral Trait Database initiative that aims to bring together physiological, morphological, ecological, phylogenetic and biogeographic trait information into a single repository. The database houses species- and individual-level data from published field and experimental studies alongside contextual data that provide important framing for analyses. In this data descriptor, we release data for 56 traits for 1547 species, and present a collaborative platform on which other trait data are being actively federated. Our overall goal is for the Coral Trait Database to become an open-source, community-led data clearinghouse that accelerates coral reef research.
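
    Working with a species-level trait release of this kind typically reduces to a few pandas operations; the sketch below assumes a locally downloaded CSV with hypothetical column names, since the database's actual export format may differ.

        import pandas as pd

        df = pd.read_csv("coral_traits.csv")            # assumed local download
        growth = df[df["trait_name"] == "growth rate"]  # hypothetical column/value
        print(growth.groupby("species_name")["value"].mean().head())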

  15. The Coral Trait Database, a curated database of trait information for coral species from the global oceans.

    Science.gov (United States)

    Madin, Joshua S; Anderson, Kristen D; Andreasen, Magnus Heide; Bridge, Tom C L; Cairns, Stephen D; Connolly, Sean R; Darling, Emily S; Diaz, Marcela; Falster, Daniel S; Franklin, Erik C; Gates, Ruth D; Harmer, Aaron; Hoogenboom, Mia O; Huang, Danwei; Keith, Sally A; Kosnik, Matthew A; Kuo, Chao-Yang; Lough, Janice M; Lovelock, Catherine E; Luiz, Osmar; Martinelli, Julieta; Mizerek, Toni; Pandolfi, John M; Pochon, Xavier; Pratchett, Morgan S; Putnam, Hollie M; Roberts, T Edward; Stat, Michael; Wallace, Carden C; Widman, Elizabeth; Baird, Andrew H

    2016-03-29

    Trait-based approaches advance ecological and evolutionary research because traits provide a strong link to an organism's function and fitness. Trait-based research might lead to a deeper understanding of the functions of, and services provided by, ecosystems, thereby improving management, which is vital in the current era of rapid environmental change. Coral reef scientists have long collected trait data for corals; however, these are difficult to access and often under-utilized in addressing large-scale questions. We present the Coral Trait Database initiative that aims to bring together physiological, morphological, ecological, phylogenetic and biogeographic trait information into a single repository. The database houses species- and individual-level data from published field and experimental studies alongside contextual data that provide important framing for analyses. In this data descriptor, we release data for 56 traits for 1547 species, and present a collaborative platform on which other trait data are being actively federated. Our overall goal is for the Coral Trait Database to become an open-source, community-led data clearinghouse that accelerates coral reef research.

  16. Database on wind characteristics. Users manual

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Hansen, K.S.

    2001-01-01

    quality control procedures), part two accounts in detail for the available data in the established database bank and part three is the Users Manual describing the various ways to access and analyse the data. The present report constitutes part three of the Annex XVII reporting and contains a thorough...

  17. Modern oncologic and operative outcomes for oesophageal cancer treated with curative intent.

    LENUS (Irish Health Repository)

    Reynolds, J V

    2011-09-01

    The curative approach to oesophageal cancer carries significant risks and a cure is achieved in approximately 20 per cent. There has been a recent trend internationally to observe improved operative and oncological outcomes. This report audits modern outcomes from a high volume centre with a prospective database for the period 2004-08. 603 patients were referred and 310 (52%) were treated with curative intent. Adenocarcinoma represented 68% of the cohort, squamous cell cancer 30%. Of the 310 cases, 227 (73%) underwent surgery, 105 (46%) underwent surgery alone, and 122 (54%) had chemotherapy or combination chemotherapy and radiation therapy. The postoperative mortality rate was 1.7%. The median and 5-year survival of the 310 patients based on intention to treat were 36 months and 36%, respectively, and of the 181 patients undergoing R0 resection, 52 months and 42%, respectively. An in-hospital postoperative mortality rate of less than 2 per cent and a 5-year survival of between 35 and 42% are consistent with benchmarks from international series.

  18. International Reactor Physics Handbook Database and Analysis Tool (IDAT) - IDAT user manual

    International Nuclear Information System (INIS)

    2013-01-01

    The IRPhEP Database and Analysis Tool (IDAT) was first released in 2013 and is included on the DVD. This database and corresponding user interface allows easy access to handbook information. Selected information from each configuration was entered into IDAT, such as the measurements performed, benchmark values, calculated values and materials specifications of the benchmark. In many cases this is supplemented with calculated data such as neutron balance data, spectra data, k-eff nuclear data sensitivities, and spatial reaction rate plots. IDAT accomplishes two main objectives: 1. Allow users to search the handbook for experimental configurations that satisfy their input criteria. 2. Allow users to trend results and identify suitable benchmark experiments for their application. IDAT provides the user with access to several categories of calculated data, including: - 1-group neutron balance data for each configuration with individual isotope contributions in the reactor system. - Flux and other reaction rate spectra in a 299-group energy scheme. Plotting capabilities were implemented into IDAT allowing the user to compare the spectra of selected configurations in the original fine energy structure or on any user-defined broader energy structure. - Sensitivity coefficients (percent changes of k-effective due to an elementary change of basic nuclear data) for the major nuclides and nuclear processes in a 238-group energy structure. IDAT is actively being developed. Those approved to access the online version of the handbook will also have access to an online version of IDAT. Since May 2013 marks the first release, IDAT may contain data entry errors and omissions. The handbook remains the primary source of reactor physics benchmark data. A copy of the IDAT user's manual is attached to this document. A copy of the IRPhE Handbook can be obtained on request at http://www.oecd-nea.org/science/wprs/irphe/irphe-handbook/form.html
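
    The sensitivity coefficients mentioned above have a standard first-order use: estimating the relative change in k-effective produced by small relative changes in the underlying nuclear data. The short Python sketch below shows that arithmetic; the nuclide names and values are invented for illustration and are not data from the handbook.

      # First-order estimate of a k-effective change from sensitivity
      # coefficients: S = (relative change in k-eff) / (relative change
      # in a nuclear datum). All names and numbers are illustrative.
      def delta_k_over_k(sensitivities, relative_data_changes):
          return sum(s * relative_data_changes[name]
                     for name, s in sensitivities.items())

      S = {"U235_fission": 0.35, "U238_capture": -0.12}
      d = {"U235_fission": 0.01, "U238_capture": 0.02}  # +1% and +2% changes
      print(delta_k_over_k(S, d))  # 0.0011, i.e. about +0.11% in k-eff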

  19. Database Description - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods. DOI: 10.18908/lsdba.nbdc01194-01-000. Markers and QTLs are curated manually from the published literature; the marker information includes marker sequences and genotyping methods...

  20. ECOTOX Knowledgebase: New tools for data visualization and database interoperability (poster)

    Science.gov (United States)

    The ECOTOXicology knowledgebase (ECOTOX) is a comprehensive, curated database that summarizes toxicology data from single chemical exposure studies to terrestrial and aquatic organisms. The ECOTOX Knowledgebase provides risk assessors and researchers consistent information on tox...

  1. The 2002 RPA Plot Summary database users manual

    Science.gov (United States)

    Patrick D. Miles; John S. Vissage; W. Brad Smith

    2004-01-01

    Describes the structure of the RPA 2002 Plot Summary database and provides information on generating estimates of forest statistics from these data. The RPA 2002 Plot Summary database provides a consistent framework for storing forest inventory data across all ownerships throughout the entire United States. The data represent the best available data as of October 2001....

  2. DDEC: Dragon database of genes implicated in esophageal cancer

    KAUST Repository

    Essack, Magbubah

    2009-07-06

    Background: Esophageal cancer ranks eighth in order of cancer occurrence. Its lethality primarily stems from the inability to detect the disease during the early organ-confined stage and the lack of effective therapies for advanced-stage disease. Moreover, the understanding of molecular processes involved in esophageal cancer is not complete, hampering the development of efficient diagnostics and therapy. Efforts made by the scientific community to improve the survival rate of esophageal cancer have resulted in a wealth of scattered information that is difficult to find and not easily amenable to data-mining. To reduce this gap and to complement available cancer related bioinformatic resources, we have developed a comprehensive database (Dragon Database of Genes Implicated in Esophageal Cancer) with esophageal cancer related information, as an integrated knowledge database aimed at representing a gateway to esophageal cancer related data. Description: The database contains 529 manually curated genes differentially expressed in esophageal cancer (EC). We extracted and analyzed the promoter regions of these genes and complemented gene-related information with transcription factors that potentially control them. We further precompiled text-mined and data-mined reports about each of these genes to allow for easy exploration of information about associations of EC-implicated genes with other human genes and proteins, metabolites and enzymes, toxins, chemicals with pharmacological effects, disease concepts and human anatomy. The resulting database, DDEC, has a useful feature to display potential associations that are rarely reported and thus difficult to identify. Moreover, DDEC enables inspection of potentially new 'association hypotheses' generated based on the precompiled reports. Conclusion: We hope that this resource will serve as a useful complement to the existing public resources and as a good starting point for researchers and physicians interested in EC genetics. DDEC is

  3. A comprehensive curated resource for follicle stimulating hormone signaling

    Directory of Open Access Journals (Sweden)

    Sharma Jyoti

    2011-10-01

    Full Text Available Abstract Background Follicle stimulating hormone (FSH) is an important hormone responsible for growth, maturation and function of the human reproductive system. FSH regulates the synthesis of steroid hormones such as estrogen and progesterone, proliferation and maturation of follicles in the ovary and spermatogenesis in the testes. FSH is a glycoprotein heterodimer that binds and acts through the FSH receptor, a G-protein coupled receptor. Although online pathway repositories provide information about G-protein coupled receptor mediated signal transduction, the signaling events initiated specifically by FSH are not cataloged in any public database in a detailed fashion. Findings We performed comprehensive curation of the published literature to identify the components of the FSH signaling pathway and the molecular interactions that occur upon FSH receptor activation. Our effort yielded 64 reactions comprising 35 enzyme-substrate reactions, 11 molecular association events, 11 activation events and 7 protein translocation events that occur in response to FSH receptor activation. We also cataloged 265 genes, which were differentially expressed upon FSH stimulation in normal human reproductive tissues. Conclusions We anticipate that the information provided in this resource will provide better insights into the physiological role of FSH in reproductive biology and its signaling mediators, and aid in further research in this area. The curated FSH pathway data is freely available through NetPath (http://www.netpath.org), a pathway resource developed previously by our group.

  4. National Radiobiology Archives Distributed Access user's manual

    International Nuclear Information System (INIS)

    Watson, C.; Smith, S.; Prather, J.

    1991-11-01

    This User's Manual describes installation and use of the National Radiobiology Archives (NRA) Distributed Access package. The package consists of a distributed subset of information representative of the NRA databases and database access software which provides an introduction to the scope and style of the NRA Information Systems.

  5. Development of an integrated genome informatics, data management and workflow infrastructure: A toolbox for the study of complex disease genetics

    Directory of Open Access Journals (Sweden)

    Burren Oliver S

    2004-01-01

    Full Text Available Abstract The genetic dissection of complex disease remains a significant challenge. Sample-tracking and the recording, processing and storage of high-throughput laboratory data with public domain data require integration of databases, genome informatics and genetic analyses in an easily updated and scalable format. To find genes involved in multifactorial diseases such as type 1 diabetes (T1D), chromosome regions are defined based on functional candidate gene content, linkage information from humans and animal model mapping information. For each region, genomic information is extracted from Ensembl, converted and loaded into ACeDB for manual gene annotation. Homology information is examined using ACeDB tools and the gene structure verified. Manually curated genes are extracted from ACeDB and read into the feature database, which holds relevant local genomic feature data and an audit trail of laboratory investigations. Public domain information, manually curated genes, polymorphisms, primers, linkage and association analyses, with links to our genotyping database, are shown in Gbrowse. This system scales to include genetic, statistical, quality control (QC) and biological data such as expression analyses of RNA or protein, all linked from a genomics integrative display. Our system is applicable to any genetic study of complex disease, of either large or small scale.

  6. Curating Gothic Nightmares

    Directory of Open Access Journals (Sweden)

    Heather Tilley

    2007-10-01

    Full Text Available This review takes the occasion of a workshop given by Martin Myrone, curator of Gothic Nightmares: Fuseli, Blake, and the Romantic Imagination (Tate Britain, 2006), as a starting point to reflect on the practice of curating and its relation to questions of the verbal and the visual in contemporary art historical practice. The exhibition prompted an engagement with questions of the genre of Gothic, through a dramatic display of the differences between ‘the Gothic’ in literature and ‘the Gothic’ in the visual arts within eighteenth- and early nineteenth-century culture. I also address the various ways in which ‘the Gothic’ was interpreted and reinscribed by visitors, especially those who dressed up for the exhibition. Finally, I consider some of the show’s ‘marginalia’ (specifically the catalogue), exploring the ways in which these extra events and texts shaped, and continue to shape, the cultural effect of the exhibition.

  7. Data Curation in the World Data System: Proposed Framework

    Directory of Open Access Journals (Sweden)

    P Laughton

    2013-09-01

    Full Text Available The value of data in society is increasing rapidly. Organisations that work with data should have standard practices in place to ensure successful curation of data. The World Data System (WDS) consists of a number of data centres responsible for curating research data sets for the scientific community. The WDS has no formal data curation framework or model in place to act as a guideline for member data centres. The objective of this research was to develop a framework for the curation of data in the WDS. A multiple-case study was conducted; interviews were used to gather qualitative data, and analysis of these data led to the development of this framework. The proposed framework is largely based on the Open Archival Information System (OAIS) functional model and caters for the curation of both analogue and digital data.

  8. EXTRACT

    DEFF Research Database (Denmark)

    Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra

    2016-01-01

    The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, sample manual annotation is a highly labor intensive process and requires familiarity with the terminologies used. We have therefore developed an interactive annotation tool, EXTRACT, which combines published methods for named entity recognition of environment, organism, tissue and disease terms. The evaluators in the BioCreative V Interactive Annotation Task found the system to be intuitive, useful, well documented and sufficiently accurate to be helpful in spotting relevant text passages and extracting organism and environment terms. Comparison of fully manual and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15-25% and helps curators to detect terms that would otherwise have been missed. Database URL: https://extract.hcmr.gr/

  9. Finding small molecules for the ‘next Ebola’ [v1; ref status: indexed, http://f1000r.es/542]

    Directory of Open Access Journals (Sweden)

    Sean Ekins

    2015-02-01

    Full Text Available The current Ebola virus epidemic may provide some suggestions of how we can better prepare for the next pathogen outbreak. We propose several cost-effective steps that could be taken that would impact the discovery and use of small molecule therapeutics, including: 1. text mining the literature, 2. open declaration by patent assignees and/or inventors of their relevant filings, 3. commoditizing reagents and assays, 4. using manual curation to enhance database links, 5. engaging database and curation teams, 6. considering open science approaches, 7. adapting the “box” model for shareable reference compounds, and 8. involving the physician’s perspective.

  10. KID - an algorithm for fast and efficient text mining used to automatically generate a database containing kinetic information of enzymes

    Directory of Open Access Journals (Sweden)

    Schomburg Dietmar

    2010-07-01

    Full Text Available Abstract Background The amount of available biological information is rapidly increasing and the focus of biological research has moved from single components to networks and even larger projects aiming at the analysis, modelling and simulation of biological networks as well as large scale comparison of cellular properties. It is therefore essential that biological knowledge is easily accessible. However, most information is contained in the written literature in an unstructured way, so that methods for the systematic extraction of knowledge directly from the primary literature have to be deployed. Description Here we present a text mining algorithm for the extraction of kinetic information such as KM, Ki, kcat etc. as well as associated information such as enzyme names, EC numbers, ligands, organisms, localisations, pH and temperatures. Using this rule- and dictionary-based approach, it was possible to extract 514,394 kinetic parameters of 13 categories (KM, Ki, kcat, kcat/KM, Vmax, IC50, S0.5, Kd, Ka, t1/2, pI, nH, specific activity, Vmax/KM) from about 17 million PubMed abstracts and combine them with other data in the abstract. A manual verification of approx. 1,000 randomly chosen results yielded a recall between 51% and 84% and a precision ranging from 55% to 96%, depending on the category searched. The results were stored in a database and are available as "KID the KInetic Database" via the internet. Conclusions The presented algorithm delivers a considerable amount of information and therefore may help to accelerate the research and the automated analysis required for today's systems biology approaches. The database obtained by analysing PubMed abstracts may be a valuable help in the field of chemical and biological kinetics. It is completely based upon text mining and therefore complements manually curated databases. The database is available at http://kid.tu-bs.de. The source code of the algorithm is provided under the GNU General Public License.
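
    The rule- and dictionary-based extraction the abstract describes can be pictured with a minimal sketch: a dictionary of parameter names plus patterns for values and units, applied to free text. The parameter list, unit pattern and context words below are hypothetical simplifications for illustration, not KID's actual rules.

      import re

      # Minimal sketch of dictionary- and rule-based extraction of kinetic
      # parameters from free text (illustrative patterns, not KID's rules).
      PARAMETERS = ["KM", "Km", "Ki", "kcat", "Vmax", "IC50", "Kd", "Ka"]
      VALUE = r"\d+(?:\.\d+)?"
      UNIT = r"(?:[munpf]?M|s-1|min-1|U/mg)"

      pattern = re.compile(
          r"(?P<param>%s)\s*(?:=|of|was|is)?\s*(?P<value>%s)\s*(?P<unit>%s)"
          % ("|".join(map(re.escape, PARAMETERS)), VALUE, UNIT)
      )

      def extract_kinetics(text):
          """Return (parameter, value, unit) triples found in the text."""
          return [(m.group("param"), float(m.group("value")), m.group("unit"))
                  for m in pattern.finditer(text)]

      print(extract_kinetics("The enzyme showed a Km of 0.35 mM and kcat = 12 s-1."))
      # [('Km', 0.35, 'mM'), ('kcat', 12.0, 's-1')]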

  11. Gramene database: Navigating plant comparative genomics resources

    Directory of Open Access Journals (Sweden)

    Parul Gupta

    2016-11-01

    Full Text Available Gramene (http://www.gramene.org) is an online, open source, curated resource for plant comparative genomics and pathway analysis designed to support researchers working in plant genomics, breeding, evolutionary biology, systems biology, and metabolic engineering. It exploits phylogenetic relationships to enrich the annotation of genomic data and provides tools to perform powerful comparative analyses across a wide spectrum of plant species. It consists of an integrated portal for querying, visualizing and analyzing data for 44 plant reference genomes, genetic variation data sets for 12 species, expression data for 16 species, curated rice pathways and orthology-based pathway projections for 66 plant species including various crops. Here we briefly describe the functions and uses of the Gramene database.

  12. DDMGD: the database of text-mined associations between genes methylated in diseases from different species

    KAUST Repository

    Raies, A. B.

    2014-11-14

    Gathering information about associations between methylated genes and diseases is important for disease diagnosis and treatment decisions. Recent advancements in epigenetics research allow for large-scale discoveries of associations of genes methylated in diseases in different species. Searching manually for such information is not easy, as it is scattered across a large number of electronic publications and repositories. Therefore, we developed the DDMGD database (http://www.cbrc.kaust.edu.sa/ddmgd/) to provide a comprehensive repository of information related to genes methylated in diseases that can be found through text mining. DDMGD's scope is not limited to a particular group of genes, diseases or species. Using the text mining system DEMGD we developed earlier and additional post-processing, we extracted associations of genes methylated in different diseases from PubMed Central articles and PubMed abstracts. The accuracy of extracted associations is 82% as estimated on 2500 hand-curated entries. DDMGD provides a user-friendly interface facilitating retrieval of these associations ranked according to confidence scores. Submission of new associations to DDMGD is provided. A comparison analysis of DDMGD with several other databases focused on genes methylated in diseases shows that DDMGD is comprehensive and includes most of the recent information on genes methylated in diseases.

  13. Nuclear Energy Infrastructure Database Description and User’s Manual

    Energy Technology Data Exchange (ETDEWEB)

    Heidrich, Brenden [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-11-01

    In 2014, the Deputy Assistant Secretary for Science and Technology Innovation initiated the Nuclear Energy (NE)–Infrastructure Management Project by tasking the Nuclear Science User Facilities, formerly the Advanced Test Reactor National Scientific User Facility, to create a searchable and interactive database of all pertinent NE-supported and -related infrastructure. This database, known as the Nuclear Energy Infrastructure Database (NEID), is used for analyses to establish needs, redundancies, efficiencies, distributions, etc., to best understand the utility of NE’s infrastructure and inform the content of infrastructure calls. The Nuclear Science User Facilities developed the database by utilizing data and policy direction from a variety of reports from the U.S. Department of Energy, the National Research Council, the International Atomic Energy Agency, and various other federal and civilian resources. The NEID currently contains data on 802 research and development instruments housed in 377 facilities at 84 institutions in the United States and abroad. The effort to maintain and expand the database is ongoing. Detailed information on many facilities must be gathered from associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements. This document provides a short tutorial on the navigation of the NEID web portal at NSUF-Infrastructure.INL.gov.

  14. Orthology prediction methods: a quality assessment using curated protein families.

    Science.gov (United States)

    Trachana, Kalliopi; Larsson, Tomas A; Powell, Sean; Chen, Wei-Hua; Doerks, Tobias; Muller, Jean; Bork, Peer

    2011-10-01

    The increasing number of sequenced genomes has prompted the development of several automated orthology prediction methods. Tests to evaluate the accuracy of predictions and to explore biases caused by biological and technical factors are therefore required. We used 70 manually curated families to analyze the performance of five public methods in Metazoa. We analyzed the strengths and weaknesses of the methods and quantified the impact of biological and technical challenges. From the latter part of the analysis, genome annotation emerged as the largest single influencer, affecting up to 30% of the performance. Generally, most methods did well in assigning orthologous groups, but they failed to assign the exact number of genes for half of the groups. The publicly available benchmark set (http://eggnog.embl.de/orthobench/) should facilitate the improvement of current orthology assignment protocols, which is of utmost importance for many fields of biology and should be tackled by a broad scientific community. Copyright © 2011 WILEY Periodicals, Inc.

  15. Curating NASA's Future Extraterrestrial Sample Collections: How Do We Achieve Maximum Proficiency?

    Science.gov (United States)

    McCubbin, Francis; Evans, Cynthia; Zeigler, Ryan; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael

    2016-01-01

    The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "... documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office are working towards a state of maximum proficiency.

  16. The RadAssessor manual

    Energy Technology Data Exchange (ETDEWEB)

    Seitz, Sharon L.

    2007-02-01

    This manual describes the functions and capabilities that are available from the RadAssessor database and demonstrates how to retrieve and view its information. You’ll learn how to start the database application, how to log in, how to use the common commands, and how to use the online help if you have a question or need extra guidance. RadAssessor can be viewed from any standard web browser; therefore, you will not need to install any special software before using RadAssessor.

  17. Opening Data in the Long Tail for Community Discovery, Curation and Action Using Active and Social Curation

    Science.gov (United States)

    Hedstrom, M. L.; Kumar, P.; Myers, J.; Plale, B. A.

    2012-12-01

    In data science, the most common sequence of steps for data curation is to 1) curate data, 2) enable data discovery, and 3) provide for data reuse. The Sustainable Environments - Actionable Data (SEAD) project, funded through NSF's DataNet program, is creating an environment for sustainability scientists to discover data first, reuse data next, and curate data through an on-going process that we call Active and Social Curation. For active curation we are developing tools and services that support data discovery, data management, and data enhancement for the community while the data is still being used actively for research. We are creating an Active Content Repository, using Dropbox, semantic web technologies, and a Flickr-like interface for researchers to "drop" data into a repository where it will be replicated and minimally discoverable. For social curation, we are deploying a social networking tool, VIVO, which will allow researchers to discover data-publications-people (e.g. expertise) through a route that can start at any of those entry points. The other dimension of social curation is developing mechanisms to open data for community input, for example, using ranking and commenting mechanisms for data sets and a community-sourcing capability to add tags, clean up and validate data sets. SEAD's strategies and services are aimed at the sustainability science community, which faces numerous challenges including discovery of useful data, cleaning noisy observational data, synthesizing data of different types, defining appropriate models, managing and preserving their research data, and conveying holistic results to colleagues, students, decision makers, and the public. Sustainability researchers make significant use of centrally managed data from satellites and national sensor networks, national scientific and statistical agencies, and data archives. At the same time, locally collected data and custom derived data products that combine observations and

  18. DEXTER: Disease-Expression Relation Extraction from Text.

    Science.gov (United States)

    Gupta, Samir; Dingerdissen, Hayley; Ross, Karen E; Hu, Yu; Wu, Cathy H; Mazumder, Raja; Vijay-Shanker, K

    2018-01-01

    Gene expression levels affect biological processes and play a key role in many diseases. Characterizing expression profiles is useful for clinical research, and diagnostics and prognostics of diseases. There are currently several high-quality databases that capture gene expression information, obtained mostly from large-scale studies, such as microarray and next-generation sequencing technologies, in the context of disease. The scientific literature is another rich source of information on gene expression-disease relationships that not only have been captured from large-scale studies but have also been observed in thousands of small-scale studies. Expression information obtained from literature through manual curation can extend expression databases. While many of the existing databases include information from literature, they are limited by the time-consuming nature of manual curation and have difficulty keeping up with the explosion of publications in the biomedical field. In this work, we describe an automated text-mining tool, Disease-Expression Relation Extraction from Text (DEXTER), to extract information from literature on gene and microRNA expression in the context of disease. One of the motivations in developing DEXTER was to extend the BioXpress database, a cancer-focused gene expression database that includes data derived from large-scale experiments and manual curation of publications. The literature-based portion of BioXpress lags behind significantly compared to expression information obtained from large-scale studies and can benefit from our text-mined results. We have conducted two different evaluations to measure the accuracy of our text-mining tool and achieved average F-scores of 88.51% and 81.81% for the two evaluations, respectively. Also, to demonstrate the ability to extract rich expression information in different disease-related scenarios, we used DEXTER to extract information on differential expression information for 2024 genes in lung
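
    The kind of relation extraction the abstract describes can be illustrated with a deliberately simplified sketch: flag a sentence as a candidate gene-expression/disease relation when it contains a known gene, a known disease, and an expression trigger word. The dictionaries and trigger patterns below are invented for illustration and are not DEXTER's actual pipeline.

      import re

      # Illustrative co-occurrence filter (not DEXTER's actual method):
      # a sentence yields a candidate (gene, trigger, disease) triple only
      # if all three kinds of mention appear in it.
      GENES = {"EGFR", "TP53", "KRAS"}                  # hypothetical dictionary
      DISEASES = {"lung cancer", "breast cancer"}       # hypothetical dictionary
      TRIGGERS = re.compile(
          r"\b(overexpress\w*|underexpress\w*|up-?regulat\w*|down-?regulat\w*)\b",
          re.IGNORECASE)

      def candidate_relations(sentence):
          genes = [g for g in GENES if re.search(r"\b%s\b" % g, sentence)]
          diseases = [d for d in DISEASES if d in sentence.lower()]
          trigger = TRIGGERS.search(sentence)
          if genes and diseases and trigger:
              return [(g, trigger.group(0), d) for g in genes for d in diseases]
          return []

      print(candidate_relations("EGFR was overexpressed in lung cancer tissues."))
      # [('EGFR', 'overexpressed', 'lung cancer')]

    A real system adds syntactic analysis and negation handling on top of such dictionary matching, which is where most of the measured F-score comes from.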

  19. FCDD: A Database for Fruit Crops Diseases.

    Science.gov (United States)

    Chauhan, Rupal; Jasrai, Yogesh; Pandya, Himanshu; Chaudhari, Suman; Samota, Chand Mal

    2014-01-01

    The Fruit Crops Diseases Database (FCDD) brings together a number of biotechnology and bioinformatics tools. The FCDD is a unique bioinformatics resource that compiles detailed information on 162 fruit crop diseases, including disease type, causal organism, images, symptoms and their control. The FCDD contains 171 phytochemicals from 25 fruits, their 2D images and their 20 possible sequences. This information has been manually extracted and manually verified from numerous sources, including other electronic databases, textbooks and scientific journals. FCDD is fully searchable and supports extensive text search. The main focus of the FCDD is on providing possible information on fruit crop diseases, which will help in the discovery of potential drugs from one of the common bioresources, fruits. The database was developed using MySQL. The database interface is developed in PHP, HTML and Java. FCDD is freely available at http://www.fruitcropsdd.com/

  20. Meeting Curation Challenges in a Neuroimaging Group

    Directory of Open Access Journals (Sweden)

    Angus Whyte

    2008-08-01

    Full Text Available The SCARP project is a series of short studies with two aims; firstly to discover more about disciplinary approaches and attitudes to digital curation through ‘immersion’ in selected cases; secondly to apply known good practice, and where possible, identify new lessons from practice in the selected discipline areas. The study summarised here is of the Neuroimaging Group in the University of Edinburgh’s Division of Psychiatry, which plays a leading role in eScience collaborations to improve the infrastructure for neuroimaging data integration and reuse. The Group also aims to address growing data storage and curation needs, given the capabilities afforded by new infrastructure. The study briefly reviews the policy context and current challenges to data integration and sharing in the neuroimaging field. It then describes how curation and preservation risks and opportunities for change were identified throughout the curation lifecycle; and their context appreciated through field study in the research site. The results are consistent with studies of neuroimaging eInfrastructure that emphasise the role of local data sharing and reuse practices. These sustain mutual awareness of datasets and experimental protocols through sharing peer to peer, and among senior researchers and students, enabling continuity in research and flexibility in project work. This “human infrastructure” is taken into account in considering next steps for curation and preservation of the Group’s datasets and a phased approach to supporting data documentation.

  1. Database on wind characteristics. Contents of database bank

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Hansen, K.S.

    2001-01-01

    for the available data in the established database bank and part three is the Users Manual describing the various ways to access and analyse the data. The present report constitutes the second part of the Annex XVII reporting. Basically, the database bank contains three categories of data, i.e. i) high sampled wind field time series; ii) high sampled wind turbine structural response time series; and iii) wind resource data. The main emphasis, however, is on category i). The available data, within each of the three categories, are described in detail. The description embraces site characteristics, terrain type...

  2. Research Data Curation Pilots: Lessons Learned

    Directory of Open Access Journals (Sweden)

    David Minor

    2014-07-01

    Full Text Available In the spring of 2011, the UC San Diego Research Cyberinfrastructure (RCI) Implementation Team invited researchers and research teams to participate in a research curation and data management pilot program. This invitation took the form of a campus-wide solicitation. More than two dozen applications were received and, after due deliberation, the RCI Oversight Committee selected five curation-intensive projects. These projects were chosen based on a number of criteria, including how they represented campus research, varieties of topics, researcher engagement, and the various services required. The pilot process began in September 2011 and will be completed in early 2014. Extensive lessons learned from the pilots are being compiled and are being used in the on-going design and implementation of the permanent Research Data Curation Program in the UC San Diego Library. In this paper, we present specific implementation details of these various services, as well as lessons learned. The program focused on many aspects of contemporary scholarship, including data creation and storage, description and metadata creation, citation and publication, and long term preservation and access. Based on the lessons learned in our processes, the Research Data Curation Program will provide a suite of services from which campus users can pick and choose, as necessary. The program will provide support for the data management requirements from national funding agencies.

  3. High School and Beyond: Twins and Siblings' File Users' Manual, User's Manual for Teacher Comment File, Friends File Users' Manual.

    Science.gov (United States)

    National Center for Education Statistics (ED), Washington, DC.

    These three users' manuals are for specific files of the High School and Beyond Study, a national longitudinal study of high school sophomores and seniors in 1980. The three files are computerized databases that are available on magnetic tape. As one component of base year data collection, information identifying twins, triplets, and some non-twin…

  4. An Emergent Micro-Services Approach to Digital Curation Infrastructure

    OpenAIRE

    Abrams, Stephen; Kunze, John; Loy, David

    2010-01-01

    In order better to meet the needs of its diverse University of California (UC) constituencies, the California Digital Library UC Curation Center is re-envisioning its approach to digital curation infrastructure by devolving function into a set of granular, independent, but interoperable micro-services. Since each of these services is small and self-contained, they are more easily developed, deployed, maintained, and enhanced; at the same time, complex curation function can emerge from the str...

  5. Wastes power generation introduction manual. Material edition; Haikibutsu hatsuden donyu manual. Shiryohen

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-07-01

    This paper collects and organizes the materials used in preparing the manual. The materials are classified into a power generation system database related to discussion of economic performance, case studies, technical materials, other reference materials and a glossary. The database shows power generation efficiency, auxiliary power ratio, construction cost, utility cost and number of operators. The case studies present examples of economy calculations on the five recommended power generation systems at a waste treatment capacity of 180 tons a day. The technical materials organize the technological discussions on efficiency improvement, environmental measures (suppression of the discharge of dioxins, measures for their removal, and the effects thereof), refuse derived fuel (RDF) and power plant operating techniques. The other reference materials collect laws, notifications and guidelines related to the Welfare Ministry; laws, notifications and criteria related to the Ministry of International Trade and Industry; and materials related to LCA, forms of power generation business entities, general waste disposal business, and electric business bonds. The glossary explains terms required for the operation and understanding of the manual. (NEDO)

  6. LitVar: a semantic search engine for linking genomic variant data in PubMed and PMC.

    Science.gov (United States)

    Allot, Alexis; Peng, Yifan; Wei, Chih-Hsuan; Lee, Kyubum; Phan, Lon; Lu, Zhiyong

    2018-05-14

    The identification and interpretation of genomic variants play a key role in the diagnosis of genetic diseases and related research. These tasks increasingly rely on accessing relevant manually curated information from domain databases (e.g. SwissProt or ClinVar). However, due to the sheer volume of medical literature and the high cost of expert curation, curated variant information in existing databases is often incomplete and out-of-date. In addition, the same genetic variant can be mentioned in publications with various names (e.g. 'A146T' versus 'c.436G>A' versus 'rs121913527'). A search in PubMed using only one name usually cannot retrieve all relevant articles for the variant of interest. Hence, to help scientists, healthcare professionals, and database curators find the most up-to-date published variant research, we have developed LitVar for the search and retrieval of standardized variant information. In addition, LitVar uses advanced text mining techniques to compute and extract relationships between variants and other associated entities such as diseases and chemicals/drugs. LitVar is publicly available at https://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/LitVar.
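
    The naming problem the abstract describes is easy to demonstrate: the same variant surfaces as a protein change ('A146T'), a coding change ('c.436G>A') or a dbSNP identifier ('rs121913527'). The sketch below only recognizes these three surface forms with simplified, illustrative patterns; actually unifying them into one identifier, as LitVar does, requires mapping against sequence resources such as dbSNP, which is beyond a few regular expressions.

      import re

      # Illustrative recognizers for three variant surface forms.
      AA = "ACDEFGHIKLMNPQRSTVWY"
      PATTERNS = {
          "protein": re.compile(r"\b(?:p\.)?[%s]\d+[%s]\b" % (AA, AA)),
          "coding":  re.compile(r"\bc\.\d+[ACGT]>[ACGT]\b"),
          "rsid":    re.compile(r"\brs\d+\b"),
      }

      def find_variant_mentions(text):
          """Return (form, matched_text) pairs for variant mentions in text."""
          hits = []
          for form, pat in PATTERNS.items():
              hits += [(form, m.group(0)) for m in pat.finditer(text)]
          return hits

      print(find_variant_mentions("KRAS A146T (c.436G>A, rs121913527) is recurrent."))
      # [('protein', 'A146T'), ('coding', 'c.436G>A'), ('rsid', 'rs121913527')]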

  7. SWEETLEAD: an in silico database of approved drugs, regulated chemicals, and herbal isolates for computer-aided drug discovery.

    Directory of Open Access Journals (Sweden)

    Paul A Novick

    Full Text Available In the face of drastically rising drug discovery costs, strategies promising to reduce development timelines and expenditures are being pursued. Computer-aided virtual screening and repurposing approved drugs are two such strategies that have shown recent success. Herein, we report the creation of a highly-curated in silico database of chemical structures representing approved drugs, chemical isolates from traditional medicinal herbs, and regulated chemicals, termed the SWEETLEAD database. The motivation for SWEETLEAD stems from the observance of conflicting information in publicly available chemical databases and the lack of a highly curated database of chemical structures for the globally approved drugs. A consensus building scheme surveying information from several publicly accessible databases was employed to identify the correct structure for each chemical. Resulting structures are filtered for the active pharmaceutical ingredient, standardized, and differing formulations of the same drug were combined in the final database. The publicly available release of SWEETLEAD (https://simtk.org/home/sweetlead) provides an important tool to enable the successful completion of computer-aided repurposing and drug discovery campaigns.
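
    The consensus scheme the abstract mentions can be pictured with a small sketch: each source database proposes a structure for a chemical (here, a canonical SMILES string), and the structure reported by the majority of sources is retained, with insufficient agreement flagged for manual review. The source names and the agreement threshold are invented for illustration, not SWEETLEAD's actual procedure.

      from collections import Counter

      # Hypothetical majority-vote consensus over source databases.
      def consensus_structure(proposals, min_agreement=2):
          """proposals: mapping of source name -> canonical SMILES string.
          Returns the majority structure, or None if agreement is too low."""
          structure, votes = Counter(proposals.values()).most_common(1)[0]
          return structure if votes >= min_agreement else None

      proposals = {
          "sourceA": "CC(=O)Oc1ccccc1C(=O)O",   # aspirin
          "sourceB": "CC(=O)Oc1ccccc1C(=O)O",
          "sourceC": "CC(O)Oc1ccccc1C(=O)O",    # conflicting entry
      }
      print(consensus_structure(proposals))      # CC(=O)Oc1ccccc1C(=O)O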

  8. dbEM: A database of epigenetic modifiers curated from cancerous and normal genomes

    Science.gov (United States)

    Singh Nanda, Jagpreet; Kumar, Rahul; Raghava, Gajendra P. S.

    2016-01-01

    We have developed a database called dbEM (database of Epigenetic Modifiers) to maintain the genomic information of about 167 epigenetic modifiers/proteins, which are considered potential cancer targets. In dbEM, modifiers are classified on a functional basis and comprise 48 histone methyltransferases, 33 chromatin remodelers and 31 histone demethylases. dbEM maintains genomic information such as mutations, copy number variation and gene expression in thousands of tumor samples, cancer cell lines and healthy samples. This information is obtained from public resources, viz. COSMIC, CCLE and the 1000 Genomes Project. Gene essentiality data retrieved from the COLT database further highlight the importance of various epigenetic proteins for cancer survival. We have also reported the sequence profiles, tertiary structures and post-translational modifications of these epigenetic proteins in cancer. It also contains information on 54 drug molecules against different epigenetic proteins. A wide range of tools have been integrated in dbEM, e.g. Search, BLAST, Alignment and Profile based prediction. In our analysis, we found that the epigenetic proteins DNMT3A, HDAC2, KDM6A, and TET2 are highly mutated in a variety of cancers. We are confident that dbEM will be very useful in cancer research, particularly in the field of epigenetic protein based cancer therapeutics. This database is available for public use at URL: http://crdd.osdd.net/raghava/dbem.

  9. JASPAR 2010: the greatly expanded open-access database of transcription factor binding profiles

    Science.gov (United States)

    Portales-Casamar, Elodie; Thongjuea, Supat; Kwon, Andrew T.; Arenillas, David; Zhao, Xiaobei; Valen, Eivind; Yusuf, Dimas; Lenhard, Boris; Wasserman, Wyeth W.; Sandelin, Albin

    2010-01-01

    JASPAR (http://jaspar.genereg.net) is the leading open-access database of matrix profiles describing the DNA-binding patterns of transcription factors (TFs) and other proteins interacting with DNA in a sequence-specific manner. Its fourth major release is the largest expansion of the core database to date: the database now holds 457 non-redundant, curated profiles. The new entries include the first batch of profiles derived from ChIP-seq and ChIP-chip whole-genome binding experiments, and 177 yeast TF binding profiles. The introduction of a yeast division brings the convenience of JASPAR to an active research community. As binding models are refined by newer data, the JASPAR database now uses versioning of matrices: in this release, 12% of the older models were updated to improved versions. Classification of TF families has been improved by adopting a new DNA-binding domain nomenclature. A curated catalog of mammalian TFs is provided, extending the use of the JASPAR profiles to additional TFs belonging to the same structural family. The changes in the database set the system ready for more rapid acquisition of new high-throughput data sources. Additionally, three new special collections provide matrix profile data produced by recent alternative high-throughput approaches. PMID:19906716
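
    A matrix profile of the kind JASPAR stores is typically used by scanning a DNA sequence and scoring each window against the matrix. The sketch below converts a toy position frequency matrix to log-odds scores against a uniform background and scores every window of a sequence; the matrix values are invented for illustration and are not a real JASPAR profile.

      import math

      # Toy position frequency matrix (rows = bases, columns = positions);
      # values invented for illustration, not a real JASPAR profile.
      PFM = {
          "A": [8, 0, 1, 9],
          "C": [1, 9, 0, 0],
          "G": [0, 1, 9, 0],
          "T": [1, 0, 0, 1],
      }
      N = sum(PFM[b][0] for b in "ACGT")  # observations per column (10)

      def log_odds(base, pos, pseudo=0.5, background=0.25):
          freq = (PFM[base][pos] + pseudo) / (N + 4 * pseudo)
          return math.log2(freq / background)

      def scan(seq):
          """Score every window of the sequence against the matrix."""
          width = len(PFM["A"])
          return [(i, sum(log_odds(seq[i + j], j) for j in range(width)))
                  for i in range(len(seq) - width + 1)]

      for pos, score in scan("TTACGATT"):
          print(pos, round(score, 2))
      # the window 'ACGA' (starting at position 2) scores highest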

  10. WeCurate: Designing for synchronised browsing and social negotiation

    OpenAIRE

    Hazelden, Katina; Yee-King, Matthew; d'Inverno, Mark; Confalonieri, Roberto; De Jonge, Dave; Amgoud, Leila; Osman, Nardine; Prade, Henri; Sierra, Carles

    2012-01-01

    WeCurate is a shared image browser for collaboratively curating a virtual exhibition from a cultural image archive. This paper is concerned with the evaluation and iteration of a prototype UI (User Interface) design to enable this community image browsing. In WeCurate, several remote users work together with autonomic agents to browse the archive and to select, through negotiation and voting, a set of images which are of the greatest interest to the group. The UI allows users to synchronize v...

  11. Improving the Acquisition and Management of Sample Curation Data

    Science.gov (United States)

    Todd, Nancy S.; Evans, Cindy A.; Labasse, Dan

    2011-01-01

    This paper discusses the current sample documentation processes used during and after a mission, examines the challenges and special considerations needed for designing effective sample curation data systems, and looks at the results of a simulated sample return mission and the lessons learned from this simulation. In addition, it introduces a new data architecture for an integrated sample curation data system being implemented at the NASA Astromaterials Acquisition and Curation department and discusses how it improves on existing data management systems.

  12. Practical database programming with Java

    CERN Document Server

    Bai, Ying

    2011-01-01

    "This important resource offers a detailed description about the practical considerations and applications in database programming using Java NetBeans 6.8 with authentic examples and detailed explanations. This book provides readers with a clear picture as to how to handle the database programming issues in the Java NetBeans environment. The book is ideal for classroom and professional training material. It includes a wealth of supplemental material that is available for download including Powerpoint slides, solution manuals, and sample databases"--

  13. Users manual on database of the Piping Reliability Proving Tests at the Japan Atomic Energy Research Institute

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The Japan Atomic Energy Research Institute (JAERI) conducted the Piping Reliability Proving Tests from 1975 to 1992 under contracts between JAERI and the Science and Technology Agency of Japan, under the auspices of the special account law for electric power development promotion. The purpose of the tests was to prove the structural reliability of the primary cooling piping constituting a part of the pressure boundary in water reactor power plants. The tests with the large experimental facilities ended in 1990; piping reliability analysis by the probabilistic method followed until 1992. This report is the users manual for the databases of the test results obtained with the large experimental facilities. The objectives of the Piping Reliability Proving Tests were to prove that the primary piping of the water reactor (1) is reliable throughout the service period, (2) has no possibility of rupture, and (3) brings no detrimental influence on the surrounding instrumentation or equipment near the break location. The research activities using the large scale piping test facilities are described. The present report covers the database of the test results, complementing the former report; together, the two reports make clear the full scope of the Piping Reliability Proving Tests. Brief descriptions of the tests are also provided in Japanese or English. (author)

  14. The CATH database

    Directory of Open Access Journals (Sweden)

    Knudsen Michael

    2010-02-01

    Full Text Available Abstract The CATH database provides hierarchical classification of protein domains based on their folding patterns. Domains are obtained from protein structures deposited in the Protein Data Bank and both domain identification and subsequent classification use manual as well as automated procedures. The accompanying website http://www.cathdb.info provides an easy-to-use entry to the classification, allowing for both browsing and downloading of data. Here, we give a brief review of the database, its corresponding website and some related tools.

  15. Manual for cataloging and indexing documents for database acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Schwartz, S.R.; Phillips, S.L.; Perra, J.J.

    1978-07-01

    The descriptive cataloging and subject indexing rules and methodology needed to process bibliographic information for GRID database storage are documented. Data elements which may appear in a bibliographic record are tabulated. Examples of coded data entry forms are included in an appendix. Examples are given of unit records in the database corresponding to one bibliographic reference. (MHR)

  16. Screen Practice in Curating

    DEFF Research Database (Denmark)

    Toft, Tanya Søndergaard

    2014-01-01

    During the past decade and a half, a curatorial orientation towards "screen practice" has expanded the moving image and digital art into the public domain, exploring alternative artistic uses of the screen. The emergence of urban LED screens in the late 1990s provided a new venue that allowed for digital art to expand into public space. It also offered a political point of departure, inviting confrontation with the Spectacle and with the politics and ideology of the screen as a mass communication medium that instrumentalized spectator positions. In this article I propose that screen practice...... to the dispositif of screen practice in curating, resulting in a medium-based curatorial discourse. With reference to the nomadic exhibition project Nordic Outbreak that I co-curated with Nina Colosi in 2013 and 2014, I suggest that the topos of the defined visual display area, frequently still known as "the screen...

  17. EXTRACT: interactive extraction of environment metadata and term suggestion for metagenomic sample annotation.

    Science.gov (United States)

    Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra; Pereira, Emiliano; Schnetzer, Julia; Arvanitidis, Christos; Jensen, Lars Juhl

    2016-01-01

    The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, sample manual annotation is a highly labor intensive process and requires familiarity with the terminologies used. We have therefore developed an interactive annotation tool, EXTRACT, which helps curators identify and extract standard-compliant terms for annotation of metagenomic records and other samples. Behind its web-based user interface, the system combines published methods for named entity recognition of environment, organism, tissue and disease terms. The evaluators in the BioCreative V Interactive Annotation Task found the system to be intuitive, useful, well documented and sufficiently accurate to be helpful in spotting relevant text passages and extracting organism and environment terms. Comparison of fully manual and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15-25% and helps curators to detect terms that would otherwise have been missed. Database URL: https://extract.hcmr.gr/. © The Author(s) 2016. Published by Oxford University Press.

  18. A Comprehensive Curation Shows the Dynamic Evolutionary Patterns of Prokaryotic CRISPRs

    Directory of Open Access Journals (Sweden)

    Guoqin Mai

    2016-01-01

    Full Text Available Motivation. Clustered regularly interspaced short palindromic repeat (CRISPR) is a genetic element with active regulation roles for foreign invasive genes in prokaryotic genomes and has been engineered to work with the CRISPR-associated sequence (Cas) gene Cas9 as one of the modern genome editing technologies. Due to inconsistent definitions, the existing CRISPR detection programs seem to have missed some weak CRISPR signals. Results. This study manually curates all the currently annotated CRISPR elements in the prokaryotic genomes and proposes 95 updates to the annotations. A new definition is proposed to cover all the CRISPRs. A comprehensive comparison of CRISPR numbers at the taxonomic levels of both domain and genus shows high variation for closely related species, even within the same genus. A detailed investigation of how CRISPRs are evolutionarily manipulated in the 8 completely sequenced species of the genus Thermoanaerobacter demonstrates that transposons act as a frequent tool for splitting long CRISPRs into shorter ones along a long evolutionary history.

  19. The Mitochondrial Protein Atlas: A Database of Experimentally Verified Information on the Human Mitochondrial Proteome.

    Science.gov (United States)

    Godin, Noa; Eichler, Jerry

    2017-09-01

    Given its central role in various biological systems, as well as its involvement in numerous pathologies, the mitochondrion is one of the best-studied organelles. However, although the mitochondrial genome has been extensively investigated, protein-level information remains partial, and in many cases, hypothetical. The Mitochondrial Protein Atlas (MPA; URL: lifeserv.bgu.ac.il/wb/jeichler/MPA) is a database that provides a complete, manually curated inventory of only experimentally validated human mitochondrial proteins. The MPA presently contains 911 unique protein entries, each of which is associated with at least one experimentally validated and referenced mitochondrial localization. The MPA also contains experimentally validated and referenced information defining function, structure, involvement in pathologies, interactions with other MPA proteins, as well as the method(s) of analysis used in each instance. Connections to relevant external data sources are offered for each entry, including links to NCBI Gene, PubMed, and Protein Data Bank. The MPA offers a prototype for other information sources that allow for a distinction between what has been confirmed and what remains to be verified experimentally.

  20. Database Changes (Post-Publication). ERIC Processing Manual, Section X.

    Science.gov (United States)

    Brandhorst, Ted, Ed.

    The purpose of this section is to specify the procedure for making changes to the ERIC database after the data involved have been announced in the abstract journals RIE or CIJE. As a matter of general ERIC policy, a document or journal article is not re-announced or re-entered into the database as a new accession for the purpose of accomplishing a…

  1. System administrator's manual (SAM) for the enhanced logistics intratheater support tool (ELIST) database instance segment version 8.1.0.0 for Solaris 7; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.

    2002-01-01

    This document is the System Administrator's Manual (SAM) for the Enhanced Logistics Intratheater Support Tool (ELIST) Database Instance Segment. It covers errors that can arise during the segment's installation and deinstallation, and it outlines appropriate recovery actions. It also tells how to change the password for the SYSTEM account of the database instance after the instance is created, and it discusses the creation of a suitable database instance for ELIST by means other than the installation of the segment. The latter subject is covered in more depth than its introductory discussion in the Installation Procedures (IP) for the Enhanced Logistics Intratheater Support Tool (ELIST) Global Data Segment, Database Instance Segment, Database Fill Segment, Database Segment, Database Utility Segment, Software Segment, and Reference Data Segment (referred to in portions of this document as the ELIST IP). The information in this document is expected to be of use only rarely. Other than errors arising from the failure to follow instructions, difficulties are not expected to be encountered during the installation or deinstallation of the segment. By the same token, the need to create a database instance for ELIST by means other than the installation of the segment is expected to be the exception, rather than the rule. Most administrators will only need to be aware of the help that is provided in this document and will probably not actually need to read and make use of it.

  2. Data Curation Network: How Do We Compare? A Snapshot of Six Academic Library Institutions’ Data Repository and Curation Services

    Directory of Open Access Journals (Sweden)

    Lisa R. Johnston

    2017-02-01

    Objective: Many academic and research institutions are exploring opportunities to better support researchers in sharing their data. As partners in the Data Curation Network project, our six institutions developed a comparison of the current levels of support provided for researchers to meet their data sharing goals through library-based data repository and curation services. Methods: Each institutional lead provided a written summary of their services based on a previously developed structure, followed by group discussion and refinement of descriptions. Service areas assessed include the repository services for data, technologies used, policies, and staffing in place. Conclusions: Through this process we aim to better define the current levels of support offered by our institutions as a first step toward meeting our project's overarching goal to develop a shared staffing model for data curation across multiple institutions.

  3. The Danish Bladder Cancer Database

    DEFF Research Database (Denmark)

    Hansen, Erik; Larsson, Heidi Jeanet; Nørgaard, Mette

    2016-01-01

    AIM OF DATABASE: The aim of the Danish Bladder Cancer Database (DaBlaCa-data) is to monitor the treatment of all patients diagnosed with invasive bladder cancer (BC) in Denmark. STUDY POPULATION: All patients diagnosed with BC in Denmark from 2012 onward were included in the study. RESULTS: ... curative-intended radiation therapy. DESCRIPTIVE DATA: One-year mortality was 28% (95% confidence interval [CI]: 15-21). One-year cancer-specific mortality was 25% (95% CI: 22-27%). One-year mortality after cystectomy was 14% (95% CI: 10-18). Ninety-day mortality after cystectomy was 3% (95% CI: 1-5) in 2013. One-year mortality following curative-intended radiation therapy was 32% (95% CI: 24-39) and 1-year cancer-specific mortality was 23% (95% CI: 16-31) in 2013. CONCLUSION: This preliminary DaBlaCa-data report showed that the treatment of MIBC in Denmark overall meets high international academic standards. The database...

  4. The STRING database in 2017

    DEFF Research Database (Denmark)

    Szklarczyk, Damian; Morris, John H; Cook, Helen

    2017-01-01

    A system-wide understanding of cellular function requires knowledge of all functional interactions between the expressed proteins. The STRING database aims to collect and integrate this information by consolidating known and predicted protein-protein association data for a large number of organisms. The associations in STRING include direct (physical) interactions, as well as indirect (functional) interactions, as long as both are specific and biologically meaningful. Apart from collecting and reassessing available experimental data on protein-protein interactions, and importing known pathways and protein complexes from curated databases, interaction predictions are derived from the following sources: (i) systematic co-expression analysis, (ii) detection of shared selective signals across genomes, (iii) automated text-mining of the scientific literature and (iv) computational transfer...

  5. Digital curation theory and practice

    CERN Document Server

    Hedges, Mark

    2016-01-01

    Digital curation is a multi-skilled profession with a key role to play not only in domains traditionally associated with the management of information, such as libraries and archives, but in a broad range of market sectors. Digital information is a defining feature of our age. As individuals we increasingly communicate and record our lives and memories in digital form, whether consciously or as a by-product of broader social, cultural and business activities. Throughout government and industry, there is a pressing need to manage complex information assets and to exploit their social, cultural and commercial value. This book addresses the key strategic, technical and practical issues around digital curation and curatorial practice, locating the discussions within an appropriate theoretical context.

  6. MIPS: analysis and annotation of genome information in 2007.

    Science.gov (United States)

    Mewes, H W; Dietmann, S; Frishman, D; Gregory, R; Mannhaupt, G; Mayer, K F X; Münsterkötter, M; Ruepp, A; Spannagl, M; Stümpflen, V; Rattei, T

    2008-01-01

    The Munich Information Center for Protein Sequences (MIPS-GSF, Neuherberg, Germany) combines automatic processing of large amounts of sequences with manual annotation of selected model genomes. Due to the massive growth of the available data, the depth of annotation varies widely between independent databases. Also, the criteria for the transfer of information from known to orthologous sequences are diverse. Coping with the task of global in-depth genome annotation has become unfeasible. Therefore, our efforts are dedicated to three levels of annotation: (i) the curation of selected genomes, in particular from fungal and plant taxa (e.g. CYGD, MNCDB, MatDB), (ii) the comprehensive, consistent, automatic annotation employing exhaustive methods for the computation of sequence similarities and sequence-related attributes as well as the classification of individual sequences (SIMAP, PEDANT and FunCat) and (iii) the compilation of manually curated databases for protein interactions based on scrutinized information from the literature to serve as an accepted set of reliable annotated interaction data (MPACT, MPPI, CORUM). All databases and tools described as well as the detailed descriptions of our projects can be accessed through the MIPS web server (http://mips.gsf.de).

  7. DDMGD: the database of text-mined associations between genes methylated in diseases from different species.

    Science.gov (United States)

    Bin Raies, Arwa; Mansour, Hicham; Incitti, Roberto; Bajic, Vladimir B

    2015-01-01

    Gathering information about associations between methylated genes and diseases is important for disease diagnosis and treatment decisions. Recent advancements in epigenetics research allow for large-scale discoveries of associations of genes methylated in diseases in different species. Searching manually for such information is not easy, as it is scattered across a large number of electronic publications and repositories. Therefore, we developed the DDMGD database (http://www.cbrc.kaust.edu.sa/ddmgd/) to provide a comprehensive repository of information related to genes methylated in diseases that can be found through text mining. DDMGD's scope is not limited to a particular group of genes, diseases or species. Using the text mining system DEMGD, which we developed earlier, and additional post-processing, we extracted associations of genes methylated in different diseases from PubMed Central articles and PubMed abstracts. The accuracy of extracted associations is 82% as estimated on 2500 hand-curated entries. DDMGD provides a user-friendly interface facilitating retrieval of these associations ranked according to confidence scores. Submission of new associations to DDMGD is supported. A comparison analysis of DDMGD with several other databases focused on genes methylated in diseases shows that DDMGD is comprehensive and includes most of the recent information on genes methylated in diseases. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. The Importance of Contamination Knowledge in Curation - Insights into Mars Sample Return

    Science.gov (United States)

    Harrington, A. D.; Calaway, M. J.; Regberg, A. B.; Mitchell, J. L.; Fries, M. D.; Zeigler, R. A.; McCubbin, F. M.

    2018-01-01

    The Astromaterials Acquisition and Curation Office at NASA Johnson Space Center (JSC), in Houston, TX (henceforth Curation Office) manages the curation of extraterrestrial samples returned by NASA missions and shared collections from international partners, preserving their integrity for future scientific study while providing the samples to the international community in a fair and unbiased way. The Curation Office also curates flight and non-flight reference materials and other materials from spacecraft assembly (e.g., lubricants, paints and gases) of sample return missions that would have the potential to cross-contaminate a present or future NASA astromaterials collection.

  9. Nuclear Energy Infrastructure Database Description and User's Manual

    International Nuclear Information System (INIS)

    Heidrich, Brenden

    2015-01-01

    In 2014, the Deputy Assistant Secretary for Science and Technology Innovation initiated the Nuclear Energy (NE)–Infrastructure Management Project by tasking the Nuclear Science User Facilities, formerly the Advanced Test Reactor National Scientific User Facility, to create a searchable and interactive database of all pertinent NE-supported and -related infrastructure. This database, known as the Nuclear Energy Infrastructure Database (NEID), is used for analyses to establish needs, redundancies, efficiencies, distributions, etc., to best understand the utility of NE's infrastructure and inform the content of infrastructure calls. The Nuclear Science User Facilities developed the database by utilizing data and policy direction from a variety of reports from the U.S. Department of Energy, the National Research Council, the International Atomic Energy Agency, and various other federal and civilian resources. The NEID currently contains data on 802 research and development instruments housed in 377 facilities at 84 institutions in the United States and abroad. The effort to maintain and expand the database is ongoing. Detailed information on many facilities must be gathered from associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements. This document provides a short tutorial on the navigation of the NEID web portal at NSUF-Infrastructure.INL.gov.

  10. Curative radiotherapy of supraglottic cancer

    International Nuclear Information System (INIS)

    Kim, Yong Ho; Chai, Gyu Young

    1998-01-01

    The purpose of this study was to evaluate the efficacy of curative radiotherapy in the management of supraglottic cancer. Twenty-one patients with squamous cell carcinoma of the supraglottis were treated with radiotherapy at Gyeongsang National University Hospital between 1990 and 1994. The median follow-up period was 36 months, and 95% of patients were observed for at least 2 years. The actuarial survival rate at 5 years was 39.3% for the 21 patients: 75.0% in Stage I, 42.9% in Stage II, 33.3% in Stage III, and 28.6% in Stage IV (p=0.54). The 5-year local control rate was 52.0% for the 21 patients: 75.0% in Stage I, 57.1% in Stage II, 66.7% in Stage III, and 28.6% in Stage IV (p=0.33). Double primary cancers developed in 3 patients; all were esophageal cancers. In early-stage (Stage I and II) supraglottic cancer, curative radiotherapy would be the treatment of choice, and surgery would be better reserved for salvage of radiotherapy failure. In advanced-stage (Stage III and IV) disease, radiotherapy alone is inadequate for curative therapy and should be combined with surgery in operable patients. This report emphasizes the importance of esophagoscopy and esophagogram at the follow-up of patients with supraglottic cancer.

  11. An Emergent Micro-Services Approach to Digital Curation Infrastructure

    Directory of Open Access Journals (Sweden)

    Stephen Abrams

    2010-07-01

    In order better to meet the needs of its diverse University of California (UC) constituencies, the California Digital Library UC Curation Center is re-envisioning its approach to digital curation infrastructure by devolving function into a set of granular, independent, but interoperable micro-services. Since each of these services is small and self-contained, they are more easily developed, deployed, maintained, and enhanced; at the same time, complex curation function can emerge from the strategic combination of atomistic services. The emergent approach emphasizes the persistence of content rather than the systems in which that content is managed, so that the paradigmatic archival culture is not unduly coupled to any particular technological context. This results in a curation environment that is comprehensive in scope, yet flexible with regard to local policies and practices, and sustainable despite the inevitability of disruptive change in technology and user expectation.

  12. The BDNYC database of low-mass stars, brown dwarfs, and planetary mass companions

    Science.gov (United States)

    Cruz, Kelle; Rodriguez, David; Filippazzo, Joseph; Gonzales, Eileen; Faherty, Jacqueline K.; Rice, Emily; BDNYC

    2018-01-01

    We present a web interface to a database of low-mass stars, brown dwarfs, and planetary mass companions. Users can send SELECT SQL queries to the database, perform searches by coordinates or name, check the database inventory on specified objects, and even plot spectra interactively. The initial version of this database contains information for 198 objects, and version 2 will contain over 1000 objects. The database currently includes photometric data from 2MASS, WISE, and Spitzer, and version 2 will include a significant portion of the publicly available optical and NIR spectra for brown dwarfs. The database is maintained and curated by the BDNYC research group and we welcome contributions from other researchers via GitHub.
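
    As a rough illustration of the kind of SELECT query such a database could serve, the sketch below builds a toy table with Python's sqlite3 module and runs a coordinate-box search. The table and column names are hypothetical, invented for this example; they are not the actual BDNYC schema.

      import sqlite3

      # Toy stand-in for an astronomical object database; schema and values
      # are illustrative only, not the real BDNYC tables.
      conn = sqlite3.connect(":memory:")
      conn.execute(
          "CREATE TABLE sources (id INTEGER PRIMARY KEY, name TEXT, ra REAL, decl REAL)"
      )
      conn.executemany(
          "INSERT INTO sources (name, ra, decl) VALUES (?, ?, ?)",
          [("object A", 80.8, -14.1), ("object B", 162.3, -53.3)],
      )

      # A coordinate-box search, the kind of SELECT query a user might send.
      for row in conn.execute(
          "SELECT name, ra, decl FROM sources "
          "WHERE ra BETWEEN ? AND ? AND decl BETWEEN ? AND ?",
          (80.0, 81.0, -15.0, -13.0),
      ):
          print(row)
      conn.close()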

  13. RegTransBase - A Database Of Regulatory Sequences and Interactionsin a Wide Range of Prokaryotic Genomes

    Energy Technology Data Exchange (ETDEWEB)

    Kazakov, Alexei E.; Cipriano, Michael J.; Novichkov, Pavel S.; Minovitsky, Simon; Vinogradov, Dmitry V.; Arkin, Adam; Mironov, AndreyA.; Gelfand, Mikhail S.; Dubchak, Inna

    2006-07-01

    RegTransBase, a manually curated database of regulatory interactions in prokaryotes, captures the knowledge in published scientific literature using a controlled vocabulary. Although a number of databases describing interactions between regulatory proteins and their binding sites are currently being maintained, they focus mostly on the model organisms Escherichia coli and Bacillus subtilis, or are entirely computationally derived. RegTransBase describes a large number of regulatory interactions reported in many organisms and contains various types of experimental data, in particular: the activation or repression of transcription by an identified direct regulator; determining the transcriptional regulatory function of a protein (or RNA) directly binding to DNA (RNA); mapping or prediction of binding sites for a regulatory protein; and characterization of regulatory mutations. Currently, the RegTransBase content is derived from about 3000 relevant articles describing over 7000 experiments in relation to 128 microbes. It contains data on the regulation of about 7500 genes and evidence for 6500 interactions with 650 regulators. RegTransBase also contains manually created position weight matrices (PWMs) that can be used to identify candidate regulatory sites in over 60 species. RegTransBase is available at http://regtransbase.lbl.gov.
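
    Position weight matrices like those mentioned above are typically applied by sliding a window along a sequence and summing per-position scores. The sketch below is a generic illustration of that idea with a made-up 4-column matrix and threshold; it is not RegTransBase's actual algorithm, data format, or scoring convention.

      # Generic PWM scan: slide a window along a DNA sequence and sum
      # per-position, per-base scores. All matrix values are invented.
      PWM = {
          "A": [0.8, -1.0, 0.5, -0.2],
          "C": [-0.5, 0.9, -1.2, 0.1],
          "G": [-0.2, 0.3, 0.7, -0.8],
          "T": [0.1, -0.7, -0.3, 0.9],
      }
      WIDTH = 4

      def score_window(window: str) -> float:
          """Sum the matrix contributions for one candidate site."""
          return sum(PWM[base][i] for i, base in enumerate(window))

      def candidate_sites(seq: str, threshold: float):
          """Yield (position, window, score) for windows scoring above threshold."""
          for i in range(len(seq) - WIDTH + 1):
              window = seq[i : i + WIDTH]
              score = score_window(window)
              if score >= threshold:
                  yield i, window, score

      for hit in candidate_sites("ACGTACATGGACAT", threshold=1.5):
          print(hit)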

  14. National Radiobiology Archives Distributed Access user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Watson, C.; Smith, S. (Pacific Northwest Lab., Richland, WA (United States)); Prather, J. (Linfield Coll., McMinnville, OR (United States))

    1991-11-01

    This User's Manual describes installation and use of the National Radiobiology Archives (NRA) Distributed Access package. The package consists of a distributed subset of information representative of the NRA databases and database access software which provide an introduction to the scope and style of the NRA Information Systems.

  15. International survey of academic library data curation practices

    CERN Document Server

    2013-01-01

    This survey looks closely at the data curation practices of a sample of research-oriented universities largely from the USA, the UK, Australia and Scandinavia but also including India, South Africa and other countries. The study looks at how major universities are assisting faculty in developing data curation and management plans for large scale data projects, largely in the sciences and social sciences, often as pre-conditions for major grants. The report looks at which departments of universities are shouldering the data curation burden, the personnel involved in the efforts, the costs involved, types of software used, difficulties in procuring scientific experiment logs and other hard to obtain information, types of training offered to faculty, and other issues in large scale data management.

  16. Curating NASA's future extraterrestrial sample collections: How do we achieve maximum proficiency?

    Science.gov (United States)

    McCubbin, Francis; Evans, Cynthia; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael; Zeigler, Ryan

    2016-07-01

    Introduction: The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "…documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office are working towards a state of maximum proficiency. Founding Principle: Curatorial activities began at JSC (Manned Spacecraft Center before 1973) as soon as design and construction planning for the Lunar Receiving Laboratory (LRL) began in 1964 [1], not with the return of the Apollo samples in 1969, nor with the completion of the LRL in 1967. This practice has since proven that curation begins as soon as a sample return mission is conceived, and this founding principle continues to return dividends today [e.g., 2]. The Next Decade: Part of the curation process is planning for the future, and we refer to these planning efforts as "advanced curation" [3]. Advanced Curation is tasked with developing procedures, technology, and data sets necessary for curating new types of collections as envisioned by NASA exploration goals. We are (and have been) planning for future curation, including cold curation, extended curation of ices and volatiles, curation of samples with special chemical considerations such as perchlorate-rich samples, curation of organically- and biologically-sensitive samples, and the use of minimally invasive analytical techniques (e.g., micro-CT, [4]) to characterize samples. These efforts will be useful for Mars Sample Return.

  17. Content curation en periodismo (y en documentación periodística)

    OpenAIRE

    Guallar, Javier

    2014-01-01

    In recent years we have seen the appearance of the concepts of content curation and content curator as, respectively, the activity or system and the professional or specialist. Although the term is originally linked to the world of marketing (marketer Rohit Bhargava's 'Manifesto for the content curator' (2009) is considered its founding article) and its features mostly identify with those of the Information Science professional, content curation goes beyond a specific discipline or profess...

  18. Solvent Handbook Database System user's manual

    International Nuclear Information System (INIS)

    1993-03-01

    Industrial solvents and cleaners are used in maintenance facilities to remove wax, grease, oil, carbon, machining fluids, solder fluxes, mold release, and various other contaminants from parts, and to prepare the surface of various metals. However, because of growing environmental and worker-safety concerns, government regulations have already excluded the use of some chemicals and have restricted the use of halogenated hydrocarbons because they affect the ozone layer and may cause cancer. The Solvent Handbook Database System lets you view information on solvents and cleaners, including test results on cleaning performance, air emissions, recycling and recovery, corrosion, and non-metals compatibility. Company and product safety information is also available.

  19. [Curative effect of ozone hydrotherapy for pemphigus].

    Science.gov (United States)

    Jiang, Fuqiong; Deng, Danqi; Li, Xiaolan; Wang, Wenfang; Xie, Hong; Wu, Yongzhuo; Luan, Chunyan; Yang, Binbin

    2018-02-28

    Objective: To determine the clinical curative effects of ozone therapy for pemphigus vulgaris.
    Methods: Ozone hydrotherapy was used as an adjuvant treatment for 32 patients with pemphigus vulgaris. Hydropathic compression with potassium permanganate solution for 34 patients with pemphigus vulgaris served as a control. The main treatment for both groups was glucocorticoids and immune inhibitors. Skin lesions, bacterial infection, antibiotic use, patient satisfaction, and clinical curative effect were evaluated in the 2 groups.
    Results: There was no significant difference in the curative effect or the average length of hospital stay between the 2 groups (P>0.05), but the rate of antibiotic use was significantly lower in the ozone hydrotherapy group (P=0.039). Patients were more satisfied with ozone hydrotherapy than with the potassium permanganate compresses after 7 days of therapy (P>0.05).
    Conclusion: Ozone hydrotherapy is a safe and effective adjuvant method for pemphigus vulgaris and can reduce the use of antibiotics.

  20. NeuroRDF: semantic integration of highly curated data to prioritize biomarker candidates in Alzheimer's disease.

    Science.gov (United States)

    Iyappan, Anandhi; Kawalia, Shweta Bagewadi; Raschka, Tamara; Hofmann-Apitius, Martin; Senger, Philipp

    2016-07-08

    Neurodegenerative diseases are incurable and debilitating indications with huge social and economic impact, where much is still to be learnt about the underlying molecular events. Mechanistic disease models could offer a knowledge framework to help decipher the complex interactions that occur at molecular and cellular levels. This motivates the need for an approach that integrates highly curated and heterogeneous data into a disease model spanning different regulatory data layers. Although several disease models exist, they often do not consider the quality of the underlying data. Moreover, even with the current advancements in semantic web technology, we still do not have a cure for complex diseases like Alzheimer's disease. One of the key reasons for this could be the increasing gap between generated data and the derived knowledge. In this paper, we describe an approach, called NeuroRDF, to develop an integrative framework for modeling curated knowledge in the area of complex neurodegenerative diseases. The core of this strategy lies in the use of well-curated and context-specific data for integration into one single semantic web-based framework, RDF. This increases the probability that the derived knowledge will be novel and reliable in a specific disease context. The infrastructure integrates highly curated data from databases (BIND, IntAct, etc.), literature (PubMed), and gene expression resources (such as GEO and ArrayExpress). We illustrate the effectiveness of our approach by asking real-world biomedical questions that link these resources to prioritize plausible biomarker candidates. Among the 13 prioritized candidate genes, we identified MIF as a potential emerging candidate due to its role as a pro-inflammatory cytokine. We additionally report on the effort and challenges faced during generation of such an indication-specific knowledge base comprising curated and quality-controlled data. Although many alternative approaches...
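
    As a loose illustration of the kind of RDF modeling described above, the sketch below uses the rdflib Python library with a made-up namespace and identifiers; NeuroRDF's actual vocabulary, URIs, and provenance model are not reproduced here.

      from rdflib import Graph, Literal, Namespace, RDF

      # Hypothetical namespace for illustration; not NeuroRDF's real vocabulary.
      EX = Namespace("http://example.org/neuro#")

      g = Graph()
      g.bind("ex", EX)

      # One curated assertion: gene MIF is associated with Alzheimer's disease,
      # with a source recorded so the claim stays traceable to its evidence.
      assoc = EX["assoc1"]
      g.add((assoc, RDF.type, EX.GeneDiseaseAssociation))
      g.add((assoc, EX.gene, EX.MIF))
      g.add((assoc, EX.disease, EX.AlzheimersDisease))
      g.add((assoc, EX.source, Literal("PubMed:example")))

      print(g.serialize(format="turtle"))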

  1. Design database for quantitative trait loci (QTL) data warehouse, data mining, and meta-analysis.

    Science.gov (United States)

    Hu, Zhi-Liang; Reecy, James M; Wu, Xiao-Lin

    2012-01-01

    A database can be used to warehouse quantitative trait loci (QTL) data from multiple sources for comparison, genomic data mining, and meta-analysis. A robust database design involves sound data structure logistics, meaningful data transformations, normalization, and proper user interface designs. This chapter starts with a brief review of relational database basics and concentrates on issues associated with curation of QTL data into a relational database, with emphasis on the principles of data normalization and structure optimization. In addition, some simple examples of QTL data mining and meta-analysis are included. These examples are provided to help readers better understand the potential and importance of sound database design.
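
    As a toy illustration of the normalization principle discussed above, the sketch below separates traits from QTL records so that each fact is stored exactly once; the table and column names are invented for this example, not an actual QTL database schema.

      import sqlite3

      # Toy normalized schema: traits live in their own table, and QTL rows
      # reference them by key, so renaming a trait touches exactly one row.
      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE trait (
          trait_id INTEGER PRIMARY KEY,
          name     TEXT NOT NULL UNIQUE
      );
      CREATE TABLE qtl (
          qtl_id     INTEGER PRIMARY KEY,
          trait_id   INTEGER NOT NULL REFERENCES trait(trait_id),
          chromosome TEXT NOT NULL,
          start_cm   REAL,
          end_cm     REAL,
          source     TEXT  -- citation, kept for later meta-analysis
      );
      """)
      conn.execute("INSERT INTO trait (name) VALUES ('marbling score')")
      conn.execute(
          "INSERT INTO qtl (trait_id, chromosome, start_cm, end_cm, source) "
          "VALUES (1, '14', 20.5, 34.0, 'doi:example')"
      )

      # A join reassembles the denormalized view users expect.
      for row in conn.execute(
          "SELECT t.name, q.chromosome, q.start_cm, q.end_cm "
          "FROM qtl q JOIN trait t USING (trait_id)"
      ):
          print(row)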

  2. Curated routes: the project of developing experiential tracks in sub-urban landscape

    OpenAIRE

    Papathanasiou, Maximi; Uyttenhove, Pieter

    2015-01-01

    The Curated Routes project reflects on the ability of visiting routes to make apparent the internal characteristics of urban environments. The project's name alludes to the intellectual function of curation and the materiality of routes. To curate deals with the practice of arranging material, tangible or intangible, in a way that reveals a new understanding of an area. The word routes refers to the linear associations that link places and guide movement. The Curated Routes aim to reinforce the...

  3. DREMECELS: A Curated Database for Base Excision and Mismatch Repair Mechanisms Associated Human Malignancies.

    Directory of Open Access Journals (Sweden)

    Ankita Shukla

    DNA repair mechanisms act as warriors combating the damaging processes that lead to critical malignancies. DREMECELS was designed considering the malignancies with frequent alterations in DNA repair pathways, that is, colorectal and endometrial cancers, associated with Lynch syndrome (also known as HNPCC). Since Lynch syndrome carries a high risk (~40-60%) for both cancers, we decided to cover all three diseases in this portal. Although a large population is presently affected by these malignancies, and many resources are available for various cancer types, no database archives information on the genes specific to these cancers and disorders. The database contains 156 genes and two repair mechanisms, base excision repair (BER) and mismatch repair (MMR). Other parameters include some of the regulatory processes that play roles in the progression of these diseases owing to incompetent repair mechanisms, specifically BER and MMR. Our database mainly provides qualitative and quantitative information on these cancer types, along with methylation, drug sensitivity, miRNA, copy number variation (CNV) and somatic mutation data. This database would serve the scientific community by providing integrated information on these disease types, thus sustaining diagnostic and therapeutic processes. This repository would serve as an excellent accompaniment for researchers and biomedical professionals and facilitate the understanding of such critical diseases. DREMECELS is publicly available at http://www.bioinfoindia.org/dremecels.

  4. Advanced Curation Activities at NASA: Implications for Astrobiological Studies of Future Sample Collections

    Science.gov (United States)

    McCubbin, F. M.; Evans, C. A.; Fries, M. D.; Harrington, A. D.; Regberg, A. B.; Snead, C. J.; Zeigler, R. A.

    2017-01-01

    The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10F, JSC is charged with curation of all extraterrestrial material under NASA control, including future NASA missions. The Directive goes on to define Curation as including documentation, preservation, preparation, and distribution of samples for research, education, and public outreach. Here we briefly describe NASA's astromaterials collections and our ongoing efforts related to enhancing the utility of our current collections as well as our efforts to prepare for future sample return missions. We collectively refer to these efforts as advanced curation.

  5. PrionHome: a database of prions and other sequences relevant to prion phenomena.

    Directory of Open Access Journals (Sweden)

    Djamel Harbi

    Prions are units of propagation of an altered state of a protein or proteins; prions can propagate from organism to organism, through cooption of other protein copies. Prions contain no necessary nucleic acids, and are important both as pathogenic agents and as a potential force in epigenetic phenomena. The original prions were derived from a misfolded form of the mammalian Prion Protein PrP. Infection by these prions causes neurodegenerative diseases. Other prions cause non-Mendelian inheritance in budding yeast, and sometimes act as diseases of yeast. We report the bioinformatic construction of the PrionHome, a database of >2000 prion-related sequences. The data were collated from various public and private resources and filtered for redundancy. The data were then processed according to a transparent classification system of prionogenic sequences (i.e., sequences that can make prions), prionoids (i.e., proteins that propagate like prions between individual cells), and other prion-related phenomena. There are eight PrionHome classifications for sequences. The first four classifications are derived from experimental observations: prionogenic sequences, prionoids, other prion-related phenomena, and prion interactors. The second four classifications are derived from sequence analysis: orthologs, paralogs, pseudogenes, and candidate-prionogenic sequences. Database entries list supporting information for PrionHome classifications, prion-determinant areas (where relevant), and disordered and compositionally-biased regions. Also included are literature references for the PrionHome classifications, transcripts and genomic coordinates, and structural data (including comparative models made for the PrionHome from manually curated alignments). We provide database usage examples for both vertebrate and fungal prion contexts. Using the database data, we have performed a detailed analysis of the compositional biases in known budding-yeast prionogenic

  6. PrionHome: a database of prions and other sequences relevant to prion phenomena.

    Science.gov (United States)

    Harbi, Djamel; Parthiban, Marimuthu; Gendoo, Deena M A; Ehsani, Sepehr; Kumar, Manish; Schmitt-Ulms, Gerold; Sowdhamini, Ramanathan; Harrison, Paul M

    2012-01-01

    Prions are units of propagation of an altered state of a protein or proteins; prions can propagate from organism to organism, through cooption of other protein copies. Prions contain no necessary nucleic acids, and are important both as pathogenic agents and as a potential force in epigenetic phenomena. The original prions were derived from a misfolded form of the mammalian Prion Protein PrP. Infection by these prions causes neurodegenerative diseases. Other prions cause non-Mendelian inheritance in budding yeast, and sometimes act as diseases of yeast. We report the bioinformatic construction of the PrionHome, a database of >2000 prion-related sequences. The data were collated from various public and private resources and filtered for redundancy. The data were then processed according to a transparent classification system of prionogenic sequences (i.e., sequences that can make prions), prionoids (i.e., proteins that propagate like prions between individual cells), and other prion-related phenomena. There are eight PrionHome classifications for sequences. The first four classifications are derived from experimental observations: prionogenic sequences, prionoids, other prion-related phenomena, and prion interactors. The second four classifications are derived from sequence analysis: orthologs, paralogs, pseudogenes, and candidate-prionogenic sequences. Database entries list supporting information for PrionHome classifications, prion-determinant areas (where relevant), and disordered and compositionally-biased regions. Also included are literature references for the PrionHome classifications, transcripts and genomic coordinates, and structural data (including comparative models made for the PrionHome from manually curated alignments). We provide database usage examples for both vertebrate and fungal prion contexts. Using the database data, we have performed a detailed analysis of the compositional biases in known budding-yeast prionogenic sequences, showing

  7. PathwayAccess: CellDesigner plugins for pathway databases.

    Science.gov (United States)

    Van Hemert, John L; Dickerson, Julie A

    2010-09-15

    CellDesigner provides a user-friendly interface for graphical biochemical pathway description. Many pathway databases are not directly exportable to CellDesigner models. PathwayAccess is an extensible suite of CellDesigner plugins, which connect CellDesigner directly to pathway databases using respective Java application programming interfaces. The process is streamlined for creating new PathwayAccess plugins for specific pathway databases. Three PathwayAccess plugins, MetNetAccess, BioCycAccess and ReactomeAccess, directly connect CellDesigner to the pathway databases MetNetDB, BioCyc and Reactome. PathwayAccess plugins enable CellDesigner users to expose pathway data to analytical CellDesigner functions, curate their pathway databases and visually integrate pathway data from different databases using standard Systems Biology Markup Language and Systems Biology Graphical Notation. Implemented in Java, PathwayAccess plugins run with CellDesigner version 4.0.1 and were tested on Ubuntu Linux, Windows XP and 7, and MacOSX. Source code, binaries, documentation and video walkthroughs are freely available at http://vrac.iastate.edu/~jlv.

  8. The MetaCyc database of metabolic pathways and enzymes and the BioCyc collection of pathway/genome databases

    Science.gov (United States)

    Caspi, Ron; Altman, Tomer; Dale, Joseph M.; Dreher, Kate; Fulcher, Carol A.; Gilham, Fred; Kaipa, Pallavi; Karthikeyan, Athikkattuvalasu S.; Kothari, Anamika; Krummenacker, Markus; Latendresse, Mario; Mueller, Lukas A.; Paley, Suzanne; Popescu, Liviu; Pujar, Anuradha; Shearer, Alexander G.; Zhang, Peifen; Karp, Peter D.

    2010-01-01

    The MetaCyc database (MetaCyc.org) is a comprehensive and freely accessible resource for metabolic pathways and enzymes from all domains of life. The pathways in MetaCyc are experimentally determined, small-molecule metabolic pathways and are curated from the primary scientific literature. With more than 1400 pathways, MetaCyc is the largest collection of metabolic pathways currently available. Pathway reactions are linked to one or more well-characterized enzymes, and both pathways and enzymes are annotated with reviews, evidence codes, and literature citations. BioCyc (BioCyc.org) is a collection of more than 500 organism-specific Pathway/Genome Databases (PGDBs). Each BioCyc PGDB contains the full genome and predicted metabolic network of one organism. The network, which is predicted by the Pathway Tools software using MetaCyc as a reference, consists of metabolites, enzymes, reactions and metabolic pathways. BioCyc PGDBs also contain additional features, such as predicted operons, transport systems, and pathway hole-fillers. The BioCyc Web site offers several tools for the analysis of the PGDBs, including Omics Viewers that enable visualization of omics datasets on two different genome-scale diagrams and tools for comparative analysis. The BioCyc PGDBs generated by SRI are offered for adoption by any party interested in curation of metabolic, regulatory, and genome-related information about an organism. PMID:19850718

  9. ExplorEnz: a MySQL database of the IUBMB enzyme nomenclature.

    Science.gov (United States)

    McDonald, Andrew G; Boyce, Sinéad; Moss, Gerard P; Dixon, Henry B F; Tipton, Keith F

    2007-07-27

    We describe the database ExplorEnz, which is the primary repository for EC numbers and enzyme data that are being curated on behalf of the IUBMB. The enzyme nomenclature is incorporated into many other resources, including the ExPASy-ENZYME, BRENDA and KEGG bioinformatics databases. The data, which are stored in a MySQL database, preserve the formatting of chemical and enzyme names. A simple, easy-to-use, web-based query interface is provided, along with an advanced search engine for more complex queries. The database is publicly available at http://www.enzyme-database.org. The data are available for download as SQL and XML files via FTP. ExplorEnz has powerful and flexible search capabilities and provides the scientific community with the most up-to-date version of the IUBMB Enzyme List.
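
    Since the resource is keyed by EC numbers, a four-field hierarchical code, the sketch below shows one way to validate and split that structure before using it as a query key. This is an illustration of the EC number format only, not ExplorEnz code; provisional "n"-prefixed serial numbers are ignored for simplicity.

      import re

      # EC numbers are four dot-separated fields (class, subclass,
      # sub-subclass, serial number); unassigned trailing fields are
      # conventionally written as dashes, e.g. "3.4.-.-".
      EC_PATTERN = re.compile(r"^(\d+)\.(\d+|-)\.(\d+|-)\.(\d+|-)$")

      def parse_ec(ec: str):
          """Return the four EC fields as strings, or raise ValueError."""
          match = EC_PATTERN.match(ec.strip())
          if not match:
              raise ValueError(f"not a valid EC number: {ec!r}")
          return match.groups()

      print(parse_ec("1.1.1.1"))   # ('1', '1', '1', '1')
      print(parse_ec("3.4.-.-"))   # partially classified entry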

  10. Bookshelf: a simple curation system for the storage of biomolecular simulation data.

    Science.gov (United States)

    Vohra, Shabana; Hall, Benjamin A; Holdbrook, Daniel A; Khalid, Syma; Biggin, Philip C

    2010-01-01

    Molecular dynamics simulations can now routinely generate data sets of several hundreds of gigabytes in size. The ability to generate this data has become easier over recent years and the rate of data production is likely to increase rapidly in the near future. One major problem associated with this vast amount of data is how to store it in a way that it can be easily retrieved at a later date. The obvious answer to this problem is a database. However, a key issue in the development and maintenance of such a database is its sustainability, which in turn depends on the ease of the deposition and retrieval process. Encouraging users to care about metadata is difficult, and thus the success of any storage system will ultimately depend on how well used the system is by end-users. In this respect we suggest that even a minimal amount of metadata, if stored in a sensible fashion, is useful, if only at the level of individual research groups. We discuss here a simple database system, which we call 'Bookshelf', that uses Python in conjunction with a MySQL database to provide an extremely simple system for curating and keeping track of molecular simulation data. It provides a user-friendly, scriptable solution to the common problem amongst biomolecular simulation laboratories: the storage, logging and subsequent retrieval of large numbers of simulations. Download URL: http://sbcb.bioch.ox.ac.uk/bookshelf/
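
    In the spirit of the minimal-metadata argument above, the sketch below shows how little code a simulation log needs. It uses Python's standard-library sqlite3 so it runs self-contained; Bookshelf itself is built on MySQL, and this schema is invented for illustration rather than taken from Bookshelf.

      import sqlite3
      from datetime import date

      # Minimal simulation log: even this much metadata makes trajectories
      # findable later. Schema is illustrative only.
      conn = sqlite3.connect(":memory:")
      conn.execute("""
      CREATE TABLE simulation (
          sim_id    INTEGER PRIMARY KEY,
          system    TEXT NOT NULL,   -- e.g. protein plus membrane composition
          length_ns REAL,
          path      TEXT NOT NULL,   -- where the trajectory files actually live
          run_date  TEXT
      )
      """)
      conn.execute(
          "INSERT INTO simulation (system, length_ns, path, run_date) "
          "VALUES (?, ?, ?, ?)",
          ("OmpA in POPC bilayer", 100.0, "/archive/sims/ompa_popc_r1",
           date.today().isoformat()),
      )

      # Later retrieval: every logged run of a given system, newest first.
      for row in conn.execute(
          "SELECT system, length_ns, path FROM simulation "
          "WHERE system LIKE ? ORDER BY run_date DESC",
          ("%OmpA%",),
      ):
          print(row)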

  11. SwissPalm: Protein Palmitoylation database.

    Science.gov (United States)

    Blanc, Mathieu; David, Fabrice; Abrami, Laurence; Migliozzi, Daniel; Armand, Florence; Bürgi, Jérôme; van der Goot, Françoise Gisou

    2015-01-01

    Protein S-palmitoylation is a reversible post-translational modification that regulates many key biological processes, although the full extent and functions of protein S-palmitoylation remain largely unexplored. Recent developments of new chemical methods have allowed the establishment of palmitoyl-proteomes of a variety of cell lines and tissues from different species. As the amount of information generated by these high-throughput studies is increasing, the field requires centralization and comparison of this information. Here we present SwissPalm (http://swisspalm.epfl.ch), our open, comprehensive, manually curated resource to study protein S-palmitoylation. It currently encompasses more than 5000 S-palmitoylated protein hits from seven species, and contains more than 500 specific sites of S-palmitoylation. SwissPalm also provides curated information and filters that increase the confidence in true positive hits, and integrates predictions of S-palmitoylated cysteine scores, orthologs and isoform multiple alignments. Systems analysis of the palmitoyl-proteome screens indicates that 10% or more of the human proteome is susceptible to S-palmitoylation. Moreover, ontology and pathway analyses of the human palmitoyl-proteome reveal that key biological functions involve this reversible lipid modification. Comparative analysis finally shows a strong crosstalk between S-palmitoylation and other post-translational modifications. Through the compilation of data and continuous updates, SwissPalm will provide a powerful tool to unravel the global importance of protein S-palmitoylation.

  12. Global Metabolic Reconstruction and Metabolic Gene Evolution in the Cattle Genome

    Science.gov (United States)

    Kim, Woonsu; Park, Hyesun; Seo, Seongwon

    2016-01-01

    The sequence of the cattle genome provided a valuable opportunity to systematically link genetic and metabolic traits of cattle. The objectives of this study were 1) to reconstruct genome-scale cattle-specific metabolic pathways based on the most recent and updated cattle genome build and 2) to identify duplicated metabolic genes in the cattle genome for better understanding of metabolic adaptations in cattle. A bioinformatic pipeline for amalgamating an organism's genomic annotations from multiple sources was updated. Using this, an amalgamated cattle genome database based on UMD_3.1 was created. The amalgamated cattle genome database is composed of a total of 33,292 genes: 19,123 consensus genes between the NCBI and Ensembl databases, 8,410 and 5,493 genes found only in NCBI or Ensembl, respectively, and 266 genes from NCBI scaffolds. A metabolic reconstruction of the cattle genome and a cattle pathway genome database (PGDB) was also developed using Pathway Tools, followed by intensive manual curation. The manual curation filled or revised 68 pathway holes, deleted 36 metabolic pathways, and added 23 metabolic pathways. Consequently, the curated cattle PGDB contains 304 metabolic pathways, 2,460 reactions including 2,371 enzymatic reactions, and 4,012 enzymes. Furthermore, this study identified eight duplicated genes in 12 metabolic pathways in the cattle genome compared to human and mouse. Some of these duplicated genes are related to specific hormone biosynthesis and detoxification. The updated genome-scale metabolic reconstruction is a useful tool for understanding biology and metabolic characteristics in cattle. There have been significant improvements in the quality of cattle genome annotations and the MetaCyc database. The duplicated metabolic genes in the cattle genome compared to human and mouse imply evolutionary changes in the cattle genome and provide useful information for further research on understanding the metabolic adaptations of cattle.

  13. Pathbase: A new reference resource and database for laboratory mouse pathology

    International Nuclear Information System (INIS)

    Schofield, P. N.; Bard, J. B. L.; Boniver, J.; Covelli, V.; Delvenne, P.; Ellender, M.; Engstrom, W.; Goessner, W.; Gruenberger, M.; Hoefler, H.; Hopewell, J. W.; Mancuso, M.; Mothersill, C.; Quintanilla-Martinez, L.; Rozell, B.; Sariola, H.; Sundberg, J. P.; Ward, A.

    2004-01-01

    Pathbase (http://www.pathbase.net) is a web-accessible database of histopathological images of laboratory mice, developed as a resource for the coding and archiving of data derived from the analysis of mutant or genetically engineered mice and their background strains. The metadata for the images, which allows retrieval and inter-operability with other databases, is derived from a series of orthogonal ontologies and controlled vocabularies. One of these controlled vocabularies, MPATH, was developed by the Pathbase Consortium as a formal description of the content of mouse histopathological images. The database currently has over 1000 images on-line, with 2000 more under curation, and presents a paradigm for the development of future databases dedicated to aspects of experimental biology. (authors)

  14. Interpretation manual of emergency cranial tomography

    International Nuclear Information System (INIS)

    Mejias Soto, Carolina

    2012-01-01

    A manual has been prepared for the correct evaluation of tomographic studies in emergencies. The literature review was developed by selecting printed books, journal articles, and electronic resources on the etiopathogenesis and the neuroimaging findings frequently encountered in emergencies. The information was consulted in major bibliographic databases such as MD-Consult and Med-Line, in radiology journals such as Radiology, Radiographics, Clinics of North America and the American Journal of Roentgenology, and in newly available texts on neuroimaging. The manual serves as a guide for radiology residents on the topics of greatest interest, supporting diagnosis, patient monitoring, and therapeutic decisions.

  15. FileMaker Pro 11 The Missing Manual

    CERN Document Server

    Prosser, Susan

    2010-01-01

    This hands-on, friendly guide shows you how to harness FileMaker's power to make your information work for you. With a few mouse clicks, the FileMaker Pro 11 database helps you create and print corporate reports, manage a mailing list, or run your entire business. FileMaker Pro 11: The Missing Manual helps you get started, build your database, and produce results, whether you're running a business, pursuing a hobby, or planning your retirement. It's a thorough, accessible guide for new, non-technical users, as well as those with more experience. Start up: Get your first database up and running

  16. The prognostic importance of jaundice in surgical resection with curative intent for gallbladder cancer.

    Science.gov (United States)

    Yang, Xin-wei; Yuan, Jian-mao; Chen, Jun-yi; Yang, Jue; Gao, Quan-gen; Yan, Xing-zhou; Zhang, Bao-hua; Feng, Shen; Wu, Meng-chao

    2014-09-03

    Preoperative jaundice is frequent in gallbladder cancer (GBC) and indicates advanced disease. Resection is rarely recommended to treat advanced GBC. An aggressive surgical approach for advanced GBC remains lacking because of the association of this disease with serious postoperative complications and poor prognosis. This study aims to re-assess the prognostic value of jaundice for the morbidity, mortality, and survival of GBC patients who underwent surgical resection with curative intent. GBC patients who underwent surgical resection with curative intent at a single institution between January 2003 and December 2012 were identified from a prospectively maintained database. A total of 192 patients underwent surgical resection with curative intent, of whom 47 had preoperative jaundice and 145 had none. Compared with the non-jaundiced patients, the jaundiced patients had significantly longer operative time (p ...); jaundice was the only independent predictor of postoperative complications. The jaundiced patients had lower survival rates than the non-jaundiced patients (p ...). The survival rates of the jaundiced patients with preoperative biliary drainage (PBD) were similar to those of the jaundiced patients without PBD (p = 0.968). No significant differences in the rate of postoperative intra-abdominal abscesses were found between the jaundiced patients with and without PBD (n = 4, 21.1% vs. n = 5, 17.9%, p = 0.787). Preoperative jaundice indicates poor prognosis and high postoperative morbidity but is not a surgical contraindication. Gallbladder neck tumors significantly increase the surgical difficulty and reduce the opportunities for radical resection. Gallbladder neck tumors can independently predict poor outcome. PBD correlates with neither a low rate of postoperative intra-abdominal abscesses nor a high survival rate.

  17. An overview of the BioCreative 2012 Workshop Track III: interactive text mining task.

    Science.gov (United States)

    Arighi, Cecilia N; Carterette, Ben; Cohen, K Bretonnel; Krallinger, Martin; Wilbur, W John; Fey, Petra; Dodson, Robert; Cooper, Laurel; Van Slyke, Ceri E; Dahdul, Wasila; Mabee, Paula; Li, Donghui; Harris, Bethany; Gillespie, Marc; Jimenez, Silvia; Roberts, Phoebe; Matthews, Lisa; Becker, Kevin; Drabkin, Harold; Bello, Susan; Licata, Luana; Chatr-aryamontri, Andrew; Schaeffer, Mary L; Park, Julie; Haendel, Melissa; Van Auken, Kimberly; Li, Yuling; Chan, Juancarlos; Muller, Hans-Michael; Cui, Hong; Balhoff, James P; Chi-Yang Wu, Johnny; Lu, Zhiyong; Wei, Chih-Hsuan; Tudor, Catalina O; Raja, Kalpana; Subramani, Suresh; Natarajan, Jeyakumar; Cejuela, Juan Miguel; Dubey, Pratibha; Wu, Cathy

    2013-01-01

    In many databases, biocuration primarily involves literature curation, which usually involves retrieving relevant articles, extracting information that will translate into annotations and identifying new incoming literature. As the volume of biological literature increases, the use of text mining to assist in biocuration becomes increasingly relevant. A number of groups have developed tools for text mining from a computer science/linguistics perspective, and there are many initiatives to curate some aspect of biology from the literature. Some biocuration efforts already make use of a text mining tool, but there have not been many broad-based systematic efforts to study which aspects of a text mining tool contribute to its usefulness for a curation task. Here, we report on an effort to bring together text mining tool developers and database biocurators to test the utility and usability of tools. Six text mining systems presenting diverse biocuration tasks participated in a formal evaluation, and appropriate biocurators were recruited for testing. The performance results from this evaluation indicate that some of the systems were able to improve efficiency of curation by speeding up the curation task significantly (∼1.7- to 2.5-fold) over manual curation. In addition, some of the systems were able to improve annotation accuracy when compared with the performance on the manually curated set. In terms of inter-annotator agreement, the factors that contributed to significant differences for some of the systems included the expertise of the biocurator on the given curation task, the inherent difficulty of the curation and attention to annotation guidelines. After the task, annotators were asked to complete a survey to help identify strengths and weaknesses of the various systems. The analysis of this survey highlights how important task completion is to the biocurators' overall experience of a system, regardless of the system's high score on design, learnability and
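
    The abstract reports inter-annotator agreement without naming a statistic; Cohen's kappa is one standard choice for such comparisons, and the sketch below (toy labels, illustrative only, not the BioCreative evaluation code) shows how it corrects raw agreement for chance.

      def cohens_kappa(labels_a, labels_b):
          """Cohen's kappa for two annotators labeling the same items."""
          assert len(labels_a) == len(labels_b) and labels_a
          n = len(labels_a)
          observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
          # Chance agreement from each annotator's marginal label frequencies.
          categories = set(labels_a) | set(labels_b)
          expected = sum(
              (labels_a.count(c) / n) * (labels_b.count(c) / n)
              for c in categories
          )
          if expected == 1.0:  # both annotators used one identical label
              return 1.0
          return (observed - expected) / (1 - expected)

      curator_1 = ["yes", "yes", "no", "yes", "no", "no"]
      curator_2 = ["yes", "no", "no", "yes", "no", "yes"]
      # Raw agreement is 4/6; kappa discounts the 0.5 expected by chance.
      print(round(cohens_kappa(curator_1, curator_2), 3))  # 0.333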

  18. DCC DIFFUSE Standards Frameworks: A Standards Path through the Curation Lifecycle

    Directory of Open Access Journals (Sweden)

    Sarah Higgins

    2009-10-01

    DCC DIFFUSE Standards Frameworks aims to offer domain-specific advice on standards relevant to digital preservation and curation, to help curators identify which standards they should be using and where they can be appropriately implemented, to ensure authoritative digital material. The Project uses the DCC Curation Lifecycle Model and Web 2.0 technology to visually present standards frameworks for a number of disciplines. The Digital Curation Centre (DCC) is actively working with different relevant organisations to present searchable frameworks of standards for a number of domains. These include digital repositories, records management, the geo-information sector, archives and the museum sector. Other domains, such as e-science, will shortly be investigated.

  19. SoyTEdb: a comprehensive database of transposable elements in the soybean genome

    Directory of Open Access Journals (Sweden)

    Zhu Liucun

    2010-02-01

    Background: Transposable elements are the most abundant components of all characterized genomes of higher eukaryotes. It has been documented that these elements not only contribute to the shaping and reshaping of their host genomes, but also play significant roles in regulating gene expression, altering gene function, and creating new genes. Thus, complete identification of transposable elements in sequenced genomes and construction of comprehensive transposable element databases are essential for accurate annotation of genes and other genomic components, for investigation of potential functional interaction between transposable elements and genes, and for study of genome evolution. The recent availability of the soybean genome sequence has provided an unprecedented opportunity for discovery, and structural and functional characterization of transposable elements in this economically important legume crop. Description: Using a combination of structure-based and homology-based approaches, a total of 32,552 retrotransposons (Class I) and 6,029 DNA transposons (Class II) with clear boundaries and insertion sites were structurally annotated and clearly categorized, and a soybean transposable element database, SoyTEdb, was established. These transposable elements have been anchored in and integrated with the soybean physical map and genetic map, and are browsable and visualizable at any scale along the 20 soybean chromosomes, along with predicted genes and other sequence annotations. BLAST search and other infrastructure tools were implemented to facilitate annotation of transposable elements or fragments from soybean and other related legume species. The majority (> 95%) of these elements (particularly a few hundred low-copy-number families) are first described in this study. Conclusion: SoyTEdb provides resources and information related to transposable elements in the soybean genome, representing the most comprehensive and the largest manually

  20. Electromagnetic Systems Effects Database (EMSED). AERO 90, Phase II User's Manual

    National Research Council Canada - National Science Library

    Sawires, Kalim

    1998-01-01

    The Electromagnetic Systems Effects Database (EMSED), also called AIRBASE, is a training guide for users not familiar with the AIRBASE database and its operating platform, the Macintosh computer (Mac...

  1. The PMDB Protein Model Database

    Science.gov (United States)

    Castrignanò, Tiziana; De Meo, Paolo D'Onorio; Cozzetto, Domenico; Talamo, Ivano Giuseppe; Tramontano, Anna

    2006-01-01

    The Protein Model Database (PMDB) is a public resource aimed at storing manually built 3D models of proteins. The database is designed to provide access to models published in the scientific literature, together with validating experimental data. It is a relational database and it currently contains >74 000 models for ∼240 proteins. The system is accessible online and allows predictors to submit models along with related supporting evidence, and users to download them through a simple and intuitive interface. Users can navigate in the database and retrieve models referring to the same target protein or to different regions of the same protein. Each model is assigned a unique identifier that allows interested users to directly access the data. PMID:16381873

  2. DISEASES

    DEFF Research Database (Denmark)

    Pletscher-Frankild, Sune; Pallejà, Albert; Tsafou, Kalliopi

    2015-01-01

    Text mining is a flexible technology that can be applied to numerous different tasks in biology and medicine. We present a system for extracting disease-gene associations from biomedical abstracts. The system consists of a highly efficient dictionary-based tagger for named entity recognition of human genes and diseases, which we combine with a scoring scheme that takes into account co-occurrences both within and between sentences. We show that this approach is able to extract half of all manually curated associations with a false positive rate of only 0.16%. Nonetheless, text mining should not stand alone, but be combined with other types of evidence. For this reason, we have developed the DISEASES resource, which integrates the results from text mining with manually curated disease-gene associations, cancer mutation data, and genome-wide association studies from existing databases...
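
    The scoring scheme is described above only at a high level; the sketch below is a simplified stand-in showing how within-sentence and same-abstract co-occurrences of a gene and a disease can be folded into one score. The weights and functional form are invented for illustration and are not the published DISEASES scoring function.

      # Simplified co-occurrence scoring for one gene-disease pair.
      SENTENCE_WEIGHT = 1.0   # gene and disease mentioned in the same sentence
      ABSTRACT_WEIGHT = 0.25  # co-mentioned in the same abstract, different sentences

      def score_pair(abstracts, gene, disease):
          score = 0.0
          for sentences in abstracts:  # each abstract is a list of sentences
              flags = [(gene in s, disease in s) for s in sentences]
              if any(g and d for g, d in flags):
                  score += SENTENCE_WEIGHT
              elif any(g for g, _ in flags) and any(d for _, d in flags):
                  score += ABSTRACT_WEIGHT
          return score

      docs = [
          ["BRCA1 mutations predispose to breast cancer.", "Screening is advised."],
          ["We studied BRCA1 expression.", "Patients had breast cancer."],
      ]
      print(score_pair(docs, "BRCA1", "breast cancer"))  # 1.0 + 0.25 = 1.25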

  3. Human Variome Project Quality Assessment Criteria for Variation Databases.

    Science.gov (United States)

    Vihinen, Mauno; Hancock, John M; Maglott, Donna R; Landrum, Melissa J; Schaafsma, Gerard C P; Taschner, Peter

    2016-06-01

    Numerous databases containing information about DNA, RNA, and protein variations are available. Gene-specific variant databases (locus-specific variation databases, LSDBs) are typically curated and maintained for single genes or groups of genes for a certain disease(s). These databases are widely considered as the most reliable information source for a particular gene/protein/disease, but it should also be made clear they may have widely varying contents, infrastructure, and quality. Quality is very important to evaluate because these databases may affect health decision-making, research, and clinical practice. The Human Variome Project (HVP) established a Working Group for Variant Database Quality Assessment. The basic principle was to develop a simple system that nevertheless provides a good overview of the quality of a database. The HVP quality evaluation criteria that resulted are divided into four main components: data quality, technical quality, accessibility, and timeliness. This report elaborates on the developed quality criteria and how implementation of the quality scheme can be achieved. Examples are provided for the current status of the quality items in two different databases, BTKbase, an LSDB, and ClinVar, a central archive of submissions about variants and their clinical significance. © 2016 WILEY PERIODICALS, INC.

  4. MetSigDis: a manually curated resource for the metabolic signatures of diseases.

    Science.gov (United States)

    Cheng, Liang; Yang, Haixiu; Zhao, Hengqiang; Pei, Xiaoya; Shi, Hongbo; Sun, Jie; Zhang, Yunpeng; Wang, Zhenzhen; Zhou, Meng

    2017-08-22

    Complex diseases cannot be understood on the basis of single genes, single mRNA transcripts, or single proteins alone, but only through the effects of their interplay. The consequences of these combinations at the molecular level can be captured by alterations of metabolites. With the rapid development of biomedical instruments and analytical platforms, a large number of metabolite signatures of complex diseases have been identified and documented in the literature. The difficulty biologists face with this large volume of papers recording the metabolic signatures of experimental results calls for an automated data repository. Therefore, we developed MetSigDis, which aims to provide a comprehensive resource of metabolite alterations in various diseases. MetSigDis is freely available at http://www.bio-annotation.cn/MetSigDis/. By reviewing hundreds of publications, we collected 6849 curated relationships between 2420 metabolites and 129 diseases across eight species, including Homo sapiens and model organisms. All of these relationships were used in constructing a metabolite-disease network (MDN). This network displays scale-free characteristics according to its degree distribution (power-law distribution with R2 = 0.909), and the subnetwork of the MDN for diseases of interest and their related metabolites can be visualized on the Web. The common alterations of metabolites reflect the metabolic similarity of diseases, which is measured using the Jaccard index. We observed that metabolite-based similar diseases are inclined to share semantic associations in Disease Ontology. A human disease network was then built, where a node represents a disease and an edge indicates the similarity of a pair of diseases. This network validated the observation that diseases linked on the basis of metabolites share more overlapping genes. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
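
    The Jaccard index used above is simply the size of the intersection of two diseases' metabolite sets over the size of their union. A minimal sketch, with invented metabolite sets:

```python
# Jaccard index of two metabolite sets: |A ∩ B| / |A ∪ B|.
# The disease metabolite sets below are invented for illustration.
def jaccard(a, b):
    """Return the Jaccard index of two sets (0.0 when both are empty)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

disease_a = {"glucose", "lactate", "alanine"}
disease_b = {"glucose", "lactate", "leucine"}
print(jaccard(disease_a, disease_b))  # 2 shared / 4 total -> 0.5
```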

  5. The Annotometer; Encouraging uptake and use of freely available curation tools.

    OpenAIRE

    Zhe, Xiao Si; Edmunds, Scott; Li, Peter; Goodman, Laurie; Hunter, Christopher

    2016-01-01

    In recent years it has become clear that the amount of data being generated worldwide cannot be curated and annotated by any individual or small group. Currently, there is recognition that one of the best ways to provide ongoing curation is to employ the power of the community. To achieve this, the first hurdle to overcome was the development of user-friendly tools and apps that non-expert curators would be comfortable and capable of using. Such tools are now in place, inclu...

  6. Advances in Astromaterials Curation: Supporting Future Sample Return Missions

    Science.gov (United States)

    Evans, C. A.; Zeigler, R. A.; Fries, M. D..; Righter, K.; Allton, J. H.; Zolensky, M. E.; Calaway, M. J.; Bell, M. S.

    2015-01-01

    NASA's Astromaterials, curated at the Johnson Space Center in Houston, are the most extensive, best-documented, and least-contaminated extraterrestrial samples that are provided to the worldwide research community. These samples include lunar samples from the Apollo missions, meteorites collected over nearly 40 years of expeditions to Antarctica (providing samples of dozens of asteroid bodies, the Moon, and Mars), Genesis solar wind samples, cosmic dust collected by NASA's high altitude airplanes, Comet Wild 2 and interstellar dust samples from the Stardust mission, and asteroid samples from JAXA's Hayabusa mission. A full account of NASA's curation efforts for these collections is provided by Allen, et al [1]. On average, we annually allocate about 1500 individual samples from NASA's astromaterials collections to hundreds of researchers from around the world, including graduate students and post-doctoral scientists; our allocation rate has roughly doubled over the past 10 years. The curation protocols developed for the lunar samples returned from the Apollo missions remain relevant and are adapted to new and future missions. Several lessons from the Apollo missions, including the need for early involvement of curation scientists in mission planning [1], have been applied to all subsequent sample return campaigns. From the 2013 National Academy of Sciences report [2]: "Curation is the critical interface between sample return missions and laboratory research. Proper curation has maintained the scientific integrity and utility of the Apollo, Antarctic meteorite, and cosmic dust collections for decades. Each of these collections continues to yield important new science. In the past decade, new state-of-the-art curatorial facilities for the Genesis and Stardust missions were key to the scientific breakthroughs provided by these missions." The results speak for themselves: research on NASA's astromaterials result in hundreds of papers annually, yield fundamental

  7. Teacher Training in Curative Education.

    Science.gov (United States)

    Juul, Kristen D.; Maier, Manfred

    1992-01-01

    This article considers the application of the philosophical and educational principles of Rudolf Steiner, called "anthroposophy," to the training of teachers and curative educators in the Waldorf schools. Special emphasis is on the Camphill movement which focuses on therapeutic schools and communities for children with special needs. (DB)

  8. Generic communications index: User's manual

    International Nuclear Information System (INIS)

    Dean, R.S.; Steinbrecher, D.H.; Hennick, A.

    1987-12-01

    This report is a manual providing the information required to use a special computer program developed by the NRC for indexing generic communications. The program is written in a user-friendly, menu-driven form using the dBASE III programming language. It facilitates use of the required dBASE III search and sort capabilities to access records in a database called the Generic Communications Index. This index is made up of one record each for all bulletins, circulars, and information notices, including revisions and supplements, from 1971, when such documentation started, through 1986 (or to the latest update). The program is designed for use by anyone modestly acquainted with the general use of IBM-compatible personal computers. The manual contains both a brief overview and a detailed description of the program, as well as detailed instructions for getting started using the program on a personal computer with either a two-floppy disk or a hard disk system. Included at the end are a brief description of how to handle problems which might occur, and notes on the makeup of the program and database files for help in adding records of communications for future years.

  9. Hanford Environmental Information System (HEIS) user's manual

    International Nuclear Information System (INIS)

    1991-10-01

    The Hanford Environmental Information System (HEIS) is a consolidated set of automated resources that effectively manage the data gathered during environmental monitoring and restoration of the Hanford Site. The HEIS includes an integrated database that provides consistent and current data to all users and promotes sharing of data by the entire user community. Data stored in the HEIS are collected under several regulatory programs. Currently these include the Comprehensive Environmental Response, Compensation and Liability Act of 1980 (CERCLA); the Resource Conservation and Recovery Act of 1976 (RCRA); and the Ground-Water Environmental Surveillance Project, managed by the Pacific Northwest Laboratory. The HEIS is an information system with an inclusive database. The manual, the HEIS User's Manual, describes the facilities available to the scientist, engineer, or manager who uses the system for environmental monitoring, assessment, and restoration planning; and to the regulator who is responsible for reviewing Hanford Site operations against regulatory requirements and guidelines

  10. Renewable Electric Plant Information System user interface manual: Paradox 7 Runtime for Windows

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-11-01

    The Renewable Electric Plant Information System (REPiS) is a comprehensive database with detailed information on grid-connected renewable electric plants in the US. The current version, REPiS3 beta, was developed in Paradox for Windows. The user interface (UI) was developed to facilitate easy access to information in the database, without the need to have, or know how to use, Paradox for Windows. The UI is designed to provide quick responses to commonly requested sorts of the database. A quick perusal of this manual will familiarize one with the functions of the UI and will make use of the system easier. There are six parts to this manual: (1) Quick Start: Instructions for Users Familiar with Database Applications; (2) Getting Started: The Installation Process; (3) Choosing the Appropriate Report; (4) Using the User Interface; (5) Troubleshooting; (6) Appendices A and B.

  11. Advanced Curation Protocols for Mars Returned Sample Handling

    Science.gov (United States)

    Bell, M.; Mickelson, E.; Lindstrom, D.; Allton, J.

    Introduction: Johnson Space Center has over 30 years of experience handling precious samples, which include Lunar rocks and Antarctic meteorites. However, we recognize that future curation of samples from such missions as Genesis, Stardust, and Mars Sample Return will require a high degree of biosafety combined with extremely low levels of inorganic, organic, and biological contamination. To satisfy these requirements, research in the JSC Advanced Curation Lab is currently focused on two major areas: preliminary examination techniques, and cleaning and verification techniques. Preliminary Examination Techniques: In order to minimize the number of paths for contamination we are exploring the synergy between human and robotic sample handling in a controlled environment to help determine the limits of clean curation. Within the Advanced Curation Laboratory is a prototype, next-generation glovebox, which contains a robotic micromanipulator. The remotely operated manipulator has six degrees of freedom and can be programmed to perform repetitive sample handling tasks. Protocols are being tested and developed to perform curation tasks such as rock splitting, weighing, imaging, and storing. Techniques for sample transfer enabling more detailed remote examination without compromising the integrity of sample science are also being developed. The glovebox is equipped with a rapid transfer port through which samples can be passed without exposure. The transfer is accomplished by using a unique seal and engagement system which allows passage between containers while maintaining a first seal to the outside environment and a second seal to prevent the outside of the container cover and port door from becoming contaminated by the material being transferred. Cleaning and Verification Techniques: As part of the contamination control effort, innovative cleaning techniques are being identified and evaluated in conjunction with sensitive cleanliness verification methods. Towards this

  12. Causal biological network database: a comprehensive platform of causal biological network models focused on the pulmonary and vascular systems.

    Science.gov (United States)

    Boué, Stéphanie; Talikka, Marja; Westra, Jurjen Willem; Hayes, William; Di Fabio, Anselmo; Park, Jennifer; Schlage, Walter K; Sewer, Alain; Fields, Brett; Ansari, Sam; Martin, Florian; Veljkovic, Emilija; Kenney, Renee; Peitsch, Manuel C; Hoeng, Julia

    2015-01-01

    With the wealth of publications and data available, powerful and transparent computational approaches are required to represent measured data and scientific knowledge in a computable and searchable format. We developed a set of biological network models, scripted in the Biological Expression Language, that reflect causal signaling pathways across a wide range of biological processes, including cell fate, cell stress, cell proliferation, inflammation, tissue repair and angiogenesis in the pulmonary and cardiovascular context. This comprehensive collection of networks is now freely available to the scientific community in a centralized web-based repository, the Causal Biological Network database, which is composed of over 120 manually curated and well annotated biological network models and can be accessed at http://causalbionet.com. The website accesses a MongoDB, which stores all versions of the networks as JSON objects and allows users to search for genes, proteins, biological processes, small molecules and keywords in the network descriptions to retrieve biological networks of interest. The content of the networks can be visualized and browsed. Nodes and edges can be filtered and all supporting evidence for the edges can be browsed and is linked to the original articles in PubMed. Moreover, networks may be downloaded for further visualization and evaluation. Database URL: http://causalbionet.com © The Author(s) 2015. Published by Oxford University Press.
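
    As a rough illustration of the kind of query the abstract describes against a MongoDB of JSON-encoded networks, here is a sketch using pymongo. The connection URI, database and collection names, and field layout ("nodes.label", "name", "version") are all assumptions for illustration, not the actual CBN schema.

```python
# Sketch of querying a MongoDB of JSON network models with pymongo.
# URI, db/collection names, and field names are assumed, not the CBN schema.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local instance
networks = client["cbn"]["networks"]               # assumed names

# Retrieve the name and version of every network containing a TNF node.
for net in networks.find({"nodes.label": "TNF"}, {"name": 1, "version": 1}):
    print(net.get("name"), net.get("version"))
```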

  13. Solid Waste Projection Model: Database User's Guide

    International Nuclear Information System (INIS)

    Blackburn, C.L.

    1993-10-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for using Version 1.4 of the SWPM database: system requirements and preparation, entering and maintaining data, and performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions and does not provide instruction in the use of Paradox, the database management system in which the SWPM database is established.

  14. The AraGWAS Catalog: a curated and standardized Arabidopsis thaliana GWAS catalog

    Science.gov (United States)

    Togninalli, Matteo; Seren, Ümit; Meng, Dazhe; Fitz, Joffrey; Nordborg, Magnus; Weigel, Detlef

    2018-01-01

    Abstract The abundance of high-quality genotype and phenotype data for the model organism Arabidopsis thaliana enables scientists to study the genetic architecture of many complex traits at an unprecedented level of detail using genome-wide association studies (GWAS). GWAS have been a great success in A. thaliana, and many SNP-trait associations have been published. With the AraGWAS Catalog (https://aragwas.1001genomes.org) we provide a publicly available, manually curated and standardized GWAS catalog for all publicly available phenotypes from the central A. thaliana phenotype repository, AraPheno. All GWAS have been recomputed on the latest imputed genotype release of the 1001 Genomes Consortium using a standardized GWAS pipeline to ensure comparability between results. The catalog currently includes 167 phenotypes and more than 222,000 SNP-trait associations with P < 10^-4, of which 3887 are significantly associated using permutation-based thresholds. The AraGWAS Catalog can be accessed via a modern web-interface and provides various features to easily access, download and visualize the results and summary statistics across GWAS. PMID:29059333
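
    A permutation-based threshold of the sort mentioned above can be sketched as follows: shuffle the phenotype to break genotype-phenotype links, rescan all SNPs, keep the strongest p-value from each shuffle, and take the 5th percentile of those minima as the 5% family-wise threshold. The toy version below uses plain per-SNP regression on synthetic data; the actual AraGWAS pipeline uses imputed genotypes and more careful models.

```python
# Toy permutation threshold on synthetic data (not the AraGWAS pipeline).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_plants, n_snps, n_perm = 200, 500, 100
genotypes = rng.integers(0, 2, size=(n_plants, n_snps))
phenotype = rng.normal(size=n_plants)

def min_pvalue(y):
    """Smallest per-SNP regression p-value for phenotype vector y."""
    return min(stats.linregress(genotypes[:, j], y).pvalue
               for j in range(n_snps))

# Minima over permuted phenotypes; their 5th percentile is the threshold.
minima = [min_pvalue(rng.permutation(phenotype)) for _ in range(n_perm)]
print(f"permutation threshold: p < {np.percentile(minima, 5):.2e}")
```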

  15. Data Curation Education in Research Centers (DCERC)

    Science.gov (United States)

    Marlino, M. R.; Mayernik, M. S.; Kelly, K.; Allard, S.; Tenopir, C.; Palmer, C.; Varvel, V. E., Jr.

    2012-12-01

    Digital data both enable and constrain scientific research. Scientists are enabled by digital data to develop new research methods, utilize new data sources, and investigate new topics, but they also face new data collection, management, and preservation burdens. The current data workforce consists primarily of scientists who receive little formal training in data management and data managers who are typically educated through on-the-job training. The Data Curation Education in Research Centers (DCERC) program is investigating a new model for educating data professionals to contribute to scientific research. DCERC is a collaboration between the University of Illinois at Urbana-Champaign Graduate School of Library and Information Science, the University of Tennessee School of Information Sciences, and the National Center for Atmospheric Research. The program is organized around a foundations course in data curation and provides field experiences in research and data centers for both master's and doctoral students. This presentation will outline the aims and the structure of the DCERC program and discuss results and lessons learned from the first set of summer internships in 2012. Four masters students participated and worked with both data mentors and science mentors, gaining first-hand experience in the issues, methods, and challenges of scientific data curation. They engaged in a diverse set of topics, including climate model metadata, observational data management workflows, and data cleaning, documentation, and ingest processes within a data archive. The students learned current data management practices and challenges while developing expertise and conducting research. They also made important contributions to NCAR data and science teams by evaluating data management workflows and processes, preparing data sets to be archived, and developing recommendations for particular data management activities. The master's student interns will return in summer of 2013

  16. The Distinction Between Curative and Assistive Technology.

    Science.gov (United States)

    Stramondo, Joseph A

    2018-05-01

    Disability activists have sometimes claimed their disability has actually increased their well-being. Some even say they would reject a cure to keep these gains. Yet, these same activists often simultaneously propose improvements to the quality and accessibility of assistive technology. However, for any argument favoring assistive over curative technology (or vice versa) to work, there must be a coherent distinction between the two. This line is already vague and will become even less clear with the emergence of novel technologies. This paper asks and tries to answer the question: what is it about the paradigmatic examples of curative and assistive technologies that make them paradigmatic and how can these defining features help us clarify the hard cases? This analysis will begin with an argument that, while the common views of this distinction adequately explain the paradigmatic cases, they fail to accurately pick out the relevant features of those technologies that make them paradigmatic and to provide adequate guidance for parsing the hard cases. Instead, it will be claimed that these categories of curative or assistive technologies are defined by the role the technologies play in establishing a person's relational narrative identity as a member of one of two social groups: disabled people or non-disabled people.

  17. REDfly: a Regulatory Element Database for Drosophila.

    Science.gov (United States)

    Gallo, Steven M; Li, Long; Hu, Zihua; Halfon, Marc S

    2006-02-01

    Bioinformatics studies of transcriptional regulation in the metazoa are significantly hindered by the absence of readily available data on large numbers of transcriptional cis-regulatory modules (CRMs). Even the richly annotated Drosophila melanogaster genome lacks extensive CRM information. We therefore present here a database of Drosophila CRMs curated from the literature complete with both DNA sequence and a searchable description of the gene expression pattern regulated by each CRM. This resource should greatly facilitate the development of computational approaches to CRM discovery as well as bioinformatics analyses of regulatory sequence properties and evolution.

  18. Is automatic CPAP titration as effective as manual CPAP titration in OSAHS patients? A meta-analysis.

    Science.gov (United States)

    Gao, Weijie; Jin, Yinghui; Wang, Yan; Sun, Mei; Chen, Baoyuan; Zhou, Ning; Deng, Yuan

    2012-06-01

    It is costly and time-consuming to conduct the standard manual titration to identify an effective pressure before continuous positive airway pressure (CPAP) treatment for obstructive sleep apnea (OSA) patients. Automatic titration is cheaper and more easily available than manual titration. The purpose of this systematic review was to evaluate the effect of automatic titration in identifying a pressure and on the improvement of the apnea/hypopnea index (AHI) and somnolence, the change of sleep quality, and the acceptance and compliance of CPAP treatment, compared with manual titration. A systematic search was made of the PubMed, EMBASE, Cochrane Library, SCI, China Academic Journals Full-text Databases, Chinese Biomedical Literature Database, Chinese Scientific Journals Databases and Chinese Medical Association Journals. Randomized controlled trials comparing automatic titration and manual titration were reviewed. Studies were pooled to yield odds ratios (OR) or mean differences (MD) with 95% confidence intervals (CI). Ten trials involving 849 patients met the inclusion criteria. It is hard to identify a trend in the pressures determined by either automatic or manual titration. Automatic titration can improve the AHI (MD = 0.03/h, 95% CI = -4.48 to 4.53) and the Epworth sleepiness scale (SMD = -0.02, 95% CI = -0.34 to 0.31) as effectively as manual titration. There is no difference between sleep architecture under automatic titration or manual titration. The acceptance of CPAP treatment (OR = 0.96, 95% CI = 0.60 to 1.55) and the compliance with treatment (MD = -0.04, 95% CI = -0.17 to 0.10) after automatic titration is not different from manual titration. Automatic titration is as effective as standard manual titration in improving AHI and somnolence while maintaining sleep quality similar to the standard method. In addition, automatic titration has the same effect on the acceptance and compliance of CPAP treatment as manual titration. With the potential advantage
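
    For readers unfamiliar with how pooled MDs and CIs like those quoted are produced, a minimal fixed-effect inverse-variance sketch follows; the per-trial numbers are invented, not taken from the review.

```python
# Fixed-effect inverse-variance pooling of mean differences (MDs), the usual
# machinery behind pooled estimates like those quoted; trial values invented.
import math

def pool_fixed(mds, ses):
    """Return pooled MD and its 95% CI from per-trial MDs and std. errors."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * md for w, md in zip(weights, mds)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

md, lo, hi = pool_fixed(mds=[0.5, -0.8, 0.2], ses=[1.1, 0.9, 1.4])
print(f"MD = {md:.2f}/h (95% CI {lo:.2f} to {hi:.2f})")
```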

  19. Content curation in journalism (and in news documentation)

    Directory of Open Access Journals (Sweden)

    Javier Guallar

    2014-05-01

    Full Text Available The concepts of content curation and content curator have been with us for a few years now, as an activity or system and as a professional or specialist, respectively. Although the term originated in the world of marketing, with the marketer Rohit Bhargava's "Manifesto for the content curator" (2009) considered its founding article, and although its characteristics largely coincide with those of the Information and Documentation professional, content curation goes beyond any one discipline or professional role. Both of the profiles just mentioned, marketers and librarian-documentalists, can of course do content curation, but so can others, such as educators, for example. And, as in the case discussed here, journalists (and news librarians). In these brief lines we focus on presenting the role of content curation in the field of journalism.

  20. Advanced Curation: Solving Current and Future Sample Return Problems

    Science.gov (United States)

    Fries, M.; Calaway, M.; Evans, C.; McCubbin, F.

    2015-01-01

    Advanced Curation is a wide-ranging and comprehensive research and development effort at NASA Johnson Space Center that identifies and remediates sample related issues. For current collections, Advanced Curation investigates new cleaning, verification, and analytical techniques to assess their suitability for improving curation processes. Specific needs are also assessed for future sample return missions. For each need, a written plan is drawn up to achieve the requirement. The plan draws upon current Curation practices, input from Curators, the analytical expertise of the Astromaterials Research and Exploration Science (ARES) team, and suitable standards maintained by ISO, IEST, NIST and other institutions. Additionally, new technologies are adopted on the basis of need and availability. Implementation plans are tested using customized trial programs with statistically robust courses of measurement, and are iterated if necessary until an implementable protocol is established. Upcoming and potential NASA missions such as OSIRIS-REx, the Asteroid Retrieval Mission (ARM), sample return missions in the New Frontiers program, and Mars sample return (MSR) all feature new difficulties and specialized sample handling requirements. The Mars 2020 mission in particular poses a suite of challenges since the mission will cache martian samples for possible return to Earth. In anticipation of future MSR, the following problems are among those under investigation: What is the most efficient means to achieve the less than 1.0 ng/sq cm total organic carbon (TOC) cleanliness required for all sample handling hardware? How do we maintain and verify cleanliness at this level? The Mars 2020 Organic Contamination Panel (OCP) predicts that organic carbon, if present, will be present at the "one to tens" of ppb level in martian near-surface samples. The same samples will likely contain wt% perchlorate salts, or approximately 1,000,000x as much perchlorate oxidizer as organic carbon

  1. miRiaD: A Text Mining Tool for Detecting Associations of microRNAs with Diseases.

    Science.gov (United States)

    Gupta, Samir; Ross, Karen E; Tudor, Catalina O; Wu, Cathy H; Schmidt, Carl J; Vijay-Shanker, K

    2016-04-29

    MicroRNAs are increasingly being appreciated as critical players in human diseases, and questions concerning the role of microRNAs arise in many areas of biomedical research. There are several manually curated databases of microRNA-disease associations gathered from the biomedical literature; however, it is difficult for curators of these databases to keep up with the explosion of publications in the microRNA-disease field. Moreover, automated literature mining tools that assist manual curation of microRNA-disease associations currently capture only one microRNA property (expression) in the context of one disease (cancer). Thus, there is a clear need to develop more sophisticated automated literature mining tools that capture a variety of microRNA properties and relations in the context of multiple diseases to provide researchers with fast access to the most recent published information and to streamline and accelerate manual curation. We have developed miRiaD (microRNAs in association with Disease), a text-mining tool that automatically extracts associations between microRNAs and diseases from the literature. These associations are often not directly linked, and the intermediate relations are often highly informative for the biomedical researcher. Thus, miRiaD extracts the miR-disease pairs together with an explanation for their association. We also developed a procedure that assigns scores to sentences, marking their informativeness, based on the microRNA-disease relation observed within the sentence. miRiaD was applied to the entire Medline corpus, identifying 8301 PMIDs with miR-disease associations. These abstracts and the miR-disease associations are available for browsing at http://biotm.cis.udel.edu/miRiaD. We evaluated the recall and precision of miRiaD with respect to information of high interest to public microRNA-disease database curators (expression and target gene associations), obtaining a recall of 88.46-90.78%. When we expanded the evaluation to
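
    The recall figure quoted above follows the standard definitions; a one-function reminder with invented counts:

```python
# Standard precision/recall definitions; counts below are invented.
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

p, r = precision_recall(tp=230, fp=20, fn=25)
print(f"precision {p:.2%}, recall {r:.2%}")
```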

  2. Library of molecular associations: curating the complex molecular basis of liver diseases

    Directory of Open Access Journals (Sweden)

    Maass Thorsten

    2010-03-01

    Full Text Available Abstract Background Systems biology approaches offer novel insights into the development of chronic liver diseases. Current genomic databases supporting systems biology analyses are mostly based on microarray data. Although these data often cover genome-wide expression, the validity of single microarray experiments remains questionable. However, systems biology approaches addressing the interactions of molecular networks require data that are not only comprehensive but also highly validated. Results We have therefore generated the first comprehensive database for published molecular associations in human liver diseases. It is based on published PubMed abstracts and aims to close the gap between the genome-wide coverage, but low validity, of microarray data and individual, highly validated data from PubMed. After an initial text mining process, the extracted abstracts were all manually validated to confirm content and potential genetic associations, and may therefore be highly trusted. All data were stored in a publicly available database, Library of Molecular Associations http://www.medicalgenomics.org/databases/loma/news, currently holding approximately 1260 confirmed molecular associations for chronic liver diseases such as HCC, CCC, liver fibrosis, NASH/fatty liver disease, AIH, PBC, and PSC. We furthermore transformed these data into a powerful resource for molecular liver research by connecting them to multiple biomedical information resources. Conclusion Together, this database is the first available database providing a comprehensive view and analysis options for published molecular associations on multiple liver diseases.

  3. Integration of data: the Nanomaterial Registry project and data curation

    International Nuclear Information System (INIS)

    Guzan, K A; Mills, K C; Gupta, V; Murry, D; Ostraat, M L; Scheier, C N; Willis, D A

    2013-01-01

    Due to the use of nanomaterials in multiple fields of applied science and technology, there is a need for accelerated understanding of any potential implications of using these unique and promising materials. There is a multitude of research data that, if integrated, can be leveraged to drive toward a better understanding. Integration can be achieved by applying nanoinformatics concepts. The Nanomaterial Registry is using applied minimal information about nanomaterials to support a robust data curation process in order to promote integration across a diverse data set. This paper describes the evolution of the curation methodology used in the Nanomaterial Registry project as well as the current procedure that is used. Some of the lessons learned about curation of nanomaterial data are also discussed. (paper)

  4. Protective, curative and eradicative activities of fungicides against grapevine rust

    Directory of Open Access Journals (Sweden)

    Francislene Angelotti

    2014-01-01

    Full Text Available The protective, eradicative and curative activities of the fungicides azoxystrobin, tebuconazole, pyraclostrobin+metiram, and ciproconazole against grapevine rust were determined in a greenhouse. To evaluate the protective activity, leaves of potted 'Niagara' (Vitis labrusca) vines were artificially inoculated with a urediniospore suspension of Phakopsora euvitis four, eight or fourteen days after fungicidal spray; and to evaluate the curative and eradicative activities, leaves were sprayed with fungicides two, four or eight days after inoculation. Disease severity was assessed 14 days after each inoculation. All tested fungicides present excellent preventive activity against grapevine rust; however, tebuconazole and ciproconazole provide better curative activity than azoxystrobin and pyraclostrobin+metiram. It was also observed that all tested fungicides significantly reduced the germination of urediniospores produced on sprayed leaves.

  5. Data Stewardship: Environmental Data Curation and a Web-of-Repositories

    Directory of Open Access Journals (Sweden)

    Karen S. Baker

    2009-10-01

    Full Text Available Scientific researchers today frequently package measurements and associated metadata as digital datasets in anticipation of storage in data repositories. Through the lens of environmental data stewardship, we consider the data repository as an organizational element central to data curation. One aspect of non-commercial repositories, their distance-from-origin of the data, is explored in terms of near and remote categories. Three idealized repository types are distinguished – local, center, and archive – paralleling research, resource, and reference collection categories respectively. Repository type characteristics such as scope, structure, and goals are discussed. Repository similarities in terms of roles, activities and responsibilities are also examined. Data stewardship is related to care of research data and responsible scientific communication supported by an infrastructure that coordinates curation activities; data curation is defined as a set of repeated and repeatable activities focusing on tending data and creating data products within a particular arena. The concept of “sphere-of-context” is introduced as an aid to distinguishing repository types. Conceptualizing a “web-of-repositories” accommodates a variety of repository types and represents an ecologically inclusive approach to data curation.

  6. Brain Tumor Database, a free relational database for collection and analysis of brain tumor patient information.

    Science.gov (United States)

    Bergamino, Maurizio; Hamilton, David J; Castelletti, Lara; Barletta, Laura; Castellan, Lucio

    2015-03-01

    In this study, we describe the development and utilization of a relational database designed to manage the clinical and radiological data of patients with brain tumors. The Brain Tumor Database was implemented using MySQL v.5.0, while the graphical user interface was created using PHP and HTML, thus making it easily accessible through a web browser. This web-based approach allows for multiple institutions to potentially access the database. The BT Database can record brain tumor patient information (e.g. clinical features, anatomical attributes, and radiological characteristics) and be used for clinical and research purposes. Analytic tools to automatically generate statistics and different plots are provided. The BT Database is a free and powerful user-friendly tool with a wide range of possible clinical and research applications in neurology and neurosurgery. The BT Database graphical user interface source code and manual are freely available at http://tumorsdatabase.altervista.org. © The Author(s) 2013.
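
    As a rough sketch of the kind of relational layout such a database implies, consider a patients table joined to a radiological-findings table. The table and column names below are invented, and SQLite stands in for MySQL so the example is self-contained.

```python
# Sketch of a minimal relational layout of the kind the abstract implies.
# Names are invented; SQLite stands in for MySQL for self-containment.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    id INTEGER PRIMARY KEY,
    label TEXT NOT NULL,
    diagnosis TEXT                -- clinical features would extend this
);
CREATE TABLE radiology (
    id INTEGER PRIMARY KEY,
    patient_id INTEGER REFERENCES patient(id),
    modality TEXT,                -- e.g. 'MRI'
    tumor_location TEXT,
    volume_cm3 REAL
);
""")
conn.execute("INSERT INTO patient (label, diagnosis) VALUES (?, ?)",
             ("case-001", "glioblastoma"))
conn.execute("INSERT INTO radiology (patient_id, modality, tumor_location, "
             "volume_cm3) VALUES (1, 'MRI', 'left temporal', 12.4)")
for row in conn.execute("SELECT p.label, r.modality, r.volume_cm3 FROM "
                        "patient p JOIN radiology r ON r.patient_id = p.id"):
    print(row)
```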

  7. Respiratory cancer database: An open access database of respiratory cancer gene and miRNA.

    Science.gov (United States)

    Choubey, Jyotsna; Choudhari, Jyoti Kant; Patel, Ashish; Verma, Mukesh Kumar

    2017-01-01

    Respiratory cancer database (RespCanDB) is a genomic and proteomic database of cancers of the respiratory organs. It also includes information on medicinal plants used for the treatment of various respiratory cancers, with the structures of their active constituents, as well as pharmacological and chemical information on drugs associated with various respiratory cancers. Data in RespCanDB have been manually collected from published research articles and from other databases. Data have been integrated using MySQL, a relational database management system. MySQL manages all data in the back-end and provides commands to retrieve and store the data in the database. The web interface of the database has been built in ASP. RespCanDB is expected to contribute to the understanding of the scientific community regarding respiratory cancer biology, as well as the development of new ways of diagnosing and treating respiratory cancer. Currently, the database contains the oncogenomic information of lung cancer, laryngeal cancer, and nasopharyngeal cancer. Data for other cancers, such as oral and tracheal cancers, will be added in the near future. The URL of RespCanDB is http://ridb.subdic-bioinformatics-nitrr.in/.

  8. Calculation of the effectiveness of manual control rods for the reactor of Ignalina NPP Unit 2

    International Nuclear Information System (INIS)

    Bubelis, E.; Pabarcius, R.

    2001-01-01

    On the basis of one of the recent databases of the reactor of Ignalina NPP Unit 2, calculations of the effectiveness of separate manual control rods, groups of manual control rods, and the axial effectiveness characteristics of separate manual control rods were performed. The results of the calculations indicated that all analyzed separate manual control rods have approximately the same effectiveness, which does not depend on the location of a control rod in the reactor core layout. A manual control rod of the new design has about 10% greater effectiveness than a manual control rod of the old design. (author)

  9. Data Curation Education Grounded in Earth Sciences and the Science of Data

    Science.gov (United States)

    Palmer, C. L.

    2015-12-01

    This presentation looks back over ten years of experience advancing data curation education at two Information Schools, highlighting the vital role of earth science case studies, expertise, and collaborations in development of curriculum and internships. We also consider current data curation practices and workforce demand in data centers in the geosciences, drawing on studies conducted in the Data Curation Education in Research Centers (DCERC) initiative and the Site-Based Data Curation project. Outcomes from this decade of data curation research and education have reinforced the importance of key areas of information science in preparing data professionals to respond to the needs of user communities, provide services across disciplines, invest in standards and interoperability, and promote open data practices. However, a serious void remains in principles to guide education and practice that are distinct to the development of data systems and services that meet both local and global aims. We identify principles emerging from recent empirical studies on the reuse value of data in the earth sciences and propose an approach for advancing data curation education that depends on systematic coordination with data intensive research and propagation of current best practices from data centers into curriculum. This collaborative model can increase both domain-based and cross-disciplinary expertise among data professionals, ultimately improving data systems and services in our universities and data centers while building the new base of knowledge needed for a foundational science of data.

  10. Online Training and Practice Manual for ERIC Data Base Searchers. 2nd Edition.

    Science.gov (United States)

    Markey, Karen; Cochrane, Pauline A.

    Revised to reflect changes in both DIALOG and ERIC, this second edition of a self-improvement manual for online searchers who wish to refine their skills presents three basic approaches to searching and provides illustrations from DIALOG's ERIC ONTAP database. The manual is divided into two parts: an 8-step model of the total search process which…

  11. The Hayabusa Curation Facility at Johnson Space Center

    Science.gov (United States)

    Zolensky, M.; Bastien, R.; McCann, B.; Frank, D.; Gonzalez, C.; Rodriguez, M.

    2013-01-01

    The Japan Aerospace Exploration Agency (JAXA) Hayabusa spacecraft made contact with the asteroid 25143 Itokawa and collected regolith dust from the Muses Sea region of smooth terrain [1]. The spacecraft returned to Earth with more than 10,000 grains ranging in size from just over 300 µm to less than 10 µm [2, 3]. These grains represent the only collection of material returned from an asteroid by a spacecraft. As part of the joint agreement between JAXA and NASA for the mission, 10% of the Hayabusa grains are being transferred to NASA for parallel curation and allocation. In order to properly receive, process, and curate these samples, a new curation facility was established at Johnson Space Center (JSC). Since the Hayabusa samples within the JAXA curation facility have been stored free from exposure to terrestrial atmosphere and contamination [4], one of the goals of the new NASA curation facility was to continue this treatment. An existing lab space at JSC was transformed into a 120 sq.ft. ISO class 4 (equivalent to the original class 10 standard) clean room. Hayabusa samples are stored, observed, processed, and packaged for allocation inside a stainless steel glove box under dry N2. Construction of the clean laboratory was completed in 2012. Currently, 25 Itokawa particles are lodged in NASA's Hayabusa Lab. Special care has been taken during lab construction to remove or contain materials that may contribute contaminant particles in the same size range as the Hayabusa grains. Several witness plates of various materials are installed around the clean lab and within the glove box to permit characterization of local contaminants at regular intervals by SEM and mass spectrometry, and particle counts of the lab environment are frequently acquired. Of particular interest is anodized aluminum, which contains copious sub-mm grains of a multitude of different materials embedded in its upper surface. Unfortunately the use of anodized aluminum was necessary in the construction

  12. Organic Contamination Baseline Study: In NASA JSC Astromaterials Curation Laboratories. Summary Report

    Science.gov (United States)

    Calaway, Michael J.

    2013-01-01

    In preparation for OSIRIS-REx and other future sample return missions concerned with analyzing organics, we conducted an Organic Contamination Baseline Study for JSC Curation Laboratories in FY12. For FY12 testing, the organic baseline study focused only on molecular organic contamination in JSC curation gloveboxes: presumably, future collections (i.e. Lunar, Mars, asteroid missions) would use isolation containment systems rather than cleanrooms alone for primary sample storage. This decision was made due to limited historical data on curation gloveboxes, limited IR&D funds, and the fact that Genesis routinely monitors organics in their ISO class 4 cleanrooms.

  13. Refusal of curative radiation therapy and surgery among patients with cancer.

    Science.gov (United States)

    Aizer, Ayal A; Chen, Ming-Hui; Parekh, Arti; Choueiri, Toni K; Hoffman, Karen E; Kim, Simon P; Martin, Neil E; Hu, Jim C; Trinh, Quoc-Dien; Nguyen, Paul L

    2014-07-15

    Surgery and radiation therapy represent the only curative options for many patients with solid malignancies. However, despite the recommendations of their physicians, some patients refuse these therapies. This study characterized factors associated with refusal of surgical or radiation therapy as well as the impact of refusal of recommended therapy on patients with localized malignancies. We used the Surveillance, Epidemiology, and End Results (SEER) program to identify a population-based sample of 925,127 patients who received diagnoses between 1995 and 2008 of 1 of 8 common malignancies for which surgery and/or radiation are believed to confer a survival benefit. Refusal of oncologic therapy, as documented in the SEER database, was the primary outcome measure. Multivariable logistic regression was used to investigate factors associated with refusal. The impact of refusal of therapy on cancer-specific mortality was assessed with Fine and Gray's competing risks regression. In total, 2441 of 692,938 patients (0.4%) refused surgery, and 2113 of 232,189 patients (0.9%) refused radiation, despite the recommendations of their physicians. On multivariable analysis, advancing age, decreasing annual income, nonwhite race, and unmarried status were associated with refusal of surgery, whereas advancing age, decreasing annual income, Asian American race, and unmarried status were associated with refusal of radiation (P < .001 in all cases). Refusal of surgery and radiation were associated with increased estimates of cancer-specific mortality for all malignancies evaluated (hazard ratio [HR], 2.80; 95% confidence interval [CI], 2.59-3.03; P < .001). Some patients refuse curative surgical and/or radiation-based oncologic therapy, raising concern that socioeconomic factors may drive some patients to forego potentially life-saving care. Copyright © 2014 Elsevier Inc. All rights reserved.
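
    In the spirit of the multivariable logistic regression described, a minimal sketch on synthetic data follows (statsmodels); the covariates, coefficients, and effect sizes are invented and only echo the direction of the reported associations, and SEER microdata are not reproduced here.

```python
# Synthetic-data sketch of a multivariable logistic regression of refusal on
# patient covariates; all numbers below are invented for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
age = rng.normal(65, 10, n)
unmarried = rng.integers(0, 2, n)

# Invented data-generating process: refusal more likely with age, unmarried.
logit = -6.0 + 0.05 * age + 0.6 * unmarried
refused = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([age, unmarried]))
fit = sm.Logit(refused, X).fit(disp=0)
print(np.exp(fit.params[1:]))  # odds ratios for age and unmarried status
```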

  14. ODIN. Online Database Information Network: ODIN Policy & Procedure Manual.

    Science.gov (United States)

    Townley, Charles T.; And Others

    Policies and procedures are outlined for the Online Database Information Network (ODIN), a cooperative of libraries in south-central Pennsylvania, which was organized to improve library services through technology. The first section covers organization and goals, members, and responsibilities of the administrative council and libraries. Patrons…

  15. Locus-Specific Databases and Recommendations to Strengthen Their Contribution to the Classification of Variants in Cancer Susceptibility Genes

    NARCIS (Netherlands)

    Greenblatt, Marc S.; Brody, Lawrence C.; Foulkes, William D.; Genuardi, Maurizio; Hofstra, Robert M. W.; Olivier, Magali; Plon, Sharon E.; Sijmons, Rolf H.; Sinilnikova, Olga; Spurdle, Amanda B.

    2008-01-01

    Locus-specific databases (LSDBs) are curated collections of sequence variants in genes associated with disease. LSDBs of cancer-related genes often serve as a critical resource to researchers, diagnostic laboratories, clinicians, and others in the cancer genetics community. LSDBs are poised to play

  16. Caveat emptor: limitations of the automated reconstruction of metabolic pathways in Plasmodium.

    Science.gov (United States)

    Ginsburg, Hagai

    2009-01-01

    The functional reconstruction of metabolic pathways from an annotated genome is a tedious and demanding enterprise. Automation of this endeavor using bioinformatics algorithms could cope with the ever-increasing number of sequenced genomes and accelerate the process. Here, the manual reconstruction of metabolic pathways in the functional genomic database of Plasmodium falciparum--Malaria Parasite Metabolic Pathways--is described and compared with pathways generated automatically as they appear in PlasmoCyc, metaSHARK and the Kyoto Encyclopedia of Genes and Genomes. A critical evaluation of this comparison discloses that the automatic reconstruction of pathways generates manifold paths that need an expert manual verification to accept some and reject most others based on manually curated gene annotation.

  17. Data Curation Program Development in U.S. Universities: The Georgia Institute of Technology Example

    Directory of Open Access Journals (Sweden)

    Tyler O. Walters

    2009-12-01

    Full Text Available The curation of scientific research data at U.S. universities is a story of enterprising individuals and of incremental progress. A small number of libraries and data centers who see the possibilities of becoming “digital information management centers” are taking entrepreneurial steps to extend beyond their traditional information assets and include managing scientific and scholarly research data. The Georgia Institute of Technology (GT has had a similar development path toward a data curation program based in its library. This paper will articulate GT’s program development, which the author offers as an experience common in U.S. universities. The main characteristic is a program devoid of top-level mandates and incentives, but rich with independent, “bottom-up” action. The paper will address program antecedents and context, inter-institutional partnerships that advance the library’s curation program, library organizational developments, partnerships with campus research communities, and a proposed model for curation program development. It concludes that despite the clear need for data curation put forth by researchers such as the groups of neuroscientists and bioscientists referenced in this paper, the university experience examined suggests that gathering resources for developing data curation programs at the institutional level is proving to be quite onerous. However, and in spite of the challenges, some U.S. research universities are beginning to establish perceptible data curation programs.

  18. Information retrieval system of nuclear power plant database (PPD) user's guide

    International Nuclear Information System (INIS)

    Izumi, Fumio; Horikami, Kunihiko; Kobayashi, Kensuke.

    1990-12-01

    A nuclear power plant database (PPD) and its retrieval system have been developed. The database contains a large volume of safety design data for nuclear power plants, operating and planned in Japan. The information stored in the database can be retrieved at high speed, whenever needed, by use of the retrieval system. This report is a user's manual for the system to access the database utilizing a display unit of the JAERI computer network system. (author)

  19. Curating Public Art 2.0: The case of Autopoiesis

    DEFF Research Database (Denmark)

    Ajana, Btihaj

    2017-01-01

    This article examines the intersections between public art, curation and Web 2.0 technology. Building on the case study of Autopoiesis, a digital art project focusing on the curation and online exhibition of artworks received from members of the public in the United Arab Emirates, the article ... to facilitate autonomous creative self-expressions and enable greater public participation in culture. By providing a critical reflection on the ‘material’ contexts of this digital project, the article also demonstrates the related tensions between the virtual and the physical, and the wider ‘local’ realities...

  20. Vaxjo: A Web-Based Vaccine Adjuvant Database and Its Application for Analysis of Vaccine Adjuvants and Their Uses in Vaccine Development

    Directory of Open Access Journals (Sweden)

    Samantha Sayers

    2012-01-01

    Full Text Available Vaccine adjuvants are compounds that enhance host immune responses to co-administered antigens in vaccines. Vaxjo is a web-based central database and analysis system that curates, stores, and analyzes vaccine adjuvants and their usages in vaccine development. Basic information of a vaccine adjuvant stored in Vaxjo includes adjuvant name, components, structure, appearance, storage, preparation, function, safety, and vaccines that use this adjuvant. Reliable references are curated and cited. Bioinformatics scripts are developed and used to link vaccine adjuvants to different adjuvanted vaccines stored in the general VIOLIN vaccine database. Presently, 103 vaccine adjuvants have been curated in Vaxjo. Among these adjuvants, 98 have been used in 384 vaccines stored in VIOLIN against over 81 pathogens, cancers, or allergies. All these vaccine adjuvants are categorized and analyzed based on adjuvant types, pathogens used, and vaccine types. As a use case study of vaccine adjuvants in infectious disease vaccines, the adjuvants used in Brucella vaccines are specifically analyzed. A user-friendly web query and visualization interface is developed for interactive vaccine adjuvant search. To support data exchange, the information of vaccine adjuvants is stored in the Vaccine Ontology (VO) in the Web Ontology Language (OWL) format.

  1. Vaxjo: a web-based vaccine adjuvant database and its application for analysis of vaccine adjuvants and their uses in vaccine development.

    Science.gov (United States)

    Sayers, Samantha; Ulysse, Guerlain; Xiang, Zuoshuang; He, Yongqun

    2012-01-01

    Vaccine adjuvants are compounds that enhance host immune responses to co-administered antigens in vaccines. Vaxjo is a web-based central database and analysis system that curates, stores, and analyzes vaccine adjuvants and their usages in vaccine development. Basic information of a vaccine adjuvant stored in Vaxjo includes adjuvant name, components, structure, appearance, storage, preparation, function, safety, and vaccines that use this adjuvant. Reliable references are curated and cited. Bioinformatics scripts are developed and used to link vaccine adjuvants to different adjuvanted vaccines stored in the general VIOLIN vaccine database. Presently, 103 vaccine adjuvants have been curated in Vaxjo. Among these adjuvants, 98 have been used in 384 vaccines stored in VIOLIN against over 81 pathogens, cancers, or allergies. All these vaccine adjuvants are categorized and analyzed based on adjuvant types, pathogens used, and vaccine types. As a use case study of vaccine adjuvants in infectious disease vaccines, the adjuvants used in Brucella vaccines are specifically analyzed. A user-friendly web query and visualization interface is developed for interactive vaccine adjuvant search. To support data exchange, the information of vaccine adjuvants is stored in the Vaccine Ontology (VO) in the Web Ontology Language (OWL) format.

  2. Fish Karyome version 2.1: a chromosome database of fishes and other aquatic organisms.

    Science.gov (United States)

    Nagpure, Naresh Sahebrao; Pathak, Ajey Kumar; Pati, Rameshwar; Rashid, Iliyas; Sharma, Jyoti; Singh, Shri Prakash; Singh, Mahender; Sarkar, Uttam Kumar; Kushwaha, Basdeo; Kumar, Ravindra; Murali, S

    2016-01-01

    Voluminous information is available from karyological studies of fishes; however, limited efforts have been made to compile and curate the available karyological data in digital form. The 'Fish Karyome' database was a preliminary attempt to compile and digitize the available karyological information on finfishes of the Indian subcontinent. But the database had limitations, since it covered data only on Indian finfishes, with limited search options. In response to feedback from users and given its utility in fish cytogenetic studies, the Fish Karyome database was upgraded using Linux, Apache, MySQL and PHP (Hypertext Preprocessor) (LAMP) technologies. In the present version, the scope of the system was increased by compiling and curating the available chromosomal information from across the globe on fishes and other aquatic organisms, such as echinoderms, molluscs and arthropods, especially those of aquaculture importance. Thus, Fish Karyome version 2.1 presently covers 866 chromosomal records for 726 species, supported by 253 published articles, and the information is being updated regularly. The database provides information on chromosome number and morphology, sex chromosomes, chromosome banding, molecular cytogenetic markers, etc., supported by fish and karyotype images through interactive tools. It also enables users to browse and view chromosomal information based on habitat, family, conservation status and chromosome number. The system also displays chromosome numbers in model organisms, protocols for chromosome preparation and allied techniques, and a glossary of cytogenetic terms. A data submission facility has also been provided through a data submission panel. The database can serve as a unique and useful resource for cytogenetic characterization, sex determination, chromosomal mapping, cytotaxonomy, karyo-evolution and systematics of fishes. Database URL: http://mail.nbfgr.res.in/Fish_Karyome. © The Author(s) 2016. Published by Oxford University Press.

  3. Practices of research data curation in institutional repositories: A qualitative view from repository staff.

    Science.gov (United States)

    Lee, Dong Joon; Stvilia, Besiki

    2017-01-01

    The importance of managing research data has been emphasized by the government, funding agencies, and scholarly communities. Increased access to research data increases the impact and efficiency of scientific activities and funding. Thus, many research institutions have established or plan to establish research data curation services as part of their Institutional Repositories (IRs). However, in order to design effective research data curation services in IRs, and to build active research data providers and user communities around those IRs, it is essential to study current data curation practices and provide rich descriptions of the sociotechnical factors and relationships shaping those practices. Based on 13 interviews with 15 IR staff members from 13 large research universities in the United States, this paper provides a rich, qualitative description of research data curation and use practices in IRs. In particular, the paper identifies data curation and use activities in IRs, as well as their structures, roles played, skills needed, contradictions and problems present, solutions sought, and workarounds applied. The paper can inform the development of best practice guides, infrastructure and service templates, as well as education in research data curation in Library and Information Science (LIS) schools.

  4. Comprehensive annotation of secondary metabolite biosynthetic genes and gene clusters of Aspergillus nidulans, A. fumigatus, A. niger and A. oryzae

    Science.gov (United States)

    2013-01-01

    Background Secondary metabolite production, a hallmark of filamentous fungi, is an expanding area of research for the Aspergilli. These compounds are potent chemicals, ranging from deadly toxins to therapeutic antibiotics to potential anti-cancer drugs. The genome sequences for multiple Aspergilli have been determined, and provide a wealth of predictive information about secondary metabolite production. Sequence analysis and gene overexpression strategies have enabled the discovery of novel secondary metabolites and the genes involved in their biosynthesis. The Aspergillus Genome Database (AspGD) provides a central repository for gene annotation and protein information for Aspergillus species. These annotations include Gene Ontology (GO) terms, phenotype data, gene names and descriptions and they are crucial for interpreting both small- and large-scale data and for aiding in the design of new experiments that further Aspergillus research. Results We have manually curated Biological Process GO annotations for all genes in AspGD with recorded functions in secondary metabolite production, adding new GO terms that specifically describe each secondary metabolite. We then leveraged these new annotations to predict roles in secondary metabolism for genes lacking experimental characterization. As a starting point for manually annotating Aspergillus secondary metabolite gene clusters, we used antiSMASH (antibiotics and Secondary Metabolite Analysis SHell) and SMURF (Secondary Metabolite Unknown Regions Finder) algorithms to identify potential clusters in A. nidulans, A. fumigatus, A. niger and A. oryzae, which we subsequently refined through manual curation. Conclusions This set of 266 manually curated secondary metabolite gene clusters will facilitate the investigation of novel Aspergillus secondary metabolites. PMID:23617571

  5. Identification and correction of abnormal, incomplete and mispredicted proteins in public databases

    Directory of Open Access Journals (Sweden)

    Bányai László

    2008-08-01

    Background Despite significant improvements in computational annotation of genomes, sequences of abnormal, incomplete or incorrectly predicted genes and proteins remain abundant in public databases. Since the majority of incomplete, abnormal or mispredicted entries are not annotated as such, these errors seriously affect the reliability of these databases. Here we describe the MisPred approach, which may provide an efficient means for the quality control of databases. The current version of the MisPred approach uses five distinct routines for identifying abnormal, incomplete or mispredicted entries, based on the principle that a sequence is likely to be incorrect if some of its features conflict with our current knowledge about protein-coding genes and proteins: (i) conflict between the predicted subcellular localization of proteins and the absence of the corresponding sequence signals; (ii) presence of extracellular and cytoplasmic domains and the absence of transmembrane segments; (iii) co-occurrence of extracellular and nuclear domains; (iv) violation of domain integrity; (v) chimeras encoded by two or more genes located on different chromosomes. Results Analyses of predicted EnsEMBL protein sequences of nine deuterostome (Homo sapiens, Mus musculus, Rattus norvegicus, Monodelphis domestica, Gallus gallus, Xenopus tropicalis, Fugu rubripes, Danio rerio and Ciona intestinalis) and two protostome species (Caenorhabditis elegans and Drosophila melanogaster) have revealed that the absence of expected signal peptides and violation of domain integrity account for the majority of mispredictions. Analyses of sequences predicted by NCBI's GNOMON annotation pipeline show that the rates of mispredictions are comparable to those of EnsEMBL. Interestingly, even the manually curated UniProtKB/Swiss-Prot dataset is contaminated with mispredicted or abnormal proteins, although to a much lesser extent than UniProtKB/TrEMBL or the EnsEMBL or GNOMON
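
    Two of the five MisPred-style routines can be expressed as simple consistency checks over a protein record, as in this Python sketch; the record structure, fields and example entries are invented stand-ins, not MisPred's actual implementation.

    from dataclasses import dataclass, field

    @dataclass
    class Protein:
        name: str
        predicted_secreted: bool          # predicted extracellular localization
        has_signal_peptide: bool          # sequence signal actually present
        domains: list = field(default_factory=list)  # (domain_name, location) pairs

    def rule_missing_signal(p: Protein) -> bool:
        # Rule (i): predicted secreted protein lacking the expected signal peptide.
        return p.predicted_secreted and not p.has_signal_peptide

    def rule_extracellular_plus_nuclear(p: Protein) -> bool:
        # Rule (iii): co-occurrence of extracellular and nuclear domains.
        locs = {loc for _, loc in p.domains}
        return "extracellular" in locs and "nuclear" in locs

    suspect = Protein("ENSP_TOY1", True, False,
                      [("EGF", "extracellular"), ("HMG-box", "nuclear")])
    for rule in (rule_missing_signal, rule_extracellular_plus_nuclear):
        if rule(suspect):
            print(f"{suspect.name} flagged by {rule.__name__}")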

  6. Physical Therapy and Manual Physical Therapy: Differences in Patient Characteristics.

    NARCIS (Netherlands)

    van Ravensberg, C. D. Dorine; Oostendorp, Rob A B; van Berkel, Lonneke M.; Scholten-Peeters, Gwendolijne G M; Pool, Jan J.M.; Swinkels, Raymond A. H. M.; Huijbregts, Peter A.

    2005-01-01

    This study compared socio-demographic characteristics, health problem characteristics, and primary process data between database samples of patients referred to physical therapy (PT) versus a sample of patients referred to manual physical therapy (MPT) in the Netherlands. Statistical analysis

  9. Crowdsourcing and curation: perspectives from biology and natural language processing.

    Science.gov (United States)

    Hirschman, Lynette; Fort, Karën; Boué, Stéphanie; Kyrpides, Nikos; Islamaj Doğan, Rezarta; Cohen, Kevin Bretonnel

    2016-01-01

    Crowdsourcing is increasingly utilized for performing tasks in both natural language processing and biocuration. Although there have been many applications of crowdsourcing in these fields, there have been fewer high-level discussions of the methodology and its applicability to biocuration. This paper explores crowdsourcing for biocuration through several case studies that highlight different ways of leveraging 'the crowd'; these raise issues about the kind(s) of expertise needed, the motivations of participants, and questions related to feasibility, cost and quality. The paper is an outgrowth of a panel session held at BioCreative V (Seville, September 9-11, 2015). The session consisted of four short talks, followed by a discussion. In their talks, the panelists explored the role of expertise and the potential to improve crowd performance by training; the challenge of decomposing tasks to make them amenable to crowdsourcing; and the capture of biological data and metadata through community editing. Database URL: http://www.mitre.org/publications/technical-papers/crowdsourcing-and-curation-perspectives. © The Author(s) 2016. Published by Oxford University Press.

  10. Curation of US Martian Meteorites Collected in Antarctica

    Science.gov (United States)

    Lindstrom, M.; Satterwhite, C.; Allton, J.; Stansbury, E.

    1998-01-01

    To date the ANSMET field team has collected five martian meteorites (see below) in Antarctica and returned them for curation at the Johnson Space Center (JSC) Meteorite Processing Laboratory (MPL). The meteorites were collected with the clean procedures used by ANSMET in collecting all meteorites: they were handled with JSC-cleaned tools, packaged in clean bags, and shipped frozen to JSC. The five martian meteorites vary significantly in size (12-7942 g) and rock type (basalts, lherzolites, and orthopyroxenite). Detailed descriptions are provided in the Mars Meteorite Compendium, which describes classification, curation and research results. A table gives the names, classifications and original and curatorial masses of the martian meteorites. The MPL and measures for contamination control are described.

  11. Curative and eradicant action of fungicides to control Phakopsora pachyrhizi in soybean plants

    Directory of Open Access Journals (Sweden)

    Erlei Melo Reis

    ABSTRACT Experiments were carried out in a growth chamber and laboratory to quantify the curative and eradicant actions of fungicides in Asian soybean rust control. The experiments were conducted with the CD 214 RR cultivar, assessing the following fungicides, separately or in association: chlorothalonil, flutriafol, cyproconazole + trifloxystrobin, epoxiconazole + pyraclostrobin, cyproconazole + azoxystrobin, and cyproconazole + picoxystrobin. The fungicides were applied at four (curative) and nine (eradicant) days after inoculation. Curative treatments were evaluated according to the density of lesions and uredia/cm2, and the eradicant treatment was assessed based on the necrosis of lesions/uredia and on uredospore viability. Except for the fungicide chlorothalonil, the fungicides showed curative action against latent/virtual infections. Penetrant fungicides that are absorbed have curative and eradicant action against soybean rust.

  12. A literature search tool for intelligent extraction of disease-associated genes.

    Science.gov (United States)

    Jung, Jae-Yoon; DeLuca, Todd F; Nelson, Tristan H; Wall, Dennis P

    2014-01-01

    To extract disorder-associated genes from the scientific literature in PubMed with greater sensitivity for literature-based support than existing methods. We developed a PubMed query to retrieve disorder-related, original research articles. Then we applied a rule-based text-mining algorithm with keyword matching to extract target disorders, genes with significant results, and the type of study described by the article. We compared our resulting candidate disorder genes and supporting references with existing databases. We demonstrated that our candidate gene set covers nearly all genes in manually curated databases, and that the references supporting the disorder-gene link are more extensive and accurate than other general purpose gene-to-disorder association databases. We implemented a novel publication search tool to find target articles, specifically focused on links between disorders and genotypes. Through comparison against gold-standard manually updated gene-disorder databases and comparison with automated databases of similar functionality we show that our tool can search through the entirety of PubMed to extract the main gene findings for human diseases rapidly and accurately.
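
    A heavily simplified Python sketch of the rule-based, keyword-matching extraction step the abstract describes; the gene-symbol pattern, significance rule and disorder list below are invented stand-ins for the paper's actual rules.

    import re

    GENE = re.compile(r"\b([A-Z][A-Z0-9]{2,6})\b")          # crude gene-symbol pattern
    SIGNIFICANT = re.compile(r"significant(ly)? associated", re.I)

    def extract(abstract: str, target_disorder: str):
        # Keep the article only if it mentions the disorder and a significance cue,
        # then return candidate gene symbols (minus obvious non-gene tokens).
        if target_disorder.lower() not in abstract.lower():
            return []
        if not SIGNIFICANT.search(abstract):
            return []
        return sorted(set(GENE.findall(abstract)) - {"DNA", "RNA"})

    text = ("Variants in SHANK3 and CNTNAP2 were significantly associated "
            "with autism in a genome-wide study.")
    print(extract(text, "autism"))   # ['CNTNAP2', 'SHANK3']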

  13. Economic evaluation of manual therapy for musculoskeletal diseases: a protocol for a systematic review and narrative synthesis of evidence.

    Science.gov (United States)

    Kim, Chang-Gon; Mun, Su-Jeong; Kim, Ka-Na; Shin, Byung-Cheul; Kim, Nam-Kwen; Lee, Dong-Hyo; Lee, Jung-Han

    2016-05-13

    Manual therapy is the non-surgical conservative management of musculoskeletal disorders using the practitioner's hands on the patient's body for diagnosing and treating disease. The aim of this study is to systematically review trial-based economic evaluations of manual therapy relative to other interventions used for the management of musculoskeletal diseases. Randomised clinical trials (RCTs) on the economic evaluation of manual therapy for musculoskeletal diseases will be included in the review. The following databases will be searched from their inception: Medline, Embase, Cochrane Central Register of Controlled Trials (CENTRAL), Cumulative Index to Nursing and Allied Health Literature (CINAHL), Econlit, Mantis, Index to Chiropractic Literature, Science Citation Index, Social Science Citation Index, Allied and Complementary Medicine Database (AMED), Cochrane Database of Systematic Reviews (CDSR), National Health Service Database of Abstracts of Reviews of Effects (NHS DARE), National Health Service Health Technology Assessment Database (NHS HTA), National Health Service Economic Evaluation Database (NHS EED), five Korean medical databases (Oriental Medicine Advanced Searching Integrated System (OASIS), Research Information Service System (RISS), DBPIA, Korean Traditional Knowledge Portal (KTKP) and KoreaMed) and three Chinese databases (China National Knowledge Infrastructure (CNKI), VIP and Wanfang). The evidence for the cost-effectiveness, cost-utility and cost-benefit of manual therapy for musculoskeletal diseases will be assessed as the primary outcome. Health-related quality of life and adverse effects will be assessed as secondary outcomes. We will critically appraise the included studies using the Cochrane risk of bias tool and the Drummond checklist. Results will be summarised using Slavin's qualitative best-evidence synthesis approach. The results of the study will be disseminated via a peer-reviewed journal and/or conference presentations.

  14. An emerging role: the nurse content curator.

    Science.gov (United States)

    Brooks, Beth A

    2015-01-01

    A new phenomenon, the inverted or "flipped" classroom, assumes that students are no longer acquiring knowledge exclusively through textbooks or lectures. Instead, they are seeking out the vast amount of free information available to them online (the very essence of open source) to supplement learning gleaned in textbooks and lectures. With so much open-source content available to nursing faculty, it benefits the faculty to use readily available, technologically advanced content. The nurse content curator supports nursing faculty in its use of such content. Even more importantly, the highly paid, time-strapped faculty is not spending an inordinate amount of effort surfing for and evaluating content. The nurse content curator does that work, while the faculty uses its time more effectively to help students vet the truth, make meaning of the content, and learn to problem-solve. Brooks. © 2014 Wiley Periodicals, Inc.

  15. The DigCurV Curriculum Framework for Digital Curation in the Cultural Heritage Sector

    Directory of Open Access Journals (Sweden)

    Laura Molloy

    2014-07-01

    In 2013, the DigCurV collaborative network completed development of a Curriculum Framework for digital curation skills in the European cultural heritage sector. DigCurV synthesised a variety of established skills and competence models in the digital curation and LIS sectors with expertise from digital curation professionals, in order to develop a new Curriculum Framework. The resulting Framework provides a common language and helps define the skills, knowledge and abilities that are necessary for the development of digital curation training; for benchmarking existing programmes; and for promoting the continuing production, improvement and refinement of digital curation training programmes. This paper describes the salient points of this work, including how the project team conducted the research necessary to develop the Framework, the structure of the Framework, the processes used to validate the Framework, and three ‘lenses’ onto the Framework. The paper also provides suggestions as to how the Framework might be used, including a description of potential audiences and purposes.

  16. The art and science of data curation: Lessons learned from constructing a virtual collection

    Science.gov (United States)

    Bugbee, Kaylin; Ramachandran, Rahul; Maskey, Manil; Gatlin, Patrick

    2018-03-01

    A digital, or virtual, collection is a value-added service developed by libraries that curates information and resources around a topic, theme or organization. Adoption of the virtual collection concept as an Earth science data service improves the discoverability, accessibility and usability of data both within individual data centers and across data centers and disciplines. In this paper, we introduce a methodology for systematically and rigorously curating Earth science data and information into a cohesive virtual collection. This methodology builds on the geocuration model of searching, selecting and synthesizing Earth science data, metadata and other information into a single and useful collection. We present our experiences curating a virtual collection for one of NASA's twelve Distributed Active Archive Centers (DAACs), the Global Hydrology Resource Center (GHRC), and describe lessons learned as a result of this curation effort. We also provide recommendations and best practices for data centers and data providers who wish to curate virtual collections for the Earth sciences.

  17. AgdbNet – antigen sequence database software for bacterial typing

    Directory of Open Access Journals (Sweden)

    Maiden Martin CJ

    2006-06-01

    Background Bacterial typing schemes based on the sequences of genes encoding surface antigens require databases that provide a uniform, curated, and widely accepted nomenclature of the variants identified. Due to the differences in typing schemes, imposed by the diversity of genes targeted, creating these databases has typically required the writing of one-off code to link the database to a web interface. Here we describe agdbNet, widely applicable web database software that facilitates simultaneous BLAST querying of multiple loci using either nucleotide or peptide sequences. Results Databases are described by XML files that are parsed by a Perl CGI script. Each database can have any number of loci, which may be defined by nucleotide and/or peptide sequences. The software is currently in use on at least five public databases for the typing of Neisseria meningitidis, Campylobacter jejuni and Streptococcus equi and can be set up to query internal isolate tables or suitably-configured external isolate databases, such as those used for multilocus sequence typing. The style of the resulting website can be fully configured by modifying stylesheets and through the use of customised header and footer files that surround the output of the script. Conclusion The software provides a rapid means of setting up customised Internet antigen sequence databases. The flexible configuration options enable typing schemes with differing requirements to be accommodated.
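
    agdbNet itself is a Perl CGI script driven by XML database descriptions; the following Python snippet only illustrates that configuration idea with an invented, minimal XML file, routing each locus to a nucleotide or peptide BLAST program. None of the element or file names are agdbNet's real ones.

    import xml.etree.ElementTree as ET

    # Invented, minimal configuration in the spirit of agdbNet's XML descriptions.
    CONFIG = """
    <database name="demo_typing">
      <locus id="porA" type="nucleotide" fasta="porA.fas"/>
      <locus id="fetA" type="peptide" fasta="fetA.fas"/>
    </database>
    """

    root = ET.fromstring(CONFIG)
    print("database:", root.get("name"))
    for locus in root.findall("locus"):
        # Each locus could be handed to the BLAST flavor matching its sequence type
        # (blastn for nucleotide loci, blastp for peptide loci).
        program = "blastn" if locus.get("type") == "nucleotide" else "blastp"
        print(f'{locus.get("id")}: query {locus.get("fasta")} with {program}')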

  18. Competencies for preservation and digital curation

    Directory of Open Access Journals (Sweden)

    Sonia Boeres

    2016-09-01

    Information Science, throughout its existence, has been a multi- and interdisciplinary field, undergoing constant change because of its object of study: information. Given that this object is not static and is increasingly linked to information technology, a challenge has arisen: how can we ensure the permanence of digital libraries? How can we ensure that the terabytes generated with increasing speed, and in various formats, will remain available and fully usable over time? This is a challenge that Information Science professionals are being called upon to solve in the process of so-called digital preservation and curation. Thus, this article aims to identify the competencies that the information professional must have to carry out the process of digital preservation and curation. The article discusses the emergence of professions (from the perspective of Sociology), the need for work in the realization of the human being (Psychology), and the proficiencies required in Information Science to ensure the preservation of digital information in information units.

  19. Organic Contamination Baseline Study in NASA Johnson Space Center Astromaterials Curation Laboratories

    Science.gov (United States)

    Calaway, Michael J.; Allen, Carlton C.; Allton, Judith H.

    2014-01-01

    Future robotic and human spaceflight missions to the Moon, Mars, asteroids, and comets will require curating astromaterial samples with minimal inorganic and organic contamination to preserve the scientific integrity of each sample. 21st century sample return missions will focus on strict protocols for reducing organic contamination that have not been seen since the Apollo manned lunar landing program. To properly curate these materials, the Astromaterials Acquisition and Curation Office under the Astromaterial Research and Exploration Science Directorate at NASA Johnson Space Center houses and protects all extraterrestrial materials brought back to Earth that are controlled by the United States government. During fiscal year 2012, we conducted a year-long project to compile historical documentation and laboratory tests involving organic investigations at these facilities. In addition, we developed a plan to determine the current state of organic cleanliness in curation laboratories housing astromaterials. This was accomplished by focusing on current procedures and protocols for cleaning, sample handling, and storage. While the intention of this report is to give a comprehensive overview of the current state of organic cleanliness in JSC curation laboratories, it also provides a baseline for determining whether our cleaning procedures and sample handling protocols need to be adapted and/or augmented to meet the new requirements for future human spaceflight and robotic sample return missions.

  20. Refusal of Curative Radiation Therapy and Surgery Among Patients With Cancer

    International Nuclear Information System (INIS)

    Aizer, Ayal A.; Chen, Ming-Hui; Parekh, Arti; Choueiri, Toni K.; Hoffman, Karen E.; Kim, Simon P.; Martin, Neil E.; Hu, Jim C.; Trinh, Quoc-Dien; Nguyen, Paul L.

    2014-01-01

    Purpose: Surgery and radiation therapy represent the only curative options for many patients with solid malignancies. However, despite the recommendations of their physicians, some patients refuse these therapies. This study characterized factors associated with refusal of surgical or radiation therapy as well as the impact of refusal of recommended therapy on patients with localized malignancies. Methods and Materials: We used the Surveillance, Epidemiology, and End Results program to identify a population-based sample of 925,127 patients who had diagnoses of 1 of 8 common malignancies for which surgery and/or radiation are believed to confer a survival benefit between 1995 and 2008. Refusal of oncologic therapy, as documented in the SEER database, was the primary outcome measure. Multivariable logistic regression was used to investigate factors associated with refusal. The impact of refusal of therapy on cancer-specific mortality was assessed with Fine and Gray's competing risks regression. Results: In total, 2441 of 692,938 patients (0.4%) refused surgery, and 2113 of 232,189 patients (0.9%) refused radiation, despite the recommendations of their physicians. On multivariable analysis, advancing age, decreasing annual income, nonwhite race, and unmarried status were associated with refusal of surgery, whereas advancing age, decreasing annual income, Asian American race, and unmarried status were associated with refusal of radiation (P<.001 in all cases). Refusal of surgery and radiation were associated with increased estimates of cancer-specific mortality for all malignancies evaluated (hazard ratio [HR], 2.80, 95% confidence interval [CI], 2.59-3.03; P<.001 and HR 1.97 [95% CI, 1.78-2.18]; P<.001, respectively). Conclusions: Nonwhite, less affluent, and unmarried patients are more likely to refuse curative surgical and/or radiation-based oncologic therapy, raising concern that socioeconomic factors may drive some patients to forego potentially life
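
    For readers unfamiliar with the statistical machinery, the sketch below fits a multivariable logistic regression of a refusal indicator against two covariates on synthetic data, using statsmodels; it illustrates the type of model used for the refusal analysis, not the SEER data, the full covariate set, or the Fine and Gray competing-risks step.

    import numpy as np
    import statsmodels.api as sm

    # Synthetic toy data standing in for SEER records; the coefficients mean nothing.
    rng = np.random.default_rng(0)
    n = 500
    age = rng.normal(65, 10, n)
    unmarried = rng.integers(0, 2, n)
    # Simulated refusal probability increasing with age and unmarried status.
    logit = -8 + 0.08 * age + 0.7 * unmarried
    refused = rng.random(n) < 1 / (1 + np.exp(-logit))

    X = sm.add_constant(np.column_stack([age, unmarried]))
    model = sm.Logit(refused.astype(int), X).fit(disp=False)
    print(model.summary(xname=["const", "age", "unmarried"]))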

  2. Hanford Environmental Information System (HEIS) Operator's Manual. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Schreck, R.I.

    1991-10-01

    The Hanford Environmental Information System (HEIS) is a consolidated set of automated resources that effectively manage the data gathered during environmental monitoring and restoration of the Hanford Site. The HEIS includes an integrated database that provides consistent and current data to all users and promotes sharing of data by the entire user community. This manual describes the facilities available to the operational user who is responsible for data entry, processing, scheduling, reporting, and quality assurance. A companion manual, the HEIS User's Manual, describes the facilities available to the scientist, engineer, or manager who uses the system for environmental monitoring, assessment, and restoration planning; and to the regulator who is responsible for reviewing Hanford Site operations against regulatory requirements and guidelines.

  3. The baladi curative system of Cairo, Egypt.

    Science.gov (United States)

    Early, E A

    1988-03-01

    The article explores the symbolic structure of the baladi (traditional) cultural system as revealed in everyday narratives, with a focus on baladi curative action. The everyday illness narrative provides a cultural window to the principles of fluidity and restorative balance of baladi curative practices. The body is seen as a dynamic organism through which both foreign objects and physiological entities can move. The body should be in balance, as with any humorally influenced system, and so baladi cures aim to restore normal balance and functioning of the body. The article examines in detail a narrative on treatment of a sick child, and another on treatment of fertility problems. It traces such cultural oppositions as insider:outsider, authentic:inauthentic, and home remedy:cosmopolitan medicine. In the social as well as the medical arena, these themes organize social and medical judgements about correct action and explanations of events.

  4. Construction and analysis of a human hepatotoxicity database suitable for QSAR modeling using post-market safety data

    International Nuclear Information System (INIS)

    Zhu, Xiao; Kruhlak, Naomi L.

    2014-01-01

    Drug-induced liver injury (DILI) is one of the most common drug-induced adverse events (AEs) leading to life-threatening conditions such as acute liver failure. It has also been recognized as the single most common cause of safety-related post-market withdrawals or warnings. Efforts to develop new predictive methods to assess the likelihood of a drug being a hepatotoxicant have been challenging due to the complexity and idiosyncrasy of clinical manifestations of DILI. The FDA adverse event reporting system (AERS) contains post-market data that depict the morbidity of AEs. Here, we developed a scalable approach to construct a hepatotoxicity database using post-market data for the purpose of quantitative structure–activity relationship (QSAR) modeling. A set of 2029 unique and modelable drug entities with 13,555 drug-AE combinations was extracted from the AERS database using 37 hepatotoxicity-related query preferred terms (PTs). In order to determine the optimal classification scheme to partition positive from negative drugs, a manually-curated DILI calibration set composed of 105 negatives and 177 positives was developed based on the published literature. The final classification scheme combines hepatotoxicity-related PT data with supporting information that optimizes the predictive performance across the calibration set. Data for other toxicological endpoints related to liver injury, such as liver enzyme abnormalities, cholestasis, and bile duct disorders, were also extracted and classified. Collectively, these datasets can be used to generate a battery of QSAR models that assess a drug's potential to cause DILI.
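
    A toy Python version of the classification idea: count a drug's reports matching hepatotoxicity-related preferred terms (PTs) and call the drug positive above a threshold. The PT list, counts and threshold are invented; the paper's actual scheme combines PT data with supporting information tuned against the calibration set.

    # Invented hepatotoxicity-related PTs; the paper used 37 query PTs.
    HEPATOTOX_PTS = {"hepatic failure", "hepatitis acute", "jaundice"}

    # Toy drug-to-AE report counts standing in for AERS extractions.
    drug_ae = {
        "drug_A": {"hepatic failure": 12, "nausea": 40},
        "drug_B": {"headache": 25},
        "drug_C": {"jaundice": 3, "hepatitis acute": 2},
    }

    def classify(ae_counts, min_reports=5):
        # Sum reports over hepatotoxicity PTs and apply a simple cutoff.
        hits = sum(n for pt, n in ae_counts.items() if pt in HEPATOTOX_PTS)
        return "positive" if hits >= min_reports else "negative"

    for drug, counts in drug_ae.items():
        print(drug, classify(counts))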

  5. Judson_Mansouri_Automated_Chemical_Curation_QSAREnvRes_Data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Here we describe the development of an automated KNIME workflow to curate and correct errors in the structure and identity of chemicals using the publically...

  6. The UniProtKB/Swiss-Prot knowledgebase and its Plant Proteome Annotation Program.

    Science.gov (United States)

    Schneider, Michel; Lane, Lydie; Boutet, Emmanuel; Lieberherr, Damien; Tognolli, Michael; Bougueleret, Lydie; Bairoch, Amos

    2009-04-13

    The UniProt knowledgebase, UniProtKB, is the main product of the UniProt consortium. It consists of two sections, UniProtKB/Swiss-Prot, the manually curated section, and UniProtKB/TrEMBL, the computer translation of the EMBL/GenBank/DDBJ nucleotide sequence database. Taken together, these two sections cover all the proteins characterized or inferred from all publicly available nucleotide sequences. The Plant Proteome Annotation Program (PPAP) of UniProtKB/Swiss-Prot focuses on the manual annotation of plant-specific proteins and protein families. Our major effort is currently directed towards the two model plants Arabidopsis thaliana and Oryza sativa. In UniProtKB/Swiss-Prot, redundancy is minimized by merging all data from different sources in a single entry. The proposed protein sequence is frequently modified after comparison with ESTs, full length transcripts or homologous proteins from other species. The information present in manually curated entries allows the reconstruction of all described isoforms. The annotation also includes proteomics data such as PTM and protein identification MS experimental results. UniProtKB and the other products of the UniProt consortium are accessible online at www.uniprot.org.

  7. Reducing Organic Contamination in NASA JSC Astromaterial Curation Facility

    Science.gov (United States)

    Calaway, M. J.; Allen, C. C.; Allton, J. H.

    2013-01-01

    Future robotic and human spaceflight missions to the Moon, Mars, asteroids and comets will require handling and storing astromaterial samples with minimal inorganic and organic contamination to preserve the scientific integrity of each sample. Much was learned from the rigorous attempts to minimize and monitor organic contamination during Apollo, but these were not adequate for current analytical requirements [1]. OSIRIS-REx, Hayabusa-2, and future Mars sample return will require better protocols for reducing organic contamination. Future isolation containment systems for astromaterials, possibly nitrogen-enriched gloveboxes, must be able to reduce organic and inorganic cross-contamination. In 2012, a baseline study established the current state of organic cleanliness in gloveboxes used by NASA JSC astromaterials curation labs that could be used as a benchmark for future mission designs [2, 3]. After standard ultra-pure water (UPW) cleaning, the majority of organic contaminants found were hydrocarbons, plasticizers, silicones, and solvents. Hydrocarbon loads (> C7) ranged from 1.9 to 11.8 ng/cm2 in TD-GC-MS wafer exposure analyses and 5.0 to 19.5 ng/L in TD-GC-MS adsorbent tube exposure. Peracetic acid sterilization was used in the atmospheric decontamination cabinets. Later, lunar curation gloveboxes were degreased with a pressurized Freon 113 wash. Today, UPW has replaced Freon as the standard cleaning procedure, but it does not have the degreasing solvency power of Freon. Future Cleaning Studies: Cleaning experiments are currently being orchestrated to study how to degrease and reduce organics in a JSC curation glovebox below the established baseline. Several new chemicals in the industry have replaced traditional degreasing solvents such as Freon and others that are now federally restricted. However, this new suite of chemicals remains untested for lowering organics in curation gloveboxes. 3M's HFE-7100DL and Du

  8. Sharing Responsibility for Data Stewardship Between Scientists and Curators

    Science.gov (United States)

    Hedstrom, M. L.

    2012-12-01

    Data stewardship is becoming increasingly important to support accurate conclusions from new forms of data, integration of and computation across heterogeneous data types, interactions between models and data, replication of results, data governance and long-term archiving. In addition to increasing recognition of the importance of data management, data science, and data curation by US and international scientific agencies, the National Academies of Science Board on Research Data and Information is sponsoring a study on Data Curation Education and Workforce Issues. Effective data stewardship requires a distributed effort among scientists who produce data, IT staff and/or vendors who provide data storage and computational facilities and services, and curators who enhance data quality, manage data governance, provide access to third parties, and assume responsibility for long-term archiving of data. The expertise necessary for scientific data management includes a mix of knowledge of the scientific domain; an understanding of domain data requirements, standards, ontologies and analytical methods; facility with leading edge information technology; and knowledge of data governance, standards, and best practices for long-term preservation and access that rarely are found in a single individual. Rather than developing data science and data curation as new and distinct occupations, this paper examines the set of tasks required for data stewardship. The paper proposes an alternative model that embeds data stewardship in scientific workflows and coordinates hand-offs between instruments, repositories, analytical processing, publishers, distributors, and archives. This model forms the basis for defining knowledge and skill requirements for specific actors in the processes required for data stewardship and the corresponding educational and training needs.

  9. Curating Big Data Made Simple: Perspectives from Scientific Communities.

    Science.gov (United States)

    Sowe, Sulayman K; Zettsu, Koji

    2014-03-01

    The digital universe is exponentially producing an unprecedented volume of data that has brought benefits as well as fundamental challenges for enterprises and scientific communities alike. This trend is inherently exciting for the development and deployment of cloud platforms to support scientific communities curating big data. The excitement stems from the fact that scientists can now access and extract value from the big data corpus, establish relationships between bits and pieces of information from many types of data, and collaborate with a diverse community of researchers from various domains. However, despite these perceived benefits, to date, little attention is focused on the people or communities who are both beneficiaries and, at the same time, producers of big data. The technical challenges posed by big data are as big as understanding the dynamics of communities working with big data, whether scientific or otherwise. Furthermore, the big data era also means that big data platforms for data-intensive research must be designed in such a way that research scientists can easily search and find data for their research, upload and download datasets for onsite/offsite use, perform computations and analysis, share their findings and research experience, and seamlessly collaborate with their colleagues. In this article, we present the architecture and design of a cloud platform that meets some of these requirements, and a big data curation model that describes how a community of earth and environmental scientists is using the platform to curate data. Motivation for developing the platform, lessons learnt in overcoming some challenges associated with supporting scientists to curate big data, and future research directions are also presented.

  10. Radioactive Materials Packaging (RAMPAC) Radioactive Materials Incident Report (RMIR). RAMTEMP users manual

    International Nuclear Information System (INIS)

    Tyron-Hopko, A.K.; Driscoll, K.L.

    1985-10-01

    The purpose of this document is to familiarize the potential user with the RadioActive Materials PACkaging (RAMPAC), Radioactive Materials Incident Report (RMIR), and RAMTEMP databases. RAMTEMP is a mirror image of RAMPAC. This reference document will enable the user to access and obtain reports from the databases while in an interactive mode. This manual will be revised as necessary to reflect enhancements made to the system.

  11. Improving taxonomic accuracy for fungi in public sequence databases: applying ‘one name one species’ in well-defined genera with Trichoderma/Hypocrea as a test case

    Science.gov (United States)

    Strope, Pooja K; Chaverri, Priscila; Gazis, Romina; Ciufo, Stacy; Domrachev, Michael; Schoch, Conrad L

    2017-01-01

    The ITS (nuclear ribosomal internal transcribed spacer) RefSeq database at the National Center for Biotechnology Information (NCBI) is dedicated to the clear association between name, specimen and sequence data. This database is focused on sequences obtained from type material stored in public collections. While the initial ITS sequence curation effort together with numerous fungal taxonomy experts attempted to cover as many orders as possible, we extended our latest focus to the family and genus ranks. We focused on Trichoderma for several reasons, mainly because the asexual and sexual synonyms were well documented, and a list of proposed names and type material were recently proposed and published. In this case study the recent taxonomic information was applied to do a complete taxonomic audit for the genus Trichoderma in the NCBI Taxonomy database. A name status report is available here: https://www.ncbi.nlm.nih.gov/Taxonomy/TaxIdentifier/tax_identifier.cgi. As a result, the ITS RefSeq Targeted Loci database at NCBI has been augmented with more sequences from type and verified material from Trichoderma species. Additionally, to aid in the cross referencing of data from single loci and genomes, we have collected a list of quality records of the RPB2 gene obtained from type material in GenBank that could help validate future submissions. During the process of curation, misidentified genomes were discovered, and sequence records from type material were found hidden under previous classifications. Source metadata curation, although more cumbersome, proved to be useful as confirmation of the type material designation. Database URL: http://www.ncbi.nlm.nih.gov/bioproject/PRJNA177353 PMID:29220466

  12. [Curative Effects of Hydroxyurea on the Patients with β-thalassaemia Intermedia].

    Science.gov (United States)

    Huang, Li; Yao, Hong-Xia

    2016-06-01

    To investigate the clinical features of β-thalassaemia intermedia (TI) patients and the curative effect and side reactions of hydroxyurea therapy. Twenty-nine patients with TI were divided into a hydroxyurea therapy group and a no-hydroxyurea therapy group; the curative effects and side reactions in the 2 groups were compared, and the blood transfusion status of the 2 groups was evaluated. In the hydroxyurea therapy group, the hemoglobin level increased after treatment for 3 months, the reticulocyte percentage obviously decreased after treatment for 12 months, and serum ferritin was maintained at a low level; in the no-hydroxyurea therapy group, the levels of hemoglobin and reticulocytes were not significantly improved after treatment, and the serum ferritin level gradually increased. In the hydroxyurea therapy group, 12 cases no longer required blood transfusion after treatment for 12 months, and the effective rate of treatment was 85.71%; in the no-hydroxyurea therapy group, blood transfusion dependency was not improved after treatment. No serious side reactions were found in any of the hydroxyurea-treated patients. Hydroxyurea shows a good curative effect in TI patients, and no serious side reactions occurred in patients treated with hydroxyurea, but the long-term curative effect and side reactions should be observed continuously.

  13. The Steward Observatory asteroid relational database

    Science.gov (United States)

    Sykes, Mark V.; Alvarezdelcastillo, Elizabeth M.

    1991-01-01

    The Steward Observatory Asteroid Relational Database (SOARD) was created as a flexible tool for undertaking studies of asteroid populations and sub-populations, to probe the biases intrinsic to asteroid databases, to ascertain the completeness of data pertaining to specific problems, to aid in the development of observational programs, and to develop pedagogical materials. To date, SOARD has compiled an extensive list of data available on asteroids and made it accessible through a single menu-driven database program. Users may obtain tailored lists of asteroid properties for any subset of asteroids or output files which are suitable for plotting spectral data on individual asteroids. The program has online help as well as user and programmer documentation manuals. The SOARD already has provided data to fulfill requests by members of the astronomical community. The SOARD continues to grow as data is added to the database and new features are added to the program.

  14. TIPdb-3D: the three-dimensional structure database of phytochemicals from Taiwan indigenous plants.

    Science.gov (United States)

    Tung, Chun-Wei; Lin, Ying-Chi; Chang, Hsun-Shuo; Wang, Chia-Chi; Chen, Ih-Sheng; Jheng, Jhao-Liang; Li, Jih-Heng

    2014-01-01

    The rich indigenous and endemic plants in Taiwan serve as a resourceful bank for biologically active phytochemicals. Based on our TIPdb database curating bioactive phytochemicals from Taiwan indigenous plants, this study presents a three-dimensional (3D) chemical structure database named TIPdb-3D to support the discovery of novel pharmacologically active compounds. The Merck Molecular Force Field (MMFF94) was used to generate 3D structures of phytochemicals in TIPdb. The 3D structures could facilitate the analysis of 3D quantitative structure-activity relationship, the exploration of chemical space and the identification of potential pharmacologically active compounds using protein-ligand docking. Database URL: http://cwtung.kmu.edu.tw/tipdb. © The Author(s) 2014. Published by Oxford University Press.
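
    Generating an MMFF94-optimized 3D structure of the kind TIPdb-3D stores takes only a few lines with RDKit, as sketched below; the abstract does not state which toolkit the authors actually used, and the molecule here is an arbitrary stand-in rather than a TIPdb phytochemical.

    from rdkit import Chem
    from rdkit.Chem import AllChem

    mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin as a stand-in compound
    mol = Chem.AddHs(mol)                      # explicit hydrogens for 3D embedding
    AllChem.EmbedMolecule(mol, randomSeed=42)  # generate initial 3D coordinates
    AllChem.MMFFOptimizeMolecule(mol)          # MMFF94 geometry optimization
    print(Chem.MolToMolBlock(mol))             # 3D structure in MDL mol format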

  15. User's Manual for RESRAD-OFFSITE Version 2.

    Energy Technology Data Exchange (ETDEWEB)

    Yu, C.; Gnanapragasam, E.; Biwer, B. M.; Kamboj, S.; Cheng, J. -J.; Klett, T.; LePoire, D.; Zielen, A. J.; Chen, S. Y.; Williams, W. A.; Wallo, A.; Domotor, S.; Mo, T.; Schwartzman, A.; Environmental Science Division; DOE; NRC

    2007-09-05

    The RESRAD-OFFSITE code is an extension of the RESRAD (onsite) code, which has been widely used for calculating doses and risks from exposure to radioactively contaminated soils. The development of RESRAD-OFFSITE started more than 10 years ago, but new models and methodologies have been developed, tested, and incorporated since then. Some of the new models have been benchmarked against other independently developed (international) models. The databases used have also expanded to include all the radionuclides (more than 830) contained in the International Commission on Radiological Protection (ICRP) 38 database. This manual provides detailed information on the design and application of the RESRAD-OFFSITE code. It describes in detail the new models used in the code, such as the three-dimensional dispersion groundwater flow and radionuclide transport model, the Gaussian plume model for atmospheric dispersion, and the deposition model used to estimate the accumulation of radionuclides in offsite locations and in foods. Potential exposure pathways and exposure scenarios that can be modeled by the RESRAD-OFFSITE code are also discussed. A user's guide is included in Appendix A of this manual. The default parameter values and parameter distributions are presented in Appendix B, along with a discussion on the statistical distributions for probabilistic analysis. A detailed discussion on how to reduce run time, especially when conducting probabilistic (uncertainty) analysis, is presented in Appendix C of this manual.
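
    As a worked illustration of one model class mentioned above, here is the textbook Gaussian plume concentration formula (with ground reflection) in Python; the parameter values and fixed dispersion coefficients are illustrative only, not RESRAD-OFFSITE's defaults or its actual implementation.

    import math

    def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
        """Concentration in (units of Q)/m^3 at crosswind offset y and height z
        for a source of strength Q, wind speed u and effective release height H.
        In practice sigma_y and sigma_z grow with downwind distance; fixed
        values are used here for simplicity."""
        lateral = math.exp(-y**2 / (2 * sigma_y**2))
        vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                    + math.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground reflection
        return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

    # 1 Bq/s release, 3 m/s wind, receptor on the plume centerline at ground level.
    print(gaussian_plume(Q=1.0, u=3.0, y=0.0, z=0.0, H=10.0, sigma_y=30.0, sigma_z=15.0))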

  16. Non-animal methods to predict skin sensitization (I): the Cosmetics Europe database.

    Science.gov (United States)

    Hoffmann, Sebastian; Kleinstreuer, Nicole; Alépée, Nathalie; Allen, David; Api, Anne Marie; Ashikaga, Takao; Clouet, Elodie; Cluzel, Magalie; Desprez, Bertrand; Gellatly, Nichola; Goebel, Carsten; Kern, Petra S; Klaric, Martina; Kühnl, Jochen; Lalko, Jon F; Martinozzi-Teissier, Silvia; Mewes, Karsten; Miyazawa, Masaaki; Parakhia, Rahul; van Vliet, Erwin; Zang, Qingda; Petersohn, Dirk

    2018-05-01

    Cosmetics Europe, the European Trade Association for the cosmetics and personal care industry, is conducting a multi-phase program to develop regulatory accepted, animal-free testing strategies enabling the cosmetics industry to conduct safety assessments. Based on a systematic evaluation of test methods for skin sensitization, five non-animal test methods (DPRA (Direct Peptide Reactivity Assay), KeratinoSens™, h-CLAT (human cell line activation test), U-SENS™, SENS-IS) were selected for inclusion in a comprehensive database of 128 substances. Existing data were compiled and completed with newly generated data, the latter amounting to one-third of all data. The database was complemented with human and local lymph node assay (LLNA) reference data, physicochemical properties and use categories, and thoroughly curated. Although focused on the availability of human data, the substance selection nevertheless resulted in a high diversity of chemistries in terms of physicochemical property ranges and use categories. Predictivities of skin sensitization potential and potency, where applicable, were calculated for the LLNA as compared to human data, and for the individual test methods compared to both human and LLNA reference data. In addition, various aspects of applicability of the test methods were analyzed. Due to its high level of curation, comprehensiveness, and completeness, we propose our database as a point of reference for the evaluation and development of testing strategies, as done for example in the associated work of Kleinstreuer et al. We encourage the community to use it to meet the challenge of conducting skin sensitization safety assessment without generating new animal data.
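
    Predictivity against a reference standard reduces to contingency-table statistics. The Python sketch below computes sensitivity, specificity and accuracy for fabricated test calls against fabricated LLNA labels, purely to show the calculation; none of the values come from the database.

    def predictivity(calls, reference):
        # Tally the four contingency-table cells from paired boolean labels.
        tp = sum(c and r for c, r in zip(calls, reference))
        tn = sum(not c and not r for c, r in zip(calls, reference))
        fp = sum(c and not r for c, r in zip(calls, reference))
        fn = sum(not c and r for c, r in zip(calls, reference))
        return {"sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "accuracy": (tp + tn) / len(calls)}

    # Fabricated positive/negative calls for one test method vs. LLNA reference.
    dpra_calls = [True, True, False, True, False, False, True, False]
    llna_ref   = [True, False, False, True, False, True, True, False]
    print(predictivity(dpra_calls, llna_ref))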

  17. Observations on the Curative Effect of Acupuncture on Depressive Neurosis

    Institute of Scientific and Technical Information of China (English)

    FU Wen-bin; WANG Si-you

    2003-01-01

    Purpose: To evaluate the curative effect of acupuncture on depressive neurosis. Method: Sixty-two patients were randomly divided into a treatment group of 32 cases and a control group of 30 cases. The treatment group and the control group were treated with acupuncture and fluoxetine, respectively. The curative effects were evaluated by HAMD. Results: There was a significant difference between pretreatment and posttreatment in each group (P < 0.05). But acupuncture had no side effects and was good in compliance. Conclusion: Acupuncture is an effective method for treating depressive neurosis.

  18. Jean-Paul Martinon, ed. - The Curatorial: A Philosophy of Curating

    Directory of Open Access Journals (Sweden)

    Sofia Romualdo

    2015-12-01

    In the words of Jean-Paul Martinon, this book’s editor, The Curatorial: A Philosophy of Curating originated from a “wish to talk about curating”, the same wish that led to the creation, in 2006, of a practice-led PhD programme at Goldsmiths College, called Curatorial/Knowledge. The anthology features contributions from tutors, guest speakers and students, all of whom delve into what “the curatorial” is and what it might mean in the future. Curating, or the act of organizing exhibitions, paral...

  19. Solubility Study of Curatives in Various Rubbers

    NARCIS (Netherlands)

    Guo, R.; Talma, Auke; Datta, Rabin; Dierkes, Wilma K.; Noordermeer, Jacobus W.M.

    2008-01-01

    Previous work on the solubility of curatives in rubbers was mainly carried out in natural rubber. Not much information is available for dissimilar rubbers, and this is important because most compounds today are blends of dissimilar rubbers. Although solubility can be expected to certain

  20. The Molecular Signatures Database (MSigDB) hallmark gene set collection.

    Science.gov (United States)

    Liberzon, Arthur; Birger, Chet; Thorvaldsdóttir, Helga; Ghandi, Mahmoud; Mesirov, Jill P; Tamayo, Pablo

    2015-12-23

    The Molecular Signatures Database (MSigDB) is one of the most widely used and comprehensive databases of gene sets for performing gene set enrichment analysis. Since its creation, MSigDB has grown beyond its roots in metabolic disease and cancer to include >10,000 gene sets. These better represent a wider range of biological processes and diseases, but the utility of the database is reduced by increased redundancy across, and heterogeneity within, gene sets. To address this challenge, here we use a combination of automated approaches and expert curation to develop a collection of "hallmark" gene sets as part of MSigDB. Each hallmark in this collection consists of a "refined" gene set, derived from multiple "founder" sets, that conveys a specific biological state or process and displays coherent expression. The hallmarks effectively summarize most of the relevant information of the original founder sets and, by reducing both variation and redundancy, provide more refined and concise inputs for gene set enrichment analysis.
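
    A crude sketch of the "founder sets to refined set" idea: keep genes that recur in a majority of founder sets. The gene sets below are toys, and the real MSigDB procedure also requires coherent expression across the founders, not just overlap.

    from collections import Counter

    # Toy founder gene sets standing in for multiple published signatures.
    founders = [
        {"TP53", "CDKN1A", "MDM2", "BAX"},
        {"TP53", "CDKN1A", "GADD45A", "BAX"},
        {"TP53", "MDM2", "BAX", "SESN1"},
    ]

    # Retain genes appearing in more than half of the founder sets.
    counts = Counter(g for s in founders for g in s)
    threshold = len(founders) / 2
    hallmark = {g for g, n in counts.items() if n > threshold}
    print(sorted(hallmark))   # genes present in a majority of founder sets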

  1. Hayabusa Asteroidal Sample Preliminary Examination Team (HASPET) and the Astromaterial Curation Facility at JAXA/ISAS

    Science.gov (United States)

    Yano, H.; Fujiwara, A.

    After its successful launch in May 2003, the Hayabusa (MUSES-C) mission of JAXA/ISAS will collect surface materials (e.g., regolith) of several hundred mg to several g in total from the S-type near-Earth asteroid (25143) Itokawa in late 2005 and bring them back to ground laboratories in the summer of 2007. The retrieved samples will be given initial analysis at the JAXA/ISAS astromaterial curation facility, which is currently in preparation for construction, by the Hayabusa Asteroidal Sample Preliminary Examination Team (HASPET). HASPET consists of the ISAS Hayabusa team, international partners from NASA and Australia, and all-Japan meteoritic scientists selected as outsourcing parts of the initial analyses. The initial analysis to characterize general aspects of the returned samples may consume only 15% of their total mass and must complete the whole analysis, including the database building, before the international AO for detailed analyses, within a maximum of 1 year. Confident use of non-destructive micro-analyses whenever possible is thus vital for the HASPET analysis. To survey what kinds and levels of micro-analysis techniques in the respective fields, from major elements and mineralogy to trace and isotopic elements and organics, are available in Japan at present, ISAS conducted HASPET open competitions in 2000-01 and 2004. The initial evaluation was made by multiple domestic peer reviews. Applicants were then provided two kinds of unknown asteroid sample analogs in order to conduct the proposed analysis with a self-claimed amount of samples in a self-claimed duration. After the completion of multiple international peer reviews, the Selection Committee compiled the evaluations and recommended the finalists of each round. The final members of the HASPET will be appointed about 2 years prior to the Earth return. Then they will conduct a test-run of the whole initial analysis procedures at the ISAS astromaterial curation facility and

  2. Centralized database for interconnection system design. [for spacecraft

    Science.gov (United States)

    Billitti, Joseph W.

    1989-01-01

    A database application called DFACS (Database, Forms and Applications for Cabling and Systems) is described. The objective of DFACS is to improve the speed and accuracy of interconnection system information flow during the design and fabrication stages of a project, while simultaneously supporting both the horizontal (end-to-end wiring) and the vertical (wiring by connector) design stratagems used by the Jet Propulsion Laboratory (JPL) project engineering community. The DFACS architecture is centered around a centralized database and program methodology which emulates the manual design process hitherto used at JPL. DFACS has been tested and successfully applied to existing JPL hardware tasks with a resulting reduction in schedule time and costs.

  3. DBCG hypo trial validation of radiotherapy parameters from a national data bank versus manual reporting.

    Science.gov (United States)

    Brink, Carsten; Lorenzen, Ebbe L; Krogh, Simon Long; Westberg, Jonas; Berg, Martin; Jensen, Ingelise; Thomsen, Mette Skovhus; Yates, Esben Svitzer; Offersen, Birgitte Vrou

    2018-01-01

    The current study evaluates the data quality achievable using a national data bank for reporting radiotherapy parameters, relative to the classical method of manually reporting selected parameters. The comparison is based on 1522 Danish patients of the DBCG hypo trial with data stored in the Danish national radiotherapy data bank. In line with standard DBCG trial practice, selected parameters were also reported manually to the DBCG database. Categorical variables are compared using contingency tables, and continuous parameters are compared in scatter plots. For categorical variables, 25 differences between the data bank and manual values were located. Of these, 23 were due to mistakes in the manually reported value, while the remaining two were wrong classifications in the data bank. The wrong classifications in the data bank were related to missing dose information: the two patients had been treated with an electron boost based on a manual calculation, so the data were not exported to the data bank, and this was not detected prior to comparison with the manual data. For a few database fields in the manual data, an ambiguity in the definition of the specific field is seen in the data; this was not the case for the data bank, which extracts all data consistently. In terms of data quality, the data bank is superior to manually reported values. However, resources must be allocated to checking the validity of the available data and to ensuring that all relevant data are present. The data bank contains more detailed information, and thus facilitates research related to the actual dose distribution in the patients.
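
    The comparison method itself is simple to illustrate. Below is a minimal Python sketch (pandas) that cross-tabulates a hypothetical categorical field, here an invented "electron boost" flag, between data-bank and manually reported values, so that disagreements appear off the diagonal:

    ```python
    # Minimal sketch of the categorical comparison described above. The field
    # name and values are hypothetical, not taken from the DBCG hypo trial.
    import pandas as pd

    records = pd.DataFrame({
        "boost_databank": ["yes", "no", "yes", "no", "no"],
        "boost_manual":   ["yes", "no", "no",  "no", "no"],
    })

    # Off-diagonal cells count databank/manual disagreements
    table = pd.crosstab(records["boost_databank"], records["boost_manual"])
    print(table)
    ```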

  4. State of the Art of Cost and Benefit Models for Digital Curation

    DEFF Research Database (Denmark)

    Kejser, Ulla Bøgvad; Davidson, Joy; Wang, David

    2014-01-01

    This paper presents the results of an evaluation carried out by the EU 4C project to assess how well current digital curation cost and benefit models meet a range of stakeholders' needs. This work aims to elicit a means of modelling that enables comparing financial information across organisations, to support decision-making and for selecting the most efficient processes – all of which are critical for ensuring sustainability of digital curation investment. The evaluation revealed that the most prominent challenges are associated with the models' usability, their inability to model quality and benefits of curation, and the lack of a clear terminology and conceptual description of costs and benefits. The paper provides recommendations on how these gaps in cost and benefit modelling can be bridged.

  5. Thoracic manual therapy is not more effective than placebo thoracic manual therapy in patients with shoulder dysfunctions: A systematic review with meta-analysis.

    Science.gov (United States)

    Bizzarri, Paolo; Buzzatti, Luca; Cattrysse, Erik; Scafoglieri, Aldo

    2018-02-01

    Manual treatments targeting different regions (shoulder, cervical spine, thoracic spine, ribs) have been studied to deal with patients complaining of shoulder pain. Thoracic manual treatments seem able to produce beneficial effects on this group of patients. However, it is not clear whether the patient improvement is a consequence of thoracic manual therapy or a placebo effect. To compare the efficacy of thoracic manual therapy and placebo thoracic manual treatment for patients with shoulder dysfunction. Electronic databases (MEDLINE, CENTRAL, PEDro, CINAHL, WoS, EMBASE, ERIC) were searched through November 2016. Randomized Controlled Trials assessing pain, mobility and function were selected. The Cochrane bias estimation tool was applied. Outcome results were either extracted or computed from raw data. Meta-analysis was performed for outcomes with low heterogeneity. Four studies were included in the review. The methodology of the included studies was generally good except for one study that was rated as high risk of bias. Meta-analysis showed no significant effect for "pain at present" (SMD -0.02; 95% CI: -0.35, 0.32) and "pain during movement" (SMD -0.12; 95% CI: -0.45, 0.21). There is very low to low quality of evidence that a single session of thoracic manual therapy is not more effective than a single session of placebo thoracic manual therapy in patients with shoulder dysfunction at immediate post-treatment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Solid waste projection model: Database user's guide (Version 1.0)

    International Nuclear Information System (INIS)

    Carr, F.; Stiles, D.

    1991-01-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for preparing to use Version 1 of the SWPM database, for entering and maintaining data, and for performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions, and does not provide instructions in the use of Paradox, the database management system in which the SWPM database is established. 3 figs., 1 tab

  7. Solid Waste Projection Model: Database user's guide (Version 1.3)

    International Nuclear Information System (INIS)

    Blackburn, C.L.

    1991-11-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for preparing to use Version 1.3 of the SWPM database, for entering and maintaining data, and for performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions and does not provide instruction in the use of Paradox, the database management system in which the SWPM database is established

  8. A scalable machine-learning approach to recognize chemical names within large text databases

    Directory of Open Access Journals (Sweden)

    Wren Jonathan D

    2006-09-01

    Motivation: The use or study of chemical compounds permeates almost every scientific field, and in each of them the amount of textual information is growing rapidly. There is a need to accurately identify chemical names within text for a number of informatics efforts such as database curation, report summarization, tagging of named entities and keywords, or the development/curation of reference databases. Results: A first-order Markov model (MM) was evaluated for its ability to distinguish chemical names from words, yielding ~93% recall in recognizing chemical terms and ~99% precision in rejecting non-chemical terms on smaller test sets. However, because total false-positive events increase with the number of words analyzed, the scalability of name recognition was measured by processing 13.1 million MEDLINE records. The method yielded precision ranging from 54.7% to 100%, depending upon the cutoff score used, averaging 82.7% for approximately 1.05 million putative chemical terms extracted. Extracted chemical terms were analyzed to estimate the number of spelling variants per term, which correlated with the total number of times the chemical name appeared in MEDLINE. This variability in term construction was found to affect both information retrieval and term mapping when using PubMed and Ovid.
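
    As a hedged sketch of the general approach (not the authors' implementation), the following Python fragment trains first-order character models on chemical and non-chemical word lists and classifies a token by applying a cutoff to a log-likelihood-ratio score; the training lists and the cutoff are toy placeholders:

    ```python
    # Toy first-order (character bigram) Markov models: one trained on chemical
    # names, one on ordinary words. A token is called "chemical" when the
    # log-likelihood ratio between the two models exceeds a cutoff; raising the
    # cutoff trades recall for precision, as the record describes.
    import math
    from collections import defaultdict

    def train(words, alpha=1.0):
        counts = defaultdict(lambda: defaultdict(float))
        for w in words:
            padded = "^" + w.lower() + "$"
            for a, b in zip(padded, padded[1:]):
                counts[a][b] += 1
        alphabet = {b for nxt in counts.values() for b in nxt}
        model = {}
        for a, nxt in counts.items():
            total = sum(nxt.values()) + alpha * len(alphabet)
            model[a] = {b: math.log((nxt.get(b, 0.0) + alpha) / total) for b in alphabet}
        fallback = math.log(1.0 / len(alphabet))  # score for unseen transitions
        return model, fallback

    def score(token, model, fallback):
        padded = "^" + token.lower() + "$"
        return sum(model.get(a, {}).get(b, fallback) for a, b in zip(padded, padded[1:]))

    chem = train(["methanol", "benzene", "acetate", "ethylene"])
    plain = train(["protein", "analysis", "between", "records"])

    def looks_chemical(token, cutoff=0.0):
        return score(token, *chem) - score(token, *plain) > cutoff

    print(looks_chemical("ethanol"), looks_chemical("database"))
    ```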

  9. The DREO Elint Browser Utility (DEBU) reference manual

    Science.gov (United States)

    Ford, Barbara; Jones, David

    1992-04-01

    An electronic intelligence (ELINT) database browsing tool called DEBU has been developed that allows databases such as ELP, Kilting, EWIR, and AFEWC to be reviewed and analyzed from a user-friendly environment on a personal computer. DEBU's basic function is to let users examine the contents of user-selected subfiles of user-selected emitters in user-selected databases. DEBU augments this functionality with support for selecting (filtering) and combining subsets of emitters by user-selected attributes such as name, parameter type, or parameter value. DEBU provides facilities for examining histograms and x-y plots of selected parameters, for performing ambiguity analysis and mode-level analysis, and for generating and printing a variety of reports. A manual is provided for users of DEBU, including descriptions and illustrations of menus and windows.

  10. INSPIRE: Managing Metadata in a Global Digital Library for High-Energy Physics

    CERN Document Server

    Martin Montull, Javier

    2011-01-01

    Four leading laboratories in the High-Energy Physics (HEP) field are collaborating to roll out the next-generation scientific information portal: INSPIRE. The goal of this project is to replace the popular 40-year-old SPIRES database. INSPIRE already provides access to about 1 million records and includes services such as fulltext search, automatic keyword assignment, ingestion and automatic display of LaTeX, citation analysis, automatic author disambiguation, metadata harvesting, extraction of figures from fulltext and search in figure captions. In order to achieve high-quality metadata, both automatic processing and manual curation are needed. The different tools available in the system use modern web technologies to give curators maximum efficiency while dealing with the MARC standard format. The project is under heavy development in order to provide new features including semantic analysis, crowdsourcing of metadata curation, user tagging, recommender systems, integration of OAIS standards a...

  11. JSC Advanced Curation: Research and Development for Current Collections and Future Sample Return Mission Demands

    Science.gov (United States)

    Fries, M. D.; Allen, C. C.; Calaway, M. J.; Evans, C. A.; Stansbery, E. K.

    2015-01-01

    Curation of NASA's astromaterials sample collections is a demanding and evolving activity that supports valuable science from NASA missions for generations, long after the samples are returned to Earth. For example, NASA continues to loan hundreds of Apollo program samples to investigators every year and those samples are often analyzed using instruments that did not exist at the time of the Apollo missions themselves. The samples are curated in a manner that minimizes overall contamination, enabling clean, new high-sensitivity measurements and new science results over 40 years after their return to Earth. As our exploration of the Solar System progresses, upcoming and future NASA sample return missions will return new samples with stringent contamination control, sample environmental control, and Planetary Protection requirements. Therefore, an essential element of a healthy astromaterials curation program is a research and development (R&D) effort that characterizes and employs new technologies to maintain current collections and enable new missions - an Advanced Curation effort. JSC's Astromaterials Acquisition & Curation Office is continually performing Advanced Curation research, identifying and defining knowledge gaps about research, development, and validation/verification topics that are critical to support current and future NASA astromaterials sample collections. The following are highlighted knowledge gaps and research opportunities.

  12. Analysis of disease-associated objects at the Rat Genome Database

    Science.gov (United States)

    Wang, Shur-Jen; Laulederkind, Stanley J. F.; Hayman, G. T.; Smith, Jennifer R.; Petri, Victoria; Lowry, Timothy F.; Nigam, Rajni; Dwinell, Melinda R.; Worthey, Elizabeth A.; Munzenmaier, Diane H.; Shimoyama, Mary; Jacob, Howard J.

    2013-01-01

    The Rat Genome Database (RGD) is the premier resource for genetic, genomic and phenotype data for the laboratory rat, Rattus norvegicus. In addition to organizing biological data from rats, the RGD team focuses on manual curation of gene–disease associations for rat, human and mouse. In this work, we have analyzed disease-associated strains, quantitative trait loci (QTL) and genes from rats. These disease objects form the basis for seven disease portals. Among disease portals, the cardiovascular disease and obesity/metabolic syndrome portals have the highest number of rat strains and QTL. These two portals share 398 rat QTL, and these shared QTL are highly concentrated on rat chromosomes 1 and 2. For disease-associated genes, we performed gene ontology (GO) enrichment analysis across portals using RatMine enrichment widgets. Fifteen GO terms, five from each GO aspect, were selected to profile enrichment patterns of each portal. Of the selected biological process (BP) terms, ‘regulation of programmed cell death’ was the top enriched term across all disease portals except in the obesity/metabolic syndrome portal where ‘lipid metabolic process’ was the most enriched term. ‘Cytosol’ and ‘nucleus’ were common cellular component (CC) annotations for disease genes, but only the cancer portal genes were highly enriched with ‘nucleus’ annotations. Similar enrichment patterns were observed in a parallel analysis using the DAVID functional annotation tool. The relationship between the preselected 15 GO terms and disease terms was examined reciprocally by retrieving rat genes annotated with these preselected terms. The individual GO term–annotated gene list showed enrichment in physiologically related diseases. For example, the ‘regulation of blood pressure’ genes were enriched with cardiovascular disease annotations, and the ‘lipid metabolic process’ genes with obesity annotations. Furthermore, we were able to enhance enrichment of neurological

  13. Updates to the Cool Season Food Legume Genome Database: Resources for pea, lentil, faba bean and chickpea genetics, genomics and breeding

    Science.gov (United States)

    The Cool Season Food Legume Genome database (CSFL, www.coolseasonfoodlegume.org) is an online resource for genomics, genetics, and breeding research for chickpea, lentil, pea, and faba bean. The user-friendly and curated website allows for all publicly available map, marker, trait, gene, transcript, ger...

  14. Sample Transport for a European Sample Curation Facility

    Science.gov (United States)

    Berthoud, L.; Vrublevskis, J. B.; Bennett, A.; Pottage, T.; Bridges, J. C.; Holt, J. M. C.; Dirri, F.; Longobardo, A.; Palomba, E.; Russell, S.; Smith, C.

    2018-04-01

    This work has looked at the recovery of a Mars Sample Return capsule once it arrives on Earth. It covers possible landing sites, planetary protection requirements, and transportation from the landing site to a European Sample Curation Facility.

  15. Semi-automating the manual literature search for systematic reviews increases efficiency.

    Science.gov (United States)

    Chapman, Andrea L; Morgan, Laura C; Gartlehner, Gerald

    2010-03-01

    To minimise retrieval bias, manual literature searches are a key part of the search process of any systematic review. Considering the need for accurate information, valid results of the manual literature search are essential to ensure scientific standards; likewise, efficient approaches that minimise the amount of personnel time required to conduct a manual literature search are of great interest. The objective of this project was to determine the validity and efficiency of a new manual search method that utilises the Scopus database. We used the traditional manual search approach as the gold standard to determine the validity and efficiency of the proposed Scopus method. Outcome measures included completeness of article detection and personnel time involved. Using both methods independently, we compared the results based on accuracy of article detection (validity) and time spent conducting the search (efficiency). Regarding accuracy, the Scopus method identified the same studies as the traditional approach, indicating its validity. In terms of efficiency, using Scopus led to a time saving of 62.5% compared with the traditional approach (3 h versus 8 h). The Scopus method can significantly improve the efficiency of manual searches and thus of systematic reviews.

  16. VIOLIN: vaccine investigation and online information network.

    Science.gov (United States)

    Xiang, Zuoshuang; Todd, Thomas; Ku, Kim P; Kovacic, Bethany L; Larson, Charles B; Chen, Fang; Hodges, Andrew P; Tian, Yuying; Olenzek, Elizabeth A; Zhao, Boyang; Colby, Lesley A; Rush, Howard G; Gilsdorf, Janet R; Jourdian, George W; He, Yongqun

    2008-01-01

    Vaccines are among the most efficacious and cost-effective tools for reducing morbidity and mortality caused by infectious diseases. The vaccine investigation and online information network (VIOLIN) is a web-based central resource, allowing easy curation, comparison and analysis of vaccine-related research data across various human pathogens (e.g. Haemophilus influenzae, human immunodeficiency virus (HIV) and Plasmodium falciparum) of medical importance and across humans, other natural hosts and laboratory animals. Vaccine-related peer-reviewed literature data have been downloaded into the database from PubMed and are searchable through various literature search programs. Vaccine data are also annotated, edited and submitted to the database through a web-based interactive system that integrates efficient computational literature mining and accurate manual curation. Curated information includes general microbial pathogenesis and host protective immunity, vaccine preparation and characteristics, stimulated host responses after vaccination and protection efficacy after challenge. Vaccine-related pathogen and host genes are also annotated and available for searching through customized BLAST programs. All VIOLIN data are available for download in an eXtensible Markup Language (XML)-based data exchange format. VIOLIN is expected to become a centralized source of vaccine information and to provide investigators in basic and clinical sciences with curated data and bioinformatics tools for vaccine research and development. VIOLIN is publicly available at http://www.violinet.org.

  17. Curative efficacy and safety of traditional Chinese medicine xuebijing injections combined with ulinastatin for treating sepsis in the Chinese population: A meta-analysis.

    Science.gov (United States)

    Xiao, Shi-Hui; Luo, Liang; Liu, Xiang-Hong; Zhou, Yu-Ming; Liu, Hong-Ming; Huang, Zhen-Fei

    2018-06-01

    Sepsis is a clinically critical disease. However, it is still controversial whether the combined use of the traditional Chinese medicine Xuebijing injection (XBJI) and western medicine can enhance curative efficacy and ensure safety compared with western medicine alone. This research therefore consisted of a systematic review of the curative efficacy and safety of traditional Chinese medicine XBJI combined with ulinastatin for treating sepsis in the Chinese population. A total of 8 databases were searched: 4 foreign databases, namely PubMed, The Cochrane Library, Embase, and Web of Science; and 4 Chinese databases, namely Sino Med, China National Knowledge Infrastructure (CNKI), VIP, and Wanfang Data. The time span of retrieval began from the establishment of each database and ended on August 1, 2017. Published randomized controlled trials about the combined use of traditional Chinese medicine XBJI and western medicine were included, regardless of language. Stata 12.0 software was used for statistical analysis. Finally, 16 papers involving 1335 cases were included. The result of the meta-analysis showed that, compared with the single use of ulinastatin, traditional Chinese medicine XBJI combined with ulinastatin could reduce the time of mechanical ventilation, shorten the length of intensive care unit (ICU) stay, improve the 28-day survival rate, and decrease the occurrence rate of multiple organ dysfunction syndrome, the case fatality rate, procalcitonin (PCT) content, APACHE II score, tumor necrosis factor (TNF)-α level, and interleukin (IL)-6 level. On the basis of the common basic therapeutic regimen, the combined use of traditional Chinese medicine XBJI and ulinastatin was compared with the use of ulinastatin alone for treating sepsis in the Chinese population. It was found that the number of adverse events with combination therapy is not significantly increased, and its clinical safety is well within the permitted range. However, considering the limitations of this

  18. Curating and Nudging in Virtual CLIL Environments

    Science.gov (United States)

    Nielsen, Helle Lykke

    2014-01-01

    Foreign language teachers can benefit substantially from the notions of curation and nudging when scaffolding CLIL activities on the internet. This article shows how these principles can be integrated into CLILstore, a free multimedia-rich learning tool with seamless access to online dictionaries, and presents feedback from first and second year…

  19. Outcomes of the 'Data Curation for Geobiology at Yellowstone National Park' Workshop

    Science.gov (United States)

    Thomer, A.; Palmer, C. L.; Fouke, B. W.; Rodman, A.; Choudhury, G. S.; Baker, K. S.; Asangba, A. E.; Wickett, K.; DiLauro, T.; Varvel, V.

    2013-12-01

    The continuing proliferation of geological and biological data generated at scientifically significant sites (such as hot springs, coral reefs, volcanic fields and other unique, data-rich locales) has created a clear need for the curation and active management of these data. However, there has been little exploration of what these curation processes and policies would entail. To that end, the Site-Based Data Curation (SBDC) project is developing a framework of guidelines and processes for the curation of research data generated at scientifically significant sites. A workshop was held in April 2013 at Yellowstone National Park (YNP) to gather input from scientists and stakeholders. Workshop participants included nine researchers actively conducting geobiology research at YNP, and seven YNP representatives, including permitting staff and information professionals from the YNP research library and archive. Researchers came from a range of research areas: geology, molecular and microbial biology, ecology, environmental engineering, and science education. Through group discussions, breakout sessions and hands-on activities, we sought to generate policy recommendations and curation guidelines for the collection, representation, sharing and quality control of geobiological datasets. We report on key themes that emerged from workshop discussions, including:
    - participants' broad conceptions of the long-term usefulness, reusability and value of data.
    - the benefits of aggregating site-specific data in general, and geobiological data in particular.
    - the importance of capturing a dataset's originating context, and the potential usefulness of photographs as a reliable and easy way of documenting context.
    - researchers' and resource managers' overlapping priorities with regards to 'big picture' data collection and management in the long term.
    Overall, we found that workshop participants were enthusiastic and optimistic about future collaboration and development of community

  20. Research resources: curating the new eagle-i discovery system

    Science.gov (United States)

    Vasilevsky, Nicole; Johnson, Tenille; Corday, Karen; Torniai, Carlo; Brush, Matthew; Segerdell, Erik; Wilson, Melanie; Shaffer, Chris; Robinson, David; Haendel, Melissa

    2012-01-01

    Development of biocuration processes and guidelines for new data types or projects is a challenging task. Each project finds its way toward defining annotation standards and ensuring data consistency with varying degrees of planning and different tools to support and/or report on consistency. Further, this process may be data type specific even within the context of a single project. This article describes our experiences with eagle-i, a 2-year pilot project to develop a federated network of data repositories in which unpublished, unshared or otherwise ‘invisible’ scientific resources could be inventoried and made accessible to the scientific community. During the course of eagle-i development, the main challenges we experienced related to the difficulty of collecting and curating data while the system and the data model were simultaneously built, and a deficiency and diversity of data management strategies in the laboratories from which the source data was obtained. We discuss our approach to biocuration and the importance of improving information management strategies to the research process, specifically with regard to the inventorying and usage of research resources. Finally, we highlight the commonalities and differences between eagle-i and similar efforts with the hope that our lessons learned will assist other biocuration endeavors. Database URL: www.eagle-i.net PMID:22434835

  1. Greengenes: Chimera-checked 16S rRNA gene database and workbench compatible with ARB

    Energy Technology Data Exchange (ETDEWEB)

    DeSantis, T.Z.; Hugenholtz, P.; Larsen, N.; Rojas, M.; Brodie,E.L; Keller, K.; Huber, T.; Dalevi, D.; Hu, P.; Andersen, G.L.

    2006-02-01

    A 16S rRNA gene database (http://greengenes.lbl.gov) addresses limitations of public repositories by providing chimera-screening, standard alignments and taxonomic classification using multiple published taxonomies. It was revealed that incongruent taxonomic nomenclature exists among curators even at the phylum-level. Putative chimeras were identified in 3% of environmental sequences and 0.2% of records derived from isolates. Environmental sequences were classified into 100 phylum-level lineages within the Archaea and Bacteria.

  2. IMGMD: A platform for the integration and standardisation of In silico Microbial Genome-scale Metabolic Models.

    Science.gov (United States)

    Ye, Chao; Xu, Nan; Dong, Chuan; Ye, Yuannong; Zou, Xuan; Chen, Xiulai; Guo, Fengbiao; Liu, Liming

    2017-04-07

    Genome-scale metabolic models (GSMMs) constitute a platform that combines genome sequences and detailed biochemical information to quantify microbial physiology at the system level. To improve the unity, integrity, correctness, and format of data in published GSMMs, a consensus IMGMD database was built in the LAMP (Linux + Apache + MySQL + PHP) system by integrating and standardizing 328 GSMMs constructed for 139 microorganisms. The IMGMD database can help microbial researchers download manually curated GSMMs, rapidly reconstruct standard GSMMs, design pathways, and identify metabolic targets for strategies on strain improvement. Moreover, the IMGMD database facilitates the integration of wet-lab and in silico data to gain an additional insight into microbial physiology. The IMGMD database is freely available, without any registration requirements, at http://imgmd.jiangnan.edu.cn/database.

  3. Organic Contamination Baseline Study on NASA JSC Astromaterial Curation Gloveboxes

    Science.gov (United States)

    Calaway, Michael J.; Allton, J. H.; Allen, C. C.; Burkett, P. J.

    2013-01-01

    Future planned sample return missions to carbon-rich asteroids and Mars in the next two decades will require strict handling and curation protocols as well as new procedures for reducing organic contamination. After the Apollo program, astromaterial collections have mainly been concerned with inorganic contamination [1-4]. However, future isolation containment systems for astromaterials, possibly nitrogen enriched gloveboxes, must be able to reduce organic and inorganic cross-contamination. In 2012, a baseline study was orchestrated to establish the current state of organic cleanliness in gloveboxes used by NASA JSC astromaterials curation labs that could be used as a benchmark for future mission designs.

  4. Curating Media Learning: Towards a Porous Expertise

    Science.gov (United States)

    McDougall, Julian; Potter, John

    2015-01-01

    This article combines research results from a range of projects with two consistent themes. Firstly, we explore the potential for curation to offer a productive metaphor for the convergence of digital media learning across and between home/lifeworld and formal educational/system-world spaces--or between the public and private spheres. Secondly, we…

  5. Hospital of Diagnosis Influences the Probability of Receiving Curative Treatment for Esophageal Cancer.

    Science.gov (United States)

    van Putten, Margreet; Koëter, Marijn; van Laarhoven, Hanneke W M; Lemmens, Valery E P P; Siersema, Peter D; Hulshof, Maarten C C M; Verhoeven, Rob H A; Nieuwenhuijzen, Grard A P

    2018-02-01

    The aim of this article was to study the influence of hospital of diagnosis on the probability of receiving curative treatment and its impact on survival among patients with esophageal cancer (EC). Although EC surgery is centralized in the Netherlands, the disease is often diagnosed in hospitals that do not perform this procedure. Patients with potentially curable esophageal or gastroesophageal junction tumors (cT1-3,X, any N, M0,X) diagnosed between 2005 and 2013 were selected from the Netherlands Cancer Registry. Multilevel logistic regression was performed to examine the probability of undergoing curative treatment (resection with or without neoadjuvant treatment, definitive chemoradiotherapy, or local tumor excision) according to hospital of diagnosis. Effects of variation in the probability of undergoing curative treatment among these hospitals on survival were investigated by Cox regression. All 13,017 patients with potentially curable EC, diagnosed in 91 hospitals, were included. The proportion of patients receiving curative treatment ranged from 37% to 83% in the period 2005-2009 and from 45% to 86% in the period 2010-2013, depending on hospital of diagnosis. After adjustment for patient- and hospital-related characteristics, these proportions ranged from 41% to 77% and from 50% to 82%, respectively (both P < 0.001). Multivariable survival analyses showed that patients diagnosed in hospitals with a low probability of undergoing curative treatment had a worse overall survival (hazard ratio = 1.13, 95% confidence interval 1.06-1.20; hazard ratio = 1.15, 95% confidence interval 1.07-1.24). The variation between hospitals of diagnosis in the probability of undergoing potentially curative treatment for EC, and its impact on survival, indicates that treatment decision making in EC may be improved.

  6. Curator's process of meaning-making in National museums

    DEFF Research Database (Denmark)

    Cole, Anne Jodon

    2014-01-01

    The paper aims to understand the meaning-making process curators engage in when designing and developing exhibitions of a nation's indigenous peoples. How indigenous people are represented can either perpetuate stereotypes or mediate change while strengthening their personal and group identity. Analysis

  7. Impact of database quality in knowledge-based treatment planning for prostate cancer.

    Science.gov (United States)

    Wall, Phillip D H; Carver, Robert L; Fontenot, Jonas D

    2018-03-13

    This article investigates dose-volume prediction improvements in a common knowledge-based planning (KBP) method using a Pareto plan database compared with using a conventional, clinical plan database. Two plan databases were created using retrospective, anonymized data of 124 volumetric modulated arc therapy (VMAT) prostate cancer patients. The clinical plan database (CPD) contained planning data from each patient's clinically treated VMAT plan, which were manually optimized by various planners. The multicriteria optimization database (MCOD) contained Pareto-optimal plan data from VMAT plans created using a standardized multicriteria optimization protocol. Overlap volume histograms, incorporating fractional organ-at-risk volumes only within the treatment fields, were computed for each patient and used to match new patient anatomy to similar database patients. For each database patient, CPD and MCOD KBP predictions were generated for D10, D30, D50, D65, and D80 of the bladder and rectum in a leave-one-out manner. Prediction achievability was evaluated through a replanning study on a subset of 31 randomly selected database patients using the best KBP predictions, regardless of plan database origin, as planning goals. MCOD predictions were significantly lower than CPD predictions for all 5 bladder dose-volumes and for rectum D50 (P = .004) and D65 (P [...]). [...] Plan database quality affects the performance and achievability of dose-volume predictions from a common knowledge-based planning approach for prostate cancer. Bladder and rectum dose-volume predictions derived from a database of standardized Pareto-optimal plans were compared with those derived from clinical plans manually designed by various planners. Dose-volume predictions from the Pareto plan database were significantly lower overall than those from the clinical plan database, without compromising achievability. Copyright © 2018 Elsevier Inc. All rights reserved.
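
    As a rough illustration of the matching step (not the authors' implementation), this Python sketch compares overlap volume histograms and borrows the dose-volume value of the nearest database patient; all numbers are placeholders:

    ```python
    # Toy nearest-neighbour KBP prediction: match a new patient to the database
    # patient with the most similar overlap volume histogram (OVH) and use that
    # patient's dose-volume metric as the prediction. Values are invented.
    import numpy as np

    def ovh_distance(a, b):
        """L2 distance between OVH curves sampled at the same expansions."""
        return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

    # Fraction of bladder volume within 0, 2, 4, 6 mm of the target (toy data)
    database = [
        {"ovh": [0.05, 0.15, 0.30, 0.50], "bladder_D50": 38.0},
        {"ovh": [0.10, 0.25, 0.45, 0.65], "bladder_D50": 45.0},
    ]
    new_patient = [0.08, 0.22, 0.40, 0.60]

    best = min(database, key=lambda p: ovh_distance(p["ovh"], new_patient))
    print(f"predicted bladder D50: {best['bladder_D50']} Gy")
    ```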

  8. Manual therapy for unsettled, distressed and excessively crying infants: a systematic review and meta-analyses.

    Science.gov (United States)

    Carnes, Dawn; Plunkett, Austin; Ellwood, Julie; Miles, Clare

    2018-01-24

    To conduct a systematic review and meta-analyses to assess the effect of manual therapy interventions for healthy but unsettled, distressed and excessively crying infants, and to provide information to help clinicians and parents make informed decisions about care. We reviewed published peer-reviewed primary research articles from the last 26 years from nine databases (Medline Ovid, Embase, Web of Science, Physiotherapy Evidence Database, Osteopathic Medicine Digital Repository, Cochrane (all databases), Index of Chiropractic Literature, Open Access Theses and Dissertations, and Cumulative Index to Nursing and Allied Health Literature). Our inclusion criteria were: manual therapy (by regulated or registered professionals) of unsettled, distressed and excessively crying infants who were otherwise healthy and treated in a primary care setting. Outcomes of interest were: crying, feeding, sleep, parent-child relations, parent experience/satisfaction and parent-reported global change. Nineteen studies were selected for full review: seven randomised controlled trials, seven case series, three cohort studies, one service evaluation study and one qualitative study. We found moderate-strength evidence for the effectiveness of manual therapy on: reduction in crying time (favourable: -1.27 hours per day (95% CI -2.19 to -0.36)), sleep (inconclusive), parent-child relations (inconclusive) and global improvement (no effect). The risk of reported adverse events was low: seven non-serious events per 1000 infants exposed to manual therapy (n=1308) and 110 per 1000 in those not exposed. Some small benefits were found, but whether these are meaningful to parents remains unclear, as do the mechanisms of action. Manual therapy appears relatively safe. CRD42016037353. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
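
    For readers unfamiliar with how pooled estimates such as -1.27 (95% CI -2.19 to -0.36) are produced, the following Python sketch shows standard fixed-effect inverse-variance pooling; the study effects and standard errors are made up, and the review itself may have pooled under a different (e.g., random-effects) model:

    ```python
    # Fixed-effect inverse-variance pooling: weight each study by 1/SE^2,
    # then combine. Effects and standard errors below are invented.
    import math

    studies = [(-1.1, 0.6), (-1.5, 0.7), (-1.2, 0.9)]  # (effect, SE) per study

    weights = [1.0 / se ** 2 for _, se in studies]
    pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    low, high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    print(f"pooled effect {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")
    ```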

  9. Observation of curative effect of 131I in treatment of hyperthyroidism

    International Nuclear Information System (INIS)

    Huang Kebin; Xu Fan; Zhang Yaping; Wang Jingchang; Zhao Mingli; Ye Ming

    2012-01-01

    Objective: To explore the curative effect of 131I in the treatment of hyperthyroidism. Method: 126 patients with hyperthyroidism were treated with 131I and the curative effect was analyzed. Result: Among the 126 cases of hyperthyroidism treated with 131I, 117 cases recovered, a cure rate of 92.9%. Hypothyroidism was found in 9 cases at one-year follow-up, an occurrence rate of 7.1%. Conclusion: Treatment of hyperthyroidism with 131I is a safe and effective method. (authors)

  10. AT_CHLORO, a comprehensive chloroplast proteome database with subplastidial localization and curated information on envelope proteins.

    Science.gov (United States)

    Ferro, Myriam; Brugière, Sabine; Salvi, Daniel; Seigneurin-Berny, Daphné; Court, Magali; Moyet, Lucas; Ramus, Claire; Miras, Stéphane; Mellal, Mourad; Le Gall, Sophie; Kieffer-Jaquinod, Sylvie; Bruley, Christophe; Garin, Jérôme; Joyard, Jacques; Masselon, Christophe; Rolland, Norbert

    2010-06-01

    Recent advances in the proteomics field have allowed a series of high throughput experiments to be conducted on chloroplast samples, and the data are available in several public databases. However, the accurate localization of many chloroplast proteins often remains hypothetical. This is especially true for envelope proteins. We went a step further into the knowledge of the chloroplast proteome by focusing, in the same set of experiments, on the localization of proteins in the stroma, the thylakoids, and envelope membranes. LC-MS/MS-based analyses first allowed building the AT_CHLORO database (http://www.grenoble.prabi.fr/protehome/grenoble-plant-proteomics/), a comprehensive repertoire of the 1323 proteins, identified by 10,654 unique peptide sequences, present in highly purified chloroplasts and their subfractions prepared from Arabidopsis thaliana leaves. This database also provides extensive proteomics information (peptide sequences and molecular weight, chromatographic retention times, MS/MS spectra, and spectral count) for a unique chloroplast protein accurate mass and time tag database gathering identified peptides with their respective and precise analytical coordinates, molecular weight, and retention time. We assessed the partitioning of each protein in the three chloroplast compartments by using a semiquantitative proteomics approach (spectral count). These data together with an in-depth investigation of the literature were compiled to provide accurate subplastidial localization of previously known and newly identified proteins. A unique knowledge base containing extensive information on the proteins identified in envelope fractions was thus obtained, allowing new insights into this membrane system to be revealed. Altogether, the data we obtained provide unexpected information about plastidial or subplastidial localization of some proteins that were not suspected to be associated to this membrane system. The spectral counting-based strategy was further
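
    The spectral-count partitioning lends itself to a brief illustration. This Python sketch (invented counts, not AT_CHLORO data) computes each protein's share of spectral counts per subfraction as a rough indicator of subplastidial localization:

    ```python
    # Semiquantitative partitioning by spectral count: the fraction of a
    # protein's spectra observed in each chloroplast subfraction suggests
    # where it resides. Protein names and counts are invented.
    counts = {
        "proteinA": {"envelope": 120, "stroma": 6, "thylakoid": 4},
        "proteinB": {"envelope": 3, "stroma": 80, "thylakoid": 10},
    }

    for protein, by_fraction in counts.items():
        total = sum(by_fraction.values())
        shares = {c: n / total for c, n in by_fraction.items()}
        likely = max(shares, key=shares.get)
        print(protein, {c: round(f, 2) for c, f in shares.items()}, "->", likely)
    ```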

  11. Neural systems language: a formal modeling language for the systematic description, unambiguous communication, and automated digital curation of neural connectivity.

    Science.gov (United States)

    Brown, Ramsay A; Swanson, Larry W

    2013-09-01

    Systematic description and the unambiguous communication of findings and models remain among the unresolved fundamental challenges in systems neuroscience. No common descriptive frameworks exist to describe systematically the connective architecture of the nervous system, even at the grossest level of observation. Furthermore, the accelerating volume of novel data generated on neural connectivity outpaces the rate at which this data is curated into neuroinformatics databases to synthesize digitally systems-level insights from disjointed reports and observations. To help address these challenges, we propose the Neural Systems Language (NSyL). NSyL is a modeling language to be used by investigators to encode and communicate systematically reports of neural connectivity from neuroanatomy and brain imaging. NSyL engenders systematic description and communication of connectivity irrespective of the animal taxon described, experimental or observational technique implemented, or nomenclature referenced. As a language, NSyL is internally consistent, concise, and comprehensible to both humans and computers. NSyL is a promising development for systematizing the representation of neural architecture, effectively managing the increasing volume of data on neural connectivity and streamlining systems neuroscience research. Here we present similar precedent systems, how NSyL extends existing frameworks, and the reasoning behind NSyL's development. We explore NSyL's potential for balancing robustness and consistency in representation by encoding previously reported assertions of connectivity from the literature as examples. Finally, we propose and discuss the implications of a framework for how NSyL will be digitally implemented in the future to streamline curation of experimental results and bridge the gaps among anatomists, imagers, and neuroinformatics databases. Copyright © 2013 Wiley Periodicals, Inc.

  12. Curating NASA's Astromaterials Collections: Past, Present, and Future

    Science.gov (United States)

    Zeigler, Ryan

    2015-01-01

    Planning for the curation of samples from future sample return missions must begin during the initial planning stages of a mission. Waiting until the samples have been returned to Earth, or even until the spacecraft is being physically built, is too late. A lack of proper planning could lead to irreversible contamination of the samples, which in turn would compromise the scientific integrity of the mission. For example, even though the Apollo missions first returned samples in 1969, planning for the curation facility began in the early 1960s, and construction of the Lunar Receiving Laboratory was completed in 1967. In addition to designing the receiving facility and laboratory in which the samples will be characterized and stored, there are many aspects of contamination that must be addressed during the planning and building of the spacecraft: planetary protection (both outbound and inbound); cataloging, documenting, and preserving the materials used to build the spacecraft (also known as coupons); near real-time monitoring of the environment in which the spacecraft is being built using witness plates for critical aspects of contamination (known as contamination control); and long-term monitoring and preservation of the environment in which the spacecraft is being built for most aspects of potential contamination through the use of witness plates (known as contamination knowledge). The OSIRIS-REx asteroid sample return mission, currently being built, is dealing with all of these aspects of contamination in order to ensure it returns the best-preserved sample possible. Coupons and witness plates from OSIRIS-REx are currently being studied and stored (for future studies) at the Johnson Space Center. Similarly, planning for the clean room facility at Johnson Space Center to house the OSIRIS-REx samples is well advanced, and construction of the facility should begin in early 2017 (despite a nominal 2023 return date for OSIRIS-REx samples). Similar development is being

  13. Montana Rivers Information System : Edit/Entry Program User's Manual.

    Energy Technology Data Exchange (ETDEWEB)

    United States. Bonneville Power Administration; Montana Department of Fish, Wildlife and Parks

    1992-07-01

    The Montana Rivers Information System (MRIS) was initiated to assess the state's fish, wildlife, and recreation values, and its natural, cultural, and geologic features. The MRIS is now a set of databases containing part of the information in the Natural Heritage Program natural-features and threatened-and-endangered-species databases, and comprises the Montana Interagency Stream Fisheries Database, the MDFWP Recreation Database, and the MDFWP Wildlife Geographic Information System. The purpose of this User's Manual is to describe how to maintain the MRIS database of the user's choice by updating, changing, deleting, and adding records using the edit/entry programs, and to provide all information and instructions necessary to complete data entry into the MRIS databases.

  14. TaxMan: a taxonomic database manager

    Directory of Open Access Journals (Sweden)

    Blaxter Mark

    2006-12-01

    Background: Phylogenetic analysis of large, multiple-gene datasets, assembled from public sequence databases, is rapidly becoming a popular way to approach difficult phylogenetic problems. Supermatrices (concatenated multiple sequence alignments of multiple genes) can yield more phylogenetic signal than individual genes. However, manually assembling such datasets for a large taxonomic group is time-consuming and error-prone. Additionally, sequence curation, alignment and assessment of the results of phylogenetic analysis are made particularly difficult by the potential for a given gene in a given species to be unrepresented, or to be represented by multiple or partial sequences. We have developed a software package, TaxMan, that largely automates the processes of sequence acquisition, consensus building, alignment and taxon selection to facilitate this type of phylogenetic study. Results: TaxMan uses freely available tools to allow rapid assembly, storage and analysis of large, aligned DNA and protein sequence datasets for user-defined sets of species and genes. The user provides GenBank format files and a list of gene names and synonyms for the loci to analyse. Sequences are extracted from the GenBank files on the basis of annotation and sequence similarity. Consensus sequences are built automatically. Alignment is carried out (where possible) at the protein level, and aligned sequences are stored in a database. TaxMan can automatically determine the best subset of taxa with which to examine phylogeny at a given taxonomic level. By using the stored aligned sequences, large concatenated multiple sequence alignments can be generated rapidly for a subset and output in analysis-ready file formats. Trees resulting from phylogenetic analysis can be stored and compared with a reference taxonomy. Conclusion: TaxMan allows rapid automated assembly of multigene datasets of aligned sequences for large taxonomic groups. By extracting sequences on the basis of
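
    The annotation-based extraction step can be illustrated with Biopython; this is not TaxMan's own code, and the file name and synonym list are examples only:

    ```python
    # Pull CDS sequences for a named locus out of a GenBank-format file by
    # matching gene/product qualifiers against a synonym set. Hypothetical
    # input file; TaxMan additionally uses sequence similarity and builds
    # consensus sequences, which this sketch omits.
    from Bio import SeqIO

    SYNONYMS = {"COI", "COX1", "cytochrome c oxidase subunit I"}

    def extract_locus(genbank_path, synonyms):
        for record in SeqIO.parse(genbank_path, "genbank"):
            for feature in record.features:
                if feature.type != "CDS":
                    continue
                names = set(feature.qualifiers.get("gene", []))
                names |= set(feature.qualifiers.get("product", []))
                if names & synonyms:
                    yield record.id, feature.extract(record.seq)

    for seq_id, seq in extract_locus("sequences.gb", SYNONYMS):
        print(seq_id, len(seq))
    ```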

  15. A justification for semantic training in data curation frameworks development

    Science.gov (United States)

    Ma, X.; Branch, B. D.; Wegner, K.

    2013-12-01

    In the complex data curation activities involving proper data access, data use optimization and data rescue, opportunities exist where underlying skills in semantics may play a crucial role for data curation professionals ranging from data scientists, to informaticists, to librarians. Here we provide a conceptualization of semantics use in the education data curation framework (EDCF) [1] under development by Purdue University and endorsed by the GLOBE program [2] for further development and application. Our work shows that comprehensive data science training includes both spatial and non-spatial data, where both categories are promoted by the standards efforts of organizations such as the Open Geospatial Consortium (OGC) and the World Wide Web Consortium (W3C), as well as by organizations such as the Federation of Earth Science Information Partners (ESIP) that share knowledge and propagate best practices in applications. Outside the context of the EDCF, semantics training may be just as critical to data scientists, informaticists or librarians in other types of data curation activity. Past work by the authors has suggested that such data science should be augmented by ontological literacy so that data science may become sustainable as a discipline. As more datasets are published as open data [3] and linked to each other, i.e., in the Resource Description Framework (RDF) format, or at least have their metadata published in such a way, vocabularies and ontologies of various domains are being created and used in data management, such as AGROVOC [4] for agriculture and the GCMD keywords [5] and the CLEAN vocabulary [6] for climate sciences. The new generation of data scientists should be aware of these technologies and receive training where appropriate to incorporate them into their daily work. References [1] Branch, B.D., Fosmire, M., 2012. The role of interdisciplinary GIS and data curation librarians in enhancing authentic scientific

  16. Collecting, curating, and researching writers' libraries a handbook

    CERN Document Server

    Oram, Richard W

    2014-01-01

    Collecting, Curating, and Researching Writers' Libraries: A Handbook is the first book to examine the history, acquisition, cataloging, and scholarly use of writers' personal libraries. This book also includes interviews with several well-known writers, who discuss their relationship with their books.

  17. A relevancy algorithm for curating earth science data around phenomenon

    Science.gov (United States)

    Maskey, Manil; Ramachandran, Rahul; Li, Xiang; Weigel, Amanda; Bugbee, Kaylin; Gatlin, Patrick; Miller, J. J.

    2017-09-01

    Earth science data are being collected for various science needs and applications, processed using different algorithms at multiple resolutions and coverages, and then archived at different archiving centers for distribution and stewardship causing difficulty in data discovery. Curation, which typically occurs in museums, art galleries, and libraries, is traditionally defined as the process of collecting and organizing information around a common subject matter or a topic of interest. Curating data sets around topics or areas of interest addresses some of the data discovery needs in the field of Earth science, especially for unanticipated users of data. This paper describes a methodology to automate search and selection of data around specific phenomena. Different components of the methodology including the assumptions, the process, and the relevancy ranking algorithm are described. The paper makes two unique contributions to improving data search and discovery capabilities. First, the paper describes a novel methodology developed for automatically curating data around a topic using Earth science metadata records. Second, the methodology has been implemented as a stand-alone web service that is utilized to augment search and usability of data in a variety of tools.
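
    The record does not reproduce the ranking algorithm itself; as a hedged sketch of the general idea, this Python fragment scores dataset metadata against a phenomenon vocabulary, with invented terms, weights, and dataset descriptions:

    ```python
    # Crude phenomenon-relevancy ranking: weight metadata records by which
    # phenomenon vocabulary terms they mention. All values are toy examples.
    PHENOMENON_TERMS = {"hurricane": 3.0, "precipitation": 2.0, "wind": 1.0}

    def relevancy(text):
        t = text.lower()
        return sum(w for term, w in PHENOMENON_TERMS.items() if term in t)

    records = {
        "Dataset A": "Global precipitation measurement gridded rain product",
        "Dataset B": "Reanalysis wind fields over the Atlantic",
        "Dataset C": "Atlantic hurricane track and wind speed database",
    }
    for name, desc in sorted(records.items(), key=lambda kv: -relevancy(kv[1])):
        print(f"{relevancy(desc):4.1f}  {name}")
    ```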

  18. Hanford External Dosimetry Technical Basis Manual PNL-MA-842

    International Nuclear Information System (INIS)

    Rathbone, Bruce A.

    2006-01-01

    The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at Hanford. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, and limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with 10 CFR 835, DOELAP, DOE-RL, ORP, PNSO, and Hanford contractor requirements. The dosimetry system is operated by PNNL's Hanford External Dosimetry Program, which provides dosimetry services to all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee, which is chartered and chaired by DOE-RL and serves as a means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. Rev. 0 marks the first revision to be released through PNNL's Electronic Records & Information Capture Architecture (ERICA) database

  19. Hanford External Dosimetry Technical Basis Manual PNL-MA-842

    Energy Technology Data Exchange (ETDEWEB)

    Rathbone, Bruce A.

    2005-02-25

    The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at Hanford. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, and limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with 10 CFR 835, DOELAP, DOE-RL, ORP, PNSO, and Hanford contractor requirements. The dosimetry system is operated by PNNL’s Hanford External Dosimetry Program, which provides dosimetry services to all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee, which is chartered and chaired by DOE-RL and serves as a means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. Rev. 0 marks the first revision to be released through PNNL’s Electronic Records & Information Capture Architecture (ERICA) database.

  20. Database on epidemiological survey in high background radiation research

    International Nuclear Information System (INIS)

    Zhou Sunyuan; Guo Furong; Liu Yusheng

    1992-01-01

    In order to store and check the data of the health survey in the high background radiation area (HBRA) and a control area in Guangdong Province, and to use these data in the future, three databases were set up using the RBASE 5000 database software. (1) HD: the database based on the household registers especially established for the health survey from 1979 to 1986, covering more than 160,000 subjects and 2,200,000 data entries. (2) DC: the database based on the registration cards of deaths from cancers and all other diseases during the period 1975-1986, including more than 10,000 cases and 260,000 data entries. (3) MCC: the database for the case-control study on mutation-related factors for four kinds of cancers (liver, stomach, lung cancers and leukemia), embracing 626 subjects and close to 90,000 data entries. The data in the databases were checked against the original records and compared with the manual analytical results

  1. LSE-Sign: A lexical database for Spanish Sign Language.

    Science.gov (United States)

    Gutierrez-Sigut, Eva; Costello, Brendan; Baus, Cristina; Carreiras, Manuel

    2016-03-01

    The LSE-Sign database is a free online tool for selecting Spanish Sign Language stimulus materials to be used in experiments. It contains 2,400 individual signs taken from a recent standardized LSE dictionary, and a further 2,700 related nonsigns. Each entry is coded for a wide range of grammatical, phonological, and articulatory information, including handshape, location, movement, and non-manual elements. The database is accessible via a graphically based search facility which is highly flexible both in terms of the search options available and the way the results are displayed. LSE-Sign is available at the following website: http://www.bcbl.eu/databases/lse/.

  2. Generating Gene Ontology-Disease Inferences to Explore Mechanisms of Human Disease at the Comparative Toxicogenomics Database.

    Directory of Open Access Journals (Sweden)

    Allan Peter Davis

    Full Text Available Strategies for discovering common molecular events among disparate diseases hold promise for improving understanding of disease etiology and expanding treatment options. One technique is to leverage curated datasets found in the public domain. The Comparative Toxicogenomics Database (CTD; http://ctdbase.org/) manually curates chemical-gene, chemical-disease, and gene-disease interactions from the scientific literature. The use of official gene symbols in CTD interactions enables this information to be combined with the Gene Ontology (GO) file from NCBI Gene. By integrating these GO-gene annotations with CTD's gene-disease dataset, we produce 753,000 inferences between 15,700 GO terms and 4,200 diseases, providing opportunities to explore presumptive molecular underpinnings of diseases and identify biological similarities. Through a variety of applications, we demonstrate the utility of this novel resource. As a proof-of-concept, we first analyze known repositioned drugs (e.g., raloxifene and sildenafil) and see that their target diseases have a greater degree of similarity when comparing GO terms vs. genes. Next, a computational analysis predicts seemingly non-intuitive diseases (e.g., stomach ulcers and atherosclerosis) as being similar to bipolar disorder, and these are validated in the literature as reported co-diseases. Additionally, we leverage other CTD content to develop testable hypotheses about thalidomide-gene networks to treat seemingly disparate diseases. Finally, we illustrate how CTD tools can rank a series of drugs as potential candidates for repositioning against B-cell chronic lymphocytic leukemia and predict cisplatin and the small molecule inhibitor JQ1 as lead compounds. The CTD dataset is freely available for users to navigate pathologies within the context of extensive biological processes, molecular functions, and cellular components conferred by GO. This inference set should aid researchers and bioinformaticists in exploring the mechanisms of human disease.

  3. Manual hyperinflation in airway clearance in pediatric patients: a systematic review

    Science.gov (United States)

    de Godoy, Vanessa Cristina Waetge Pires; Zanetti, Nathalia Mendonça; Johnston, Cíntia

    2013-01-01

    Objective To perform an assessment of the available literature on manual hyperinflation as a respiratory physical therapy technique used in pediatric patients, with the main outcome of achieving airway clearance. Methods We reviewed articles included in the Lilacs (Latin American and Caribbean Literature on Health Sciences/Literatura Latino Americana e do Caribe em Ciências da Saúde), Cochrane Library, Medline (via Virtual Health Library and PubMed), SciELO (Scientific Electronic Library), and PEDro (Physiotherapy Evidence Database) databases from 2002 to 2013 using the following search terms: "physiotherapy (techniques)", "respiratory therapy", "intensive care", and "airway clearance". The selected studies were classified according to the level of evidence and grades of recommendation (method of the Oxford Centre for Evidence-Based Medicine) by two examiners, while a third examiner repeated the search and analysis and checked the classification of the articles. Results Three articles were included for analysis, comprising 250 children (aged 0 to 16 years). The main diagnoses were acute respiratory failure, recovery following congenital heart disease surgery and upper abdominal surgery, bone marrow transplantation, asthma, tracheal reconstruction, brain injury, airway injury, and heterogeneous lung diseases. The studies were classified as having a level of evidence 2C and grade of recommendation C. Conclusions Manual hyperinflation appeared useful for airway clearance in the investigated population, although the evidence available in the literature remains insufficient. Therefore, controlled randomized studies are needed to establish the safety and efficacy of manual hyperinflation in pediatric patients. However, manual hyperinflation must be performed by trained physical therapists only. PMID:24213091

  4. Learning relationships: Church of England curates and training ...

    African Journals Online (AJOL)

    2017-06-20

    Jun 20, 2017 ... exploring how this affects the dynamic of the relationship with their curates. Scripture is also ... factors, as employed in the models of personality advanced by Costa and ... psychological type preferences of their training incumbents. The data ... to conceptualising and implementing Christian vocation.

  5. DISEASES: text mining and data integration of disease-gene associations.

    Science.gov (United States)

    Pletscher-Frankild, Sune; Pallejà, Albert; Tsafou, Kalliopi; Binder, Janos X; Jensen, Lars Juhl

    2015-03-01

    Text mining is a flexible technology that can be applied to numerous different tasks in biology and medicine. We present a system for extracting disease-gene associations from biomedical abstracts. The system consists of a highly efficient dictionary-based tagger for named entity recognition of human genes and diseases, which we combine with a scoring scheme that takes into account co-occurrences both within and between sentences. We show that this approach is able to extract half of all manually curated associations with a false positive rate of only 0.16%. Nonetheless, text mining should not stand alone, but be combined with other types of evidence. For this reason, we have developed the DISEASES resource, which integrates the results from text mining with manually curated disease-gene associations, cancer mutation data, and genome-wide association studies from existing databases. The DISEASES resource is accessible through a web interface at http://diseases.jensenlab.org/, where the text-mining software and all associations are also freely available for download. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
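
    The scoring idea described above can be pictured with a minimal sketch: a gene-disease pair earns more weight when the two are mentioned in the same sentence than when they merely share an abstract. The weights and the naive substring tagging below are illustrative placeholders, not the published DISEASES scheme.

        # Assumed weights: same-sentence co-mentions count more than
        # same-abstract co-mentions. Not the published parameters.
        W_SENTENCE, W_ABSTRACT = 1.0, 0.25

        def score_pair(abstracts, gene, disease):
            """abstracts: list of documents, each a list of sentence strings."""
            score = 0.0
            for sentences in abstracts:
                gene_hits = [gene.lower() in s.lower() for s in sentences]
                dis_hits = [disease.lower() in s.lower() for s in sentences]
                if any(g and d for g, d in zip(gene_hits, dis_hits)):
                    score += W_SENTENCE   # co-mention in the same sentence
                elif any(gene_hits) and any(dis_hits):
                    score += W_ABSTRACT   # co-mention in the same abstract only
            return score

        docs = [["TP53 mutations are frequent in glioma.",
                 "This suggests a driver role."],
                ["TP53 was profiled.", "Patients had glioma."]]
        print(score_pair(docs, "TP53", "glioma"))  # 1.25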

  6. GarlicESTdb: an online database and mining tool for garlic EST sequences

    Directory of Open Access Journals (Sweden)

    Choi Sang-Haeng

    2009-05-01

    Full Text Available Abstract Background Allium sativum, commonly known as garlic, is a species in the onion genus (Allium), which is a large and diverse genus containing over 1,250 species. Its close relatives include chives, onion, leek and shallot. Garlic has been used throughout recorded history for culinary and medicinal use and health benefits. Currently, interest in garlic is increasing rapidly due to its nutritional and pharmaceutical value, including effects on high blood pressure and cholesterol, atherosclerosis and cancer. For all that, there are no comprehensive databases available for Expressed Sequence Tags (ESTs) of garlic for gene discovery and future efforts of genome annotation. That is why we developed a new garlic database and applications to enable comprehensive analysis of garlic gene expression. Description GarlicESTdb is an integrated database and mining tool for large-scale garlic (Allium sativum) EST sequencing. A total of 21,595 ESTs collected from an in-house cDNA library were used to construct the database. The analysis pipeline is an automated system written in Java and consists of the following components: automatic preprocessing of EST reads, assembly of raw sequences, annotation of the assembled sequences, storage of the analyzed information into MySQL databases, and graphic display of all processed data. A web application was implemented with the latest J2EE (Java 2 Platform Enterprise Edition) software technology (JSP/EJB/Java Servlet) for browsing and querying the database and for creation of dynamic web pages on the client side; for mapping annotated enzymes to KEGG pathways, the AJAX framework was also used partially. The online resources, such as putative annotation, single nucleotide polymorphism (SNP) and tandem repeat data sets, can be searched by text, explored on the website, searched using BLAST, and downloaded. To archive more significant BLAST results, a curation system was introduced with which biologists can easily edit best-hit annotation information for others to view.

  7. GarlicESTdb: an online database and mining tool for garlic EST sequences.

    Science.gov (United States)

    Kim, Dae-Won; Jung, Tae-Sung; Nam, Seong-Hyeuk; Kwon, Hyuk-Ryul; Kim, Aeri; Chae, Sung-Hwa; Choi, Sang-Haeng; Kim, Dong-Wook; Kim, Ryong Nam; Park, Hong-Seog

    2009-05-18

    Allium sativum, commonly known as garlic, is a species in the onion genus (Allium), which is a large and diverse genus containing over 1,250 species. Its close relatives include chives, onion, leek and shallot. Garlic has been used throughout recorded history for culinary and medicinal use and health benefits. Currently, interest in garlic is increasing rapidly due to its nutritional and pharmaceutical value, including effects on high blood pressure and cholesterol, atherosclerosis and cancer. For all that, there are no comprehensive databases available for Expressed Sequence Tags (ESTs) of garlic for gene discovery and future efforts of genome annotation. That is why we developed a new garlic database and applications to enable comprehensive analysis of garlic gene expression. GarlicESTdb is an integrated database and mining tool for large-scale garlic (Allium sativum) EST sequencing. A total of 21,595 ESTs collected from an in-house cDNA library were used to construct the database. The analysis pipeline is an automated system written in Java and consists of the following components: automatic preprocessing of EST reads, assembly of raw sequences, annotation of the assembled sequences, storage of the analyzed information into MySQL databases, and graphic display of all processed data. A web application was implemented with the latest J2EE (Java 2 Platform Enterprise Edition) software technology (JSP/EJB/Java Servlet) for browsing and querying the database and for creation of dynamic web pages on the client side; for mapping annotated enzymes to KEGG pathways, the AJAX framework was also used partially. The online resources, such as putative annotation, single nucleotide polymorphism (SNP) and tandem repeat data sets, can be searched by text, explored on the website, searched using BLAST, and downloaded. To archive more significant BLAST results, a curation system was introduced with which biologists can easily edit best-hit annotation information for others to view.
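
    The pipeline stages named above can be pictured schematically. The actual GarlicESTdb system is written in Java with MySQL storage and calls real assembly and annotation tools; the Python sketch below only illustrates the stage sequencing, and every function body is a simplified stand-in.

        # Schematic EST pipeline: preprocess -> assemble -> annotate -> store.
        def preprocess(reads):
            # Trim/clean raw EST reads (placeholder: strip, uppercase, length filter).
            return [r.strip().upper() for r in reads if len(r.strip()) >= 10]

        def assemble(reads):
            # Stand-in for a real assembler; here we just de-duplicate.
            return sorted(set(reads))

        def annotate(contigs):
            # Stand-in for BLAST-based annotation of assembled sequences.
            return {c: "hypothetical protein" for c in contigs}

        def store(annotations):
            # Stand-in for loading results into MySQL tables.
            for seq, desc in annotations.items():
                print(f"INSERT contig len={len(seq)} annotation={desc!r}")

        raw = ["atgcgtacgtacgta", "ATGCGTACGTACGTA", "acgt"]
        store(annotate(assemble(preprocess(raw))))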

  8. Manual for reactor produced radioisotopes

    International Nuclear Information System (INIS)

    2003-01-01

    numerous new developments that have taken place since then. Hence in this manual it was decided to focus only on reactor produced radioisotopes. This manual contains procedures for 48 important reactor-produced isotopes. These were contributed by major radioisotope producers from different parts of the world and are based on their practical experience. In the case of widely used radioisotopes such as 131I, 32P and 99Mo, information from more than one centre is included so that the users can compare the procedures. As in the earlier two versions, a general introductory write-up is included covering basic information on related aspects such as target irradiation, handling facilities, radiation protection and transportation, but in less detail. Relevant IAEA publications on such matters, particularly related to radiation protection and transportation, should be referred to for guidelines. Similarly, the nuclear data contained in the manual are only indicative and the relevant databases should be referred to for more authentic values. It is hoped that the manual will be a useful source of information for those working in radioisotope production laboratories as well as those intending to initiate such activities.

  9. GDR (Genome Database for Rosaceae): integrated web-database for Rosaceae genomics and genetics data.

    Science.gov (United States)

    Jung, Sook; Staton, Margaret; Lee, Taein; Blenda, Anna; Svancara, Randall; Abbott, Albert; Main, Dorrie

    2008-01-01

    The Genome Database for Rosaceae (GDR) is a central repository of curated and integrated genetics and genomics data of Rosaceae, an economically important family which includes apple, cherry, peach, pear, raspberry, rose and strawberry. GDR contains annotated databases of all publicly available Rosaceae ESTs, the genetically anchored peach physical map, Rosaceae genetic maps and comprehensively annotated markers and traits. The ESTs are assembled to produce unigene sets of each genus and the entire Rosaceae. Other annotations include putative function, microsatellites, open reading frames, single nucleotide polymorphisms, gene ontology terms and anchored map position where applicable. Most of the published Rosaceae genetic maps can be viewed and compared through CMap, the comparative map viewer. The peach physical map can be viewed using WebFPC/WebChrom, and also through our integrated GDR map viewer, which serves as a portal to the combined genetic, transcriptome and physical mapping information. ESTs, BACs, markers and traits can be queried by various categories and the search result sites are linked to the mapping visualization tools. GDR also provides online analysis tools such as a batch BLAST/FASTA server for the GDR datasets, a sequence assembly server and microsatellite and primer detection tools. GDR is available at http://www.rosaceae.org.

  10. A tuberculosis biomarker database: the key to novel TB diagnostics

    Directory of Open Access Journals (Sweden)

    Seda Yerlikaya

    2017-03-01

    Full Text Available New diagnostic innovations for tuberculosis (TB, including point-of-care solutions, are critical to reach the goals of the End TB Strategy. However, despite decades of research, numerous reports on new biomarker candidates, and significant investment, no well-performing, simple and rapid TB diagnostic test is yet available on the market, and the search for accurate, non-DNA biomarkers remains a priority. To help overcome this ‘biomarker pipeline problem’, FIND and partners are working on the development of a well-curated and user-friendly TB biomarker database. The web-based database will enable the dynamic tracking of evidence surrounding biomarker candidates in relation to target product profiles (TPPs for needed TB diagnostics. It will be able to accommodate raw datasets and facilitate the verification of promising biomarker candidates and the identification of novel biomarker combinations. As such, the database will simplify data and knowledge sharing, empower collaboration, help in the coordination of efforts and allocation of resources, streamline the verification and validation of biomarker candidates, and ultimately lead to an accelerated translation into clinically useful tools.
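
    The notion of tracking biomarker evidence against a target product profile can be pictured with a small check that flags candidates whose pooled performance meets TPP thresholds. The thresholds and candidate records below are invented for illustration and are not FIND data.

        # Assumed TPP thresholds for a hypothetical triage test.
        TPP = {"min_sensitivity": 0.90, "min_specificity": 0.70}

        candidates = [  # invented example records
            {"marker": "LAM (urine)", "sensitivity": 0.42, "specificity": 0.89},
            {"marker": "3-gene host signature", "sensitivity": 0.92, "specificity": 0.77},
        ]

        for c in candidates:
            ok = (c["sensitivity"] >= TPP["min_sensitivity"]
                  and c["specificity"] >= TPP["min_specificity"])
            print(f'{c["marker"]}: {"meets" if ok else "below"} TPP')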

  11. Use of Ontologies for Data Integration and Curation

    Directory of Open Access Journals (Sweden)

    Judith Gelernter

    2011-03-01

    Full Text Available Data curation includes the goal of facilitating the re-use and combination of datasets, which is often impeded by incompatible data schema. Can we use ontologies to help with data integration? We suggest a semi-automatic process that involves the use of automatic text searching to help identify overlaps in metadata that accompany data schemas, plus human validation of suggested data matches. Problems include different text used to describe the same concept, different forms of data recording and different organizations of data. Ontologies can help by focussing attention on important words, providing synonyms to assist matching, and indicating in what context words are used. Beyond ontologies, data on the statistical behavior of data can be used to decide which data elements appear to be compatible with which other data elements. When curating data which may have hundreds or even thousands of data labels, semi-automatic assistance with data fusion should be of great help.
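
    A minimal sketch of the suggested semi-automatic process: labels from two data schemas are expanded with synonyms from a toy ontology fragment, and overlapping expansions become suggested matches for human validation. The synonym table and labels are invented for illustration.

        # Toy ontology fragment: concept -> synonym set.
        SYNONYMS = {
            "latitude": {"lat", "y_coord"},
            "temperature": {"temp", "t_air"},
        }

        def expand(label):
            # Map a schema label to its concept plus synonyms, if known.
            label = label.lower()
            for concept, syns in SYNONYMS.items():
                if label == concept or label in syns:
                    return {concept} | syns
            return {label}

        def suggest_matches(schema_a, schema_b):
            # Suggest pairs whose expanded label sets overlap.
            return [(a, b) for a in schema_a for b in schema_b
                    if expand(a) & expand(b)]

        print(suggest_matches(["Lat", "Temp", "site"], ["latitude", "t_air", "id"]))
        # [('Lat', 'latitude'), ('Temp', 't_air')]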

  12. Curating the Poster

    DEFF Research Database (Denmark)

    Christensen, Line Hjorth

    2017-01-01

    Parallel to the primary functions performed by posters in the urban environment, we find a range of curatorial practices that tie the poster, a mass-produced graphic design media, to the museum institution. Yet little research has attempted to uncover the diverse subject of curatorial work and the process where posters created to live in a real-world environment are relocated in a museum. According to Peter Bil’ak (2006), it creates a situation where "the entire raison d’être of the work is lost as a side effect of losing the context of the work". The article investigates how environmental structures can work as guidelines for curating posters and graphic design in a museum context. By applying an ecological view to design, specifically the semiotic notion "counter-ability", it stresses the reciprocal relationship of humans and their built and product-designed environments. It further suggests...

  13. A Phase Blending Study on Rubber Blends Based on the Solubility Preference of Curatives

    NARCIS (Netherlands)

    Guo, R.; Talma, Auke; Datta, Rabin; Dierkes, Wilma K.; Noordermeer, Jacobus W.M.

    2009-01-01

    Using previously obtained data on the solubilities of curatives in SBR, EPDM and NBR, different mixing procedures were performed on 50/50 SBR/EPDM and NBR/EPDM blends. In contrast to a previous phase-mixing study, the curatives were added to separate phases before final blending, in an attempt to...

  14. Smart Mobility Stakeholders - Curating Urban Data & Models

    Energy Technology Data Exchange (ETDEWEB)

    Sperling, Joshua [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-01

    This presentation provides an overview of the curation of urban data and models through engaging SMART mobility stakeholders. SMART Mobility Urban Science Efforts are helping to expose key data sets, models, and roles for the U.S. Department of Energy in engaging across stakeholders to ensure useful insights. This will help to support other Urban Science and broader SMART initiatives.

  15. Image-based query-by-example for big databases of galaxy images

    Science.gov (United States)

    Shamir, Lior; Kuminski, Evan

    2017-01-01

    Very large astronomical databases containing millions or even billions of galaxy images have been becoming increasingly important tools in astronomy research. However, in many cases the very large size makes it more difficult to analyze these data manually, reinforcing the need for computer algorithms that can automate the data analysis process. An example of such task is the identification of galaxies of a certain morphology of interest. For instance, if a rare galaxy is identified it is reasonable to expect that more galaxies of similar morphology exist in the database, but it is virtually impossible to manually search these databases to identify such galaxies. Here we describe computer vision and pattern recognition methodology that receives a galaxy image as an input, and searches automatically a large dataset of galaxies to return a list of galaxies that are visually similar to the query galaxy. The returned list is not necessarily complete or clean, but it provides a substantial reduction of the original database into a smaller dataset, in which the frequency of objects visually similar to the query galaxy is much higher. Experimental results show that the algorithm can identify rare galaxies such as ring galaxies among datasets of 10,000 astronomical objects.
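
    The query-by-example idea can be pictured as a nearest-neighbour search over image descriptors: each galaxy image is reduced to a numeric feature vector, and the database is ranked by distance to the query's vector. Real systems derive rich morphology descriptors from the pixels; random vectors stand in for them in this sketch.

        import numpy as np

        rng = np.random.default_rng(0)
        # 10,000 galaxies, each summarised by a 64-D descriptor (placeholder).
        db_features = rng.normal(size=(10_000, 64))
        # A query that is a near-duplicate of galaxy 42.
        query = db_features[42] + rng.normal(scale=0.01, size=64)

        # Rank the whole database by Euclidean distance to the query.
        dists = np.linalg.norm(db_features - query, axis=1)
        top = np.argsort(dists)[:5]          # indices of most similar galaxies
        print(top, dists[top].round(3))      # index 42 should rank first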

  16. HemaExplorer: a database of mRNA expression profiles in normal and malignant haematopoiesis

    DEFF Research Database (Denmark)

    Bagger, Frederik Otzen; Rapin, Nicolas; Theilgaard-Mönch, Kim

    2013-01-01

    The HemaExplorer (http://servers.binf.ku.dk/hemaexplorer) is a curated database of processed mRNA gene expression profiles (GEPs) that provides an easy display of gene expression in haematopoietic cells. HemaExplorer contains GEPs derived from mouse/human haematopoietic stem and progenitor cells as well as from more differentiated cell types. Moreover, data from distinct subtypes of human acute myeloid leukemia are included in the database, allowing researchers to directly compare gene expression of leukemic cells with those of their closest normal counterpart. Normalization and batch correction lead to full integrity of the data in the database. HemaExplorer has a comprehensive visualization interface that can make it useful as a daily tool for biologists and cancer researchers to assess the expression patterns of genes encountered in research or literature. HemaExplorer is relevant for all research within the fields of haematopoiesis and leukemia.

  17. Database on wind characteristics - Structure and philosophy

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, G.C.; Hansen, K.S.

    2001-11-01

    The main objective of IEA R and D Wind Annex XVII - Database on Wind Characteristics - is to provide wind energy planners and designers, as well as the international wind engineering community in general, with easy access to quality controlled measured wind field time series observed in a wide range of environments. The project partners are Sweden, Norway, U.S.A., The Netherlands, Japan and Denmark, with Denmark as the Operating Agent. The reporting of IEA R and D Annex XVII falls into three separate parts. Part one deals with the overall structure and philosophy behind the database, part two accounts in detail for the available data in the established database bank, and part three is the Users Manual describing the various ways to access and analyse the data. The present report constitutes the first part of the Annex XVII reporting, and it contains a detailed description of the database structure, the data quality control procedures, the selected indexing of the data and the hardware system. (au)

  18. Activity Management System user reference manual. Revision 1

    International Nuclear Information System (INIS)

    Gates, T.A.; Burdick, M.B.

    1994-01-01

    The Activity Management System (AMS) was developed in response to the need for a simple-to-use, low-cost, user interface system for collecting and logging Hanford Waste Vitrification Plant Project (HWVP) activities. This system needed to run on user workstations and provide common user access to a database stored on a local network file server. Most important, users wanted a system that provided a management tool that supported their individual process for completing activities. Existing systems treated the performer as a tool of the system. All AMS data is maintained in encrypted format. Users can feel confident that any activities they have entered into the database are private and that, as the originator, they retain sole control over who can see them. Once entered into the AMS database, the activities cannot be accessed by anyone other than the originator, the designated agent, or authorized viewers who have been explicitly granted the right to look at specific activities by the originator. This user guide is intended to assist new AMS users in learning how to use the application and, after the initial learning process, will serve as an ongoing reference for experienced users performing infrequently used functions. Online help screens provide reference to some of the key information in this manual. Additional help screens, encompassing all the applicable material in this manual, will be incorporated into future AMS revisions. A third, and most important, source of help is the AMS administrator(s). This guide describes the initial production version of AMS, which has been designated Revision 1.0.

  19. AtomPy: An Open Atomic Data Curation Environment for Astrophysical Applications

    Directory of Open Access Journals (Sweden)

    Claudio Mendoza

    2014-05-01

    Full Text Available We present a cloud-computing environment, referred to as AtomPy, based on Google-Drive Sheets and Pandas (Python Data Analysis Library) DataFrames to promote community-driven curation of atomic data for astrophysical applications, a stage beyond database development. The atomic model for each ionic species is contained in a multi-sheet workbook, tabulating representative sets of energy levels, A-values and electron impact effective collision strengths from different sources. The relevant issues that AtomPy intends to address are: (i) data quality by allowing open access to both data producers and users; (ii) comparisons of different datasets to facilitate accuracy assessments; (iii) downloading to local data structures (i.e., Pandas DataFrames) for further manipulation and analysis by prospective users; and (iv) data preservation by avoiding the discard of outdated sets. Data processing workflows are implemented by means of IPython Notebooks, and collaborative software developments are encouraged and managed within the GitHub social network. The facilities of AtomPy are illustrated with the critical assessment of the transition probabilities for ions in the hydrogen and helium isoelectronic sequences with atomic number Z ≤ 10.
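
    In the spirit of the accuracy assessments AtomPy aims at, the following small Pandas example merges A-values for the same transitions from two sources and computes their relative differences; the numbers are made up for illustration.

        import pandas as pd

        # A-values for the same transitions from two (invented) sources.
        src1 = pd.DataFrame({"lower": [1, 1, 2], "upper": [2, 3, 3],
                             "A": [6.27e8, 1.67e8, 4.41e7]})
        src2 = pd.DataFrame({"lower": [1, 1, 2], "upper": [2, 3, 3],
                             "A": [6.30e8, 1.62e8, 4.50e7]})

        # Align on the transition (lower, upper) and compare.
        merged = src1.merge(src2, on=["lower", "upper"], suffixes=("_1", "_2"))
        merged["rel_diff"] = (merged.A_1 - merged.A_2).abs() / merged.A_2
        print(merged)  # transitions whose A-values disagree stand out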

  20. [The randomized controlled trial of the treatment for clavicular fracture by rotatory manual reduction with forceps holder and retrograde percutaneous pinning transfixation].

    Science.gov (United States)

    Bi, Hong-zheng; Yang, Mao-qing; Tan, Yuan-chao; Fu, Song

    2008-07-01

    To study the curative effect and safety of rotatory manual reduction with forceps holder and retrograde percutaneous pinning transfixation in treating clavicular fracture. All 201 cases of clavicular fracture were randomly divided into a treatment group (101 cases) and a control group (100 cases). The treatment group was treated by rotatory manual reduction with forceps holder and retrograde percutaneous pinning transfixation. The control group was treated by open reduction and internal fixation with Kirschner pin. All cases were followed up for 4 to 21 months (mean 10.6 months). SPSS was used to analyze the clinical healing time of fracture and shoulder-joint function in the two groups. After operation, all 101 cases in the treatment group achieved union of fracture, with a clinical healing time of 28 to 49 days (mean 34.5 ± 2.7 days). In the control group, there were 4 cases of nonunion; the other 96 cases achieved union, with a clinical healing time of 36 to 92 days (mean 55.3 ± 4.8 days). The excellent and good rate of shoulder-joint function was 100% in the treatment group and 83% in the control group. By t-test and chi-square test, there was a significant difference between the two groups in curative effect (P<0.05). Rotatory manual reduction with forceps holder and retrograde pinning transfixation can be used in various kinds of clavicular shaft fracture, with many virtues such as easy operation, reliable fixation, short union time of fracture, good functional recovery of shoulder-joint and no incision scar affecting appearance.

  1. [Databases for surgical specialists in Cancún, Quintana Roo].

    Science.gov (United States)

    Contla-Hosking, Jorge Eduardo; Ceballos-Martínez, Zoila Inés; Peralta-Bahena, Mónica Esther

    2004-01-01

    Our aim was to identify the level of knowledge of surgical health-area specialists in Cancún, Quintana Roo, Mexico, regarding personal productivity databases. We carried out an investigation of 37 surgical specialists: 24 belonged to the Mexican Social Security Institute (IMSS), while 13 belonged to the Mexican Health Secretariat (SSA). In our research, we found that 61% of surgical health-area specialist physicians were familiar with some aspects of the institutional surgical registry, including the following: 54% knew of the existence of a personal registry of surgeries carried out, and 43% kept a record of their personal activities. Of the latter, 69% mentioned keeping their records manually, while 44% used a computer. Results of the research suggest that these physicians would like to have some kind of record of the surgeries each has carried out. An important percentage of these specialists do not keep a personal record in a database; because of this, institutional records give an incorrect picture of what is actually done. We consider it important to inform surgical specialists of the existence of personal institutional records in database form, or even of records kept manually, as well as of the correct terminology for the International Codification (CIE-9 & 10). We note here the need to encourage a culture of records and databases during the formative stage of surgical specialists.

  2. A site-specific curated database for the microorganisms of activated sludge and anaerobic digesters

    DEFF Research Database (Denmark)

    McIlroy, Simon Jon; Kirkegaard, Rasmus Hansen; McIlroy, Bianca

    The MiDAS taxonomy proposes putative names for each genus-level taxon that can be used as a common vocabulary for all researchers in the field. The online database covers >250 genera found to be abundant and/or important in biological nutrient removal treatment plants, based on extensive in-house surveys with 16S rRNA gene amplicon sequencing (V1-3 region), including full-scale AS (20 plants, 8 years) and AD systems (36 reactors, 18 plants, 4 years). Surveys also include the Archaea (V3-5 region). The MiDAS field guide is intended as a collaborative platform for researchers and wastewater treatment practitioners.

  3. NORPERM, the Norwegian Permafrost Database - a TSP NORWAY IPY legacy

    Science.gov (United States)

    Juliussen, H.; Christiansen, H. H.; Strand, G. S.; Iversen, S.; Midttømme, K.; Rønning, J. S.

    2010-10-01

    NORPERM, the Norwegian Permafrost Database, was developed at the Geological Survey of Norway during the International Polar Year (IPY) 2007-2009 as the main data legacy of the IPY research project Permafrost Observatory Project: A Contribution to the Thermal State of Permafrost in Norway and Svalbard (TSP NORWAY). Its structural and technical design is described in this paper along with the ground temperature data infrastructure in Norway and Svalbard, focussing on the TSP NORWAY permafrost observatory installations in the North Scandinavian Permafrost Observatory and Nordenskiöld Land Permafrost Observatory, being the primary data providers of NORPERM. Further developments of the database, possibly towards a regional database for the Nordic area, are also discussed. The purpose of NORPERM is to store ground temperature data safely and in a standard format for use in future research. The IPY data policy of open, free, full and timely release of IPY data is followed, and the borehole metadata description follows the Global Terrestrial Network for Permafrost (GTN-P) standard. NORPERM is purely a temperature database, and the data is stored in a relational database management system and made publicly available online through a map-based graphical user interface. The datasets include temperature time series from various depths in boreholes and from the air, snow cover, ground-surface or upper ground layer recorded by miniature temperature data-loggers, and temperature profiles with depth in boreholes obtained by occasional manual logging. All the temperature data from the TSP NORWAY research project is included in the database, totalling 32 temperature time series from boreholes, 98 time series of micrometeorological temperature conditions, and 6 temperature depth profiles obtained by manual logging in boreholes. The database content will gradually increase as data from previous and future projects are added. Links to near real-time permafrost temperatures are also provided.

  4. MIDAS: a database-searching algorithm for metabolite identification in metabolomics.

    Science.gov (United States)

    Wang, Yingfeng; Kora, Guruprasad; Bowen, Benjamin P; Pan, Chongle

    2014-10-07

    A database searching approach can be used for metabolite identification in metabolomics by matching measured tandem mass spectra (MS/MS) against the predicted fragments of metabolites in a database. Here, we present the open-source MIDAS algorithm (Metabolite Identification via Database Searching). To evaluate a metabolite-spectrum match (MSM), MIDAS first enumerates possible fragments from a metabolite by systematic bond dissociation, then calculates the plausibility of the fragments based on their fragmentation pathways, and finally scores the MSM to assess how well the experimental MS/MS spectrum from collision-induced dissociation (CID) is explained by the metabolite's predicted CID MS/MS spectrum. MIDAS was designed to search high-resolution tandem mass spectra acquired on time-of-flight or Orbitrap mass spectrometer against a metabolite database in an automated and high-throughput manner. The accuracy of metabolite identification by MIDAS was benchmarked using four sets of standard tandem mass spectra from MassBank. On average, for 77% of original spectra and 84% of composite spectra, MIDAS correctly ranked the true compounds as the first MSMs out of all MetaCyc metabolites as decoys. MIDAS correctly identified 46% more original spectra and 59% more composite spectra at the first MSMs than an existing database-searching algorithm, MetFrag. MIDAS was showcased by searching a published real-world measurement of a metabolome from Synechococcus sp. PCC 7002 against the MetaCyc metabolite database. MIDAS identified many metabolites missed in the previous study. MIDAS identifications should be considered only as candidate metabolites, which need to be confirmed using standard compounds. To facilitate manual validation, MIDAS provides annotated spectra for MSMs and labels observed mass spectral peaks with predicted fragments. The database searching and manual validation can be performed online at http://midas.omicsbio.org.
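
    A toy version of the matching step described above: predicted fragment m/z values for a candidate metabolite are compared against measured MS/MS peaks within a mass tolerance, and the match is scored by the fraction of peak intensity explained. This is a deliberate simplification; MIDAS additionally weights fragments by the plausibility of their fragmentation pathways.

        def score_msm(predicted_mz, spectrum, tol=0.01):
            """spectrum: list of (mz, intensity) pairs from the measured MS/MS."""
            total = sum(i for _, i in spectrum)
            # Intensity of peaks matched by any predicted fragment within tol.
            explained = sum(i for mz, i in spectrum
                            if any(abs(mz - p) <= tol for p in predicted_mz))
            return explained / total if total else 0.0

        # Invented peaks and fragment predictions for illustration.
        peaks = [(85.028, 120.0), (103.039, 300.0), (145.050, 60.0)]
        print(score_msm([85.029, 103.040], peaks))  # 0.875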

  5. Recovery, Transportation and Acceptance to the Curation Facility of the Hayabusa Re-Entry Capsule

    Science.gov (United States)

    Abe, M.; Fujimura, A.; Yano, H.; Okamoto, C.; Okada, T.; Yada, T.; Ishibashi, Y.; Shirai, K.; Nakamura, T.; Noguchi, T.; hide

    2011-01-01

    The "Hayabusa" re-entry capsule was safely carried into the clean room of Sagamihara Planetary Sample Curation Facility in JAXA on June 18, 2010. After executing computed tomographic (CT) scanning, removal of heat shield, and surface cleaning of sample container, the sample container was enclosed into the clean chamber. After opening the sample container and residual gas sampling in the clean chamber, optical observation, sample recovery, sample separation for initial analysis will be performed. This curation work is continuing for several manths with some selected member of Hayabusa Asteroidal Sample Preliminary Examination Team (HASPET). We report here on the 'Hayabusa' capsule recovery operation, and transportation and acceptance at the curation facility of the Hayabusa re-entry capsule.

  6. Links in a distributed database: Theory and implementation

    International Nuclear Information System (INIS)

    Karonis, N.T.; Kraimer, M.R.

    1991-12-01

    This document addresses the problem of extending database links across Input/Output Controller (IOC) boundaries. It lays a foundation by reviewing the current system and proposing an implementation specification designed to guide all work in this area. The document also describes an implementation that is less ambitious than our formally stated proposal, one that does not extend the reach of all database links across IOC boundaries. Specifically, it introduces an implementation of input and output links and comments on that overall implementation. We include a set of manual pages describing each of the new functions the implementation provides.

  7. Discovering New Global Climate Patterns: Curating a 21-Year High Temporal (Hourly) and Spatial (40km) Resolution Reanalysis Dataset

    Science.gov (United States)

    Hou, C. Y.; Dattore, R.; Peng, G. S.

    2014-12-01

    The National Center for Atmospheric Research's Global Climate Four-Dimensional Data Assimilation (CFDDA) Hourly 40km Reanalysis dataset is a dynamically downscaled dataset with high temporal and spatial resolution. The dataset contains three-dimensional hourly analyses in netCDF format for the global atmospheric state from 1985 to 2005 on a 40km horizontal grid (0.4° grid increment) with 28 vertical levels, providing good representation of local forcing and diurnal variation of processes in the planetary boundary layer. This project aimed to make the dataset publicly available, accessible, and usable in order to provide a unique resource to allow and promote studies of new climate characteristics. When the curation project started, it had been five years since the data files were generated. Also, although the Principal Investigator (PI) had generated a user document at the end of the project in 2009, the document had not been maintained. Furthermore, the PI had moved to a new institution, and the remaining team members were reassigned to other projects. These factors made data curation in the areas of verifying data quality, harvesting metadata descriptions, and documenting provenance information especially challenging. As a result, the project's curation process found that: Data curators' skill and knowledge helped make decisions, such as file format and structure and workflow documentation, that had a significant, positive impact on the ease of the dataset's management and long term preservation. Use of data curation tools, such as the Data Curation Profiles Toolkit's guidelines, revealed important information for promoting the data's usability and enhancing preservation planning. Involving data curators during each stage of the data curation life cycle instead of at the end could improve the curation process' efficiency. Overall, the project showed that proper resources invested in the curation process would give datasets the best chance to fulfill their potential.

  8. AllergenOnline: A peer-reviewed, curated allergen database to assess novel food proteins for potential cross-reactivity.

    Science.gov (United States)

    Goodman, Richard E; Ebisawa, Motohiro; Ferreira, Fatima; Sampson, Hugh A; van Ree, Ronald; Vieths, Stefan; Baumert, Joseph L; Bohle, Barbara; Lalithambika, Sreedevi; Wise, John; Taylor, Steve L

    2016-05-01

    Increasingly regulators are demanding evaluation of potential allergenicity of foods prior to marketing. Primary risks are the transfer of allergens or potentially cross-reactive proteins into new foods. AllergenOnline was developed in 2005 as a peer-reviewed bioinformatics platform to evaluate risks of new dietary proteins in genetically modified organisms (GMO) and novel foods. The process used to identify suspected allergens and evaluate the evidence of allergenicity was refined between 2010 and 2015. Candidate proteins are identified from the NCBI database using keyword searches, the WHO/IUIS nomenclature database and peer reviewed publications. Criteria to classify proteins as allergens are described. Characteristics of the protein, the source and human subjects, test methods and results are evaluated by our expert panel and archived. Food, inhalant, salivary, venom, and contact allergens are included. Users access allergen sequences through links to the NCBI database and relevant references are listed online. Version 16 includes 1956 sequences from 778 taxonomic-protein groups that are accepted with evidence of allergic serum IgE-binding and/or biological activity. AllergenOnline provides a useful peer-reviewed tool for identifying the primary potential risks of allergy for GMOs and novel foods based on criteria described by the Codex Alimentarius Commission (2003). © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. MIPS: a database for genomes and protein sequences.

    Science.gov (United States)

    Mewes, H W; Frishman, D; Güldener, U; Mannhaupt, G; Mayer, K; Mokrejs, M; Morgenstern, B; Münsterkötter, M; Rudd, S; Weil, B

    2002-01-01

    The Munich Information Center for Protein Sequences (MIPS-GSF, Neuherberg, Germany) continues to provide genome-related information in a systematic way. MIPS supports both national and European sequencing and functional analysis projects, develops and maintains automatically generated and manually annotated genome-specific databases, develops systematic classification schemes for the functional annotation of protein sequences, and provides tools for the comprehensive analysis of protein sequences. This report updates the information on the yeast genome (CYGD), the Neurospora crassa genome (MNCDB), the databases for the comprehensive set of genomes (PEDANT genomes), the database of annotated human EST clusters (HIB), the database of complete cDNAs from the DHGP (German Human Genome Project), as well as the project specific databases for the GABI (Genome Analysis in Plants) and HNB (Helmholtz-Netzwerk Bioinformatik) networks. The Arabidopsis thaliana database (MATDB), the database of mitochondrial proteins (MITOP) and our contribution to the PIR International Protein Sequence Database have been described elsewhere [Schoof et al. (2002) Nucleic Acids Res., 30, 91-93; Scharfe et al. (2000) Nucleic Acids Res., 28, 155-158; Barker et al. (2001) Nucleic Acids Res., 29, 29-32]. All databases described, the protein analysis tools provided and the detailed descriptions of our projects can be accessed through the MIPS World Wide Web server (http://mips.gsf.de).

  10. Preventive but Not Curative Efficacy of Celecoxib on Bladder Carcinogenesis in a Rat Model

    Directory of Open Access Journals (Sweden)

    José Sereno

    2010-01-01

    Full Text Available To evaluate the effect of a cyclooxygenase-2 inhibitor, celecoxib (CEL), on bladder cancer inhibition in a rat model, when used as preventive versus curative treatment. The study comprised 52 male Wistar rats, divided into 5 groups during a 20-week protocol: control: vehicle; carcinogen: 0.05% of N-butyl-N-(4-hydroxybutyl)nitrosamine (BBN); CEL: 10 mg/kg/day of the selective COX-2 inhibitor Celebrex; preventive CEL (CEL+BBN-P); and curative CEL (BBN+CEL-C) groups. Although tumor growth was markedly inhibited by the preventive application of CEL, it was even aggravated by the curative treatment. The incidence of gross bladder carcinoma was: control 0/8 (0%), BBN 13/20 (65%), CEL 0/8 (0%), CEL+BBN-P 1/8 (12.5%), and BBN+CEL-C 6/8 (75%). The number and volume of carcinomas were significantly lower in CEL+BBN-P versus BBN, accompanied by an ample reduction in hyperplasia, dysplasia, and papillary tumors as well as COX-2 immunostaining. In spite of the reduction of tumor volumes in the curative BBN+CEL-C group, tumor malignancy was augmented. An anti-inflammatory and antioxidant profile was encountered only in the group under preventive treatment. In conclusion, preventive, but not curative, celecoxib treatment promoted a striking inhibitory effect on bladder cancer development, reinforcing the potential role of chemopreventive strategies based on cyclooxygenase-2 inhibition.

  11. A computer database system to calculate staff radiation doses and maintain records

    International Nuclear Information System (INIS)

    Clewer, P.

    1985-01-01

    A database has been produced to record the personal dose records of all employees monitored for radiation exposure in the Wessex Health Region. Currently there are more than 2000 personnel in 115 departments but the capacity of the database allows for expansion. The computer is interfaced to a densitometer for film badge reading. The hardware used by the database, which is based on a popular microcomputer, is described, as are the various programs that make up the software. The advantages over the manual card index system that it replaces are discussed. (author)
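
    The record-keeping core of such a system can be pictured as a ledger of film-badge readings from which cumulative annual doses are derived. Field names and dose values below are invented; the original system also encrypted its records and read badges through the densitometer interface.

        from collections import defaultdict

        # Invented film-badge readings: (employee_id, year, dose in mSv).
        readings = [
            ("E001", 1985, 0.12), ("E001", 1985, 0.08), ("E002", 1985, 0.30),
        ]

        # Accumulate cumulative annual dose per employee.
        annual = defaultdict(float)
        for emp, year, dose in readings:
            annual[(emp, year)] += dose

        for (emp, year), total in sorted(annual.items()):
            print(f"{emp} {year}: {total:.2f} mSv")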

  12. IAEA international database on irradiated nuclear graphite properties

    International Nuclear Information System (INIS)

    Burchell, T.D.; Clark, R.E.H.; Stephens, J.A.; Eto, M.; Haag, G.; Hacker, P.; Neighbour, G.B.; Janev, R.K.; Wickham, A.J.

    2000-02-01

    This report describes an IAEA database containing data on the properties of irradiated nuclear graphites. Development and implementation of the graphite database followed initial discussions at an IAEA Specialists' Meeting held in September 1995. The design of the database is based upon developments at the University of Bath (United Kingdom), work which the UK Health and Safety Executive initially supported. The database content and data management policies were determined during two IAEA Consultants' Meetings of nuclear reactor graphite specialists held in 1998 and 1999. The graphite data are relevant to the construction and safety case developments required for new and existing HTR nuclear power plants, and to the development of safety cases for continued operation of existing plants. The database design provides a flexible structure for data archiving and retrieval and employs Microsoft Access 97. An instruction manual is provided within this document for new users, including installation instructions for the database on personal computers running Windows 95/NT 4.0 or higher versions. The data management policies and associated responsibilities are contained in the database Working Arrangement which is included as an Appendix to this report. (author)

  13. The development of technical database of advanced spent fuel management process

    Energy Technology Data Exchange (ETDEWEB)

    Ro, Seung Gy; Byeon, Kee Hoh; Song, Dae Yong; Park, Seong Won; Shin, Young Jun

    1999-03-01

    The purpose of this study is to develop a technical database system to provide useful information to researchers who study the back end of the nuclear fuel cycle. A technical database of the advanced spent fuel management process was developed as a prototype system in 1997. In 1998, this database system was improved into a multi-user system, and a special database composed of thermochemical formation data and reaction data was appended. In this report, the detailed specification of the system design is described and the operating methods are illustrated as a user's manual. This report is also a useful reference for expanding the current system or interfacing it with other systems. (Author). 10 refs., 18 tabs., 46 figs.

  14. The development of technical database of advanced spent fuel management process

    International Nuclear Information System (INIS)

    Ro, Seung Gy; Byeon, Kee Hoh; Song, Dae Yong; Park, Seong Won; Shin, Young Jun

    1999-03-01

    The purpose of this study is to develop a technical database system to provide useful information to researchers who study the back end of the nuclear fuel cycle. A technical database of the advanced spent fuel management process was developed as a prototype system in 1997. In 1998, this database system was improved into a multi-user system, and a special database composed of thermochemical formation data and reaction data was appended. In this report, the detailed specification of the system design is described and the operating methods are illustrated as a user's manual. This report is also a useful reference for expanding the current system or interfacing it with other systems. (Author). 10 refs., 18 tabs., 46 figs.

  15. Surveillance Patterns After Curative-Intent Colorectal Cancer Surgery in Ontario

    Directory of Open Access Journals (Sweden)

    Jensen Tan

    2014-01-01

    Full Text Available BACKGROUND: Postoperative surveillance following curative-intent resection of colorectal cancer (CRC) is variably performed due to existing guideline differences and to the limited data supporting different strategies.

  16. A Practice and Value Proposal for Doctoral Dissertation Data Curation

    Directory of Open Access Journals (Sweden)

    W. Aaron Collie

    2011-10-01

    Full Text Available The preparation and publication of dissertations can be viewed as a subsystem of scholarly communication, and the treatment of data that support doctoral research can be mapped in a very controlled manner to the data curation lifecycle. Dissertation datasets represent "low-hanging fruit" for universities who are developing institutional data collections. The current workflow for processing electronic theses and dissertations (ETDs) at a typical American university is presented, and a new practice is proposed that includes datasets in the process of formulating, awarding, and disseminating dissertations in a way that enables them to be linked and curated together. The value proposition and new roles for the university and its student-authors, faculty, graduate programs and librarians are explored.

  17. User’s Manual for the Simulation of Energy Consumption and Emissions from Rail Traffic Software Package

    DEFF Research Database (Denmark)

    Cordiero, Tiago M.; Lindgreen, Erik Bjørn Grønning; Sorenson, Spencer C

    2005-01-01

    The ARTEMIS rail emissions model was implemented in a Microsoft Excel software package that includes data from the GISCO database on railway traffic. This report is the user’s manual for the aforementioned software; it includes information on how to run the program and an overview of how the package is built up of Excel macros (Visual Basic) and database sheets included in one Excel file.

  18. Hmrbase: a database of hormones and their receptors

    Science.gov (United States)

    Rashid, Mamoon; Singla, Deepak; Sharma, Arun; Kumar, Manish; Raghava, Gajendra PS

    2009-01-01

    Background Hormones are signaling molecules that play vital roles in various life processes, like growth and differentiation, physiology, and reproduction. These molecules are mostly secreted by endocrine glands and transported to target organs through the bloodstream. Deficient, or excessive, levels of hormones are associated with several diseases such as cancer, osteoporosis, diabetes etc. Thus, it is important to collect and compile information about hormones and their receptors. Description This manuscript describes a database called Hmrbase which has been developed for managing information about hormones and their receptors. It is a highly curated database for which information has been collected from the literature and the public databases. The current version of Hmrbase contains comprehensive information about ~2000 hormones, e.g., about their function, source organism, receptors, mature sequences, structures etc. Hmrbase also contains information about ~3000 hormone receptors, in terms of amino acid sequences, subcellular localizations, ligands, and post-translational modifications etc. One of the major features of this database is that it provides data about ~4100 hormone-receptor pairs. A number of online tools have been integrated into the database to provide facilities like keyword search, structure-based search, mapping of a given peptide(s) on the hormone/receptor sequence, and sequence similarity search. This database also provides a number of external links to other resources/databases in order to help in the retrieval of further related information. Conclusion Owing to the high impact of endocrine research in the biomedical sciences, Hmrbase could become a leading data portal for researchers. The salient features of Hmrbase are hormone-receptor pair-related information, mapping of peptide stretches on the protein sequences of hormones and receptors, Pfam domain annotations, categorical browsing options, and online data submission.
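
    The peptide-mapping facility mentioned above can be pictured as locating every occurrence of a query peptide within a hormone or receptor sequence; the sequences here are invented fragments, not Hmrbase entries.

        def map_peptide(peptide, sequence):
            """Return 0-based start positions of peptide within sequence."""
            hits, start = [], sequence.find(peptide)
            while start != -1:
                hits.append(start)
                start = sequence.find(peptide, start + 1)
            return hits

        # Invented receptor fragment containing a repeated motif.
        receptor = "MKTLLLTLVVVTIVCLDLGYTGLSSGSAHHHGLSSGSA"
        print(map_peptide("GLSSGSA", receptor))  # [21, 31]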

  19. Distribution and utilization of curative primary healthcare services in Lahej, Yemen.

    Science.gov (United States)

    Bawazir, A A; Bin Hawail, T S; Al-Sakkaf, K A Z; Basaleem, H O; Muhraz, A F; Al-Shehri, A M

    2013-09-01

    No evidence-based data exist on the availability, accessibility and utilization of healthcare services in Lahej Governorate, Yemen. The aim of this study was to assess the distribution and utilization of curative services in primary healthcare units and centres in Lahej. Cross-sectional study (clustering sample). This study was conducted in three of the 15 districts in Lahej between December 2009 and August 2010. Household members were interviewed using a questionnaire to determine sociodemographic characteristics and types of healthcare services available in the area. The distribution of health centres, health units and hospitals did not match the size of the populations or areas of the districts included in this study. Geographical accessibility was the main obstacle to utilization. Factors associated with the utilization of curative services were significantly related to the time required to reach the nearest facility, seeking curative services during illness and awareness of the availability of health facilities (P < 0.01). There is an urgent need to look critically and scientifically at the distribution of healthcare services in the region in order to ensure accessibility and quality of services. Copyright © 2013 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  20. Peace Corps Aquaculture Training Manual. Training Manual T0057.

    Science.gov (United States)

    Peace Corps, Washington, DC. Information Collection and Exchange Div.

    This Peace Corps training manual was developed from two existing manuals to provide a comprehensive training program in fish production for Peace Corps volunteers. The manual encompasses the essential elements of the University of Oklahoma program that has been training volunteers in aquaculture for 25 years. The 22 chapters of the manual are…

  1. Digital Curation as a Core Competency in Current Learning and Literacy: A Higher Education Perspective

    Science.gov (United States)

    Ungerer, Leona M.

    2016-01-01

    Digital curation may be regarded as a core competency in higher education since it contributes to establishing a sense of metaliteracy (an essential requirement for optimally functioning in a modern media environment) among students. Digital curation is gradually finding its way into higher education curricula aimed at fostering social media…

  2. IntPath--an integrated pathway gene relationship database for model organisms and important pathogens.

    Science.gov (United States)

    Zhou, Hufeng; Jin, Jingjing; Zhang, Haojun; Yi, Bo; Wozniak, Michal; Wong, Limsoon

    2012-01-01

    Pathway data are important for understanding the relationship between genes, proteins and many other molecules in living organisms. Pathway gene relationships are crucial information for guidance, prediction, reference and assessment in biochemistry, computational biology, and medicine. Many well-established databases--e.g., KEGG, WikiPathways, and BioCyc--are dedicated to collecting pathway data for public access. However, the effectiveness of these databases is hindered by issues such as incompatible data formats, inconsistent molecular representations, inconsistent molecular relationship representations, inconsistent referrals to pathway names, and incomprehensive data from different databases. In this paper, we overcome these issues through extraction, normalization and integration of pathway data from several major public databases (KEGG, WikiPathways, BioCyc, etc). We build a database that not only hosts our integrated pathway gene relationship data for public access but also maintains the necessary updates in the long run. This public repository is named IntPath (Integrated Pathway gene relationship database for model organisms and important pathogens). Four organisms--S. cerevisiae, M. tuberculosis H37Rv, H. sapiens and M. musculus--are included in this version (V2.0) of IntPath. IntPath uses the "full unification" approach to ensure no deletion and no introduced noise in this process. Therefore, IntPath contains much richer pathway-gene and pathway-gene pair relationships and a much larger number of non-redundant genes and gene pairs than any of the single-source databases. The gene relationships of each gene (measured by average node degree) per pathway are significantly richer. The gene relationships in each pathway (measured by average number of gene pairs per pathway) are also considerably richer in the integrated pathways. Moderate manual curation is involved to get rid of errors and noise from source data (e.g., the gene ID errors in WikiPathways).
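
    The "full unification" approach can be pictured as a union of pathway-gene relations across sources in which nothing is deleted, after which richness statistics such as non-redundant gene pairs per pathway fall out directly. The source contents below are invented for illustration.

        # Invented pathway-gene relations from two sources.
        kegg = {"Glycolysis": {"HK1", "PFKM", "PKM"}}
        wikipathways = {"Glycolysis": {"HK1", "GPI", "ALDOA"}}

        # Full unification: union everything, delete nothing.
        unified = {}
        for source in (kegg, wikipathways):
            for pathway, genes in source.items():
                unified.setdefault(pathway, set()).update(genes)

        for pathway, genes in unified.items():
            n = len(genes)
            pairs = n * (n - 1) // 2   # non-redundant gene pairs per pathway
            print(pathway, sorted(genes), f"{pairs} gene pairs")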

  3. The latest evidence for possible HIV-1 curative strategies.

    Science.gov (United States)

    Pham, Hanh Thi; Mesplède, Thibault

    2018-01-01

    Human immunodeficiency virus type 1 (HIV-1) infection remains a major health issue worldwide. In developed countries, antiretroviral therapy has extended its reach from treatment of people living with HIV-1 to post-exposure prophylaxis, treatment as prevention, and, more recently, pre-exposure prophylaxis. These healthcare strategies offer the epidemiological tools to curb the epidemic in rich settings and will be concomitantly implemented in developing countries. One of the remaining challenges is to identify an efficacious curative strategy. This review manuscript will focus on some of the current curative strategies aimed at providing a sterilizing or functional cure to HIV-1-positive individuals. These include the following: early treatment initiation in post-treatment controllers as a long-term HIV-1 remission strategy, latency reversal, gene editing with or without stem cell transplantation, and antibodies against either the viral envelope protein or the host integrin α4β7.

  4. Data Preservation and Curation for the Planetary Science Community

    Science.gov (United States)

    Hughes, J. S.; Crichton, D. J.; Joyner, R.; Hardman, S.; Rye, E.

    2013-12-01

    The Planetary Data System (PDS) has just released PDS4 Version 1.0, its next generation data standards for the planetary science archive. These data standards are the result of a multi-year effort to develop an information model based on accepted standards for data preservation, data curation, metadata management, and model development. The resulting information model is subsequently used to drive information system development from the generation of data standards documentation to the configuration of federated registries and search engines. This paper will provide an overview of the development of the PDS4 Information Model and focus on the application of the Open Archival Information System (OAIS) Reference Model - ISO 14721:2003, the Metadata Registry (MDR) Standard - ISO/IEC 11179, and the E-Business XML Standard to help ensure the long-term preservation and curation of planetary science data. Copyright 2013 California Institute of Technology. Government sponsorship acknowledged.

  5. Virtual Collections: An Earth Science Data Curation Service

    Science.gov (United States)

    Bugbee, Kaylin; Ramachandran, Rahul; Maskey, Manil; Gatlin, Patrick

    2016-01-01

    The role of Earth science data centers has traditionally been to maintain central archives that serve openly available Earth observation data. However, in order to ensure data are as useful as possible to a diverse user community, Earth science data centers must move beyond simply serving as an archive to offering innovative data services to user communities. A virtual collection, the end product of a curation activity that searches, selects, and synthesizes diffuse data and information resources around a specific topic or event, is a data curation service that improves the discoverability, accessibility, and usability of Earth science data and also supports the needs of unanticipated users. Virtual collections minimize the amount of time and effort needed to begin research by maximizing certainty of reward and by providing a trustworthy source of data for unanticipated users. This presentation will define a virtual collection in the context of an Earth science data center and will highlight a virtual collection case study created at the Global Hydrology Resource Center data center.

  6. Virtual Collections: An Earth Science Data Curation Service

    Science.gov (United States)

    Bugbee, K.; Ramachandran, R.; Maskey, M.; Gatlin, P. N.

    2016-12-01

    The role of Earth science data centers has traditionally been to maintain central archives that serve openly available Earth observation data. However, in order to ensure data are as useful as possible to a diverse user community, Earth science data centers must move beyond simply serving as an archive to offering innovative data services to user communities. A virtual collection, the end product of a curation activity that searches, selects, and synthesizes diffuse data and information resources around a specific topic or event, is a data curation service that improves the discoverability, accessibility and usability of Earth science data and also supports the needs of unanticipated users. Virtual collections minimize the amount of time and effort needed to begin research by maximizing certainty of reward and by providing a trustworthy source of data for unanticipated users. This presentation will define a virtual collection in the context of an Earth science data center and will highlight a virtual collection case study created at the Global Hydrology Resource Center data center.

  7. Interactive searching of facial image databases

    Science.gov (United States)

    Nicholls, Robert A.; Shepherd, John W.; Shepherd, Jean

    1995-09-01

    A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness' verbal description into corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested. The genetic search method does not require the witness to verbalize a description of the target but merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires manual encoding of images. Research is being undertaken to automate the process; however, it will require an algorithm which can predict human descriptive values. Alternatives to human-derived coding schemes exist using statistical classifications of images. Since databases encoded using statistical classifiers do not have an obvious direct mapping to human-derived descriptors, a search method which does not require the entry of human descriptors is required. A genetic search algorithm is being tested for such a purpose.
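
    The genetic search idea sketched above can be illustrated with a toy example: each face is a descriptor vector, and the witness's similarity judgements play the role of the fitness function. Everything below (random vectors, numeric similarity, parameter values) is a hypothetical stand-in for real encoded images and a real witness, not the FACES implementation.

```python
# Toy sketch of a genetic search over an encoded face database. The witness's
# similarity rating is simulated by distance to a hidden "target" image.
import random

random.seed(1)
N_DESC = 10                                      # descriptor values per image
database = [[random.random() for _ in range(N_DESC)] for _ in range(500)]
target = database[42]                            # the face the witness saw

def fitness(idx):
    """Stand-in for the witness rating: closer vectors rate as more similar."""
    return -sum((a - b) ** 2 for a, b in zip(database[idx], target))

def nearest_image(vec):
    """Map an arbitrary descriptor vector back onto a real database image."""
    return min(range(len(database)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(database[i], vec)))

population = random.sample(range(len(database)), 6)     # faces shown per round
for _ in range(25):
    best, second = sorted(population, key=fitness, reverse=True)[:2]
    crossover = [(a + b) / 2 for a, b in zip(database[best], database[second])]
    mutants = [[v + random.gauss(0, 0.05) for v in crossover] for _ in range(4)]
    population = [best, second] + [nearest_image(m) for m in mutants]

print("converged on image", max(population, key=fitness), "(target was 42)")
```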

  8. PC/FRAM, Version 3.2 User Manual

    International Nuclear Information System (INIS)

    Kelley, T.A.; Sampson, T.E.

    1999-01-01

    This manual describes the use of version 3.2 of the PC/FRAM plutonium isotopic analysis software developed in the Safeguards Science and Technology Group, NE-5, Nonproliferation and International Security Division, Los Alamos National Laboratory. The software analyzes the gamma-ray spectrum from plutonium-bearing items and determines the isotopic distribution of the plutonium, the 241Am content, and the concentration of other isotopes in the item. The software can also determine the isotopic distribution of uranium isotopes in items containing only uranium. The body of this manual describes the generic version of the code. Special facility-specific enhancements, if they apply, will be described in the appendices. The information in this manual applies equally well to version 3.3, which has been licensed to ORTEC. The software can analyze data that is stored in a file on disk. It understands several storage formats, including Canberra's S100 format, ORTEC's CHN and SPC formats, and several ASCII text formats. The software can also control data acquisition using an MCA and then store the results in a file on disk for later analysis, or analyze the spectrum directly after the acquisition. The software currently only supports the control of ORTEC MCBs. Support for Canberra's Genie-2000 Spectroscopy Systems will be added in the future. Support for reading and writing CAM files will also be forthcoming. A versatile parameter file database structure governs all facets of the data analysis. User editing of the parameter sets allows great flexibility in handling data with different isotopic distributions, interfering isotopes, and different acquisition parameters such as energy calibration and detector type. This manual is intended for the system supervisor or the local user who is to be the resident expert. Excerpts from this manual may also be appropriate for the system operator who will routinely use the instrument.

  9. Database automation of accelerator operation

    International Nuclear Information System (INIS)

    Casstevens, B.J.; Ludemann, C.A.

    1982-01-01

    The Oak Ridge Isochronous Cyclotron (ORIC) is a variable energy, multiparticle accelerator that produces beams of energetic heavy ions which are used as probes to study the structure of the atomic nucleus. To accelerate and transmit a particular ion at a specified energy to an experimenter's apparatus, the electrical currents in up to 82 magnetic-field-producing coils must be established to accuracies of 0.1 to 0.001 percent. Mechanical elements must also be positioned by means of motors or pneumatic drives. A mathematical model of this complex system provides a good approximation of the operating parameters required to produce an ion beam; however, manual tuning of the system must be performed to optimize the beam quality. The database system was implemented as an on-line query and retrieval system running at a priority lower than the cyclotron real-time software. It was designed for matching beams recorded in the database with beams specified for experiments. The database is relational and permits searching on ranges of any subset of the eleven beam-categorizing attributes. A beam file selected from the database is transmitted to the cyclotron general control software, which handles the automatic slewing of power supply currents and motor positions to the file values, thereby replicating the desired parameters.
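
    The relational range search described above maps naturally onto SQL. The sketch below is a hypothetical illustration using Python's built-in sqlite3; the table layout, column names, and values are invented, since the paper does not publish ORIC's actual schema.

```python
# Hypothetical sketch of range searching over beam-categorizing attributes.
# Schema and rows are invented; ORIC's real database had eleven attributes.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE beams (
    ion TEXT, charge_state INTEGER, energy_mev REAL, rf_freq_mhz REAL)""")
con.executemany("INSERT INTO beams VALUES (?, ?, ?, ?)", [
    ("C-12", 4, 60.0, 12.1),
    ("O-16", 5, 100.0, 13.4),
    ("Ne-20", 6, 120.0, 14.0),
])

# Match a requested beam by ranges on any subset of attributes; the chosen
# record would then seed the control software's automatic slewing.
rows = con.execute(
    "SELECT * FROM beams WHERE energy_mev BETWEEN ? AND ? AND charge_state >= ?",
    (90.0, 130.0, 5)).fetchall()
print(rows)   # [('O-16', 5, 100.0, 13.4), ('Ne-20', 6, 120.0, 14.0)]
```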

  10. RNA STRAND: The RNA Secondary Structure and Statistical Analysis Database

    Directory of Open Access Journals (Sweden)

    Andronescu Mirela

    2008-08-01

    Background: The ability to access, search and analyse secondary structures of a large set of known RNA molecules is very important for deriving improved RNA energy models, for evaluating computational predictions of RNA secondary structures and for a better understanding of RNA folding. Currently there is no database that can easily provide these capabilities for almost all RNA molecules with known secondary structures. Results: In this paper we describe RNA STRAND – the RNA secondary STRucture and statistical ANalysis Database, a curated database containing known secondary structures of any type and organism. Our new database provides a wide collection of known RNA secondary structures drawn from public databases, searchable and downloadable in a common format. Comprehensive statistical information on the secondary structures in our database is provided using the RNA Secondary Structure Analyser, a new tool we have developed to analyse RNA secondary structures. The information thus obtained is valuable for understanding to what extent and with what probability certain structural motifs can appear. We outline several ways in which the data provided in RNA STRAND can facilitate research on RNA structure, including the improvement of RNA energy models and evaluation of secondary structure prediction programs. In order to keep up-to-date with new RNA secondary structure experiments, we offer the necessary tools to add solved RNA secondary structures to our database and invite researchers to contribute to RNA STRAND. Conclusion: RNA STRAND is a carefully assembled database of trusted RNA secondary structures, with easy on-line tools for searching, analyzing and downloading user selected entries, and is publicly available at http://www.rnasoft.ca/strand.
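
    To make the kind of statistics such an analyser reports concrete, here is a small, generic sketch over dot-bracket strings. It is not RNA STRAND's code or file format; the notation and the counts are standard, but the function itself is a hypothetical illustration.

```python
# Generic sketch: simple secondary-structure statistics from dot-bracket
# notation ('(' and ')' are paired bases, '.' is unpaired).
def structure_stats(dotbracket):
    """Count base pairs and hairpin loops in a dot-bracket string."""
    stack, pairs = [], []
    for i, ch in enumerate(dotbracket):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            pairs.append((stack.pop(), i))
    # A hairpin loop is a pair enclosing only unpaired bases.
    hairpins = sum(1 for i, j in pairs
                   if all(c == "." for c in dotbracket[i + 1:j]))
    return {"length": len(dotbracket), "base_pairs": len(pairs), "hairpins": hairpins}

print(structure_stats("((((...))))..((...))"))
# {'length': 20, 'base_pairs': 6, 'hairpins': 2}
```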

  11. GRAFLAB 2.3 for UNIX - A MATLAB database, plotting, and analysis tool: User's guide

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, W.N.

    1998-03-01

    This report is a user's manual for GRAFLAB, which is a new database, analysis, and plotting package that has been written entirely in the MATLAB programming language. GRAFLAB is currently used for data reduction, analysis, and archival. GRAFLAB was written to replace GRAFAID, which is a FORTRAN database, analysis, and plotting package that runs on VAX/VMS.

  12. Antarctic Meteorite Classification and Petrographic Database

    Science.gov (United States)

    Todd, Nancy S.; Satterwhite, C. E.; Righter, Kevin

    2011-01-01

    The Antarctic Meteorite collection, which comprises over 18,700 meteorites, is one of the largest collections of meteorites in the world. These meteorites have been collected since the late 1970s as part of a three-agency agreement between NASA, the National Science Foundation, and the Smithsonian Institution [1]. Samples collected each season are analyzed at NASA's Meteorite Lab and the Smithsonian Institution, and results are published twice a year in the Antarctic Meteorite Newsletter, which has been in publication since 1978. Each newsletter lists the samples collected and processed and provides more in-depth details on selected samples of importance to the scientific community. Data about these meteorites are also published on the NASA Curation website [2] and made available through the Meteorite Classification Database, allowing scientists to search by a variety of parameters.

  13. Dreamweaver CS5.5: The Missing Manual

    CERN Document Server

    McFarland, David

    2011-01-01

    Dreamweaver is the tool most widely used for designing and managing professional-looking websites, but it's a complex program. That's where Dreamweaver CS5.5: The Missing Manual comes in. With its jargon-free explanations, 13 hands-on tutorials, and savvy advice from Dreamweaver expert Dave McFarland, you'll master this versatile program with ease. Get A to Z guidance. Go from building your first web page to creating interactive, database-driven sites. Build skills as you learn. Apply your knowledge through tutorials and downloadable practice files. Create a state-of-the-art website. Use powerf

  14. STAIRS User's Manual

    International Nuclear Information System (INIS)

    Gadjokov, V.; Dragulev, V.; Gove, N.; Schmid, H.

    1976-10-01

    The STorage And Information Retrieval System (STAIRS) of IBM is described from the user's point of view. The description is based on the experimental use of STAIRS at the IAEA computer, with INIS and AGRIS data bases, from June 1975 to May 1976. Special attention is paid to what may be termed the hierarchical approach to retrieval in STAIRS. Such an approach allows for better use of the intrinsic data-base structure and, hence, contributes to higher recall and/or relevance ratios in retrieval. The functions carried out by STAIRS are explained and the communication language between the user and the system outlined. Details are given of the specific structure of the INIS and AGRIS data bases for STAIRS. The manual should enable an inexperienced user to start his first on-line dialogues by means of a CRT or teletype terminal. (author)

  15. STAIRS User's Manual

    Energy Technology Data Exchange (ETDEWEB)

    Gadjokov, V; Dragulev, V; Gove, N; Schmid, H

    1976-10-15

    The STorage And Information Retrieval System (STAIRS) of IBM is described from the user's point of view. The description is based on the experimental use of STAIRS at the IAEA computer, with INIS and AGRIS data bases, from June 1975 to May 1976. Special attention is paid to what may be termed the hierarchical approach to retrieval in STAIRS. Such an approach allows for better use of the intrinsic data-base structure and, hence, contributes to higher recall and/or relevance ratios in retrieval. The functions carried out by STAIRS are explained and the communication language between the user and the system outlined. Details are given of the specific structure of the INIS and AGRIS data bases for STAIRS. The manual should enable an inexperienced user to start his first on-line dialogues by means of a CRT or teletype terminal. (author)

  16. MetaRNA-Seq: An Interactive Tool to Browse and Annotate Metadata from RNA-Seq Studies

    Directory of Open Access Journals (Sweden)

    Pankaj Kumar

    2015-01-01

    The number of RNA-Seq studies has grown in recent years. The design of RNA-Seq studies varies from very simple (e.g., two-condition case-control) to very complicated (e.g., time series involving multiple samples at each time point with separate drug treatments). Most of these publicly available RNA-Seq studies are deposited in NCBI databases, but their metadata are scattered throughout four different databases: Sequence Read Archive (SRA), BioSample, BioProject, and Gene Expression Omnibus (GEO). Although the NCBI web interface is able to provide all of the metadata information, it often requires significant effort to retrieve study- or project-level information by traversing through multiple hyperlinks and going to another page. Moreover, project- and study-level metadata lack manual or automatic curation by categories, such as disease type, time series, case-control, or replicate type, which are vital to comprehending any RNA-Seq study. Here we describe “MetaRNA-Seq,” a new tool for interactively browsing, searching, and annotating RNA-Seq metadata with the capability of semiautomatic curation at the study level.
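
    One way to picture the semiautomatic, study-level curation mentioned above is keyword-rule tagging followed by curator review. The sketch below is a hypothetical illustration; the rules, example summaries, and accession IDs are invented and are not MetaRNA-Seq's actual logic.

```python
# Toy sketch of semiautomatic study-level categorization: free-text study
# summaries get rule-based tag suggestions that a curator then confirms.
RULES = {
    "time series": ("time course", "time series", "timepoint"),
    "case-control": ("case-control", "versus healthy", "vs. control"),
    "replicate": ("biological replicate", "technical replicate"),
}

studies = [
    {"id": "SRP000001", "summary": "RNA-Seq time course of drug-treated cells"},
    {"id": "SRP000002", "summary": "Tumor samples versus healthy controls"},
]

for study in studies:
    text = study["summary"].lower()
    tags = [cat for cat, keys in RULES.items() if any(k in text for k in keys)]
    study["suggested_tags"] = tags      # a curator would accept or edit these
    print(study["id"], tags)
```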

  17. Determination of protective properties of Bardejovske Kuple spa curative waters by rotational viscometry and ABTS assay

    Directory of Open Access Journals (Sweden)

    TOPOLSKA Dominika

    2014-02-01

    Mineral waters from the Bardejovske Kupele spa are natural, strongly mineralized waters with healing effects. They are classified as hydrocarbonic, containing chloride and sodium, carbonic, cold, and hypotonic, with a relatively high boric acid content. Potential anti-oxidative effects of curative waters from Bardejovske Kupele were investigated against hyaluronan (HA) degradation. High-molar-mass HA was exposed to the action of ascorbate and cupric ions, which initiate free-radical chain degradation. Time-dependent changes of the dynamic viscosity (η) of the HA solutions were monitored by rotational viscometry. The radical scavenging capacity of the curative waters was determined by the ABTS assay. Despite a significantly high content of transition metal ions, especially iron, remarkable protective effects of the two curative spa waters were found, namely Alzbeta and Klara. Even though Alzbeta's iron content was 3.5-fold higher than Klara's, Alzbeta was shown to have better protective properties against HA degradation compared to Klara. Bolus addition of ferric ions to the reaction system instead of the natural iron-containing curative water caused significant HA degradation. The ABTS decolorization assay revealed that the curative spa waters are poorly effective scavengers of the ABTS·+ cation radical.

  18. Irinotecan and Oxaliplatin Might Provide Equal Benefit as Adjuvant Chemotherapy for Patients with Resectable Synchronous Colon Cancer and Liver-confined Metastases: A Nationwide Database Study.

    Science.gov (United States)

    Liang, Yi-Hsin; Shao, Yu-Yun; Chen, Ho-Min; Cheng, Ann-Lii; Lai, Mei-Shu; Yeh, Kun-Huei

    2017-12-01

    Although irinotecan and oxaliplatin are both standard treatments for advanced colon cancer, it remains unknown whether either is effective for patients with resectable synchronous colon cancer and liver-confined metastasis (SCCLM) after curative surgery. A population-based cohort of patients diagnosed with de novo SCCLM between 2004 and 2009 was established by searching the database of the Taiwan Cancer Registry and the National Health Insurance Research Database of Taiwan. Patients who underwent curative surgery as their first therapy followed by chemotherapy doublets were classified into the irinotecan group or oxaliplatin group accordingly. Patients who received radiotherapy or did not receive chemotherapy doublets were excluded. We included 6,533 patients with de novo stage IV colon cancer. Three hundred and nine of them received chemotherapy doublets after surgery; 77 patients received irinotecan and 232 patients received oxaliplatin as adjuvant chemotherapy. The patients in both groups exhibited similar overall survival (median: not reached vs. 40.8 months, p=0.151) and time to the next line of treatment (median: 16.5 vs. 14.3 months, p=0.349) in both univariate and multivariate analyses. Additionally, patients with resectable SCCLM had significantly shorter median overall survival than patients with stage III colon cancer who underwent curative surgery and subsequent adjuvant chemotherapy, but longer median overall survival than patients with de novo stage IV colon cancer who underwent surgery only at the primary site followed by standard systemic chemotherapy (p<0.001). Irinotecan and oxaliplatin exhibited similar efficacy in patients who underwent curative surgery for resectable SCCLM. Copyright © 2017, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.

  19. A7DB: a relational database for mutational, physiological and pharmacological data related to the α7 nicotinic acetylcholine receptor

    Directory of Open Access Journals (Sweden)

    Sansom Mark SP

    2005-01-01

    Background: Nicotinic acetylcholine receptors (nAChRs) are pentameric proteins that are important drug targets for a variety of diseases including Alzheimer's, schizophrenia and various forms of epilepsy. One of the most intensively studied nAChR subunits in recent years has been α7. This subunit can form functional homomeric pentamers ((α7)5), which can make interpretation of physiological and structural data much simpler. The growing amount of structural, pharmacological and physiological data for these receptors indicates the need for a dedicated and accurate database to provide a means to access this information in a coherent manner. Description: A7DB (http://www.lgics.org/a7db/) is a new relational database of manually curated experimental physiological data associated with the α7 nAChR. It aims to store as much as possible of the pharmacological, physiological and structural data pertaining to the α7 nAChR. The data is accessed via a web interface that allows a user to search the data in multiple ways: (1) a simple text query, (2) an incremental query builder, (3) an interactive query builder and (4) a file-based uploadable query. It currently holds more than 460 separately reported experiments on over 85 mutations. Conclusions: A7DB will be a useful tool to molecular biologists and bioinformaticians not only working on the α7 receptor family of proteins but also in the more general context of nicotinic receptor modelling. Furthermore it sets a precedent for expansion with the inclusion of all nicotinic receptor families and eventually all cys-loop receptor families.

  20. The curative effect analysis of 131I-therapy on patients with Graves' disease

    International Nuclear Information System (INIS)

    Cui Qin; Lu Shujun; Lu Tianhe

    2002-01-01

    To investigate the curative effect of 131I therapy on Graves' disease, the authors analysed the outcomes of patients who had received 131I therapy (n = 674). The results showed that the rates of full recovery, improvement, persistent Graves' disease and no response were 80.11%, 7.28%, 11.87% and 0.74%, respectively. Therefore, 131I therapy for Graves' disease is convenient: it has few side effects, low cost and a good curative effect, and it is one of the best therapeutic methods to treat hyperthyroidism.

  1. EURO-CARES as Roadmap for a European Sample Curation Facility

    Science.gov (United States)

    Brucato, J. R.; Russell, S.; Smith, C.; Hutzler, A.; Meneghin, A.; Aléon, J.; Bennett, A.; Berthoud, L.; Bridges, J.; Debaille, V.; Ferrière, L.; Folco, L.; Foucher, F.; Franchi, I.; Gounelle, M.; Grady, M.; Leuko, S.; Longobardo, A.; Palomba, E.; Pottage, T.; Rettberg, P.; Vrublevskis, J.; Westall, F.; Zipfel, J.; Euro-Cares Team

    2018-04-01

    EURO-CARES is a three-year multinational project funded under the European Commission Horizon2020 research program to develop a roadmap for a European Extraterrestrial Sample Curation Facility for samples returned from solar system missions.

  2. Perioperative and long-term outcome of intrahepatic cholangiocarcinoma involving the hepatic hilus after curative-intent resection: comparison with peripheral intrahepatic cholangiocarcinoma and hilar cholangiocarcinoma.

    Science.gov (United States)

    Zhang, Xu-Feng; Bagante, Fabio; Chen, Qinyu; Beal, Eliza W; Lv, Yi; Weiss, Matthew; Popescu, Irinel; Marques, Hugo P; Aldrighetti, Luca; Maithel, Shishir K; Pulitano, Carlo; Bauer, Todd W; Shen, Feng; Poultsides, George A; Soubrane, Olivier; Martel, Guillaume; Koerkamp, B Groot; Guglielmi, Alfredo; Itaru, Endo; Pawlik, Timothy M

    2018-05-01

    Intrahepatic cholangiocarcinoma with hepatic hilus involvement has been either classified as intrahepatic cholangiocarcinoma or hilar cholangiocarcinoma. The present study aimed to investigate the clinicopathologic characteristics and short- and long-term outcomes after curative resection for hilar type intrahepatic cholangiocarcinoma in comparison with peripheral intrahepatic cholangiocarcinoma and hilar cholangiocarcinoma. A total of 912 patients with mass-forming peripheral intrahepatic cholangiocarcinoma, 101 patients with hilar type intrahepatic cholangiocarcinoma, and 159 patients with hilar cholangiocarcinoma undergoing curative resection from 2000 to 2015 were included from two multi-institutional databases. Clinicopathologic characteristics and short- and long-term outcomes were compared among the 3 groups. Patients with hilar type intrahepatic cholangiocarcinoma had more aggressive tumor characteristics (eg, higher frequency of vascular invasion and lymph nodes metastasis) and experienced more extensive resections in comparison with either peripheral intrahepatic cholangiocarcinoma or hilar cholangiocarcinoma patients. The odds of lymphadenectomy and R0 resection rate among patients with hilar type intrahepatic cholangiocarcinoma were comparable with hilar cholangiocarcinoma patients, but higher than peripheral intrahepatic cholangiocarcinoma patients (lymphadenectomy incidence, 85.1% vs 42.5%, P hilar type intrahepatic cholangiocarcinoma experienced a higher rate of technical-related complications compared with peripheral intrahepatic cholangiocarcinoma patients. Of note, hilar type intrahepatic cholangiocarcinoma was associated with worse disease-specific survival and recurrence-free survival after curative resection versus peripheral intrahepatic cholangiocarcinoma (median disease-specific survival, 26.0 vs 54.0 months, P hilar cholangiocarcinoma (median disease-specific survival, 26.0 vs 49.0 months, P = .003; median recurrence-free survival

  3. MIPS: analysis and annotation of proteins from whole genomes in 2005.

    Science.gov (United States)

    Mewes, H W; Frishman, D; Mayer, K F X; Münsterkötter, M; Noubibou, O; Pagel, P; Rattei, T; Oesterheld, M; Ruepp, A; Stümpflen, V

    2006-01-01

    The Munich Information Center for Protein Sequences (MIPS at the GSF), Neuherberg, Germany, provides resources related to genome information. Manually curated databases for several reference organisms are maintained. Several of these databases are described elsewhere in this and other recent NAR database issues. In a complementary effort, a comprehensive set of >400 genomes automatically annotated with the PEDANT system are maintained. The main goal of our current work on creating and maintaining genome databases is to extend gene-centered information to information on interactions within a generic comprehensive framework. We have concentrated our efforts along three lines: (i) the development of suitable comprehensive data structures and database technology, communication and query tools to include a wide range of different types of information, enabling the representation of complex information such as functional modules or networks (the Genome Research Environment System); (ii) the development of databases covering computable information, such as the basic evolutionary relations among all genes, namely SIMAP, the sequence similarity matrix, and the CABiNet network analysis framework; and (iii) the compilation and manual annotation of information related to interactions, such as protein-protein interactions or other types of relations (e.g. MPCDB, MPPI, CYGD). All databases described and the detailed descriptions of our projects can be accessed through the MIPS WWW server (http://mips.gsf.de).

  4. Curative care through administration of plant-derived medicines in ...

    African Journals Online (AJOL)

    Curative care through administration of plant-derived medicines in Sekhukhune district municipality of Limpopo province, South Africa. ... Sources of medicine were mostly herbs followed by shrubs, trees, creepers and aloe collected from the communal land. The leaves, bark, roots and bulbs were prepared into decoctions ...

  5. DBCG hypo trial validation of radiotherapy parameters from a national data bank versus manual reporting

    DEFF Research Database (Denmark)

    Brink, Carsten; Lorenzen, Ebbe L; Krogh, Simon Long

    2018-01-01

    …of dose information, since the two patients had been treated with an electron boost based on a manual calculation; thus the data was not exported to the data bank, and this was not detected prior to comparison with the manual data. For a few database fields in the manual data, an ambiguity of the parameter definition of the specific field is seen in the data. This was not the case for the data bank, which extracts all data consistently. CONCLUSIONS: In terms of data quality the data bank is superior to manually reported values. However, there is a need to allocate resources for checking the validity of the available data as well as ensuring that all relevant data is present. The data bank contains more detailed information, and thus facilitates research related to the actual dose distribution in the patients.

  6. The Development of a Web Based Database Applications of Procurement, Inventory, and Sales at PT. Interjaya Surya Megah

    OpenAIRE

    Huda, Choirul; Hudyanto, Chendra; Sisillia, Sisillia; Persada, Revin Kencana

    2011-01-01

    The objective of this research is to develop a web-based database application for procurement, inventory and sales at PT. Interjaya Surya Megah. The current system at PT. Interjaya Surya Megah is running manually, so the company has difficulty in carrying out its activities. The methodology used in this research includes interviews, observation, literature review, conceptual database design, logical database design and physical database design. The results are the establishment ...

  7. Manual therapy in adults with tension-type headache: A systematic review.

    Science.gov (United States)

    Cumplido-Trasmonte, C; Fernández-González, P; Alguacil-Diego, I M; Molina-Rueda, F

    2018-03-07

    Tension-type headache is the most common primary headache, with a high prevalence and a considerable socioeconomic impact. Manual physical therapy techniques are widely used in the clinical field to treat the symptoms associated with tension-type headache. This systematic review aims to determine the effectiveness of manual and non-invasive therapies in the treatment of patients with tension-type headache. We conducted a systematic review of randomised controlled trials in the following databases: Brain, PubMed, Web of Science, PEDro, Scopus, CINAHL, and Science Direct. Ten randomised controlled trials were included for analysis. According to these studies, manual therapy improves symptoms, increasing patients' well-being and improving the outcome measures analysed. Manual therapy has positive effects on pain intensity, pain frequency, disability, overall impact, quality of life, and craniocervical range of motion in adults with tension-type headache. None of the techniques was found to be superior to the others; combining different techniques seems to be the most effective approach. Copyright © 2018 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.

  8. The reactive metabolite target protein database (TPDB)--a web-accessible resource.

    Science.gov (United States)

    Hanzlik, Robert P; Koen, Yakov M; Theertham, Bhargav; Dong, Yinghua; Fang, Jianwen

    2007-03-16

    The toxic effects of many simple organic compounds stem from their biotransformation to chemically reactive metabolites which bind covalently to cellular proteins. To understand the mechanisms of cytotoxic responses it may be important to know which proteins become adducted and whether some may be common targets of multiple toxins. The literature of this field is widely scattered but expanding rapidly, suggesting the need for a comprehensive, searchable database of reactive metabolite target proteins. The Reactive Metabolite Target Protein Database (TPDB) is a comprehensive, curated, searchable, documented compilation of publicly available information on the protein targets of reactive metabolites of 18 well-studied chemicals and drugs of known toxicity. TPDB software enables (i) string searches for author names and protein names/synonyms, (ii) more complex searches by selecting chemical compound, animal species, target tissue and protein names/synonyms from pull-down menus, and (iii) commonality searches over multiple chemicals. Tabulated search results provide information, references and links to other databases. The TPDB is a unique on-line compilation of information on the covalent modification of cellular proteins by reactive metabolites of chemicals and drugs. Its comprehensiveness and searchability should facilitate the elucidation of mechanisms of reactive metabolite toxicity. The database is freely available at http://tpdb.medchem.ku.edu/tpdb.html.
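
    A commonality search of the kind named above reduces to a grouped count in SQL. The following sketch uses Python's sqlite3 with an invented two-column table and deliberately generic placeholder rows; TPDB's real schema is not published with the abstract, so treat every name here as hypothetical.

```python
# Hypothetical sketch of a "commonality search": which target proteins are
# adducted by reactive metabolites of more than one chemical.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE adducts (chemical TEXT, protein TEXT)")
con.executemany("INSERT INTO adducts VALUES (?, ?)", [
    ("chemical_A", "protein_1"),
    ("chemical_A", "protein_2"),
    ("chemical_B", "protein_1"),
])

common = con.execute("""
    SELECT protein, COUNT(DISTINCT chemical) AS n_chemicals
    FROM adducts
    GROUP BY protein
    HAVING COUNT(DISTINCT chemical) > 1""").fetchall()
print(common)   # [('protein_1', 2)] -- targets shared across chemicals
```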

  9. Curating for engagement: Identifying the nature and impact of organizational marketing strategies on Pinterest

    OpenAIRE

    Saxton, Gregory D.; Ghosh, Amanda

    2016-01-01

    In an increasingly overloaded information environment sparked by the explosion of digital media, the ability to curate content has taken on greater importance. This study begins with the supposition that businesses that are able to adopt a content curation role and help consumers sort through the daily influx of information may be more successful in attracting, engaging, and retaining customers while fostering brand awareness and word of mouth. Accordingly, this study investigates organizatio...

  10. The value of imaging examinations in diagnosis and curative effect evaluation of breast cancer

    International Nuclear Information System (INIS)

    Xia Xiaotian; Zhang Yongxue

    2010-01-01

    Breast cancer is a common, life-threatening disease with a serious impact on women's physical and mental health. Imaging examinations are of great significance in diagnosing breast cancer and evaluating the curative effect of its treatment. This article introduces and reviews the value of imaging examinations (ultrasonography, mammography, breast CT, breast MRI, breast 99mTc-MIBI imaging, PET, PET-CT, etc.) in the diagnosis and curative effect evaluation of breast cancer. (authors)

  11. Visualizing information across multidimensional post-genomic structured and textual databases.

    Science.gov (United States)

    Tao, Ying; Friedman, Carol; Lussier, Yves A

    2005-04-15

    Visualizing relationships among biological information to facilitate understanding is crucial to biological research during the post-genomic era. Although different systems have been developed to view gene-phenotype relationships for specific databases, very few have been designed specifically as a general flexible tool for visualizing multidimensional genotypic and phenotypic information together. Our goal is to develop a method for visualizing multidimensional genotypic and phenotypic information and a model that unifies different biological databases in order to present the integrated knowledge using a uniform interface. We developed a novel, flexible and generalizable visualization tool, called PhenoGenesviewer (PGviewer), which in this paper was used to display gene-phenotype relationships from a human-curated database (OMIM) and from an automatic method using a Natural Language Processing tool called BioMedLEE. Data obtained from multiple databases were first integrated into a uniform structure and then organized by PGviewer. PGviewer provides a flexible query interface that allows dynamic selection and ordering of any desired dimension in the databases. Based on users' queries, results can be visualized using hierarchical expandable trees that present views specified by users according to their research interests. We believe that this method, which allows users to dynamically organize and visualize multiple dimensions, is a potentially powerful and promising tool that should substantially facilitate biological research. PhenoGenesviewer, together with its support and tutorial, is available at http://www.dbmi.columbia.edu/pgviewer/. Contact: Lussier@dbmi.columbia.edu.
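
    The dynamic dimension ordering described above amounts to folding flat records into a nested tree along a user-chosen key order. Here is a small hypothetical sketch of that idea; the field names and example records are invented, not PGviewer's data model.

```python
# Sketch: nest flat gene-phenotype records by whatever dimension order the
# user selects, yielding the hierarchy behind an expandable tree view.
from collections import defaultdict

records = [
    {"gene": "CFTR", "phenotype": "cystic fibrosis", "organism": "human"},
    {"gene": "CFTR", "phenotype": "pancreatitis", "organism": "human"},
    {"gene": "Cftr", "phenotype": "intestinal obstruction", "organism": "mouse"},
]

def build_tree(rows, dimensions):
    """Group rows recursively by the chosen dimension order."""
    if not dimensions:
        return rows                       # leaves: the underlying records
    head, rest = dimensions[0], dimensions[1:]
    groups = defaultdict(list)
    for row in rows:
        groups[row[head]].append(row)
    return {value: build_tree(group, rest) for value, group in groups.items()}

# Two different views of the same data, chosen at query time:
print(build_tree(records, ["organism", "gene"]))
print(build_tree(records, ["phenotype", "organism"]))
```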

  12. Curating and nudging in virtual CLIL environments

    Directory of Open Access Journals (Sweden)

    Helle Lykke Nielsen

    2014-03-01

    Foreign language teachers can benefit substantially from the notions of curation and nudging when scaffolding CLIL activities on the internet. This article shows how these principles can be integrated into CLILstore, a free multimedia-rich learning tool with seamless access to online dictionaries, and presents feedback from first and second year university students of Arabic as a second language to inform foreign language teachers about students’ needs and preferences in virtual learning environments.

  13. A new version of the RDP (Ribosomal Database Project)

    Science.gov (United States)

    Maidak, B. L.; Cole, J. R.; Parker, C. T. Jr; Garrity, G. M.; Larsen, N.; Li, B.; Lilburn, T. G.; McCaughey, M. J.; Olsen, G. J.; Overbeek, R.; hide

    1999-01-01

    The Ribosomal Database Project (RDP-II), previously described by Maidak et al. [Nucleic Acids Res. (1997), 25, 109-111], is now hosted by the Center for Microbial Ecology at Michigan State University. RDP-II is a curated database that offers ribosomal RNA (rRNA) nucleotide sequence data in aligned and unaligned forms, analysis services, and associated computer programs. During the past two years, data alignments have been updated and now include >9700 small subunit rRNA sequences. The recent development of an ObjectStore database will provide more rapid updating of data, better data accuracy and increased user access. RDP-II includes phylogenetically ordered alignments of rRNA sequences, derived phylogenetic trees, rRNA secondary structure diagrams, and various software programs for handling, analyzing and displaying alignments and trees. The data are available via anonymous ftp (ftp.cme.msu.edu) and WWW (http://www.cme.msu.edu/RDP). The WWW server provides ribosomal probe checking, approximate phylogenetic placement of user-submitted sequences, screening for possible chimeric rRNA sequences, automated alignment, and a suggested placement of an unknown sequence on an existing phylogenetic tree. Additional utilities also exist at RDP-II, including distance matrix, T-RFLP, and a Java-based viewer of the phylogenetic trees that can be used to create subtrees.

  14. Technology Development and Advanced Planning for Curation of Returned Mars Samples

    Science.gov (United States)

    Lindstrom, David J.; Allen, Carlton C.

    2002-01-01

    NASA Johnson Space Center (JSC) curates extraterrestrial samples, providing the international science community with lunar rock and soil returned by the Apollo astronauts, meteorites collected in Antarctica, cosmic dust collected in the stratosphere, and hardware exposed to the space environment. Curation comprises initial characterization of new samples, preparation and allocation of samples for research, and clean, secure long-term storage. The foundations of this effort are the specialized cleanrooms (class 10 to 10,000) for each of the four types of materials, the supporting facilities, and the people, many of whom have been doing detailed work in clean environments for decades. JSC is also preparing to curate the next generation of extraterrestrial samples. These include samples collected from the solar wind, a comet, and an asteroid. Early planning and R&D are underway to support post-mission sample handling and curation of samples returned from Mars. One of the strong scientific reasons for returning samples from Mars is to search for evidence of current or past life in the samples. Because of the remote possibility that the samples may contain life forms that are hazardous to the terrestrial biosphere, the National Research Council has recommended that all samples returned from Mars be kept under strict biological containment until tests show that they can safely be released to other laboratories. It is possible that Mars samples may contain only scarce or subtle traces of life or prebiotic chemistry that could readily be overwhelmed by terrestrial contamination. Thus, the facilities used to contain, process, and analyze samples from Mars must have a combination of high-level biocontainment and organic / inorganic chemical cleanliness that is unprecedented. JSC has been conducting feasibility studies and developing designs for a sample receiving facility that would offer biocontainment at least the equivalent of current maximum containment BSL-4 (Bio

  15. Interactive Multi-Instrument Database of Solar Flares

    Science.gov (United States)

    Ranjan, Shubha S.; Spaulding, Ryan; Deardorff, Donald G.

    2018-01-01

    The fundamental motivation of the project is that the scientific output of solar research can be greatly enhanced by better exploitation of the existing solar/heliosphere space-data products jointly with ground-based observations. Our primary focus is on developing a specific innovative methodology based on recent advances in "big data" intelligent databases, applied to the growing volume of high-spatial-resolution, multi-wavelength, high-cadence data from NASA's missions and supporting ground-based observatories. Our flare database is not simply a manually searchable time-based catalog of events or a list of web links pointing to data. It is a preprocessed metadata repository enabling fast search and automatic identification of all recorded flares sharing a specifiable set of characteristics, features, and parameters. The result is a new and unique database of solar flares and data search and classification tools for the Heliophysics community, enabling multi-instrument/multi-wavelength investigations of flare physics and supporting further development of flare-prediction methodologies.
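
    A search over such a preprocessed metadata repository can be pictured as range filtering on flare attributes. The sketch below is a hypothetical illustration; the field names, units, and records are invented, not the project's actual metadata schema.

```python
# Hypothetical sketch: select flares sharing a specifiable set of
# characteristics by range-filtering preprocessed metadata records.
flares = [
    {"id": "F1", "peak_wavelength_nm": 13.1, "duration_min": 18},
    {"id": "F2", "peak_wavelength_nm": 17.1, "duration_min": 42},
    {"id": "F3", "peak_wavelength_nm": 13.1, "duration_min": 25},
]

def search(records, **ranges):
    """Return IDs of records whose fields fall inside (lo, hi) ranges."""
    return [rec["id"] for rec in records
            if all(lo <= rec[field] <= hi for field, (lo, hi) in ranges.items())]

# All flares observed near 13.1 nm lasting 15-30 minutes:
print(search(flares, peak_wavelength_nm=(13.0, 13.2), duration_min=(15, 30)))
# ['F1', 'F3']
```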

  16. Hanford External Dosimetry Technical Basis Manual PNL-MA-842

    Energy Technology Data Exchange (ETDEWEB)

    Rathbone, Bruce A.

    2009-08-28

    The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at Hanford. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with 10 CFR 835, DOELAP, DOE-RL, ORP, PNSO, and Hanford contractor requirements. The dosimetry system is operated by PNNL’s Hanford External Dosimetry Program (HEDP) which provides dosimetry services to all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee (HPDAC) which is chartered and chaired by DOE-RL and serves as means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. The first revision to be released through PNNL’s Electronic Records & Information Capture Architecture (ERICA) database was designated Revision 0. Revision numbers that are whole numbers reflect major revisions typically involving changes to all chapters in the document. Revision numbers that include a decimal fraction reflect minor revisions, usually restricted to selected chapters or selected pages in the document.

  17. Development a GIS Snowstorm Database

    Science.gov (United States)

    Squires, M. F.

    2010-12-01

    This paper describes the development of a GIS Snowstorm Database (GSDB) at NOAA’s National Climatic Data Center. The snowstorm database is a collection of GIS layers and tabular information for 471 snowstorms between 1900 and 2010. Each snowstorm has undergone automated and manual quality control. The beginning and ending date of each snowstorm is specified. The original purpose of this data was to serve as input for NCDC’s new Regional Snowfall Impact Scale (ReSIS). However, this data is being preserved and used to investigate the impacts of snowstorms on society. GSDB is used to summarize the impact of snowstorms on transportation (interstates) and various classes of facilities (roads, schools, hospitals, etc.). GSDB can also be linked to other sources of impacts such as insurance loss information and Storm Data. Thus the snowstorm database is suited for many different types of users including the general public, decision makers, and researchers. This paper summarizes quality control issues associated with using snowfall data, methods used to identify the starting and ending dates of a storm, and examples of the tables that combine snowfall and societal data.

  18. Electronic Commerce user manual

    Energy Technology Data Exchange (ETDEWEB)

    1992-04-10

    This User Manual supports the Electronic Commerce Standard System. The Electronic Commerce Standard System is being developed for the Department of Defense of the Technology Information Systems Program at the Lawrence Livermore National Laboratory, operated by the University of California for the Department of Energy. The Electronic Commerce Standard System, or EC as it is known, provides the capability for organizations to conduct business electronically instead of through paper transactions. Electronic Commerce and Computer Aided Acquisition and Logistics Support are two major projects under the DoD's Corporate Information Management program, whose objective is to make DoD business transactions faster and less costly by using computer networks instead of paper forms and postage. EC runs on computers that use the UNIX operating system and provides a standard set of applications and tools that are bound together by a common command and menu system. These applications and tools may vary according to the requirements of the customer or location and may be customized to meet the specific needs of an organization. Local applications can be integrated into the menu system under the Special Databases & Applications option on the EC main menu. These local applications will be documented in the appendices of this manual. This integration capability provides users with a common environment of standard and customized applications.

  19. Electronic Commerce user manual

    Energy Technology Data Exchange (ETDEWEB)

    1992-04-10

    This User Manual supports the Electronic Commerce Standard System. The Electronic Commerce Standard System is being developed for the Department of Defense of the Technology Information Systems Program at the Lawrence Livermore National Laboratory, operated by the University of California for the Department of Energy. The Electronic Commerce Standard System, or EC as it is known, provides the capability for organizations to conduct business electronically instead of through paper transactions. Electronic Commerce and Computer Aided Acquisition and Logistics Support are two major projects under the DoD's Corporate Information Management program, whose objective is to make DoD business transactions faster and less costly by using computer networks instead of paper forms and postage. EC runs on computers that use the UNIX operating system and provides a standard set of applications and tools that are bound together by a common command and menu system. These applications and tools may vary according to the requirements of the customer or location and may be customized to meet the specific needs of an organization. Local applications can be integrated into the menu system under the Special Databases & Applications option on the EC main menu. These local applications will be documented in the appendices of this manual. This integration capability provides users with a common environment of standard and customized applications.

  20. ChlamyCyc: an integrative systems biology database and web-portal for Chlamydomonas reinhardtii.

    Science.gov (United States)

    May, Patrick; Christian, Jan-Ole; Kempa, Stefan; Walther, Dirk

    2009-05-04

    The unicellular green alga Chlamydomonas reinhardtii is an important eukaryotic model organism for the study of photosynthesis and plant growth. In the era of modern high-throughput technologies there is an imperative need to integrate large-scale data sets from high-throughput experimental techniques using computational methods and database resources to provide comprehensive information about the molecular and cellular organization of a single organism. In the framework of the German Systems Biology initiative GoFORSYS, a pathway database and web-portal for Chlamydomonas (ChlamyCyc) was established, which currently features about 250 metabolic pathways with associated genes, enzymes, and compound information. ChlamyCyc was assembled using an integrative approach combining the recently published genome sequence, bioinformatics methods, and experimental data from metabolomics and proteomics experiments. We analyzed and integrated a combination of primary and secondary database resources, such as existing genome annotations from JGI, EST collections, orthology information, and MapMan classification. ChlamyCyc provides a curated and integrated systems biology repository that will enable and assist in systematic studies of fundamental cellular processes in Chlamydomonas. The ChlamyCyc database and web-portal is freely available under http://chlamycyc.mpimp-golm.mpg.de.