WorldWideScience

Sample records for structure database icsd

  1. Interpenetrating metal-organic and inorganic 3D networks: a computer-aided systematic investigation. Part II [1]. Analysis of the Inorganic Crystal Structure Database (ICSD)

    International Nuclear Information System (INIS)

    Baburin, I.A.; Blatov, V.A.; Carlucci, L.; Ciani, G.; Proserpio, D.M.

    2005-01-01

    Interpenetration in metal-organic and inorganic networks has been investigated by a systematic analysis of the crystallographic structural databases. We have used a version of TOPOS (a package for multipurpose crystallochemical analysis) adapted for searching for interpenetration and based on the concept of Voronoi-Dirichlet polyhedra and on the representation of a crystal structure as a reduced finite graph. In this paper, we report comprehensive lists of interpenetrating inorganic 3D structures from the Inorganic Crystal Structure Database (ICSD), inclusive of 144 Collection Codes for equivalent interpenetrating nets, analyzed on the basis of their topologies. Distinct Classes, corresponding to the different modes in which individual identical motifs can interpenetrate, have been attributed to the entangled structures. Interpenetrating nets of different nature as well as interpenetrating H-bonded nets were also examined.
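
    The net-counting idea above can be caricatured with a small graph sketch. This is not the TOPOS/Voronoi-Dirichlet algorithm; it only illustrates, on an invented toy contact graph, how counting connected components flags candidate interpenetrating nets (two or more atom-disjoint components woven through the same cell).

```python
from collections import defaultdict

def connected_components(edges):
    """Return the node sets of the connected components of an undirected graph."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Toy example: two atom-disjoint 4-cycles -> two independent candidate nets.
net_a = [(0, 1), (1, 2), (2, 3), (3, 0)]
net_b = [(4, 5), (5, 6), (6, 7), (7, 4)]
print(len(connected_components(net_a + net_b)))  # 2
```

A real analysis must additionally check that each component is itself a full 3D-periodic net, which is where the reduced-graph representation comes in.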

  2. Developments in diffraction databases

    International Nuclear Information System (INIS)

    Jenkins, R.

    1999-01-01

    Full text: There are a number of databases available to the diffraction community. Two of the more important of these are the Powder Diffraction File (PDF) maintained by the International Centre for Diffraction Data (ICDD), and the Inorganic Crystal Structure Database (ICSD) maintained by the Fachinformationszentrum (FIZ) Karlsruhe. In application, the PDF has been an indispensable tool in phase identification and the identification of unknowns. The ICSD database has extensive and explicit reference to the structures of compounds: atomic coordinates, space group and even thermal vibration parameters. A similar database, but for organic compounds, is maintained by the Cambridge Crystallographic Data Centre. These databases are often used as independent sources of information. However, little thought has been given to how to exploit the combined properties of these structural database tools. A recently completed agreement between ICDD and FIZ, plus ICDD and Cambridge, provides a first step in the complementary use of the PDF and ICSD databases. The focus of this paper is to examine ways of exploiting the combined properties of both databases. In 1996, there were approximately 76,000 entries in the PDF and approximately 43,000 entries in the ICSD database. The ICSD database has now been used to calculate entries in the PDF. Deriving d-spacing and peak-intensity data requires the synthesis of full diffraction patterns, i.e., we use the structural data in the ICSD database and then add instrumental resolution information. The combined data from the PDF and ICSD can be effectively used in many ways. For example, we can calculate PDF data for an ideally random crystal distribution and in the absence of preferred orientation. Again, we can use systematic studies of intermediate members in solid-solution series to help produce reliable quantitative phase analyses. In some cases, we can study how solid solution properties vary with composition and
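
    The pattern-synthesis step described above starts from geometry: given cell parameters from a structural database entry, d-spacings and peak positions follow from the plane-spacing formula and Bragg's law. A minimal sketch for the cubic case only; silicon's lattice constant and Cu K-alpha radiation are used purely as examples, and real pattern synthesis also needs structure-factor intensities and instrumental resolution.

```python
import math

def d_spacing_cubic(a, h, k, l):
    """d-spacing (same units as a) of the (hkl) plane in a cubic lattice."""
    return a / math.sqrt(h * h + k * k + l * l)

def two_theta(d, wavelength=1.5406):
    """Diffraction angle 2-theta in degrees from Bragg's law: lambda = 2 d sin(theta)."""
    return 2.0 * math.degrees(math.asin(wavelength / (2.0 * d)))

# Silicon, a = 5.4311 Angstroms, (111) reflection with Cu K-alpha radiation.
d111 = d_spacing_cubic(5.4311, 1, 1, 1)
print(round(d111, 3), round(two_theta(d111), 2))  # 3.136 28.44
```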

  3. The ICSD-3 and DSM-5 guidelines for diagnosing narcolepsy: clinical relevance and practicality.

    Science.gov (United States)

    Ruoff, Chad; Rye, David

    2016-07-20

    Narcolepsy is a chronic neurological disease manifesting as difficulty with maintaining continuous wake and sleep. Clinical presentation varies but requires excessive daytime sleepiness (EDS) occurring alone or together with features of rapid-eye movement (REM) sleep dissociation (e.g., cataplexy, hypnagogic/hypnopompic hallucinations, sleep paralysis), and disrupted nighttime sleep. Narcolepsy with cataplexy is associated with reductions of cerebrospinal fluid (CSF) hypocretin due to destruction of hypocretin peptide-producing neurons in the hypothalamus in individuals with a specific genetic predisposition. Updated diagnostic criteria include the Diagnostic and Statistical Manual of Mental Disorders Fifth Edition (DSM-5) and International Classification of Sleep Disorders Third Edition (ICSD-3). DSM-5 criteria require EDS in association with any one of the following: (1) cataplexy; (2) CSF hypocretin deficiency; (3) REM sleep latency ≤15 minutes on nocturnal polysomnography (PSG); or (4) mean sleep latency ≤8 minutes on multiple sleep latency testing (MSLT) with ≥2 sleep-onset REM-sleep periods (SOREMPs). ICSD-3 relies more upon objective data in addition to EDS, somewhat complicating the diagnostic criteria: (1) cataplexy and either positive MSLT/PSG findings or CSF hypocretin deficiency; (2) MSLT criteria similar to DSM-5 except that a SOREMP on PSG may count as one of the SOREMPs required on MSLT; and (3) distinct division of narcolepsy into type 1, which requires the presence of cataplexy or documented CSF hypocretin deficiency, and type 2, where cataplexy is absent and CSF hypocretin levels are either normal or undocumented. We discuss limitations of these criteria, such as variability in the clinical presentation of cataplexy, particularly when cataplexy may be ambiguous, as well as by age; multiple and/or invasive CSF diagnostic test requirements; and lack of normative diagnostic test data (e.g., MSLT) in certain populations. While ICSD-3 criteria

  4. Cross-cultural and comparative epidemiology of insomnia: the Diagnostic and statistical manual (DSM), International classification of diseases (ICD) and International classification of sleep disorders (ICSD).

    Science.gov (United States)

    Chung, Ka-Fai; Yeung, Wing-Fai; Ho, Fiona Yan-Yee; Yung, Kam-Ping; Yu, Yee-Man; Kwok, Chi-Wa

    2015-04-01

    To compare the prevalence of insomnia according to symptoms, quantitative criteria, and the Diagnostic and Statistical Manual of Mental Disorders, 4th and 5th Edition (DSM-IV and DSM-5), International Classification of Diseases, 10th Revision (ICD-10), and International Classification of Sleep Disorders, 2nd Edition (ICSD-2), and to compare the prevalence of insomnia disorder between Hong Kong and the United States by adopting a methodology similar to that of the America Insomnia Survey (AIS). Population-based epidemiological survey respondents (n = 2011) completed the Brief Insomnia Questionnaire (BIQ), a validated scale generating DSM-IV, DSM-5, ICD-10, and ICSD-2 insomnia disorder diagnoses. The weighted prevalence of difficulty falling asleep, difficulty staying asleep, waking up too early, and non-restorative sleep that occurred ≥3 days per week was 14.0%, 28.3%, 32.1%, and 39.9%, respectively. When quantitative criteria were included, the prevalence dropped the most, from 39.9% to 8.4%, for non-restorative sleep, and the least, from 14.0% to 12.9%, for difficulty falling asleep. The weighted prevalence of DSM-IV, ICD-10, ICSD-2, and any of the three insomnia disorders was 22.1%, 4.7%, 15.1%, and 22.1%, respectively; for DSM-5 insomnia disorder, it was 10.8%. Compared with 22.1%, 3.9%, and 14.7% for DSM-IV, ICD-10, and ICSD-2 in the AIS, the cross-cultural difference in the prevalence of insomnia disorder is smaller than expected. The prevalence is halved from DSM-IV to DSM-5. ICD-10 insomnia disorder has the lowest prevalence, perhaps because excessive concern and preoccupation, one of its diagnostic criteria, is not always present in people with insomnia. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. The International Coal Statistics Data Base program maintenance guide

    International Nuclear Information System (INIS)

    1991-06-01

    The International Coal Statistics Data Base (ICSD) is a microcomputer-based system which contains information related to international coal trade. This includes coal production, consumption, imports and exports information. The ICSD is a secondary data base, meaning that information contained therein is derived entirely from other primary sources. It uses dBase III+ and Lotus 1-2-3 to locate, report and display data. The system is used for analysis in preparing the Annual Prospects for World Coal Trade (DOE/EIA-0363) publication. The ICSD system is menu driven and also permits the user who is familiar with dBase and Lotus operations to leave the menu structure to perform independent queries. Documentation for the ICSD consists of three manuals -- the User's Guide, the Operations Manual, and the Program Maintenance Manual. This Program Maintenance Manual provides the information necessary to maintain and update the ICSD system. Two major types of program maintenance documentation are presented in this manual. The first is the source code for the dBase III+ routines and related non-dBase programs used in operating the ICSD. The second is listings of the major component database field structures. A third important consideration for dBase programming, the structure of index files, is presented in the listing of source code for the index maintenance program. 1 fig

  6. HIV Structural Database

    Science.gov (United States)

    SRD 102 HIV Structural Database (Web, free access)   The HIV Protease Structural Database is an archive of experimentally determined 3-D structures of Human Immunodeficiency Virus 1 (HIV-1), Human Immunodeficiency Virus 2 (HIV-2) and Simian Immunodeficiency Virus (SIV) Proteases and their complexes with inhibitors or products of substrate cleavage.

  7. Validation of the ICSD-2 criteria for CSF hypocretin-1 measurements in the diagnosis of narcolepsy in the Danish population

    DEFF Research Database (Denmark)

    Knudsen, Stine; Jennum, Poul J; Alving, Jørgen

    2010-01-01

    STUDY OBJECTIVES: The International Classification of Sleep Disorders (ICSD-2) criteria for low CSF hypocretin-1 levels (CSF hcrt-1) still need validation as a diagnostic tool for narcolepsy in different populations because inter-assay variability and different definitions of hypocretin deficiency...... complicate direct comparisons of study results. DESIGN AND PARTICIPANTS: Interviews, polysomnography, multiple sleep latency test, HLA-typing, and CSF hcrt-1 measurements in Danish patients with narcolepsy with cataplexy (NC) and narcolepsy without cataplexy (NwC), CSF hcrt-1 measurements in other......). MEASUREMENTS AND RESULTS: In Danes, low CSF hcrt-1 was present in 40/46 NC, 3/14 NwC and 0/106 controls (P sleep latency, more sleep...

  8. Discrete Optimization of Internal Part Structure via SLM Unit Structure-Performance Database

    Directory of Open Access Journals (Sweden)

    Li Tang

    2018-01-01

    Full text: The structural optimization of the internal structure of parts based on three-dimensional (3D) printing has been recognized as important in the field of mechanical design. The purpose of this paper is to present the creation of a unit structure-performance database based on selective laser melting (SLM), which contains various structural units with different functions and records their structure and performance characteristics so that the internal structure of parts can be optimized directly according to the database. The method of creating the unit structure-performance database is introduced, and several structural units of the database are presented. The bow structure unit is used as an example to show how to create the structure-performance data for a unit. Some samples of the bow structure unit were designed and manufactured by SLM. These samples were tested in a WDW-100 compression testing machine to obtain their performance characteristics. The paper then collected all data regarding unit structure parameters, weight, performance characteristics and other properties, and established a complete data set for the bow structure unit in the unit structure-performance database. Furthermore, an aircraft part was conveniently reconstructed to be more lightweight according to the unit structure-performance database. Its weight was reduced by 36.8% compared with the original structure, while the strength far exceeded the requirements.
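
    The database-driven selection step the abstract describes can be sketched as a query over a small table of unit records: pick the lightest unit whose recorded strength meets the design requirement. The entries below are invented placeholders, not the paper's measured bow-structure data.

```python
# Toy unit structure-performance records (names and numbers are invented).
units = [
    {"name": "bow_a", "weight_g": 12.0, "strength_mpa": 80.0},
    {"name": "bow_b", "weight_g": 9.5,  "strength_mpa": 55.0},
    {"name": "bow_c", "weight_g": 15.5, "strength_mpa": 120.0},
]

def lightest_unit(db, required_strength):
    """Lightest unit whose recorded strength meets the requirement, or None."""
    ok = [u for u in db if u["strength_mpa"] >= required_strength]
    return min(ok, key=lambda u: u["weight_g"]) if ok else None

print(lightest_unit(units, 70.0)["name"])  # bow_a
```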

  9. Spectroscopic databases - A tool for structure elucidation

    Energy Technology Data Exchange (ETDEWEB)

    Luksch, P [Fachinformationszentrum Karlsruhe, Gesellschaft fuer Wissenschaftlich-Technische Information mbH, Eggenstein-Leopoldshafen (Germany)

    1990-05-01

    Spectroscopic databases have developed into useful tools in the process of structure elucidation. Besides conventional library searches, new intelligent programs have been added that can predict structural features from measured spectra or simulate spectra for a given structure. The example of the C13NMR/IR database developed at BASF and available on STN is used to illustrate the present capabilities of online databases. New developments in the field of spectrum simulation and methods for the prediction of complete structures from spectroscopic information are reviewed. (author). 10 refs, 5 figs.

  10. Fast Structural Search in Phylogenetic Databases

    Directory of Open Access Journals (Sweden)

    William H. Piel

    2005-01-01

    Full text: As the size of phylogenetic databases grows, the need for efficiently searching these databases arises. Thanks to previous and ongoing research, searching by attribute value and by text has become commonplace in these databases. However, searching by topological or physical structure, especially for large databases and especially for approximate matches, is still an art. We propose structural search techniques that, given a query or pattern tree P and a database of phylogenies D, find trees in D that are sufficiently close to P. The “closeness” is a measure of the topological relationships in P that are found to be the same or similar in a tree T in D. We develop a filtering technique that accelerates searches and present algorithms for rooted and unrooted trees, where the trees can be weighted or unweighted. Experimental results on comparing the similarity measure with existing tree metrics and on evaluating the efficiency of the search techniques demonstrate that the proposed approach is promising.
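
    A cheap structural filter of the kind described can be sketched by comparing the clade sets of rooted trees, in the spirit of the Robinson-Foulds metric: trees sharing many clades with the pattern survive the filter. This is an illustration, not the paper's similarity measure; trees are written as nested tuples with string leaves.

```python
def clades(tree):
    """Return (leaf set, set of internal-node leaf sets) of a nested-tuple rooted tree."""
    if not isinstance(tree, tuple):
        return frozenset([tree]), set()
    leaves, all_clades = set(), set()
    for child in tree:
        child_leaves, child_clades = clades(child)
        leaves |= child_leaves
        all_clades |= child_clades
    leaves = frozenset(leaves)
    all_clades.add(leaves)
    return leaves, all_clades

def clade_distance(t1, t2):
    """Number of clades present in exactly one of the two trees."""
    _, c1 = clades(t1)
    _, c2 = clades(t2)
    return len(c1 ^ c2)

t1 = (("a", "b"), ("c", "d"))
t2 = (("a", "c"), ("b", "d"))
print(clade_distance(t1, t1), clade_distance(t1, t2))  # 0 4
```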

  11. Molecule database framework: a framework for creating database applications with chemical structure search capability.

    Science.gov (United States)

    Kiener, Joos

    2013-12-11

    Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The scientists involved must be able to store these data and search them by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for the specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions carry the risk of vendor lock-in and may require an expensive license for a proprietary relational database management system. To speed up and simplify the development of applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls, so software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes: support for multi-component compounds (mixtures); import and export of SD-files; and optional security (authorization). For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method-level security. Furthermore, the design of the entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to establish some basic performance expectations for chemical structure searches and for import and export of SD-files. By using a simple web application it was shown that Molecule Database Framework
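
    The "storing and searching behind method calls" idea can be sketched with the standard library only. Note the toy below searches by exact molecular formula, whereas the framework's real chemical substructure search relies on the Bingo cartridge for PostgreSQL; class and method names here are invented for illustration.

```python
import sqlite3

class ToyMoleculeStore:
    """Toy registration store: hides SQL behind register/search method calls."""

    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE molecule (name TEXT, formula TEXT)")

    def register(self, name, formula):
        self.db.execute("INSERT INTO molecule VALUES (?, ?)", (name, formula))

    def find_by_formula(self, formula):
        rows = self.db.execute(
            "SELECT name FROM molecule WHERE formula = ?", (formula,))
        return [r[0] for r in rows]

store = ToyMoleculeStore()
store.register("benzene", "C6H6")
store.register("ethanol", "C2H6O")
print(store.find_by_formula("C6H6"))  # ['benzene']
```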

  12. Construction of crystal structure prototype database: methods and applications.

    Science.gov (United States)

    Su, Chuanxun; Lv, Jian; Li, Quan; Wang, Hui; Zhang, Lijun; Wang, Yanchao; Ma, Yanming

    2017-04-26

    Crystal structure prototype data have become a useful source of information for materials discovery in the fields of crystallography, chemistry, physics, and materials science. This work reports the development of a robust and efficient method for assessing the similarity of structures on the basis of their interatomic distances. Using this method, we proposed a simple and unambiguous definition of crystal structure prototype based on hierarchical clustering theory, and constructed the crystal structure prototype database (CSPD) by filtering the known crystallographic structures in a database. With a similar method, a structure prototype analysis package (SPAP) was developed to remove similar structures in CALYPSO prediction results and to extract predicted low-energy structures for a separate theoretical structure database. A series of statistics describing the distribution of crystal structure prototypes in the CSPD was compiled to provide important insights for structure prediction and high-throughput calculations. Illustrative examples of the application of the proposed database are given, including the generation of initial structures for structure prediction and the determination of the prototype structure in databases. These examples demonstrate the CSPD to be a generally applicable and useful tool for materials discovery.
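
    The distance-based similarity idea can be sketched in a few lines: fingerprint each structure by its sorted interatomic distances and compare fingerprints, so structures that are the same shape up to rotation collapse onto one prototype. The 2-D point sets and the RMS measure below are illustrative assumptions, not the CSPD method itself.

```python
import math
from itertools import combinations

def distance_fingerprint(coords):
    """Sorted list of pairwise distances between atomic coordinates."""
    return sorted(math.dist(a, b) for a, b in combinations(coords, 2))

def dissimilarity(fp1, fp2):
    """RMS difference of two equal-length distance fingerprints."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(fp1, fp2)) / len(fp1))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
square_rot = [(0, 0), (0, 1), (-1, 1), (-1, 0)]   # same shape, rotated
rect = [(0, 0), (2, 0), (2, 1), (0, 1)]           # a different prototype

fp = distance_fingerprint
print(dissimilarity(fp(square), fp(square_rot)) < 1e-9)  # True: same prototype
print(dissimilarity(fp(square), fp(rect)) > 0.1)         # True: distinct
```

Hierarchical clustering on such pairwise dissimilarities is then one way to group structures into prototypes.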

  13. Construction of crystal structure prototype database: methods and applications

    International Nuclear Information System (INIS)

    Su, Chuanxun; Lv, Jian; Wang, Hui; Wang, Yanchao; Ma, Yanming; Li, Quan; Zhang, Lijun

    2017-01-01

    Crystal structure prototype data have become a useful source of information for materials discovery in the fields of crystallography, chemistry, physics, and materials science. This work reports the development of a robust and efficient method for assessing the similarity of structures on the basis of their interatomic distances. Using this method, we proposed a simple and unambiguous definition of crystal structure prototype based on hierarchical clustering theory, and constructed the crystal structure prototype database (CSPD) by filtering the known crystallographic structures in a database. With a similar method, a structure prototype analysis package (SPAP) was developed to remove similar structures in CALYPSO prediction results and to extract predicted low-energy structures for a separate theoretical structure database. A series of statistics describing the distribution of crystal structure prototypes in the CSPD was compiled to provide important insights for structure prediction and high-throughput calculations. Illustrative examples of the application of the proposed database are given, including the generation of initial structures for structure prediction and the determination of the prototype structure in databases. These examples demonstrate the CSPD to be a generally applicable and useful tool for materials discovery. (paper)

  14. StraPep: a structure database of bioactive peptides

    Science.gov (United States)

    Wang, Jian; Yin, Tailang; Xiao, Xuwen; He, Dan; Xue, Zhidong; Jiang, Xinnong; Wang, Yan

    2018-01-01

    Bioactive peptides, with a variety of biological activities and wide distribution in nature, have attracted great research interest in the biological and medical fields, especially in the pharmaceutical industry. The structural information of bioactive peptides is important for the development of peptide-based drugs. Many databases have been developed cataloguing bioactive peptides. However, to our knowledge, no database dedicated to collecting all bioactive peptides with known structure has been available. Thus, we developed StraPep, a structure database of bioactive peptides. StraPep holds 3791 bioactive peptide structures, which belong to 1312 unique bioactive peptide sequences. About 905 out of 1312 (68%) bioactive peptides in StraPep contain disulfide bonds, a proportion significantly higher than that (21%) of the PDB. Interestingly, 150 out of 616 (24%) bioactive peptides with three or more disulfide bonds form a structural motif known as the cystine knot, which confers considerable structural stability on proteins and is an attractive scaffold for drug design. Detailed information on each peptide, including the experimental structure, the location of disulfide bonds, secondary structure, classification, post-translational modifications and so on, is provided. A wide range of user-friendly tools, such as browsing and sequence- and structure-based searching, has been incorporated into StraPep. We hope that this database will be helpful for the research community. Database URL: http://isyslab.info/StraPep PMID:29688386

  15. Protein structure database search and evolutionary classification.

    Science.gov (United States)

    Yang, Jinn-Moon; Tung, Chi-Hua

    2006-01-01

    As more protein structures become available and structural genomics efforts provide structural models in a genome-wide strategy, there is a growing need for fast and accurate methods for discovering homologous proteins and evolutionary classifications of newly determined structures. We have developed 3D-BLAST, in part, to address these issues. 3D-BLAST is as fast as BLAST and calculates the statistical significance (E-value) of an alignment to indicate the reliability of the prediction. Using this method, we first identified 23 states of the structural alphabet that represent pattern profiles of the backbone fragments and then used them to represent protein structure databases as structural alphabet sequence databases (SADB). Our method enhanced BLAST as a search method, using a new structural alphabet substitution matrix (SASM) to find the longest common substructures with high-scoring structured segment pairs from an SADB database. Using personal computers with Intel Pentium4 (2.8 GHz) processors, our method searched more than 10 000 protein structures in 1.3 s and achieved a good agreement with search results from detailed structure alignment methods. [3D-BLAST is available at http://3d-blast.life.nctu.edu.tw].
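
    The structural-alphabet idea can be illustrated with a toy encoder: reduce each backbone fragment to a feature vector and map it to the nearest "letter", so a 3-D structure becomes a plain string that fast sequence-search tools can scan. The two-letter alphabet and 2-D features below are invented for illustration; 3D-BLAST derives 23 states from real backbone fragment profiles.

```python
import math

# Invented two-letter alphabet: each letter is a centroid in a toy
# (rise, twist) feature space for backbone fragments.
ALPHABET = {"H": (1.5, 50.0), "E": (3.4, -170.0)}

def encode(fragments):
    """Map each fragment feature vector to its nearest alphabet letter."""
    def nearest(frag):
        return min(ALPHABET, key=lambda s: math.dist(ALPHABET[s], frag))
    return "".join(nearest(f) for f in fragments)

helixlike = [(1.4, 48.0), (1.6, 55.0), (3.3, -165.0)]
print(encode(helixlike))  # HHE
```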

  16. A new relational database structure and online interface for the HITRAN database

    International Nuclear Information System (INIS)

    Hill, Christian; Gordon, Iouli E.; Rothman, Laurence S.; Tennyson, Jonathan

    2013-01-01

    A new format for the HITRAN database is proposed. By storing the line-transition data in a number of linked tables described by a relational database schema, it is possible to overcome the limitations of the existing format, which have become increasingly apparent over the last few years as new and more varied data are being used by radiative-transfer models. Although the database in the new format can be searched using the well-established Structured Query Language (SQL), a web service, HITRANonline, has been deployed to allow users to make the most common queries of the database using a graphical user interface in a web page. The advantages of the relational form of the database for ensuring data integrity and consistency are explored, and the compatibility of the online interface with the emerging standards of the Virtual Atomic and Molecular Data Centre (VAMDC) project is discussed. In particular, the ability to access HITRAN data using a standard query language from other websites, command-line tools and from within computer programs is described. -- Highlights: • A new, interactive version of the HITRAN database is presented. • The data are stored in a structured fashion in a relational database. • The new HITRANonline interface offers increased functionality and easier error correction
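
    The relational layout can be sketched with SQLite: line transitions live in one table and reference a molecules table, so an SQL join replaces fixed-width record parsing. Table names, columns and the line positions below are illustrative, not the actual HITRANonline schema.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE molecule (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE transition (
    molecule_id INTEGER REFERENCES molecule(id),
    wavenumber REAL,   -- cm^-1
    intensity REAL
);
""")
db.execute("INSERT INTO molecule VALUES (1, 'H2O'), (2, 'CO2')")
db.executemany("INSERT INTO transition VALUES (?, ?, ?)",
               [(1, 1554.35, 2.0e-19), (2, 2349.14, 3.5e-18),
                (2, 667.38, 8.0e-19)])

# Lines in a spectral window, joined to their parent molecule.
rows = db.execute("""
    SELECT m.name, t.wavenumber FROM transition t
    JOIN molecule m ON m.id = t.molecule_id
    WHERE t.wavenumber BETWEEN 600 AND 1600 ORDER BY t.wavenumber
""").fetchall()
print(rows)  # [('CO2', 667.38), ('H2O', 1554.35)]
```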

  17. E-MSD: the European Bioinformatics Institute Macromolecular Structure Database.

    Science.gov (United States)

    Boutselakis, H; Dimitropoulos, D; Fillon, J; Golovin, A; Henrick, K; Hussain, A; Ionides, J; John, M; Keller, P A; Krissinel, E; McNeil, P; Naim, A; Newman, R; Oldfield, T; Pineda, J; Rachedi, A; Copeland, J; Sitnov, A; Sobhany, S; Suarez-Uruena, A; Swaminathan, J; Tagari, M; Tate, J; Tromm, S; Velankar, S; Vranken, W

    2003-01-01

    The E-MSD macromolecular structure relational database (http://www.ebi.ac.uk/msd) is designed to be a single access point for protein and nucleic acid structures and related information. The database is derived from Protein Data Bank (PDB) entries. Relational database technologies are used in a comprehensive cleaning procedure to ensure data uniformity across the whole archive. The search database contains an extensive set of derived properties, goodness-of-fit indicators, and links to other EBI databases including InterPro, GO, and SWISS-PROT, together with links to SCOP, CATH, PFAM and PROSITE. A generic search interface is available, coupled with a fast secondary structure domain search tool.

  18. RNA STRAND: The RNA Secondary Structure and Statistical Analysis Database

    Directory of Open Access Journals (Sweden)

    Andronescu Mirela

    2008-08-01

    Full text: Background: The ability to access, search and analyse secondary structures of a large set of known RNA molecules is very important for deriving improved RNA energy models, for evaluating computational predictions of RNA secondary structures and for a better understanding of RNA folding. Currently there is no database that can easily provide these capabilities for almost all RNA molecules with known secondary structures. Results: In this paper we describe RNA STRAND – the RNA secondary STRucture and statistical ANalysis Database, a curated database containing known secondary structures of any type and organism. Our new database provides a wide collection of known RNA secondary structures drawn from public databases, searchable and downloadable in a common format. Comprehensive statistical information on the secondary structures in our database is provided using the RNA Secondary Structure Analyser, a new tool we have developed to analyse RNA secondary structures. The information thus obtained is valuable for understanding to what extent and with what probability certain structural motifs can appear. We outline several ways in which the data provided in RNA STRAND can facilitate research on RNA structure, including the improvement of RNA energy models and evaluation of secondary structure prediction programs. In order to keep up to date with new RNA secondary structure experiments, we offer the necessary tools to add solved RNA secondary structures to our database and invite researchers to contribute to RNA STRAND. Conclusion: RNA STRAND is a carefully assembled database of trusted RNA secondary structures, with easy on-line tools for searching, analyzing and downloading user-selected entries, and is publicly available at http://www.rnasoft.ca/strand.
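
    A first step toward such secondary-structure statistics is extracting base pairs from dot-bracket notation, sketched below for the pseudoknot-free case (RNA STRAND's own analyser is far more general, covering pseudoknots and many motif types).

```python
def base_pairs(dotbracket):
    """Base-pair index list from pseudoknot-free dot-bracket notation."""
    stack, pairs = [], []
    for i, ch in enumerate(dotbracket):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            pairs.append((stack.pop(), i))
    return sorted(pairs)

# A hairpin: a 4-bp stem closing a 4-nt loop.
print(base_pairs("((((....))))"))
# [(0, 11), (1, 10), (2, 9), (3, 8)]
```

Counts of stems, loop sizes and pairing fractions all derive from such pair lists.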

  19. URS DataBase: universe of RNA structures and their motifs.

    Science.gov (United States)

    Baulin, Eugene; Yacovlev, Victor; Khachko, Denis; Spirin, Sergei; Roytberg, Mikhail

    2016-01-01

    The Universe of RNA Structures DataBase (URSDB) stores information obtained from all RNA-containing PDB entries (2935 entries in October 2015). The content of the database is updated regularly. The database consists of 51 tables containing indexed data on various elements of the RNA structures. The database provides a web interface allowing the user to select a subset of structures with desired features and to obtain various statistical data for a selected subset of structures or for all structures. In particular, one can easily obtain statistics on geometric parameters of base pairs, on structural motifs (stems, loops, etc.) or on different types of pseudoknots. The user can also view and get information on an individual structure or its selected parts, e.g. RNA-protein hydrogen bonds. URSDB employs a new, original definition of loops in RNA structures. That definition fits both pseudoknot-free and pseudoknotted secondary structures and coincides with the classical definition in the case of pseudoknot-free structures. To our knowledge, URSDB is the first database supporting searches based on topological classification of pseudoknots and on extended loop classification. Database URL: http://server3.lpm.org.ru/urs/. © The Author(s) 2016. Published by Oxford University Press.

  20. NIMS structural materials databases and cross search engine - MatNavi

    Energy Technology Data Exchange (ETDEWEB)

    Yamazaki, M.; Xu, Y.; Murata, M.; Tanaka, H.; Kamihira, K.; Kimura, K. [National Institute for Materials Science, Tokyo (Japan)

    2007-06-15

    The Materials Database Station (MDBS) of the National Institute for Materials Science (NIMS) owns the world's largest Internet materials database for academic and industrial purposes, which is composed of twelve databases: five concerning structural materials, five concerning basic physical properties, one for superconducting materials and one for polymers. All of these databases are open to Internet access at the website http://mits.nims.go.jp/en. Online tools for predicting properties of polymers and composite materials are also available. The NIMS structural materials databases comprise the structural materials data sheet online version (creep, fatigue, corrosion and space-use materials strength), the microstructure for crept material database, the pressure vessel materials database and the CCT diagram for welding. (orig.)

  1. SKPDB: a structural database of shikimate pathway enzymes

    Directory of Open Access Journals (Sweden)

    de Azevedo Walter F

    2010-01-01

    Full text: Background: The functional and structural characterisation of enzymes that belong to microbial metabolic pathways is very important for structure-based drug design. The main interest in studying shikimate pathway enzymes involves the fact that they are essential for bacteria but do not occur in humans, making them selective targets for the design of drugs that do not directly impact humans. Description: The ShiKimate Pathway DataBase (SKPDB) is a relational database applied to the study of shikimate pathway enzymes in microorganisms and plants. The current database is updated regularly with the addition of new data; there are currently 8902 enzymes of the shikimate pathway from different sources. The database contains extensive information on each enzyme, including detailed descriptions of sequence, references, and structural and functional studies. All files (primary sequence, atomic coordinates and quality scores) are available for downloading. The modeled structures can be viewed using the Jmol program. Conclusions: The SKPDB provides a large number of structural models to be used in docking simulations, virtual screening initiatives and drug design. It is freely accessible at http://lsbzix.rc.unesp.br/skpdb/.

  2. Database on wind characteristics - Structure and philosophy

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, G.C.; Hansen, K.S.

    2001-11-01

    The main objective of IEA R and D Wind Annex XVII - Database on Wind Characteristics - is to provide wind energy planners and designers, as well as the international wind engineering community in general, with easy access to quality-controlled measured wind field time series observed in a wide range of environments. The project partners are Sweden, Norway, the U.S.A., the Netherlands, Japan and Denmark, with Denmark as the Operating Agent. The reporting of IEA R and D Annex XVII falls into three separate parts. Part one deals with the overall structure and philosophy behind the database, part two accounts in detail for the available data in the established database bank, and part three is the Users Manual describing the various ways to access and analyse the data. The present report constitutes the first part of the Annex XVII reporting, and it contains a detailed description of the database structure, the data quality control procedures, the selected indexing of the data and the hardware system. (au)

  3. An image database structure for pediatric radiology

    International Nuclear Information System (INIS)

    Mankovich, N.J.

    1987-01-01

    The operation of the Clinical Radiology Imaging System (CRIS) in Pediatric Radiology at UCLA relies on the orderly flow of text and image data among the three basic subsystems: acquisition, storage, and display. CRIS provides the radiologist, clinician, and technician with data at clinical image workstations by maintaining a comprehensive database. CRIS is made up of subsystems, each composed of one or more programs or tasks which operate in parallel on a VAX-11/750 minicomputer in Pediatric Radiology. Tasks are coordinated through dynamic data structures that include system event flags and disk-resident queues. This report outlines: (1) the CRIS data model, (2) the flow of information among CRIS components, (3) the underlying database structures which support the acquisition, display, and storage of text and image information, and (4) current database statistics.
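    The disk-resident queues mentioned above can be sketched as a minimal persistent FIFO backed by SQLite. This is a hedged illustration of the idea only, not the original VAX-11/750 implementation; the class name, table name, and payload strings are invented for the example:

```python
import sqlite3

class DiskQueue:
    """A minimal disk-resident FIFO queue backed by SQLite.

    Illustrative sketch only; the CRIS queues described in the
    report were implemented differently on the VAX-11/750.
    """

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS queue ("
            "  id INTEGER PRIMARY KEY AUTOINCREMENT,"
            "  payload TEXT NOT NULL)"
        )

    def put(self, payload: str) -> None:
        # The connection's context manager commits on success,
        # so the entry survives a process restart when `path` is a file.
        with self.db:
            self.db.execute("INSERT INTO queue (payload) VALUES (?)", (payload,))

    def get(self):
        """Pop and return the oldest entry, or None if the queue is empty."""
        with self.db:
            row = self.db.execute(
                "SELECT id, payload FROM queue ORDER BY id LIMIT 1"
            ).fetchone()
            if row is None:
                return None
            self.db.execute("DELETE FROM queue WHERE id = ?", (row[0],))
            return row[1]

# Hypothetical work items passed between acquisition and display tasks.
q = DiskQueue()
q.put("acquire CT-123")
q.put("display CT-123")
first = q.get()
```

Because the queue lives on disk rather than in process memory, cooperating tasks can hand work to one another and recover pending items after a crash, which is the property that matters for the parallel-task design described above.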

  4. Synthesis and structural and electrical characterization of new materials Bi3R2FeTi3O15

    International Nuclear Information System (INIS)

    Gil Novoa, O.D.; Landínez Téllez, D.A.; Roa-Rojas, J.

    2012-01-01

    In this work we report the synthesis of polycrystalline samples of the new compounds Bi5FeTi3O15 and Bi3R2FeTi3O15, with R=Nd, Sm, Gd, Dy, Ho and Yb. The materials were synthesized by the standard solid-state reaction recipe from high-purity (99.99%) powders. The structural characteristics of the materials were analyzed by X-ray diffraction experiments. Rietveld refinement with the GSAS code was performed, taking the input data from ICSD entry 74037. Results reveal that the materials crystallized in orthorhombic single-phase structures with space group Fmm2. Measurements of polarization as a function of applied electric field were carried out using a Radiant Technology polarimeter. We determine the occurrence of hysteretic behavior, which is characteristic of ferroelectric materials. The largest values of remanent polarization and coercive field were observed for the substitutions with Yb and Nd, the ions with the smallest and largest atomic radii.

  5. Database structure and file layout of Nuclear Power Plant Database. Database for design information on Light Water Reactors in Japan

    International Nuclear Information System (INIS)

    Yamamoto, Nobuo; Izumi, Fumio.

    1995-12-01

    The Nuclear Power Plant Database (PPD) has been developed at the Japan Atomic Energy Research Institute (JAERI) to provide plant design information on domestic Light Water Reactors (LWRs) to be used for nuclear safety research and so forth. This database runs on the mainframe computer at the JAERI Tokai Establishment. The PPD contains information on the plant design concepts and on the numbers, capacities, materials, structures and types of equipment and components, etc., based on the safety analysis reports of the domestic LWRs. This report describes the details of the PPD, focusing on the database structure and the layout of data files, so that users can utilize it efficiently. (author)

  6. UbSRD: The Ubiquitin Structural Relational Database.

    Science.gov (United States)

    Harrison, Joseph S; Jacobs, Tim M; Houlihan, Kevin; Van Doorslaer, Koenraad; Kuhlman, Brian

    2016-02-22

    The structurally defined ubiquitin-like homology fold (UBL) can engage in several unique protein-protein interactions and many of these complexes have been characterized with high-resolution techniques. Using Rosetta's structural classification tools, we have created the Ubiquitin Structural Relational Database (UbSRD), an SQL database of features for all 509 UBL-containing structures in the PDB, allowing users to browse these structures by protein-protein interaction and providing a platform for quantitative analysis of structural features. We used UbSRD to define the recognition features of ubiquitin (UBQ) and SUMO observed in the PDB and the orientation of the UBQ tail while interacting with certain types of proteins. While some of the interaction surfaces on UBQ and SUMO overlap, each molecule has distinct features that aid in molecular discrimination. Additionally, we find that the UBQ tail is malleable and can adopt a variety of conformations upon binding. UbSRD is accessible as an online resource at rosettadesign.med.unc.edu/ubsrd. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. A framework for analysing relationships between chemical composition and crystal structure in metal oxides

    International Nuclear Information System (INIS)

    Thomas, N.W.

    1991-01-01

    A computer program has been written to characterize the coordination polyhedra of metal cations in terms of their volumes and polyhedral elements, i.e. corners, edges and faces. The sharing of these corners, edges and faces between polyhedra is also quantitatively monitored. In order to develop the methodology, attention is focused on ternary oxides containing the Al 3+ ion, whose structures were retrieved from the Inorganic Crystal Structure Database (ICSD). This also permits an objective assessment of the applicability of Pauling's rules. The influence of ionic valence on the structures of these compounds is examined, by calculating electrostatic bond strengths. Although Pauling's second rule is not supported in detail, the calculation of oxygen-ion valence reveals a basic structural requirement, that the average calculated oxygen-ion valence in any ionic oxide structure is equal to 2. The analysis is further developed to define a general method for the prediction of novel chemical compositions likely to adopt a given desired structure. The polyhedral volumes of this structure are calculated, and use is made of standard ionic radii for cations in sixfold coordination. The electroneutrality principle is invoked to take valence considerations into account. This method can be used to guide the development of new compositions of ceramic materials with certain desirable physical properties. (orig.)
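    The oxygen-ion valence check described above follows from Pauling's electrostatic bond strength, s = (cation charge)/(coordination number), summed over all cations whose polyhedra share a given oxygen. A small sketch of that arithmetic (the corundum-like coordination numbers below are a textbook illustration, not data taken from the ICSD analysis in the paper):

```python
def bond_strength(charge: int, coordination: int) -> float:
    """Pauling electrostatic bond strength: cation charge / coordination number."""
    return charge / coordination

def oxygen_valence(coordinating_cations) -> float:
    """Sum of bond strengths received by one oxygen ion.

    `coordinating_cations` is a list of (charge, CN) tuples for the
    cations whose coordination polyhedra share that oxygen corner.
    """
    return sum(bond_strength(q, cn) for q, cn in coordinating_cations)

# Corundum-like example: each O2- ion is shared by four AlO6 octahedra.
# Each Al3+ in sixfold coordination contributes 3/6 = 0.5, so the sum is 2,
# matching the structural requirement stated in the abstract.
v = oxygen_valence([(3, 6)] * 4)
```

A structure for which this sum deviates markedly from 2 at some oxygen site would fail the basic requirement identified in the analysis.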

  8. A new relational database structure and online interface for the HITRAN database

    Science.gov (United States)

    Hill, Christian; Gordon, Iouli E.; Rothman, Laurence S.; Tennyson, Jonathan

    2013-11-01

    A new format for the HITRAN database is proposed. By storing the line-transition data in a number of linked tables described by a relational database schema, it is possible to overcome the limitations of the existing format, which have become increasingly apparent over the last few years as new and more varied data are being used by radiative-transfer models. Although the database in the new format can be searched using the well-established Structured Query Language (SQL), a web service, HITRANonline, has been deployed to allow users to make most common queries of the database using a graphical user interface in a web page. The advantages of the relational form of the database to ensuring data integrity and consistency are explored, and the compatibility of the online interface with the emerging standards of the Virtual Atomic and Molecular Data Centre (VAMDC) project is discussed. In particular, the ability to access HITRAN data using a standard query language from other websites, command line tools and from within computer programs is described.
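    The linked-table arrangement described above can be shown in miniature with SQLite. The table and column names below are illustrative assumptions for the sketch, not the actual HITRANonline schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE molecule (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE transition (
    id          INTEGER PRIMARY KEY,
    molecule_id INTEGER NOT NULL REFERENCES molecule(id),
    nu          REAL NOT NULL,   -- line position (wavenumber)
    sw          REAL NOT NULL    -- line intensity
);
""")
db.execute("INSERT INTO molecule VALUES (1, 'H2O')")
db.execute("INSERT INTO molecule VALUES (2, 'CO2')")
db.executemany(
    "INSERT INTO transition (molecule_id, nu, sw) VALUES (?, ?, ?)",
    [(1, 1554.35, 1.2e-21), (1, 3657.05, 8.0e-20), (2, 667.38, 7.5e-20)],
)

# A typical SQL query of the kind the web interface issues on the
# user's behalf: all H2O lines in a wavenumber window, joined by key.
rows = db.execute(
    """SELECT m.name, t.nu FROM transition t
       JOIN molecule m ON m.id = t.molecule_id
       WHERE m.name = 'H2O' AND t.nu BETWEEN 1000 AND 2000"""
).fetchall()
```

Keeping molecules and transitions in separate key-linked tables is what lets new per-molecule or per-transition fields be added later without breaking the fixed-width record format, which is the limitation the relational redesign addresses.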

  9. Synthesis and structural and electrical characterization of new materials Bi3R2FeTi3O15

    Energy Technology Data Exchange (ETDEWEB)

    Gil Novoa, O.D.; Landinez Tellez, D.A. [Grupo de Fisica de Nuevos Materiales, Departamento de Fisica, Universidad Nacional de Colombia, AA 5997, Bogota DC (Colombia); Roa-Rojas, J., E-mail: jroar@unal.edu.co [Grupo de Fisica de Nuevos Materiales, Departamento de Fisica, Universidad Nacional de Colombia, AA 5997, Bogota DC (Colombia)

    2012-08-15

    In this work we report the synthesis of polycrystalline samples of the new compounds Bi5FeTi3O15 and Bi3R2FeTi3O15, with R=Nd, Sm, Gd, Dy, Ho and Yb. The materials were synthesized by the standard solid-state reaction recipe from high-purity (99.99%) powders. The structural characteristics of the materials were analyzed by X-ray diffraction experiments. Rietveld refinement with the GSAS code was performed, taking the input data from ICSD entry 74037. Results reveal that the materials crystallized in orthorhombic single-phase structures with space group Fmm2. Measurements of polarization as a function of applied electric field were carried out using a Radiant Technology polarimeter. We determine the occurrence of hysteretic behavior, which is characteristic of ferroelectric materials. The largest values of remanent polarization and coercive field were observed for the substitutions with Yb and Nd, the ions with the smallest and largest atomic radii.

  10. The Structural Ceramics Database: Technical Foundations

    Science.gov (United States)

    Munro, R. G.; Hwang, F. Y.; Hubbard, C. R.

    1989-01-01

    The development of a computerized database on advanced structural ceramics can play a critical role in fostering the widespread use of ceramics in industry and in advanced technologies. A computerized database may be the most effective means of accelerating technology development by enabling new materials to be incorporated into designs far more rapidly than would have been possible with traditional information transfer processes. Faster, more efficient access to critical data is the basis for creating this technological advantage. Further, a computerized database provides the means for a more consistent treatment of data, greater quality control and product reliability, and improved continuity of research and development programs. A preliminary system has been completed as phase one of an ongoing program to establish the Structural Ceramics Database system. The system is designed to be used on personal computers. Developed in a modular design, the preliminary system is focused on the thermal properties of monolithic ceramics. The initial modules consist of materials specification, thermal expansion, thermal conductivity, thermal diffusivity, specific heat, thermal shock resistance, and a bibliography of data references. Query and output programs also have been developed for use with these modules. The latter program elements, along with the database modules, will be subjected to several stages of testing and refinement in the second phase of this effort. The goal of the refinement process will be the establishment of this system as a user-friendly prototype. Three primary considerations provide the guidelines to the system’s development: (1) The user’s needs; (2) The nature of materials properties; and (3) The requirements of the programming language. The present report discusses the manner and rationale by which each of these considerations leads to specific features in the design of the system. PMID:28053397

  11. Current Challenges in Development of a Database of Three-Dimensional Chemical Structures

    Science.gov (United States)

    Maeda, Miki H.

    2015-01-01

    We are developing a database named 3DMET, a three-dimensional structure database of natural metabolites. There are two major impediments to the creation of 3D chemical structures from a set of planar structure drawings: the limited accuracy of computer programs and insufficient human resources for manual curation. We have tested several 2D-3D converters on 2D structure files from external databases. These automatic conversion processes yielded an excessive number of improper conversions. To ascertain the quality of the conversions, we compared IUPAC International Chemical Identifier (InChI) and canonical SMILES notations before and after conversion. Structures whose notations correspond to each other were regarded as correct conversions in the present work. We found that chiral inversion is the most serious source of improper conversions. At the current stage of our database construction, published books and articles have been the resources for additions to our database. Chemicals are usually drawn as pictures on paper, so to save human resources an optical structure reader was introduced. The program was quite useful, but some particular errors were observed during our operation. We hope our trials in producing correct 3D structures will help other developers of chemical programs and curators of chemical databases. PMID:26075200
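    The quality check described above, comparing canonical identifiers before and after conversion, can be sketched as a simple comparison pass. The compound IDs and SMILES strings below are invented placeholders, not real 3DMET entries:

```python
def check_conversions(before: dict, after: dict):
    """Split compound IDs into correct and improper conversions.

    `before` and `after` map a compound ID to its canonical notation
    (e.g. InChI or canonical SMILES) computed from the 2D input and
    from the generated 3D structure, respectively. A mismatch flags
    an improper conversion such as a chiral inversion.
    """
    correct, improper = [], []
    for cid, notation in before.items():
        (correct if after.get(cid) == notation else improper).append(cid)
    return correct, improper

# Hypothetical data: compound "M2" suffers a chiral inversion (@@ -> @),
# so its canonical SMILES no longer matches after conversion.
before = {"M1": "C[C@H](N)C(=O)O", "M2": "C[C@@H](O)CC"}
after  = {"M1": "C[C@H](N)C(=O)O", "M2": "C[C@H](O)CC"}
ok, bad = check_conversions(before, after)
```

In practice the notations would be produced by a cheminformatics toolkit; the point of the sketch is only that string equality of canonical identifiers is a cheap, automatable correctness test.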

  12. 1.15 - Structural Chemogenomics Databases to Navigate Protein–Ligand Interaction Space

    NARCIS (Netherlands)

    Kanev, G.K.; Kooistra, A.J.; de Esch, I.J.P.; de Graaf, C.

    2017-01-01

    Structural chemogenomics databases allow the integration and exploration of heterogeneous genomic, structural, chemical, and pharmacological data in order to extract useful information that is applicable for the discovery of new protein targets and biologically active molecules. Integrated databases

  13. Economic and Structural Database for the MEDPRO Project

    OpenAIRE

    Paroussos, Leonidas; Tsani, Stella; Vrontisi, Zoi

    2013-01-01

    This report presents the economic and structural database compiled for the MEDPRO project. The database includes governance, infrastructure, finance, environment, energy, agricultural data and development indicators for the 11 southern and eastern Mediterranean countries (SEMCs) studied in the MEDPRO project. The report further details the data and the methods used for the construction of social accounting, bilateral trade, consumption and investment matrices for each of the SEMCs.

  14. Investigation on structuring the human body function database; Shintai kino database no kochiku ni kansuru chosa kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-03-01

    Based on the concept of a human life engineering database, a study was made of how such a database could be structured to serve the elderly in an aging society. It was proposed that a human life engineering database for the elderly be prepared to support the development and design of life technology for an aging society. A method for structuring the database was established using 'bathing' and 'going out', selected as case-study actions from the daily life of the elderly. As a result of the study, it was proposed that a human body function database for the elderly be prepared as an R and D base for life technology in the aged society. Based on this proposal, a master plan was mapped out for structuring the database, and a concrete method for putting it into action was studied. In the first stage of the investigation, documentation was compiled using existing documentary databases, and enterprises were interviewed. Concerning the functions of the elderly, about 500 documents were extracted, with many points still unclear. The investigation will resume in the next fiscal year. 4 refs., 38 figs., 30 tabs.

  15. [Construction of chemical information database based on optical structure recognition technique].

    Science.gov (United States)

    Lv, C Y; Li, M N; Zhang, L R; Liu, Z M

    2018-04-18

    To create a protocol that could be used to construct a chemical information database from the scientific literature quickly and automatically. Scientific literature, patents and technical reports from different chemical disciplines were collected and stored in PDF format as fundamental datasets. Chemical structures were transformed from published documents and images into machine-readable data by using name-conversion technology and the optical structure recognition tool CLiDE. In the process of molecular structure information extraction, Markush structures were enumerated into well-defined monomer molecules by means of the QueryTools in the molecule editor ChemDraw. The document management software EndNote X8 was applied to acquire bibliographical references, including title, author, journal and year of publication. The text mining toolkit ChemDataExtractor was adopted to retrieve information from figures, tables, and textual paragraphs that could be used to populate the structured chemical database. After this step, detailed manual revision and annotation were conducted in order to ensure the accuracy and completeness of the data. In addition to the literature data, the computing simulation platform Pipeline Pilot 7.5 was utilized to calculate physical and chemical properties and predict molecular attributes. Furthermore, the open database ChEMBL was linked to fetch known bioactivities, such as indications and targets. After information extraction and data expansion, five separate metadata files were generated: the molecular structure data file, molecular information, bibliographical references, predictable attributes and known bioactivities. With canonical SMILES (simplified molecular-input line-entry specification) as the primary key, the metadata files were associated through common key nodes, including molecular number and PDF number, to construct an integrated chemical information database. A reasonable construction protocol for a chemical information database was created successfully.
A total of 174 research

  16. Development of Database Assisted Structure Identification (DASI Methods for Nontargeted Metabolomics

    Directory of Open Access Journals (Sweden)

    Lochana C. Menikarachchi

    2016-05-01

    Full Text Available Metabolite structure identification remains a significant challenge in nontargeted metabolomics research. One commonly used strategy relies on searching biochemical databases using exact mass. However, this approach fails when the database does not contain the unknown metabolite (i.e., for unknown-unknowns). For these cases, constrained structure generation with combinatorial structure generators provides a potential option. Here we evaluated structure generation constraints based on the specification of: (1) substructures required (i.e., seed structures); (2) substructures not allowed; and (3) filters to remove incorrect structures. Our approach (database assisted structure identification, DASI) used predictive models in MolFind to find candidate structures with chemical and physical properties similar to the unknown. These candidates were then used for seed structure generation using eight different structure generation algorithms. One algorithm was able to generate correct seed structures for 21/39 test compounds. Eleven of these seed structures were large enough to constrain the combinatorial structure generator to fewer than 100,000 structures. In 35/39 cases, at least one algorithm was able to generate a correct seed structure. The DASI method has several limitations and will require further experimental validation and optimization. At present, it seems most useful for identifying the structure of unknown-unknowns with molecular weights <200 Da.
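    The first DASI step, finding candidates whose predicted properties resemble the unknown, amounts to a tolerance filter over a property table. A minimal sketch of that filtering idea (the property names, tolerance values, and compound data below are illustrative assumptions, not MolFind's actual models):

```python
def find_candidates(unknown, database, mass_tol=0.01, logp_tol=0.5):
    """Return database entries whose monoisotopic mass and logP fall
    within the given tolerances of the unknown's values.

    Illustrative stand-in for predictive-model-based candidate
    selection; real pipelines compare several predicted properties.
    """
    return [
        entry for entry in database
        if abs(entry["mass"] - unknown["mass"]) <= mass_tol
        and abs(entry["logp"] - unknown["logp"]) <= logp_tol
    ]

# Hypothetical unknown and candidate table.
unknown = {"mass": 180.063, "logp": -3.2}
database = [
    {"name": "glucose",  "mass": 180.063, "logp": -3.24},
    {"name": "fructose", "mass": 180.063, "logp": -3.10},
    {"name": "tyrosine", "mass": 181.074, "logp": -2.26},
]
hits = find_candidates(unknown, database)
```

The surviving candidates would then feed the seed-structure generation step; tightening the tolerances shrinks the seed set and, in turn, the combinatorial search space.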

  17. Inference Attacks and Control on Database Structures

    Directory of Open Access Journals (Sweden)

    Muhamed Turkanovic

    2015-02-01

    Full Text Available Today’s databases store information with sensitivity levels that range from public to highly sensitive, hence ensuring confidentiality can be highly important, but also requires costly control. This paper focuses on the inference problem on different database structures. It presents possible threats to privacy in relation to inference, and control methods for mitigating these threats. The paper shows that using only access control, without any inference control, is inadequate, since such models are unable to protect against indirect data access. Furthermore, it covers new inference problems which arise from the dimensions of new technologies like XML, semantics, etc.
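    The indirect-access problem can be seen in the classic aggregate "tracker" attack: two individually permitted statistical queries whose difference reveals one person's sensitive value. A toy demonstration with invented salary data (the table and values are assumptions for illustration only):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE staff (name TEXT, dept TEXT, salary INTEGER)")
db.executemany("INSERT INTO staff VALUES (?, ?, ?)", [
    ("alice", "it", 70000),
    ("bob",   "it", 65000),
    ("carol", "hr", 60000),
])

# Both queries are aggregates, so an access-control policy that only
# forbids row-level SELECTs on salary would permit them both.
total_all = db.execute("SELECT SUM(salary) FROM staff").fetchone()[0]
total_without_alice = db.execute(
    "SELECT SUM(salary) FROM staff WHERE name <> 'alice'"
).fetchone()[0]

# Their difference discloses an individual salary: an inference that
# pure access control, without inference control, cannot prevent.
inferred_salary = total_all - total_without_alice
```

Inference-control techniques such as query-set-size restrictions or noise addition target exactly this gap between what each query reveals and what their combination reveals.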

  18. Using the structure-function linkage database to characterize functional domains in enzymes.

    Science.gov (United States)

    Brown, Shoshana; Babbitt, Patricia

    2014-12-12

    The Structure-Function Linkage Database (SFLD; http://sfld.rbvi.ucsf.edu/) is a Web-accessible database designed to link enzyme sequence, structure, and functional information. This unit describes the protocols by which a user may query the database to predict the function of uncharacterized enzymes and to correct misannotated functional assignments. The information in this unit is especially useful in helping a user discriminate functional capabilities of a sequence that is only distantly related to characterized sequences in publicly available databases. Copyright © 2014 John Wiley & Sons, Inc.

  19. Database Description - ConfC | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description - General information of database. Database name: ConfC. Alternative name: ... Contact: ...amotsu Noguchi, Tel: 042-495-8736, E-mail: ... Database classification: Structure Databases - Protein structure; Structure Databases - Small molecules; Structure Databases - Nucleic acid structure. Database services: ... Need for user registration: ... About This Database: Database Description | Download | License | Update History of This Database | Site Policy | Contact Us. Database Description - ConfC | LSDB Archive ...

  20. EDCs DataBank: 3D-Structure database of endocrine disrupting chemicals.

    Science.gov (United States)

    Montes-Grajales, Diana; Olivero-Verbel, Jesus

    2015-01-02

    Endocrine disrupting chemicals (EDCs) are a group of compounds that affect the endocrine system, frequently found in everyday products and epidemiologically associated with several diseases. The purpose of this work was to develop EDCs DataBank, the only database of EDCs with three-dimensional structures. This database was built on MySQL using the EU list of potential endocrine disruptors and TEDX list. It contains the three-dimensional structures available on PubChem, as well as a wide variety of information from different databases and text mining tools, useful for almost any kind of research regarding EDCs. The web platform was developed employing HTML, CSS and PHP languages, with dynamic contents in a graphic environment, facilitating information analysis. Currently EDCs DataBank has 615 molecules, including pesticides, natural and industrial products, cosmetics, drugs and food additives, among other low molecular weight xenobiotics. Therefore, this database can be used to study the toxicological effects of these molecules, or to develop pharmaceuticals targeting hormone receptors, through docking studies, high-throughput virtual screening and ligand-protein interaction analysis. EDCs DataBank is totally user-friendly and the 3D-structures of the molecules can be downloaded in several formats. This database is freely available at http://edcs.unicartagena.edu.co. Copyright © 2014. Published by Elsevier Ireland Ltd.

  1. SHEETSPAIR: A Database of Amino Acid Pairs in Protein Sheet Structures

    Directory of Open Access Journals (Sweden)

    Ning Zhang

    2007-10-01

    Full Text Available Within the folded strands of a protein, the amino acids (AAs) on every two adjacent strands form pairs of AAs. To explore the interactions between strands in a protein sheet structure, we have established an Internet-accessible relational database named SheetsPairs based on SQL Server 2000. The database collects AA pairs in proteins with detailed information. Furthermore, it utilizes a non-freetext database structure to store protein sequences and a specific database table with a unique number to store strands, which provides more searching options and rapid, accurate access for data queries. An IIS web server has been set up for data retrieval through a custom web interface, which enables complex data queries. Also searchable are parallel or anti-parallel folded strands and the list of strands in a specified protein.

  2. Nuclear Reaction and Structure Databases of the National Nuclear Data Center

    International Nuclear Information System (INIS)

    Pritychenko, B.; Arcilla, R.; Herman, M. W.; Oblozinsky, P.; Rochman, D.; Sonzogni, A. A.; Tuli, J. K.; Winchell, D. F.

    2006-01-01

    The National Nuclear Data Center (NNDC) collects, evaluates, and disseminates nuclear physics data for basic research and applied nuclear technologies. In 2004, the NNDC migrated all databases into modern relational database software, installed a new generation of Linux servers and developed a new Java-based Web service. These database developments mean much faster, more flexible and more convenient service for all users in the United States. These nuclear reaction and structure database developments, as well as the related Web services, are briefly described

  3. Geologic structure mapping database Spent Fuel Test - Climax, Nevada Test Site

    International Nuclear Information System (INIS)

    Yow, J.L. Jr.

    1984-01-01

    Information on over 2500 discontinuities mapped at the SFT-C is contained in the geologic structure mapping database. Over 1800 of these features include complete descriptions of their orientations. This database is now available for use by other researchers. 6 references, 3 figures, 2 tables

  4. Report on the database structuring project in fiscal 1996 related to the 'surveys on making databases for energy saving (2)'; 1996 nendo database kochiku jigyo hokokusho. Sho energy database system ka ni kansuru chosa 2

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

    With the objective of supporting the promotion of energy conservation in countries such as Japan, China, Indonesia, the Philippines, Thailand, Malaysia, Taiwan and Korea, primary information on energy conservation in each country was collected, and the database was structured. This paper summarizes the achievements of fiscal 1996. Based on the results of the database project to date and on the various data collected, this fiscal year addressed structuring the database for its distribution and proliferation. In the discussion, the functions required of the database, the data items to be recorded, and the processing of the recorded data were put in order, with reference to propositions on the database circumstances. Demonstrations of the proliferation version of the database were performed in the Philippines, Indonesia and China. Three hundred CDs were prepared for distribution in each country. Adjustment and verification of the operation of the supplied computers were carried out, and operation briefings were held in China and the Philippines. (NEDO)

  5. PACSY, a relational database management system for protein structure and chemical shift analysis.

    Science.gov (United States)

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L

    2012-10-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.
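    The key-linked table design described above can be illustrated with two toy tables joined on a shared identification number. The table and column names are illustrative assumptions for the sketch, not the actual PACSY schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE coordinates (
    atom_id  INTEGER PRIMARY KEY,
    res_name TEXT, x REAL, y REAL, z REAL
);
CREATE TABLE chemical_shift (
    atom_id   INTEGER PRIMARY KEY REFERENCES coordinates(atom_id),
    shift_ppm REAL
);
""")
db.execute("INSERT INTO coordinates VALUES (1, 'ALA', 0.0, 1.2, -0.5)")
db.execute("INSERT INTO coordinates VALUES (2, 'GLY', 2.1, 0.7,  0.3)")
db.execute("INSERT INTO chemical_shift VALUES (1, 4.35)")
db.execute("INSERT INTO chemical_shift VALUES (2, 3.97)")

# One query combines structural and spectroscopic information,
# linked by the key identification number, as PACSY's tables are.
rows = db.execute(
    """SELECT c.res_name, s.shift_ppm
       FROM coordinates c JOIN chemical_shift s USING (atom_id)
       WHERE s.shift_ppm > 4.0"""
).fetchall()
```

This is the kind of cross-source combination query (coordinates from one origin, chemical shifts from another) that the abstract says PACSY's advanced search functions support.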

  6. PACSY, a relational database management system for protein structure and chemical shift analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Woonghee, E-mail: whlee@nmrfam.wisc.edu [University of Wisconsin-Madison, National Magnetic Resonance Facility at Madison, and Biochemistry Department (United States); Yu, Wookyung [Center for Proteome Biophysics, Pusan National University, Department of Physics (Korea, Republic of); Kim, Suhkmann [Pusan National University, Department of Chemistry and Chemistry Institute for Functional Materials (Korea, Republic of); Chang, Iksoo [Center for Proteome Biophysics, Pusan National University, Department of Physics (Korea, Republic of); Lee, Weontae, E-mail: wlee@spin.yonsei.ac.kr [Yonsei University, Structural Biochemistry and Molecular Biophysics Laboratory, Department of Biochemistry (Korea, Republic of); Markley, John L., E-mail: markley@nmrfam.wisc.edu [University of Wisconsin-Madison, National Magnetic Resonance Facility at Madison, and Biochemistry Department (United States)

    2012-10-15

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.

  7. PACSY, a relational database management system for protein structure and chemical shift analysis

    Science.gov (United States)

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo

    2012-01-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu. PMID:22903636

  8. PACSY, a relational database management system for protein structure and chemical shift analysis

    International Nuclear Information System (INIS)

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L.

    2012-01-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.

  9. Development of conventional fatigue database for structure materials of nuclear power plant

    International Nuclear Information System (INIS)

    Yang Bing

    2002-01-01

    A management system for the conventional fatigue database for structural materials of nuclear power plants (NPP) has been developed. The database includes the parameters of design curves, i.e., the stress-life, survival probability-stress-life, strain-life, survival probability-strain-life, stress-strain and survival probability-stress-strain curves, together with the corresponding information on materials and testing conditions. Two search paths are provided: by material name or by ranges of material mechanical properties. From the retrieved information, conventional fatigue design analysis and reliability assessment of structures can be performed conveniently
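As a hedged illustration of what storing "parameters of design curves" means in practice, the sketch below stores stress-life coefficients in Basquin form (S = a·N^b) and inverts them to estimate allowable cycles. The material name and coefficient values are invented for the example, not taken from the database described above.

```python
# Illustrative stress-life (S-N) curve store. The Basquin form S = a * N**b
# is assumed; the coefficients below are made up, not values for any real
# NPP structural material.
curves = {
    ("SA508", "stress-life"): {"a": 2000.0, "b": -0.12},
}

def cycles_to_failure(material: str, stress_mpa: float) -> float:
    """Invert S = a * N**b to estimate allowable cycles N at a stress level."""
    p = curves[(material, "stress-life")]
    return (stress_mpa / p["a"]) ** (1.0 / p["b"])

n = cycles_to_failure("SA508", 400.0)
print(round(n))
```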

  10. Database structures and interfaces for W7-X

    International Nuclear Information System (INIS)

    Heimann, P.; Bluhm, T.; Hennig, Ch.; Kroiss, H.; Kuehner, G.; Maier, J.; Riemann, H.; Zilker, M.

    2008-01-01

    The W7-X experiment of the IPP, under construction in Greifswald, Germany, is designed to operate in a quasi-steady-state scenario. The database structures and interfaces used for discharge description and execution have to reflect this continuous mode of operation. In close collaboration between the W7-X control group and the data acquisition group, a combined design of the data structures used for describing the configuration and the operation of the experiment was developed. To guarantee access to this information from all participating stations, a TCP/IP portal and a proxy server were developed. This portal enables, in particular, the VxWorks real-time operating systems of the control stations to access the information in the object-oriented database. The database schema now includes a more functional description of the experiment and gives the physicists a simplified view of the necessary definitions of operational parameters. The scheduling of the long discharges of W7-X will be done by predefining operational parameters in segments and scenarios, where a scenario is a fixed sequence of segments with a common physical background. To hide the specialized information contained in the basic parameters from the experiment leader or physicist, an abstraction layer was introduced that shows only the physically interesting information. An executable segment is generated after verifying the consistency of the high-level parameters, using a transformation function for every basic parameter needed. Since the database contains all configurations and discharge definitions necessary to operate the experiment, it is very important to give the user a tool to manipulate this information in an intuitive way. A special editor (ConfiX) was designed and implemented for this task. At the moment the basic functionality for dealing with all kinds of objects in the database is available. Future releases will extend the functionality to defining and editing configurations, segments
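The segment/scenario scheduling idea above can be sketched in a few lines. Class names, fields, and the transformation function below are illustrative assumptions, not the actual W7-X software interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One segment: only physically interesting high-level parameters."""
    name: str
    high_level: dict

    def to_basic(self) -> dict:
        # Stand-in transformation function: derive one basic parameter per
        # high-level parameter; the real system has one per device setting.
        return {f"basic.{k}": v for k, v in self.high_level.items()}

@dataclass
class Scenario:
    """A scenario is a fixed sequence of segments with a common background."""
    name: str
    segments: list = field(default_factory=list)

    def executable(self) -> list:
        # Verify consistency, then expand every segment to basic parameters.
        assert self.segments, "a scenario must contain at least one segment"
        return [seg.to_basic() for seg in self.segments]

sc = Scenario("quasi-steady-state",
              [Segment("ECRH ramp-up", {"power_MW": 2.0}),
               Segment("flat-top", {"power_MW": 4.0})])
print(sc.executable())
```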

  11. Ultra-Structure database design methodology for managing systems biology data and analyses

    Directory of Open Access Journals (Sweden)

    Hemminger Bradley M

    2009-08-01

    Full Text Available Abstract Background Modern, high-throughput biological experiments generate copious, heterogeneous, interconnected data sets. Research is dynamic, with frequently changing protocols, techniques, instruments, and file formats. Because of these factors, systems designed to manage and integrate modern biological data sets often end up as large, unwieldy databases that become difficult to maintain or evolve. The novel rule-based approach of the Ultra-Structure design methodology presents a potential solution to this problem. By representing both data and processes as formal rules within a database, an Ultra-Structure system constitutes a flexible framework that enables users to explicitly store domain knowledge in both a machine- and human-readable form. End users themselves can change the system's capabilities without programmer intervention, simply by altering database contents; no computer code or schemas need be modified. This provides flexibility in adapting to change, and allows integration of disparate, heterogeneous data sets within a small core set of database tables, facilitating joint analysis and visualization without becoming unwieldy. Here, we examine the application of Ultra-Structure to our ongoing research program for the integration of large proteomic and genomic data sets (proteogenomic mapping). Results We transitioned our proteogenomic mapping information system from a traditional entity-relationship design to one based on Ultra-Structure. Our system integrates tandem mass spectrum data, genomic annotation sets, and spectrum/peptide mappings, all within a small, general framework implemented within a standard relational database system. General software procedures driven by user-modifiable rules can perform tasks such as logical deduction and location-based computations. The system is not tied specifically to proteogenomic research, but is rather designed to accommodate virtually any kind of biological research. Conclusion We find
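A toy version of the rule-based idea, assuming a much simplified rule table: behaviour is changed by editing rows, not code. Nothing below reflects the authors' actual Ultra-Structure schema.

```python
# Rules live as data rows: (condition factors, consequent). End users add
# capabilities by appending rows; no procedures are modified.
rules = [
    ({"kind": "spectrum"}, "store_in_spectra_table"),
    ({"kind": "annotation"}, "store_in_annotation_table"),
]

def apply_rules(record: dict) -> str:
    """Return the consequent of the first rule whose condition matches."""
    for condition, consequent in rules:
        if all(record.get(k) == v for k, v in condition.items()):
            return consequent
    return "reject"

print(apply_rules({"kind": "spectrum", "id": 7}))  # store_in_spectra_table

# Adding a capability means appending a row, not changing code:
rules.append(({"kind": "genome"}, "store_in_genome_table"))
print(apply_rules({"kind": "genome"}))             # store_in_genome_table
```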

  12. Engineering method to build the composite structure ply database

    Directory of Open Access Journals (Sweden)

    Qinghua Shi

    Full Text Available In this paper, a new method to build a composite ply database with engineering design constraints is proposed. The method has two levels: core stacking sequence design and whole stacking sequence design. The core stacking sequences are obtained by a full permutation algorithm that takes into account the ply-ratio requirement and a dispersion character describing the dispersion of ply angles. The whole stacking sequences are combinations of the core stacking sequences. By excluding the ply sequences that do not meet the engineering requirements, the final ply database is obtained. One example, with the constraints that the total layer number is 100 and the ply ratio is 30:60:10, is presented to validate the method. This method provides a new way to set up a ply database based on engineering requirements without adopting intelligent optimization algorithms. Keywords: Composite ply database, VBA program, Structure design, Stacking sequence
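The core-level full-permutation step can be sketched as follows, using a hypothetical 10-ply core meeting the 30:60:10 ratio (3 plies at 0°, 6 at 45°, 1 at 90°) and an illustrative "no more than three identical adjacent plies" rule standing in for the paper's engineering constraints.

```python
from itertools import permutations

# Hypothetical 10-ply core satisfying the 30:60:10 ply-ratio requirement.
plies = [0] * 3 + [45] * 6 + [90] * 1

def max_run(seq):
    """Longest run of identical adjacent ply angles (a dispersion measure)."""
    run = best = 1
    for a, b in zip(seq, seq[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

# Full permutation, then exclude sequences violating the adjacency rule.
cores = {p for p in permutations(plies) if max_run(p) <= 3}
print(len(cores))  # number of distinct admissible core stacking sequences
```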

  13. Database Description - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description General information of database Database name RPSD Alternative name Rice Protein Structure Database DOI 10.18908/lsdba.nbdc00749-000 Creator Creator Name: Toshimasa Yamazaki ... Ibaraki 305-8602, Japan National Institute of Agrobiological Sciences Toshimasa Yamazaki E-mail : Database classification Structure Databases - Protein structure Organism Taxonomy Name: Or...or name(s): Journal: External Links: Original website information Database maintenance site National Institu

  14. Database Description - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description General information of database Database name PSCDB Alternative n...rial Science and Technology (AIST) Takayuki Amemiya E-mail: Database classification Structure Databases - Protein structure Database...554-D558. External Links: Original website information Database maintenance site Graduate School of Informat...available URL of Web services - Need for user registration Not available About This Database Database Description Download License Update History of This Database Site Policy | Contact Us Database Description - PSCDB | LSDB Archive ...

  15. NESSY, a relational PC database for nuclear structure and decay data

    International Nuclear Information System (INIS)

    Boboshin, I.N.; Varlamov, V.V.; Trukhanov, S.K.

    1994-11-01

    The universal relational database NESSY (New ENSDF Search SYstem), based on the international ENSDF system (Evaluated Nuclear Structure Data File), is described. NESSY, which was developed for IBM-compatible PCs, provides highly efficient processing of ENSDF information for searches and retrievals of nuclear physics data. The principles of the database development and examples of applications are presented. (author)

  16. Protein Structural Change Data - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us PSCDB Protein Structural Change Data Data detail Data name Protein Structural Change Data DO...History of This Database Site Policy | Contact Us Protein Structural Change Data - PSCDB | LSDB Archive ...

  17. Structures in the communication between lexicographer and programmer: Database and interface

    DEFF Research Database (Denmark)

    Tarp, Sven

    2015-01-01

    This paper deals exclusively with e-lexicography. It intends to answer the question how much a lexicographer in charge of a new e-dictionary project should know about lexicographical structures, and how this knowledge could be communicated to the IT programmer designing the underlying database and the corresponding user interfaces. With this purpose, it first defines the concepts of lexicographical database, e-dictionary and e-lexicographical structure. Then it discusses some of the new ways in which lexicographical structures express themselves in the digital environment. It stresses, above all, their dynamic character and great complexity, which make it extremely difficult and time-consuming for the lexicographer to get an overview of all the structures in an e-lexicographical project. Finally, and based upon the experience from the planning and design of five e-dictionaries, it shows how...

  18. MPID-T2: a database for sequence-structure-function analyses of pMHC and TR/pMHC structures.

    Science.gov (United States)

    Khan, Javed Mohammed; Cheruku, Harish Reddy; Tong, Joo Chuan; Ranganathan, Shoba

    2011-04-15

    Sequence-structure-function information is critical in understanding the mechanism of pMHC and TR/pMHC binding and recognition. A database for sequence-structure-function information on pMHC and TR/pMHC interactions, MHC-Peptide Interaction Database-TR version 2 (MPID-T2), is now available augmented with the latest PDB and IMGT/3Dstructure-DB data, advanced features and new parameters for the analysis of pMHC and TR/pMHC structures. http://biolinfo.org/mpid-t2. shoba.ranganathan@mq.edu.au Supplementary data are available at Bioinformatics online.

  19. TIPdb-3D: the three-dimensional structure database of phytochemicals from Taiwan indigenous plants.

    Science.gov (United States)

    Tung, Chun-Wei; Lin, Ying-Chi; Chang, Hsun-Shuo; Wang, Chia-Chi; Chen, Ih-Sheng; Jheng, Jhao-Liang; Li, Jih-Heng

    2014-01-01

    The rich indigenous and endemic plants in Taiwan serve as a resourceful bank for biologically active phytochemicals. Based on our TIPdb database curating bioactive phytochemicals from Taiwan indigenous plants, this study presents a three-dimensional (3D) chemical structure database named TIPdb-3D to support the discovery of novel pharmacologically active compounds. The Merck Molecular Force Field (MMFF94) was used to generate 3D structures of phytochemicals in TIPdb. The 3D structures could facilitate the analysis of 3D quantitative structure-activity relationship, the exploration of chemical space and the identification of potential pharmacologically active compounds using protein-ligand docking. Database URL: http://cwtung.kmu.edu.tw/tipdb. © The Author(s) 2014. Published by Oxford University Press.

  20. Intelligent Access to Sequence and Structure Databases (IASSD) - an interface for accessing information from major web databases.

    Science.gov (United States)

    Ganguli, Sayak; Gupta, Manoj Kumar; Basu, Protip; Banik, Rahul; Singh, Pankaj Kumar; Vishal, Vineet; Bera, Abhisek Ranjan; Chakraborty, Hirak Jyoti; Das, Sasti Gopal

    2014-01-01

    With the advent of the age of big data and advances in high-throughput technology, accessing data has become one of the most important steps in the entire knowledge discovery process. Most users are not able to decipher the query results obtained when non-specific keywords or combinations of keywords are used. Intelligent Access to Sequence and Structure Databases (IASSD) is a desktop application for the Windows operating system. It is written in Java and utilizes the Web Service Description Language (WSDL) files and JAR files of the E-utilities of various databases, such as the National Centre for Biotechnology Information (NCBI) and the Protein Data Bank (PDB). In addition, IASSD allows the user to view protein structures using a Jmol application which supports conditional editing. The JAR file is freely available through e-mail from the corresponding author.
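To make the E-utilities usage concrete: the sketch below builds an NCBI ESearch request URL from user keywords with the standard library. It only constructs the URL (no network access), the query term is an arbitrary example, and IASSD's own request plumbing may well differ.

```python
from urllib.parse import urlencode

# Base address of the public NCBI E-utilities endpoints.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(db: str, term: str, retmax: int = 20) -> str:
    """Build an ESearch URL for a keyword query against an NCBI database."""
    return f"{EUTILS}/esearch.fcgi?" + urlencode(
        {"db": db, "term": term, "retmax": retmax})

url = esearch_url("protein", "lysozyme AND human[organism]")
print(url)
```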

  1. TRANSFORMATION OF DEVELOPMENTAL NEUROTOXICITY DATA INTO STRUCTURE-SEARCHABLE TOXML DATABASE IN SUPPORT OF STRUCTURE-ACTIVITY RELATIONSHIP (SAR) WORKFLOW.

    Science.gov (United States)

    Early hazard identification of new chemicals is often difficult due to lack of data on the novel material for toxicity endpoints, including neurotoxicity. At present, there are no structure searchable neurotoxicity databases. A working group was formed to construct a database to...

  2. The Structure-Function Linkage Database.

    Science.gov (United States)

    Akiva, Eyal; Brown, Shoshana; Almonacid, Daniel E; Barber, Alan E; Custer, Ashley F; Hicks, Michael A; Huang, Conrad C; Lauck, Florian; Mashiyama, Susan T; Meng, Elaine C; Mischel, David; Morris, John H; Ojha, Sunil; Schnoes, Alexandra M; Stryke, Doug; Yunes, Jeffrey M; Ferrin, Thomas E; Holliday, Gemma L; Babbitt, Patricia C

    2014-01-01

    The Structure-Function Linkage Database (SFLD, http://sfld.rbvi.ucsf.edu/) is a manually curated classification resource describing structure-function relationships for functionally diverse enzyme superfamilies. Members of such superfamilies are diverse in their overall reactions yet share a common ancestor and some conserved active site features associated with conserved functional attributes such as a partial reaction. Thus, despite their different functions, members of these superfamilies 'look alike', making them easy to misannotate. To address this complexity and enable rational transfer of functional features to unknowns only for those members for which we have sufficient functional information, we subdivide superfamily members into subgroups using sequence information, and lastly into families, sets of enzymes known to catalyze the same reaction using the same mechanistic strategy. Browsing and searching options in the SFLD provide access to all of these levels. The SFLD offers manually curated as well as automatically classified superfamily sets, both accompanied by search and download options for all hierarchical levels. Additional information includes multiple sequence alignments, tab-separated files of functional and other attributes, and sequence similarity networks. The latter provide a new and intuitively powerful way to visualize functional trends mapped to the context of sequence similarity.

  3. Scheme of database structure on decommissioning of the research reactor

    International Nuclear Information System (INIS)

    Park, H. S.; Park, S. K.; Kim, H. R.; Lee, D. K.; Jung, K. J.

    2001-01-01

    ISP (Information Strategy Planning), the first step of overall database development, has been carried out to effectively manage the information and data related to the decommissioning activities of the Korea Research Reactors 1 and 2 (KRR-1 and 2). Since Korea has not yet acquired decommissioning database management technology, the record management systems (RMS) of large nuclear facilities in countries with national experience, such as the U.S.A., Japan, Belgium, and Russia, were reviewed. To construct the database structure, information and data covering the whole range of decommissioning activities, such as working information, radioactive waste treatment, and radiological surveying and analysis, have been extracted from the whole dismantling process. This information and data will be used as the basic data for the matrix analysis that yields the entity-relationship diagram, and will contribute to the establishment of a business system design and the development of a decommissioning database system as well

  4. BtoxDB: a comprehensive database of protein structural data on toxin-antitoxin systems.

    Science.gov (United States)

    Barbosa, Luiz Carlos Bertucci; Garrido, Saulo Santesso; Marchetto, Reinaldo

    2015-03-01

    Toxin-antitoxin (TA) systems are diverse and abundant genetic modules in prokaryotic cells that are typically formed by two genes encoding a stable toxin and a labile antitoxin. Because TA systems are able to repress growth or kill cells and are considered to be important actors in cell persistence (multidrug resistance without genetic change), these modules are considered potential targets for alternative drug design. In this scenario, structural information for the proteins in these systems is highly valuable. In this report, we describe the development of a web-based system, named BtoxDB, that stores all protein structural data on TA systems. The BtoxDB database was implemented as a MySQL relational database using PHP scripting language. Web interfaces were developed using HTML, CSS and JavaScript. The data were collected from the PDB, UniProt and Entrez databases. These data were appropriately filtered using specialized literature and our previous knowledge about toxin-antitoxin systems. The database provides three modules ("Search", "Browse" and "Statistics") that enable searches, acquisition of contents and access to statistical data. Direct links to matching external databases are also available. The compilation of all protein structural data on TA systems in one platform is highly useful for researchers interested in this content. BtoxDB is publicly available at http://www.gurupi.uft.edu.br/btoxdb. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Reactivity of 12-tungstophosphoric acid and its inhibitor potency toward Na+/K+-ATPase: A combined 31P NMR study, ab initio calculations and crystallographic analysis.

    Science.gov (United States)

    Bošnjaković-Pavlović, Nada; Bajuk-Bogdanović, Danica; Zakrzewska, Joanna; Yan, Zeyin; Holclajtner-Antunović, Ivanka; Gillet, Jean-Michel; Spasojević-de Biré, Anne

    2017-11-01

    The influence of 12-tungstophosphoric acid (WPA) on the conversion of adenosine triphosphate (ATP) to adenosine diphosphate (ADP) in the presence of Na+/K+-ATPase was monitored by 31P NMR spectroscopy. It was shown that WPA exhibits an inhibitory effect on Na+/K+-ATPase activity. In order to study WPA reactivity and the intermolecular interactions between WPA oxygen atoms and different proton donor types (D=O, N, C), we have considered data for WPA-based compounds from the Cambridge Structural Database (CSD), the Crystallography Open Database (COD) and the Inorganic Crystal Structure Database (ICSD). The binding properties of the Keggin anion in biological systems are illustrated using the Protein Data Bank (PDB). This work constitutes the first determination of theoretical Bader charges on a polyoxotungstate compound via the Atoms In Molecules theory. An analysis of electrostatic potential maps at the molecular surface and of the charge of WPA, resulting from DFT calculations, suggests that the preferred protonation site corresponds to a WPA bridging oxygen. These results shed light on WPA chemical reactivity and its potential biological applications, such as the inhibition of ATPase activity. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Structure and needs of global loss databases about natural disaster

    Science.gov (United States)

    Steuer, Markus

    2010-05-01

    Global loss databases are used for trend analyses and statistics in scientific projects, in studies for governmental and non-governmental organizations, and in the insurance and finance industry as well. At the moment three global data sets are established: EM-DAT (CRED), Sigma (Swiss Re) and NatCatSERVICE (Munich Re). Together with the Asian Disaster Reduction Center (ADRC) and the United Nations Development Programme (UNDP), these institutions started a collaborative initiative in 2007 with the aim of agreeing on and implementing a common "Disaster Category Classification and Peril Terminology for Operational Databases". This common classification has been established through several technical meetings and working groups and represents a first and important step in the development of a standardized international classification of disasters and terminology of perils. Concretely, this means setting up a common hierarchy and terminology for all global and regional databases on natural disasters and establishing a common, agreed definition of disaster groups, main types and sub-types of events. Georeferencing, temporal aspects, methodology and sourcing are further issues that have been identified and will be discussed. The new structure for global loss databases has already been implemented for Munich Re's NatCatSERVICE. In the following oral session we will show the structure of the global databases as defined and, in addition, give more transparency on the data sets behind published statistics and analyses. Special focus will be on the catastrophe classification, from moderate loss events up to great natural catastrophes, on the quality of sources, and on inside information about the assessment of overall and insured losses. Keywords: disaster category classification, peril terminology, overall and insured losses, definition
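The agreed group/main-type/sub-type hierarchy can be pictured as a small nested mapping. The entries below are typical illustrative examples, not the official agreed terminology list.

```python
# Illustrative disaster hierarchy: group -> main type -> sub-types.
hierarchy = {
    "geophysical": {"earthquake": ["ground shaking", "tsunami"]},
    "meteorological": {"storm": ["tropical cyclone", "winter storm"]},
    "hydrological": {"flood": ["river flood", "flash flood"]},
}

def classify(group: str, main_type: str, sub_type: str) -> bool:
    """Validate an event record against the common classification."""
    return sub_type in hierarchy.get(group, {}).get(main_type, [])

print(classify("meteorological", "storm", "tropical cyclone"))  # True
print(classify("geophysical", "storm", "tsunami"))              # False
```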

  7. Nippon Storm Study design

    Directory of Open Access Journals (Sweden)

    Takashi Kurita

    2012-10-01

    Full Text Available An understanding of the clinical aspects of electrical storm (E-storms in patients with implantable cardiac shock devices (ICSDs: ICDs or cardiac resynchronization therapy with defibrillator [CRT-D] may provide important information for clinical management of patients with ICSDs. The Nippon Storm Study was organized by the Japanese Heart Rhythm Society (JHRS and Japanese Society of Electrocardiology and was designed to prospectively collect a variety of data from patients with ICSDs, with a focus on the incidence of E-storms and clinical conditions for the occurrence of an E-storm. Forty main ICSD centers in Japan are participating in the present study. From 2002, the JHRS began to collect ICSD patient data using website registration (termed Japanese cardiac defibrillator therapy registration, or JCDTR. This investigation aims to collect data on and investigate the general parameters of patients with ICSDs, such as clinical backgrounds of the patients, purposes of implantation, complications during the implantation procedure, and incidence of appropriate and inappropriate therapies from the ICSD. The Nippon Storm Study was planned as a sub-study of the JCDTR with focus on E-storms. We aim to achieve registration of more than 1000 ICSD patients and complete follow-up data collection, with the assumption of a 5–10% incidence of E-storms during the 2-year follow-up.

  8. NETMARK: A Schema-less Extension for Relational Databases for Managing Semi-structured Data Dynamically

    Science.gov (United States)

    Maluf, David A.; Tran, Peter B.

    2003-01-01

    An object-relational database management system is an integrated, hybrid approach that combines the best practices of the relational model, utilizing SQL queries, with the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework called NETMARK is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to manage the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
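A minimal sketch of keyword search spanning both context (element paths) and content (text), using Python's standard XML module rather than Oracle 8i; the document fragment and function are illustrative only.

```python
import xml.etree.ElementTree as ET

# Tiny semi-structured document standing in for an enterprise record.
doc = ET.fromstring(
    "<report><title>Wing inspection</title>"
    "<section><para>Fatigue crack found near rib 4.</para></section></report>")

def keyword_search(root, keyword):
    """Yield (path, text) pairs whose element name or text matches keyword."""
    def walk(node, path):
        here = f"{path}/{node.tag}"
        if keyword in node.tag or keyword in (node.text or ""):
            yield here, (node.text or "").strip()
        for child in node:
            yield from walk(child, here)
    yield from walk(root, "")

hits = list(keyword_search(doc, "crack"))
print(hits)  # [('/report/section/para', 'Fatigue crack found near rib 4.')]
```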

  9. Structural pattern recognition methods based on string comparison for fusion databases

    International Nuclear Information System (INIS)

    Dormido-Canto, S.; Farias, G.; Dormido, R.; Vega, J.; Sanchez, J.; Duro, N.; Vargas, H.; Ratta, G.; Pereira, A.; Portas, A.

    2008-01-01

    Databases for fusion experiments are designed to store several million waveforms. Temporal evolution signals show the same patterns under the same plasma conditions and, therefore, pattern recognition techniques allow the identification of similar plasma behaviours. This article is focused on the comparison of structural pattern recognition methods. A pattern can be composed of simpler sub-patterns, where the most elementary sub-patterns are known as primitives. Selection of primitives is an essential issue in structural pattern recognition methods, because they determine what types of structural components can be constructed. However, it should be noted that there is not a general solution to extract structural features (primitives) from data. So, four different ways to compute the primitives of plasma waveforms are compared: (1) constant length primitives, (2) adaptive length primitives, (3) concavity method and (4) concavity method for noisy signals. Each method defines a code alphabet and, in this way, the pattern recognition problem is carried out via string comparisons. Results of the four methods with the TJ-II stellarator databases will be discussed
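Method (1) above, constant-length primitives, can be sketched as follows: fixed-length windows are encoded as letters of a small code alphabet by their mean slope, and waveforms are then compared as strings (here with difflib standing in for whatever string metric a real system would use). The window length, tolerance, and alphabet are illustrative choices, not the article's parameters.

```python
from difflib import SequenceMatcher

# Code alphabet: falling / constant / rising primitives.
ALPHABET = {-1: "f", 0: "c", 1: "r"}

def to_string(signal, window=4, tol=0.05):
    """Encode a waveform as a string of constant-length primitives."""
    s = []
    for i in range(0, len(signal) - window, window):
        slope = (signal[i + window] - signal[i]) / window
        s.append(ALPHABET[0 if abs(slope) < tol else (1 if slope > 0 else -1)])
    return "".join(s)

a = [i * 0.1 for i in range(20)] + [2.0] * 20        # ramp then flat
b = [i * 0.1 for i in range(20)] + [2.0] * 20        # same plasma behaviour
c = [2.0 - i * 0.1 for i in range(20)] + [0.0] * 20  # mirrored shape

sa, sc = to_string(a), to_string(c)
# Identical waveforms yield identical strings; pattern recognition is
# then carried out via string comparison.
print(sa, sc, SequenceMatcher(None, sa, sc).ratio())
```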

  10. Structural pattern recognition methods based on string comparison for fusion databases

    Energy Technology Data Exchange (ETDEWEB)

    Dormido-Canto, S. [Dpto. Informatica y Automatica - UNED 28040, Madrid (Spain)], E-mail: sebas@dia.uned.es; Farias, G.; Dormido, R. [Dpto. Informatica y Automatica - UNED 28040, Madrid (Spain); Vega, J. [Asociacion EURATOM/CIEMAT para Fusion, 28040, Madrid (Spain); Sanchez, J.; Duro, N.; Vargas, H. [Dpto. Informatica y Automatica - UNED 28040, Madrid (Spain); Ratta, G.; Pereira, A.; Portas, A. [Asociacion EURATOM/CIEMAT para Fusion, 28040, Madrid (Spain)

    2008-04-15

    Databases for fusion experiments are designed to store several million waveforms. Temporal evolution signals show the same patterns under the same plasma conditions and, therefore, pattern recognition techniques allow the identification of similar plasma behaviours. This article is focused on the comparison of structural pattern recognition methods. A pattern can be composed of simpler sub-patterns, where the most elementary sub-patterns are known as primitives. Selection of primitives is an essential issue in structural pattern recognition methods, because they determine what types of structural components can be constructed. However, it should be noted that there is not a general solution to extract structural features (primitives) from data. So, four different ways to compute the primitives of plasma waveforms are compared: (1) constant length primitives, (2) adaptive length primitives, (3) concavity method and (4) concavity method for noisy signals. Each method defines a code alphabet and, in this way, the pattern recognition problem is carried out via string comparisons. Results of the four methods with the TJ-II stellarator databases will be discussed.

  11. A protein relational database and protein family knowledge bases to facilitate structure-based design analyses.

    Science.gov (United States)

    Mobilio, Dominick; Walker, Gary; Brooijmans, Natasja; Nilakantan, Ramaswamy; Denny, R Aldrin; Dejoannis, Jason; Feyfant, Eric; Kowticwar, Rupesh K; Mankala, Jyoti; Palli, Satish; Punyamantula, Sairam; Tatipally, Maneesh; John, Reji K; Humblet, Christine

    2010-08-01

    The Protein Data Bank is the most comprehensive source of experimental macromolecular structures. It can, however, be difficult at times to locate relevant structures with the Protein Data Bank search interface. This is particularly true when searching for complexes containing specific interactions between protein and ligand atoms. Moreover, searching within a family of proteins can be tedious. For example, one cannot search for some conserved residue as residue numbers vary across structures. We describe herein three databases, Protein Relational Database, Kinase Knowledge Base, and Matrix Metalloproteinase Knowledge Base, containing protein structures from the Protein Data Bank. In Protein Relational Database, atom-atom distances between protein and ligand have been precalculated allowing for millisecond retrieval based on atom identity and distance constraints. Ring centroids, centroid-centroid and centroid-atom distances and angles have also been included permitting queries for pi-stacking interactions and other structural motifs involving rings. Other geometric features can be searched through the inclusion of residue pair and triplet distances. In Kinase Knowledge Base and Matrix Metalloproteinase Knowledge Base, the catalytic domains have been aligned into common residue numbering schemes. Thus, by searching across Protein Relational Database and Kinase Knowledge Base, one can easily retrieve structures wherein, for example, a ligand of interest is making contact with the gatekeeper residue.
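The precalculated-distance idea above can be pictured with a toy table: atom-atom distances are computed once at load time, so a search with distance constraints is a plain indexed lookup rather than geometry at query time. SQLite and the schema below are illustrative stand-ins for the actual Protein Relational Database.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Precalculated protein-ligand atom pair distances (illustrative data).
con.execute("""CREATE TABLE contact (
    pdb TEXT, prot_atom TEXT, lig_atom TEXT, dist REAL)""")
con.executemany("INSERT INTO contact VALUES (?,?,?,?)", [
    ("1ABC", "HIS57.NE2", "LIG.O1", 2.8),
    ("1ABC", "SER195.OG", "LIG.C1", 3.4),
    ("2XYZ", "ASP102.OD1", "LIG.N1", 5.1),
])
con.execute("CREATE INDEX idx_contact ON contact (prot_atom, dist)")

# Retrieval by atom identity and distance constraint, e.g. hydrogen-bond
# range contacts to any histidine NE2 atom.
rows = con.execute("""SELECT pdb, lig_atom, dist FROM contact
                      WHERE prot_atom LIKE 'HIS%.NE2'
                        AND dist BETWEEN 2.5 AND 3.2""").fetchall()
print(rows)  # [('1ABC', 'LIG.O1', 2.8)]
```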

  12. Structure health monitoring system using internet and database technologies

    International Nuclear Information System (INIS)

    Kwon, Il Bum; Kim, Chi Yeop; Choi, Man Yong; Lee, Seung Seok

    2003-01-01

    A structural health monitoring system should be developed based on internet and database technologies in order to manage large structures efficiently. The system is operated over the internet, connected to the structure side. The monitoring system has several functions: self-monitoring, self-diagnosis, self-control, etc. Self-monitoring is the sensor fault detection function: if some sensors are not working normally, the system can detect the faulty sensors. The self-diagnosis function repairs abnormal sensor conditions, and self-control is the repair function of the monitoring system itself. In particular, the monitoring system can identify the replacement of sensors. As further work, a real application test will be performed to check for remaining inconveniences.
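A hedged sketch of the self-monitoring (sensor fault detection) function described above: flag channels whose recent readings are flat-lined (a stuck sensor) or outside a plausible range. The thresholds and channel names are invented for the example.

```python
from statistics import pstdev

def faulty_sensors(readings, lo=-100.0, hi=100.0, min_std=1e-3):
    """Return (sensor, reason) pairs for channels that look abnormal."""
    faults = []
    for name, series in readings.items():
        if any(not (lo <= x <= hi) for x in series):
            faults.append((name, "out of range"))
        elif pstdev(series) < min_std:
            faults.append((name, "flat-lined"))
    return faults

readings = {
    "strain_01": [10.1, 10.3, 9.8, 10.0],
    "strain_02": [5.0, 5.0, 5.0, 5.0],     # stuck value
    "accel_07":  [0.1, 250.0, 0.2, 0.1],   # spike beyond plausible range
}
print(faulty_sensors(readings))
# [('strain_02', 'flat-lined'), ('accel_07', 'out of range')]
```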

  13. Structural health monitoring system using internet and database technologies

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Chi Yeop; Choi, Man Yong; Kwon, Il Bum; Lee, Seung Seok [Nonstructive Measurment Lab., KRISS, Daejeon (Korea, Republic of)

    2003-07-01

    A structural health monitoring system should be developed based on internet and database technologies in order to manage large structures efficiently. The system is operated over the internet, connected to the structure side. The monitoring system has several functions: self-monitoring, self-diagnosis, self-control, etc. Self-monitoring is the sensor fault detection function: if some sensors are not working normally, the system can detect the faulty sensors. The self-diagnosis function repairs abnormal sensor conditions, and self-control is the repair function of the monitoring system itself. In particular, the monitoring system can identify the replacement of sensors. As further work, a real application test will be performed to check for remaining inconveniences.

  14. Structure health monitoring system using internet and database technologies

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Il Bum; Kim, Chi Yeop; Choi, Man Yong; Lee, Seung Seok [Smart Measurement Group, Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of)]

    2003-05-15

    A structural health monitoring system should be developed on the basis of internet and database technologies in order to manage large structures efficiently. The system is operated over the internet, connected to the structure site. The monitoring system provides several functions: self-monitoring, self-diagnosis, and self-control. Self-monitoring is the sensor fault detection function: if some sensors are not working normally, the system can detect the faulty sensors. The self-diagnosis function repairs abnormal sensor conditions, and self-control is the repair function of the monitoring system itself. In particular, the monitoring system can identify the replacement of sensors. In further work, a real application test will be performed to check for any remaining inconveniences.

  15. Structural health monitoring system using internet and database technologies

    International Nuclear Information System (INIS)

    Kim, Chi Yeop; Choi, Man Yong; Kwon, Il Bum; Lee, Seung Seok

    2003-01-01

    A structural health monitoring system should be developed on the basis of internet and database technologies in order to manage large structures efficiently. The system is operated over the internet, connected to the structure site. The monitoring system provides several functions: self-monitoring, self-diagnosis, and self-control. Self-monitoring is the sensor fault detection function: if some sensors are not working normally, the system can detect the faulty sensors. The self-diagnosis function repairs abnormal sensor conditions, and self-control is the repair function of the monitoring system itself. In particular, the monitoring system can identify the replacement of sensors. In further work, a real application test will be performed to check for any remaining inconveniences.

  16. The Cambridge Structural Database in retrospect and prospect.

    Science.gov (United States)

    Groom, Colin R; Allen, Frank H

    2014-01-13

    The Cambridge Crystallographic Data Centre (CCDC) was established in 1965 to record numerical, chemical and bibliographic data relating to published organic and metal-organic crystal structures. The Cambridge Structural Database (CSD) now stores data for nearly 700,000 structures and is a comprehensive and fully retrospective historical archive of small-molecule crystallography. Nearly 40,000 new structures are added each year. As X-ray crystallography celebrates its centenary as a subject, and the CCDC approaches its own 50th year, this article traces the origins of the CCDC as a publicly funded organization and its onward development into a self-financing charitable institution. Principally, however, we describe the growth of the CSD and its extensive associated software system, and summarize its impact and value as a basis for research in structural chemistry, materials science and the life sciences, including drug discovery and drug development. Finally, the article considers the CCDC's funding model in relation to open access and open data paradigms. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Comprehensive analysis of the N-glycan biosynthetic pathway using bioinformatics to generate UniCorn: A theoretical N-glycan structure database.

    Science.gov (United States)

    Akune, Yukie; Lin, Chi-Hung; Abrahams, Jodie L; Zhang, Jingyu; Packer, Nicolle H; Aoki-Kinoshita, Kiyoko F; Campbell, Matthew P

    2016-08-05

    Glycan structures attached to proteins are composed of diverse monosaccharide sequences and linkages that are produced from precursor nucleotide-sugars by a series of glycosyltransferases. Databases of these structures are an essential resource for the interpretation of analytical data and the development of bioinformatics tools. However, with no template to predict what structures are possible, the human glycan structure databases are incomplete and rely heavily on the curation of published, experimentally determined, glycan structure data. In this work, a library of 45 human glycosyltransferases was used to generate a theoretical database of N-glycan structures composed of 15 or fewer monosaccharide residues. Enzyme specificities were sourced from major online databases including Kyoto Encyclopedia of Genes and Genomes (KEGG) Glycan, Consortium for Functional Glycomics (CFG), Carbohydrate-Active enZymes (CAZy), GlycoGene DataBase (GGDB) and BRENDA. Based on the known activities, more than 1.1 million theoretical structures and 4.7 million synthetic reactions were generated and stored in our database called UniCorn. Furthermore, we analyzed the differences between the predicted glycan structures in UniCorn and those contained in UniCarbKB (www.unicarbkb.org), a database which stores experimentally described glycan structures reported in the literature, and demonstrate that UniCorn can be used to aid in the assignment of ambiguous structures whilst also serving as a discovery database. Copyright © 2016 Elsevier Ltd. All rights reserved.
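The generation strategy the abstract describes (repeatedly applying known enzyme activities until a residue limit is reached) can be sketched as a small breadth-first rewrite. Each rule below stands in for one glycosyltransferase activity; the seed, residue letters, and rules are invented for illustration and are far simpler than real N-glycan biosynthesis.

```python
from collections import deque

def enumerate_structures(seed, rules, max_residues):
    """Enumerate theoretical structures breadth-first by applying every
    rewrite rule (a stand-in for one enzyme activity) to every structure
    found so far, up to a maximum number of residues."""
    seen = {seed}
    queue = deque([seed])
    while queue:
        s = queue.popleft()
        for apply_enzyme in rules:
            t = apply_enzyme(s)
            if t is not None and len(t) <= max_residues and t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

# Hypothetical toy rules: each "enzyme" appends one residue letter;
# the second acts only when a 'G' acceptor is already present.
rules = [
    lambda s: s + "G",
    lambda s: s + "F" if "G" in s else None,
]
structures = enumerate_structures("M", rules, max_residues=4)
# Yields 8 structures: M, MG, MGG, MGF, MGGG, MGGF, MGFG, MGFF
```

Real enzyme specificities act on branched tree structures rather than strings, but the combinatorial blow-up (1.1 million structures from 45 enzymes) arises in exactly this way.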

  18. MEGADOCK-Web: an integrated database of high-throughput structure-based protein-protein interaction predictions.

    Science.gov (United States)

    Hayashi, Takanori; Matsuzaki, Yuri; Yanagisawa, Keisuke; Ohue, Masahito; Akiyama, Yutaka

    2018-05-08

    Protein-protein interactions (PPIs) play several roles in living cells, and computational PPI prediction is a major focus of many researchers. The three-dimensional (3D) structure and binding surface are important for the design of PPI inhibitors. Therefore, rigid body protein-protein docking calculations for two protein structures are expected to allow elucidation of PPIs different from known complexes in terms of 3D structures because known PPI information is not explicitly required. We have developed rapid PPI prediction software based on protein-protein docking, called MEGADOCK. In order to fully utilize the benefits of computational PPI predictions, it is necessary to construct a comprehensive database to gather prediction results and their predicted 3D complex structures and to make them easily accessible. Although several databases exist that provide predicted PPIs, the previous databases do not contain a sufficient number of entries for the purpose of discovering novel PPIs. In this study, we constructed an integrated database of MEGADOCK PPI predictions, named MEGADOCK-Web. MEGADOCK-Web provides more than 10 times as many PPI predictions as previous databases and enables users to conduct PPI predictions that cannot be found in conventional PPI prediction databases. In MEGADOCK-Web, there are 7528 protein chains and 28,331,628 predicted PPIs from all possible combinations of those proteins. Each protein structure is annotated with PDB ID, chain ID, UniProt AC, related KEGG pathway IDs, and known PPI pairs. Additionally, MEGADOCK-Web provides four powerful functions: 1) searching precalculated PPI predictions, 2) providing annotations for each predicted protein pair with an experimentally known PPI, 3) visualizing candidates that may interact with the query protein on biochemical pathways, and 4) visualizing predicted complex structures through a 3D molecular viewer. MEGADOCK-Web provides a huge amount of comprehensive PPI predictions based on

  19. Database Description - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name ASTRA Alternative n...tics Journal Search: Contact address Database classification Nucleotide Sequence Databases - Gene structure,...3702 Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description The database represents classified p...(10):1211-6. External Links: Original website information Database maintenance site National Institute of Ad... for user registration Not available About This Database Database Description Dow

  20. A Reference Database for Circular Dichroism Spectroscopy Covering Fold and Secondary Structure Space

    International Nuclear Information System (INIS)

    Lees, J.; Miles, A.; Wien, F.; Wallace, B.

    2006-01-01

    Circular Dichroism (CD) spectroscopy is a long-established technique for studying protein secondary structures in solution. Empirical analyses of CD data rely on the availability of reference datasets comprised of far-UV CD spectra of proteins whose crystal structures have been determined. This article reports on the creation of a new reference dataset which effectively covers both secondary structure and fold space, and uses the higher information content available in synchrotron radiation circular dichroism (SRCD) spectra to more accurately predict secondary structure than has been possible with existing reference datasets. It also examines the effects of wavelength range, structural redundancy and different means of categorizing secondary structures on the accuracy of the analyses. In addition, it describes a novel use of hierarchical cluster analyses to identify protein relatedness based on spectral properties alone. The databases are shown to be applicable in both conventional CD and SRCD spectroscopic analyses of proteins. Hence, by combining new bioinformatics and biophysical methods, a database has been produced that should have wide applicability as a tool for structural molecular biology

  1. Database Description - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name SAHG Alternative nam...h: Contact address Chie Motono Tel : +81-3-3599-8067 E-mail : Database classification Structure Databases - ...e databases - Protein properties Organism Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database description... Links: Original website information Database maintenance site The Molecular Profiling Research Center for D...stration Not available About This Database Database Description Download License Update History of This Database Site Policy | Contact Us Database Description - SAHG | LSDB Archive ...

  2. Structural load inventory database for the Kansas City Plant

    International Nuclear Information System (INIS)

    Hashimoto, P.S.; Johnson, M.W.; Nakaki, D.K.; Wilson, J.J.; Lynch, D.T.; Drury, M.A.

    1993-01-01

    A structural load inventory database (LID) has been developed to support configuration management at the DOE Kansas City Plant (KCP). The objective of the LID is to record loads supported by the plant structures and to provide rapid assessments of the impact of future facility modifications on structural adequacy. Development of the LID was initiated for the KCP's Main Manufacturing Building. Field walkdowns were performed to determine all significant loads supported by the structure, including the weight of piping, service equipment, etc. These loads were compiled in the LID. Structural analyses for natural phenomena hazards were performed in accordance with UCRL-15910. Software to calculate demands on the structural members due to gravity loads, total demands including both gravity and seismic loads, and structural member demand-to-capacity ratios were also developed and integrated into the LID. Operation of the LID is menu-driven. The LID user has options to review and print existing loads and corresponding demand-to-capacity ratios, and to update the supported loads and demand-to-capacity ratios for any future facility modifications
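The bookkeeping the abstract describes, combining gravity and seismic demands into per-member demand-to-capacity ratios, can be sketched as below. The member names, capacities, and load values are hypothetical, and a real assessment follows the UCRL-15910 load combinations rather than this simple sum.

```python
def demand_capacity_ratios(capacities, gravity_loads, seismic_loads):
    """For each structural member, report the demand-to-capacity ratio for
    gravity loads alone and for combined gravity + seismic demand.
    Ratios above 1.0 flag a member for engineering review."""
    report = {}
    for member, capacity in capacities.items():
        g = gravity_loads.get(member, 0.0)
        s = seismic_loads.get(member, 0.0)
        report[member] = {"gravity_dc": g / capacity,
                          "total_dc": (g + s) / capacity}
    return report

# Hypothetical members, with capacities and demands in consistent units (kN).
capacities = {"B-101": 50.0, "B-102": 40.0}
gravity = {"B-101": 30.0, "B-102": 36.0}
seismic = {"B-101": 10.0, "B-102": 12.0}
ratios = demand_capacity_ratios(capacities, gravity, seismic)
# B-102 exceeds capacity under combined loading: (36 + 12) / 40 = 1.2
```

Keeping these ratios in the inventory database is what allows the "rapid assessment" of a proposed modification: add the new load, recompute, and check which members cross 1.0.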

  3. Keyword Search in Databases

    CERN Document Server

    Yu, Jeffrey Xu; Chang, Lijun

    2009-01-01

    It has become highly desirable to provide users with flexible ways to query/search information over databases as simple as keyword search like Google search. This book surveys the recent developments on keyword search over databases, and focuses on finding structural information among objects in a database using a set of keywords. Such structural information to be returned can be either trees or subgraphs representing how the objects, that contain the required keywords, are interconnected in a relational database or in an XML database. The structural keyword search is completely different from
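The "structural information among objects" idea the blurb describes can be illustrated with a toy graph walk: nodes are tuples, edges are foreign-key links, and an answer is a connected path whose nodes together cover all keywords. This is a deliberately simplified sketch (real systems enumerate and rank many candidate trees); the graph and labels below are invented.

```python
from collections import deque

def keyword_path(graph, labels, keywords):
    """Toy structural keyword search: from each node containing the first
    keyword, walk the join graph breadth-first, accumulating keywords seen
    along the path; return the first path covering all keywords, else None."""
    starts = [n for n, text in labels.items() if keywords[0] in text]
    for start in starts:
        found0 = {k for k in keywords if k in labels[start]}
        queue = deque([(start, [start], found0)])
        visited = {start}
        while queue:
            node, path, found = queue.popleft()
            if len(found) == len(keywords):
                return path
            for nb in graph.get(node, []):
                if nb not in visited:
                    visited.add(nb)
                    queue.append((nb, path + [nb],
                                  found | {k for k in keywords if k in labels[nb]}))
    return None

# Invented mini join graph: two papers connected through two author tuples.
graph = {"p1": ["a1"], "a1": ["p1", "a2"], "a2": ["a1", "p2"], "p2": ["a2"]}
labels = {"p1": "keyword search", "a1": "Yu", "a2": "Chang", "p2": "databases survey"}
path = keyword_path(graph, labels, ["search", "survey"])
# path connects the two keyword-bearing tuples: ['p1', 'a1', 'a2', 'p2']
```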

  4. Visualizing information across multidimensional post-genomic structured and textual databases.

    Science.gov (United States)

    Tao, Ying; Friedman, Carol; Lussier, Yves A

    2005-04-15

    Visualizing relationships among biological information to facilitate understanding is crucial to biological research during the post-genomic era. Although different systems have been developed to view gene-phenotype relationships for specific databases, very few have been designed specifically as a general flexible tool for visualizing multidimensional genotypic and phenotypic information together. Our goal is to develop a method for visualizing multidimensional genotypic and phenotypic information and a model that unifies different biological databases in order to present the integrated knowledge using a uniform interface. We developed a novel, flexible and generalizable visualization tool, called PhenoGenesviewer (PGviewer), which in this paper was used to display gene-phenotype relationships from a human-curated database (OMIM) and from an automatic method using a Natural Language Processing tool called BioMedLEE. Data obtained from multiple databases were first integrated into a uniform structure and then organized by PGviewer. PGviewer provides a flexible query interface that allows dynamic selection and ordering of any desired dimension in the databases. Based on users' queries, results can be visualized using hierarchical expandable trees that present views specified by users according to their research interests. We believe that this method, which allows users to dynamically organize and visualize multiple dimensions, is a potentially powerful and promising tool that should substantially facilitate biological research. PhenogenesViewer as well as its support and tutorial are available at http://www.dbmi.columbia.edu/pgviewer/ Lussier@dbmi.columbia.edu.

  5. Structure and representation of data elements on factual database - SIST activity in Japan

    International Nuclear Information System (INIS)

    Nakamoto, H.; Onodera, N.

    1990-05-01

    A factual database takes a variety of forms and types of data structure, producing various kinds of records composed of a great number of data items which differ from file to file. Second, a factual database requires greater specialization in content analysis during preparation, and users wish to process downloaded data further for analysis, diagnosis, simulation, projection, design, linguistic processing and so on. A meaningful quantitative datum can be divided into consistent sub-elements. In addition to this fine structure of data elements, the representation of data elements is also very important for integrating factual data into public files. In this paper we discuss, on a practical basis, problems and ideas concerning the structure and representation of data elements contained in numerical information. The guideline discussed here is being drafted under government sponsorship and is being implemented to build a database of space experiments. The guideline covers the expression, unification, notation and handling of data for numerical information in machine-readable form, such as numerical values, numerical formulae, graphics, semi-quantitative values, significant figures, ranged data, accuracy and precision, conversion of units, and error information. (author)

  6. Structure and representation of data elements on factual database - SIST activity in Japan

    Energy Technology Data Exchange (ETDEWEB)

    Nakamoto, H [Integrated Researches for Information Science, Tokyo (Japan); Onodera, N [Japan Information Center of Science and Technology, Tokyo (Japan)

    1990-05-01

    A factual database takes a variety of forms and types of data structure, producing various kinds of records composed of a great number of data items which differ from file to file. Second, a factual database requires greater specialization in content analysis during preparation, and users wish to process downloaded data further for analysis, diagnosis, simulation, projection, design, linguistic processing and so on. A meaningful quantitative datum can be divided into consistent sub-elements. In addition to this fine structure of data elements, the representation of data elements is also very important for integrating factual data into public files. In this paper we discuss, on a practical basis, problems and ideas concerning the structure and representation of data elements contained in numerical information. The guideline discussed here is being drafted under government sponsorship and is being implemented to build a database of space experiments. The guideline covers the expression, unification, notation and handling of data for numerical information in machine-readable form, such as numerical values, numerical formulae, graphics, semi-quantitative values, significant figures, ranged data, accuracy and precision, conversion of units, and error information. (author).

  7. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  8. LAND-deFeND - An innovative database structure for landslides and floods and their consequences.

    Science.gov (United States)

    Napolitano, Elisabetta; Marchesini, Ivan; Salvati, Paola; Donnini, Marco; Bianchi, Cinzia; Guzzetti, Fausto

    2018-02-01

    Information on historical landslides and floods - collectively called "geo-hydrological hazards" - is key to understand the complex dynamics of the events, to estimate the temporal and spatial frequency of damaging events, and to quantify their impact. A number of databases on geo-hydrological hazards and their consequences have been developed worldwide at different geographical and temporal scales. Of the few available database structures that can handle information on both landslides and floods, some are outdated and others were not designed to store, organize, and manage information on single phenomena or on the type and monetary value of the damages and the remediation actions. Here, we present the LANDslides and Floods National Database (LAND-deFeND), a new database structure able to store, organize, and manage in a single digital structure spatial information collected from various sources with different accuracy. In designing LAND-deFeND, we defined four groups of entities, namely: nature-related, human-related, geospatial-related, and information-source-related entities that collectively can describe fully the geo-hydrological hazards and their consequences. In LAND-deFeND, the main entities are the nature-related entities, encompassing: (i) the "phenomenon", a single landslide or local inundation, (ii) the "event", which represents the ensemble of the inundations and/or landslides occurred in a conventional geographical area in a limited period, and (iii) the "trigger", which is the meteo-climatic or seismic cause (trigger) of the geo-hydrological hazards. LAND-deFeND maintains the relations between the nature-related entities and the human-related entities even where the information is partially missing. The physical model of the LAND-deFeND contains 32 tables, including nine input tables, 21 dictionary tables, and two association tables, and ten views, including specific views that make the database structure compliant with the EC INSPIRE and the Floods
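The three nature-related entities and their one-to-many relations can be sketched as a miniature relational schema. The table and column names below are illustrative stand-ins, not LAND-deFeND's actual 32-table physical model.

```python
import sqlite3

# Sketch: a trigger (meteo-climatic or seismic cause) has events (ensembles
# of phenomena in an area and period), which group individual phenomena
# (a single landslide or local inundation). Sample data is invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE trigger_cause (
        id INTEGER PRIMARY KEY,
        kind TEXT CHECK (kind IN ('rainfall', 'snowmelt', 'earthquake'))
    );
    CREATE TABLE event (
        id INTEGER PRIMARY KEY,
        trigger_id INTEGER REFERENCES trigger_cause(id),
        area TEXT, start_date TEXT, end_date TEXT
    );
    CREATE TABLE phenomenon (
        id INTEGER PRIMARY KEY,
        event_id INTEGER REFERENCES event(id),
        kind TEXT CHECK (kind IN ('landslide', 'inundation'))
    );
""")
conn.execute("INSERT INTO trigger_cause VALUES (1, 'rainfall')")
conn.execute("INSERT INTO event VALUES (1, 1, 'Umbria', '2012-11-10', '2012-11-12')")
conn.executemany("INSERT INTO phenomenon VALUES (?, 1, ?)",
                 [(1, 'landslide'), (2, 'inundation')])
# Count phenomena attributable to trigger 1 through its events.
n = conn.execute("""SELECT COUNT(*) FROM phenomenon p
                    JOIN event e ON p.event_id = e.id
                    WHERE e.trigger_id = 1""").fetchone()[0]
```

The key design point survives even in this toy: landslides and floods share one phenomenon table, so a rainfall event producing both is a single record chain rather than two disconnected databases.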

  9. Database Description - TMFunction | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available sidue (or mutant) in a protein. The experimental data are collected from the literature both by searching th...the sequence database, UniProt, structural database, PDB, and literature database

  10. Accessing and using chemical databases

    DEFF Research Database (Denmark)

    Nikolov, Nikolai Georgiev; Pavlov, Todor; Niemelä, Jay Russell

    2013-01-01

    Computer-based representation of chemicals makes it possible to organize data in chemical databases: collections of chemical structures and associated properties. Databases are widely used wherever efficient processing of chemical information is needed, including search, storage, retrieval, and dissemination. Structure and functionality of chemical databases are considered. The typical kinds of information found in a chemical database are considered: identification, structural, and associated data. Functionality of chemical databases is presented, with examples of search and access types. More details are included about the OASIS database and platform and the Danish (Q)SAR Database online. Various types of chemical database resources are discussed, together with a list of examples.

  11. PubChemQC Project: A Large-Scale First-Principles Electronic Structure Database for Data-Driven Chemistry.

    Science.gov (United States)

    Nakata, Maho; Shimazaki, Tomomi

    2017-06-26

    Large-scale molecular databases play an essential role in the investigation of various subjects such as the development of organic materials, in silico drug design, and data-driven studies with machine learning. We have developed a large-scale quantum chemistry database based on first-principles methods. Our database currently contains the ground-state electronic structures of 3 million molecules based on density functional theory (DFT) at the B3LYP/6-31G* level, and we successively calculated 10 low-lying excited states of over 2 million molecules via time-dependent DFT with the B3LYP functional and the 6-31+G* basis set. To select the molecules calculated in our project, we referred to the PubChem Project, which was used as the source of the molecular structures in short strings using the InChI and SMILES representations. Accordingly, we have named our quantum chemistry database project "PubChemQC" ( http://pubchemqc.riken.jp/ ) and placed it in the public domain. In this paper, we show the fundamental features of the PubChemQC database and discuss the techniques used to construct the data set for large-scale quantum chemistry calculations. We also present a machine learning approach to predict the electronic structure of molecules as an example to demonstrate the suitability of the large-scale quantum chemistry database.

  12. SeqHound: biological sequence and structure database as a platform for bioinformatics research

    Directory of Open Access Journals (Sweden)

    Dumontier Michel

    2002-10-01

    Full Text Available Abstract Background SeqHound has been developed as an integrated biological sequence, taxonomy, annotation and 3-D structure database system. It provides a high-performance server platform for bioinformatics research in a locally-hosted environment. Results SeqHound is based on the National Center for Biotechnology Information data model and programming tools. It offers daily updated contents of all Entrez sequence databases in addition to 3-D structural data and information about sequence redundancies, sequence neighbours, taxonomy, complete genomes, functional annotation including Gene Ontology terms and literature links to PubMed. SeqHound is accessible via a web server through a Perl, C or C++ remote API or an optimized local API. It provides functionality necessary to retrieve specialized subsets of sequences, structures and structural domains. Sequences may be retrieved in FASTA, GenBank, ASN.1 and XML formats. Structures are available in ASN.1, XML and PDB formats. Emphasis has been placed on complete genomes, taxonomy, domain and functional annotation as well as 3-D structural functionality in the API, while fielded text indexing functionality remains under development. SeqHound also offers a streamlined WWW interface for simple web-user queries. Conclusions The system has proven useful in several published bioinformatics projects such as the BIND database and offers a cost-effective infrastructure for research. SeqHound will continue to develop and be provided as a service of the Blueprint Initiative at the Samuel Lunenfeld Research Institute. The source code and examples are available under the terms of the GNU public license at the Sourceforge site http://sourceforge.net/projects/slritools/ in the SLRI Toolkit.

  13. Extending the Intermediate Data Structure (IDS) for longitudinal historical databases to include geographic data

    Directory of Open Access Journals (Sweden)

    Finn Hedefalk

    2014-09-01

    Full Text Available The Intermediate Data Structure (IDS) is a standardised database structure for longitudinal historical databases. Such a common structure facilitates data sharing and comparative research. In this study, we propose an extended version of IDS, named IDS-Geo, that also includes geographic data. The geographic data that will be stored in IDS-Geo are primarily buildings and/or property units, and the purpose of these geographic data is mainly to link individuals to places in space. When we want to assign such detailed spatial locations to individuals (in times before there were any detailed house addresses available), we often have to create tailored geographic datasets. In those cases, there are benefits of storing geographic data in the same structure as the demographic data. Moreover, we propose the export of data from IDS-Geo using an eXtensible Markup Language (XML) Schema. IDS-Geo is implemented in a case study using historical property units, for the period 1804 to 1913, stored in a geographically extended version of the Scanian Economic Demographic Database (SEDD). To fit into the IDS-Geo data structure, we included an object lifeline representation of all of the property units (based on the snapshot time representation of single historical maps and poll-tax registers). The case study verifies that the IDS-Geo model is capable of handling geographic data that can be linked to demographic data.
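The core linking idea (an individual is tied to the property unit whose lifeline covers the date of observation) can be sketched with plain date intervals. The unit names and validity periods below are invented, not actual SEDD data.

```python
from datetime import date

# Each geographic object carries a lifeline (valid_from, valid_to); an
# individual observed on a given date is linked to the covering unit.
property_units = [
    {"unit": "Unit 7",  "valid": (date(1804, 1, 1), date(1867, 12, 31))},
    {"unit": "Unit 7A", "valid": (date(1868, 1, 1), date(1913, 12, 31))},
]

def unit_at(units, when):
    """Return the property unit whose lifeline covers `when`, else None."""
    for u in units:
        start, end = u["valid"]
        if start <= when <= end:
            return u["unit"]
    return None  # the individual cannot be placed at that date

home = unit_at(property_units, date(1870, 5, 1))  # falls in Unit 7A's lifeline
```

Storing the lifelines in the same relational structure as the demographic records is what makes this join possible without a separate GIS lookup for every query.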

  14. YAdumper: extracting and translating large information volumes from relational databases to structured flat files.

    Science.gov (United States)

    Fernández, José M; Valencia, Alfonso

    2004-10-12

    Downloading the information stored in relational databases into XML and other flat formats is a common task in bioinformatics. This periodical dumping of information requires considerable CPU time, disk and memory resources. YAdumper has been developed as a purpose-specific tool to deal with the integral structured information download of relational databases. YAdumper is a Java application that organizes database extraction following an XML template based on an external Document Type Declaration. Compared with other non-native alternatives, YAdumper substantially reduces memory requirements and considerably improves writing performance.
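The memory saving the abstract attributes to YAdumper comes from streaming rows out as they are read instead of materializing a whole document tree. A minimal stand-in for this idea (not YAdumper's actual template mechanism, and with an invented schema) looks like this:

```python
import sqlite3
from xml.sax.saxutils import escape

# Invented two-column table standing in for a relational source database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE protein (ac TEXT, name TEXT)")
conn.executemany("INSERT INTO protein VALUES (?, ?)",
                 [("P12345", "Aspartate aminotransferase"),
                  ("P68871", "Hemoglobin subunit beta")])

def dump_xml(conn, table, columns, out):
    """Emit one XML element per row as rows arrive from the cursor, so
    memory stays flat regardless of table size. `out` is any .append-able
    sink (a list here; a file writer in practice)."""
    out.append(f"<{table}s>")
    for row in conn.execute(f"SELECT {', '.join(columns)} FROM {table}"):
        fields = "".join(f"<{c}>{escape(str(v))}</{c}>"
                         for c, v in zip(columns, row))
        out.append(f"  <{table}>{fields}</{table}>")
    out.append(f"</{table}s>")

lines = []
dump_xml(conn, "protein", ["ac", "name"], lines)
xml_doc = "\n".join(lines)
```

A real template-driven tool derives the element layout from a DTD or schema rather than hard-coding it, but the streaming structure is the part that reduces memory requirements.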

  15. Biological knowledge bases using Wikis: combining the flexibility of Wikis with the structure of databases.

    Science.gov (United States)

    Brohée, Sylvain; Barriot, Roland; Moreau, Yves

    2010-09-01

    In recent years, the number of knowledge bases developed using Wiki technology has exploded. Unfortunately, next to their numerous advantages, classical Wikis present a critical limitation: the invaluable knowledge they gather is represented as free text, which hinders their computational exploitation. This is in sharp contrast with the current practice for biological databases, where the data is made available in a structured way. Here, we present WikiOpener, an extension for the classical MediaWiki engine that augments Wiki pages by allowing on-the-fly querying and formatting of resources external to the Wiki. Those resources may provide data extracted from databases or DAS tracks, or even results returned by local or remote bioinformatics analysis tools. This also implies that structured data can be edited via dedicated forms. Hence, this generic resource combines the structure of biological databases with the flexibility of collaborative Wikis. The source code and its documentation are freely available on the MediaWiki website: http://www.mediawiki.org/wiki/Extension:WikiOpener.

  16. A database structure for radiological optimization analyses of decommissioning operations

    International Nuclear Information System (INIS)

    Zeevaert, T.; Van de Walle, B.

    1995-09-01

    The structure of a database for decommissioning experiences is described. Radiological optimization is a major radiation protection principle in practices and interventions, involving radiological protection factors, economic costs, and social factors. An important lack of knowledge with respect to these factors exists in the domain of the decommissioning of nuclear power plants, due to the low number of decommissioning operations performed so far. Moreover, decommissioning takes place only once for an installation. Tasks, techniques, and procedures are in most cases rather specific, limiting the use of past experiences in the radiological optimization analyses of new decommissioning operations. Therefore, it is important that relevant data or information be acquired from decommissioning experiences. These data have to be stored in a database in such a way that they can be used efficiently in ALARA analyses of future decommissioning activities

  17. Psychometric properties of the Sleep Condition Indicator and Insomnia Severity Index in the evaluation of insomnia disorder.

    Science.gov (United States)

    Wong, Mark Lawrence; Lau, Kristy Nga Ting; Espie, Colin A; Luik, Annemarie I; Kyle, Simon D; Lau, Esther Yuet Ying

    2017-05-01

    The Sleep Condition Indicator (SCI) and Insomnia Severity Index (ISI) are commonly used instruments to assess insomnia. We evaluated their psychometric properties, particularly their discriminant validity against structured clinical interview (according to DSM-5 and ICSD-3), and their concurrent validity with measures of sleep and daytime functioning. A total of 158 young adults, 16% of whom were diagnosed with DSM-5 insomnia disorder and 13% with ICSD-3 Chronic Insomnia by structured interview, completed the ISI and SCI twice in 7-14 days, in addition to measures of sleep and daytime function. The Chinese version of the SCI was validated with good psychometric properties (ICC = 0.882). A cutoff of ≥8 on the ISI, ≤5 on the SCI short form, and ≤21 on the SCI achieved high discriminant validity (AUC > 0.85) in identifying individuals with insomnia based on both DSM-5 and ICSD-3 criteria. The SCI and ISI had comparable associations with subjective and objective measures of sleep and with insomnia disorder. Moreover, they showed good concordance with measures of daytime dysfunction, as well as subjective and objective sleep. The SCI and ISI are recommended for use in clinical and research settings. Copyright © 2016 Elsevier B.V. All rights reserved.
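The reported cutoffs translate directly into a screening rule. A minimal sketch, using only the full-scale thresholds quoted above (ISI ≥ 8 positive, SCI ≤ 21 positive) and intended as a screening aid, not a diagnosis:

```python
def insomnia_screen(isi_score, sci_score):
    """Flag probable insomnia from the full-scale cutoffs reported in this
    study (ISI >= 8, SCI <= 21). These thresholds come from one young-adult
    sample and are not universal clinical norms."""
    return {
        "isi_positive": isi_score >= 8,
        "sci_positive": sci_score <= 21,
    }

result = insomnia_screen(isi_score=12, sci_score=18)  # both flags raised
```

Note the scales run in opposite directions: higher ISI means worse insomnia, while higher SCI means better sleep, which is why one cutoff is a floor and the other a ceiling.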

  18. Structural load inventory database for the Kansas City federal complex

    International Nuclear Information System (INIS)

    Hashimoto, P.S.; Johnson, M.W.; Nakaki, D.K.; Lynch, D.T.; Drury, M.A.

    1995-01-01

    A structural load inventory database (LID) has been developed to support configuration management at the DOE Kansas City Plant (KCP). The objective of the LID is to record loads supported by the plant structures and to provide rapid assessments of the impact of future facility modifications on structural adequacy. Development of the LID was initiated for the KCP's Main Manufacturing Building. Field walkdowns were performed to determine all significant loads supported by the structure, including the weight of piping, service equipment, etc. These loads were compiled in the LID. Structural analyses for natural phenomena hazards were performed in accordance with UCRL-15910. Software to calculate demands on the structural members due to gravity loads, total demands including both gravity and seismic loads, and structural member demand-to-capacity ratios was also developed and integrated into the LID. Operation of the LID is menu-driven. The LID user has options to review and print existing loads and corresponding demand-to-capacity ratios, and to update the supported loads and demand-to-capacity ratios for any future facility modifications.
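The demand-to-capacity check that the LID automates can be sketched as follows (member names, load values, and the simple additive combination are hypothetical; the LID's actual load combinations follow UCRL-15910):

```python
def demand_capacity_ratio(gravity_demand, seismic_demand, capacity):
    """Demand-to-capacity ratio for combined gravity plus seismic loading.

    The additive combination here is a simplification for illustration;
    real analyses use the load combinations prescribed by UCRL-15910.
    """
    return (gravity_demand + seismic_demand) / capacity

# Hypothetical members: (gravity demand, seismic demand, capacity)
members = {"B-12": (40.0, 25.0, 100.0), "C-03": (80.0, 35.0, 100.0)}

# Flag members whose combined demand exceeds capacity (D/C > 1.0)
flagged = {name: demand_capacity_ratio(*loads)
           for name, loads in members.items()
           if demand_capacity_ratio(*loads) > 1.0}
print(flagged)
```

In this toy inventory only member C-03 exceeds its capacity, which is exactly the kind of rapid screening the LID performs when a facility modification changes the supported loads.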

  19. Databases, Repositories, and Other Data Resources in Structural Biology.

    Science.gov (United States)

    Zheng, Heping; Porebski, Przemyslaw J; Grabowski, Marek; Cooper, David R; Minor, Wladek

    2017-01-01

    Structural biology, like many other areas of modern science, produces an enormous amount of primary, derived, and "meta" data with a high demand on data storage and manipulations. Primary data come from various steps of sample preparation, diffraction experiments, and functional studies. These data are not only used to obtain tangible results, like macromolecular structural models, but also to enrich and guide our analysis and interpretation of various biomedical problems. Herein we define several categories of data resources, (a) Archives, (b) Repositories, (c) Databases, and (d) Advanced Information Systems, that can accommodate primary, derived, or reference data. Data resources may be used either as web portals or internally by structural biology software. To be useful, each resource must be maintained, curated, as well as integrated with other resources. Ideally, the system of interconnected resources should evolve toward comprehensive "hubs", or Advanced Information Systems. Such systems, encompassing the PDB and UniProt, are indispensable not only for structural biology, but for many related fields of science. The categories of data resources described herein are applicable well beyond our usual scientific endeavors.

  20. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  1. An information integration system for structured documents, Web, and databases

    OpenAIRE

    Morishima, Atsuyuki

    1998-01-01

    Rapid advances in computer network technology have changed the style of computer utilization. Distributed computing resources over world-wide computer networks are available from our local computers. They include powerful computers and a variety of information sources. This change is raising more advanced requirements. Integration of distributed information sources is one such requirement. In addition to conventional databases, structured documents have been widely used, and have increasing...

  2. Managing expectations: assessment of chemistry databases generated by automated extraction of chemical structures from patents.

    Science.gov (United States)

    Senger, Stefan; Bartek, Luca; Papadatos, George; Gaulton, Anna

    2015-12-01

    First public disclosure of new chemical entities often takes place in patents, which makes them an important source of information. However, with an ever increasing number of patent applications, manual processing and curation on such a large scale becomes even more challenging. An alternative approach better suited for this large corpus of documents is the automated extraction of chemical structures. A number of patent chemistry databases generated by using the latter approach are now available, but little is known that can help to manage expectations when using them. This study aims to address this by comparing two such freely available sources, SureChEMBL and IBM SIIP (IBM Strategic Intellectual Property Insight Platform), with manually curated commercial databases. When looking at the percentage of chemical structures successfully extracted from a set of patents, using SciFinder as our reference, 59% and 51% were also found in SureChEMBL and IBM SIIP, respectively. When performing this comparison with compounds as the starting point, i.e. establishing whether, for a list of compounds, the databases provide the links between chemical structures and the patents they appear in, we obtained similar results. SureChEMBL and IBM SIIP found 62% and 59%, respectively, of the compound-patent pairs obtained from Reaxys. In our comparison of automatically generated vs. manually curated patent chemistry databases, the former successfully provided approximately 60% of the links between chemical structures and patents. It needs to be stressed that only a very limited number of patents and compound-patent pairs were used for our comparison. Nevertheless, our results will hopefully help to manage expectations of users of patent chemistry databases of this type and provide a useful framework for more studies like ours as well as guide future developments of the workflows used for the automated extraction of chemical structures from patents. The challenges we have encountered

  3. Database development and management

    CERN Document Server

    Chao, Lee

    2006-01-01

    Introduction to Database Systems: Functions of a Database; Database Management System; Database Components; Database Development Process. Conceptual Design and Data Modeling: Introduction to Database Design Process; Understanding Business Process; Entity-Relationship Data Model; Representing Business Process with Entity-Relationship Model. Table Structure and Normalization: Introduction to Tables; Table Normalization. Transforming Data Models to Relational Databases: DBMS Selection; Transforming Data Models to Relational Databases; Enforcing Constraints; Creating Database for Business Process. Physical Design and Database

  4. Weighted voting-based consensus clustering for chemical structure databases

    Science.gov (United States)

    Saeed, Faisal; Ahmed, Ali; Shamsir, Mohd Shahir; Salim, Naomie

    2014-06-01

    The cluster-based compound selection is used in the lead identification process of drug discovery and design. Many clustering methods have been used for chemical databases, but there is no clustering method that can obtain the best results under all circumstances. However, little attention has been focused on the use of combination methods for chemical structure clustering, which is known as consensus clustering. Recently, consensus clustering has been used in many areas including bioinformatics, machine learning and information theory. This process can improve the robustness, stability, consistency and novelty of clustering. For chemical databases, different consensus clustering methods have been used including the co-association matrix-based, graph-based, hypergraph-based and voting-based methods. In this paper, a weighted cumulative voting-based aggregation algorithm (W-CVAA) was developed. The MDL Drug Data Report (MDDR) benchmark chemical dataset was used in the experiments and represented by the AlogP and ECFP_4 descriptors. The results from the clustering methods were evaluated by the ability of the clustering to separate biologically active molecules in each cluster from inactive ones using different criteria, and the effectiveness of the consensus clustering was compared to that of Ward's method, which is the current standard clustering method in chemoinformatics. This study indicated that weighted voting-based consensus clustering can overcome the limitations of the existing voting-based methods and improve the effectiveness of combining multiple clusterings of chemical structures.
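The core idea of weighted voting over cluster labelings can be sketched minimally as follows. This is not the paper's W-CVAA algorithm itself: it assumes the input labelings have already been aligned to a common label space (e.g. via Hungarian matching), and the weights are given rather than derived:

```python
from collections import defaultdict

def weighted_voting_consensus(labelings, weights):
    """Consensus labels by weighted plurality vote.

    labelings: list of equal-length label lists, assumed pre-aligned
               to a common labeling (label alignment is omitted here).
    weights:   one non-negative weight per labeling.
    """
    n = len(labelings[0])
    consensus = []
    for i in range(n):
        votes = defaultdict(float)
        for labels, w in zip(labelings, weights):
            votes[labels[i]] += w          # accumulate weighted votes
        consensus.append(max(votes, key=votes.get))
    return consensus

labelings = [[0, 0, 1, 1], [0, 0, 1, 0], [0, 1, 1, 1]]
weights = [0.5, 0.3, 0.2]                  # e.g. per-clustering quality scores
print(weighted_voting_consensus(labelings, weights))
```

In a cumulative voting scheme like the paper's, the weight of each base clustering would come from a quality measure rather than being fixed in advance.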

  5. Structural Design of HRA Database using generic task for Quantitative Analysis of Human Performance

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Hwan; Kim, Yo Chan; Choi, Sun Yeong; Park, Jin Kyun; Jung Won Dea [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    This paper describes the design of a generic-task-based HRA database for the quantitative analysis of human performance, intended to estimate the number of task conductions. Estimating the total number of task conductions by direct counting is difficult to realize, and its data collection framework is hard to maintain. To resolve this problem, this paper suggests an indirect method and a database structure using generic tasks that enables estimation of the total number of conductions based on the instructions in the operating procedures of nuclear power plants. In order to reduce human errors, all information on the human errors made by operators in the power plant should be systematically collected and examined. Korea Atomic Energy Research Institute (KAERI) is carrying out research to develop a data collection framework for a Human Reliability Analysis (HRA) database that could be employed as a technical basis to generate human error probabilities (HEPs) and performance shaping factors (PSFs). As a result of the study, the essential table schema was designed for the generic task database, which stores generic tasks, procedure lists, task tree structures, and other supporting tables. Counting task conductions based on the operating procedures for HEP estimation was enabled through the generic task database and framework. To verify the applicability of the framework, a case study of simulated experiments was performed and analyzed using graphical user interfaces developed in this study.

  6. Structural Design of HRA Database using generic task for Quantitative Analysis of Human Performance

    International Nuclear Information System (INIS)

    Kim, Seung Hwan; Kim, Yo Chan; Choi, Sun Yeong; Park, Jin Kyun; Jung Won Dea

    2016-01-01

    This paper describes the design of a generic-task-based HRA database for the quantitative analysis of human performance, intended to estimate the number of task conductions. Estimating the total number of task conductions by direct counting is difficult to realize, and its data collection framework is hard to maintain. To resolve this problem, this paper suggests an indirect method and a database structure using generic tasks that enables estimation of the total number of conductions based on the instructions in the operating procedures of nuclear power plants. In order to reduce human errors, all information on the human errors made by operators in the power plant should be systematically collected and examined. Korea Atomic Energy Research Institute (KAERI) is carrying out research to develop a data collection framework for a Human Reliability Analysis (HRA) database that could be employed as a technical basis to generate human error probabilities (HEPs) and performance shaping factors (PSFs). As a result of the study, the essential table schema was designed for the generic task database, which stores generic tasks, procedure lists, task tree structures, and other supporting tables. Counting task conductions based on the operating procedures for HEP estimation was enabled through the generic task database and framework. To verify the applicability of the framework, a case study of simulated experiments was performed and analyzed using graphical user interfaces developed in this study.

  7. PROCARB: A Database of Known and Modelled Carbohydrate-Binding Protein Structures with Sequence-Based Prediction Tools

    Directory of Open Access Journals (Sweden)

    Adeel Malik

    2010-01-01

    Understanding the three-dimensional structures of proteins that interact with carbohydrates covalently (glycoproteins) as well as noncovalently (protein-carbohydrate complexes) is essential to many biological processes and plays a significant role in normal and disease-associated functions. It is important to have a central repository of knowledge about these protein-carbohydrate complexes, as well as preprocessed data on predicted structures. This can be significantly enhanced by de novo tools that can predict carbohydrate-binding sites for proteins in the absence of an experimentally determined binding-site structure. PROCARB is an open-access database comprising three independently working components, namely, (i) the Core PROCARB module, consisting of three-dimensional structures of protein-carbohydrate complexes taken from the Protein Data Bank (PDB); (ii) the Homology Models module, consisting of manually developed three-dimensional models of N-linked and O-linked glycoproteins of unknown three-dimensional structure; and (iii) the CBS-Pred prediction module, consisting of web servers to predict carbohydrate-binding sites using a single sequence or server-generated PSSM. Several precomputed structural and functional properties of the complexes are also included in the database for quick analysis. In particular, information about function, secondary structure, solvent accessibility, hydrogen bonds, literature references, and so forth is included. In addition, each protein in the database is mapped to UniProt, Pfam, PDB, and so forth.

  8. Collecting Taxes Database

    Data.gov (United States)

    US Agency for International Development — The Collecting Taxes Database contains performance and structural indicators about national tax systems. The database contains quantitative revenue performance...

  9. Serialization and persistent objects turning data structures into efficient databases

    CERN Document Server

    Soukup, Jiri

    2014-01-01

    Recently, the pressure for fast processing and efficient storage of large data with complex relations increased beyond the capability of traditional databases. Typical examples include iPhone applications, computer aided design - both electrical and mechanical, biochemistry applications, and incremental compilers. Serialization, which is sometimes used in such situations, is notoriously tedious and error prone. In this book, Jiri Soukup and Petr Macháček show in detail how to write programs which store their internal data automatically and transparently to disk. Together with special data structure libraries which treat relations among objects as first-class entities, and with a UML class-diagram generator, the core application code is much simplified. The benchmark chapter shows a typical example where persistent data is faster by an order of magnitude than with a traditional database, in both traversing and accessing the data. The authors explore and exploit advanced features of object-oriented languages in a...

  10. CMD: A Database to Store the Bonding States of Cysteine Motifs with Secondary Structures

    Directory of Open Access Journals (Sweden)

    Hamed Bostan

    2012-01-01

    Computational approaches to predicting the disulphide bonding state and its connectivity pattern are based on various descriptors. One such descriptor is the amino acid sequence motif flanking the cysteine residue. Despite the existence of disulphide bonding information in many databases and applications, there is no complete reference and motif query available at the moment. The Cysteine Motif Database (CMD) is the first online resource that stores all cysteine residues, their flanking motifs with their secondary structure, and propensity value assignments derived from laboratory data. We extracted more than 3 million cysteine motifs from PDB and UniProt data, annotated with secondary structure assignment, propensity value assignment, and the frequency and consistency of their bonding status. Removal of redundancies generated 15,875 unique flanking motifs that are always bonded and 41,577 unique patterns that are always nonbonded. Queries are based on protein ID, FASTA sequence, sequence motif, and secondary structure, individually or in batch format, using the provided APIs that allow remote users to query our database via third-party software and/or high-throughput screening/querying. The CMD offers extensive information about bonded and free cysteine residues and their motifs, allowing in-depth characterization of sequence motif composition.

  11. KALIMER database development

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    KALIMER database is an advanced database for the integrated management of liquid metal reactor design technology development using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under mid-term and long-term nuclear R&D. IOC is a linkage control system among sub-projects for sharing and integrating research results for KALIMER. The 3D CAD database provides a schematic overview of the KALIMER design structure. And the reserved documents database was developed to manage the documents and reports produced since the project's accomplishment.

  12. KALIMER database development

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    KALIMER database is an advanced database for the integrated management of liquid metal reactor design technology development using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under mid-term and long-term nuclear R&D. IOC is a linkage control system among sub-projects for sharing and integrating research results for KALIMER. The 3D CAD database provides a schematic overview of the KALIMER design structure. And the reserved documents database was developed to manage the documents and reports produced since the project's accomplishment.

  13. Machine learning approach to detect intruders in database based on hexplet data structure

    Directory of Open Access Journals (Sweden)

    Saad M. Darwish

    2016-09-01

    Most valuable information resources for any organization are stored in the database, so it is a serious matter to protect this information against intruders. However, conventional security mechanisms are not designed to detect anomalous actions of database users. An intrusion detection system (IDS), which delivers an extra layer of security that cannot be guaranteed by built-in security tools, is the ideal solution to defend databases from intruders. This paper suggests an anomaly detection approach that summarizes raw transactional SQL queries into a compact data structure called a hexplet, which can model normal database access behavior (abstracting the user's profile) and recognize impostors, specifically tailored for role-based access control (RBAC) database systems. The hexplet preserves the correlation among SQL statements in the same transaction by exploiting the information in the transaction-log entry, with the aim of improving detection accuracy, especially for insiders who behave anomalously. The model utilizes a naive Bayes classifier (NBC), the simplest supervised learning technique, for creating profiles and evaluating the legitimacy of a transaction. Experimental results show the performance of the proposed model in terms of detection rate.
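The general pattern of summarizing a query into a compact feature tuple and scoring it with a naive Bayes classifier can be sketched as follows. This is a toy stand-in, not the paper's method: the two-field summary and the training data are invented for illustration, whereas a real hexplet captures more fields and the cross-statement correlation within a transaction:

```python
from collections import Counter, defaultdict

def summarize(sql):
    """Toy query summary: (command, target table).
    A real hexplet is richer and transaction-aware."""
    tokens = sql.upper().replace(",", " ").split()
    cmd = tokens[0]
    table = tokens[tokens.index("FROM") + 1] if "FROM" in tokens else "?"
    return (cmd, table)

class NaiveBayes:
    """Minimal categorical naive Bayes with Laplace-style smoothing."""
    def fit(self, X, y):
        self.classes = set(y)
        self.prior = Counter(y)                    # class counts
        self.counts = defaultdict(Counter)         # (class, feature idx) -> value counts
        for features, label in zip(X, y):
            for i, v in enumerate(features):
                self.counts[(label, i)][v] += 1
        return self

    def predict(self, features):
        def score(c):
            s = self.prior[c]
            for i, v in enumerate(features):
                s *= (self.counts[(c, i)][v] + 1) / (self.prior[c] + 2)
            return s
        return max(self.classes, key=score)

X = [summarize(q) for q in ["SELECT name FROM users",
                            "SELECT id FROM users",
                            "DELETE FROM users"]]
y = ["normal", "normal", "anomaly"]
nb = NaiveBayes().fit(X, y)
print(nb.predict(summarize("SELECT email FROM users")))
```

A profile learned per RBAC role, as in the paper, would simply train one such classifier (or one class per role) on that role's transaction log.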

  14. Strabo: An App and Database for Structural Geology and Tectonics Data

    Science.gov (United States)

    Newman, J.; Williams, R. T.; Tikoff, B.; Walker, J. D.; Good, J.; Michels, Z. D.; Ash, J.

    2016-12-01

    Strabo is a data system designed to facilitate digital storage and sharing of structural geology and tectonics data. The data system allows researchers to store and share field and laboratory data as well as construct new multi-disciplinary data sets. Strabo is built on graph database technology, as opposed to a relational database, which provides the flexibility to define relationships between objects of any type. This framework allows observations to be linked in a complex and hierarchical manner that is not possible in traditional database topologies. Thus, the advantage of the Strabo data structure is the ability of graph databases to link objects in both numerous and complex ways, in a manner that more accurately reflects the realities of collecting and organizing geological data sets. The data system is accessible via a mobile interface (iOS and Android devices) that allows these data to be stored, visualized, and shared during primary collection in the field or the laboratory. The Strabo Data System is underlain by the concept of a "Spot," which we define as any observation that characterizes a specific area. This can be anything from a strike and dip measurement of bedding to cross-cutting relationships between faults in complex dissected terrains. Each of these Spots can then contain other Spots and/or measurements (e.g., lithology, slickenlines, displacement magnitude). Hence, the Spot concept is applicable to all relationships and observation sets. Strabo is therefore capable of quantifying and digitally storing large spatial variations and complex geometries of naturally deformed rocks within hierarchically related maps and images. These approaches provide an observational fidelity comparable to a traditional field book, but with the added benefits of digital data storage, processing, and ease of sharing. This approach allows Strabo to integrate seamlessly into the workflow of most geologists. Future efforts will focus on extending Strabo to
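The nested-Spot idea can be sketched as a tiny object model (class, attribute, and Spot names here are hypothetical; Strabo itself stores these relationships in a graph database):

```python
class Spot:
    """A 'Spot': any observation characterizing a specific area.
    Spots may contain other Spots and measurements, so the data form
    a hierarchy/graph rather than fixed relational tables."""
    def __init__(self, name, **attrs):
        self.name = name
        self.attrs = attrs          # arbitrary measurements for this Spot
        self.children = []          # nested Spots

    def add(self, spot):
        self.children.append(spot)
        return spot

# A hypothetical outcrop containing two nested observations
outcrop = Spot("outcrop-1", lithology="granite")
outcrop.add(Spot("bedding", strike=45, dip=30))
outcrop.add(Spot("fault", slickenline_rake=20))
print([s.name for s in outcrop.children])
```

Because any Spot can nest any other, a cross-cutting relationship or a single strike/dip measurement live in the same structure, which is the flexibility the abstract attributes to the graph model.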

  15. Design of Qualitative HRA Database Structure

    International Nuclear Information System (INIS)

    Kim, Seunghwan; Kim, Yochan; Choi, Sun Yeong; Park, Jinkyun; Jung, Wondea

    2015-01-01

    The HRA DB collects and stores data in database form so that they can be managed and maintained from the perspective of human reliability analysis. All information on the human errors made by operators in the power plant should be systematically collected and documented. KAERI is developing a simulator-based HRA data handbook. In this study, the information required to store and manage the data necessary to perform an HRA, i.e., the HRA data to be stored in the handbook, is identified and summarized. In particular, this study summarizes the collection and classification of qualitative data, the raw data from which the HEP is derived, and the corresponding DB process. The qualitative HRA DB is a repository of all the supporting information needed to derive human error probabilities for PSA. In this study, the requirements for the structural design and implementation of the qualitative HRA DB were summarized. A follow-up study on the implementation of the quantitative HRA DB should follow in order to derive the actual HEPs.

  16. A novel approach: chemical relational databases, and the role of the ISSCAN database on assessing chemical carcinogenicity.

    Science.gov (United States)

    Benigni, Romualdo; Bossa, Cecilia; Richard, Ann M; Yang, Chihae

    2008-01-01

    Mutagenicity and carcinogenicity databases are crucial resources for toxicologists and regulators involved in chemicals risk assessment. Until recently, existing public toxicity databases have been constructed primarily as "look-up-tables" of existing data, and most often did not contain chemical structures. Concepts and technologies originated from the structure-activity relationships science have provided powerful tools to create new types of databases, where the effective linkage of chemical toxicity with chemical structure can facilitate and greatly enhance data gathering and hypothesis generation, by permitting: a) exploration across both chemical and biological domains; and b) structure-searchability through the data. This paper reviews the main public databases, together with the progress in the field of chemical relational databases, and presents the ISSCAN database on experimental chemical carcinogens.

  17. The FP4026 Research Database on the fundamental period of RC infilled frame structures.

    Science.gov (United States)

    Asteris, Panagiotis G

    2016-12-01

    The fundamental period of vibration appears to be one of the most critical parameters for the seismic design of buildings because it strongly affects the destructive impact of the seismic forces. In this article, important research data (entitled FP4026 Research Database (Fundamental Period-4026 cases of infilled frames)) based on a detailed and in-depth analytical research on the fundamental period of reinforced concrete structures is presented. In particular, the values of the fundamental period which have been analytically determined are presented, taking into account the majority of the involved parameters. This database can be extremely valuable for the development of new code proposals for the estimation of the fundamental period of reinforced concrete structures fully or partially infilled with masonry walls.

  18. The FP4026 Research Database on the fundamental period of RC infilled frame structures

    Directory of Open Access Journals (Sweden)

    Panagiotis G. Asteris

    2016-12-01

    The fundamental period of vibration appears to be one of the most critical parameters for the seismic design of buildings because it strongly affects the destructive impact of the seismic forces. In this article, important research data (entitled FP4026 Research Database (Fundamental Period-4026 cases of infilled frames)) based on a detailed and in-depth analytical research on the fundamental period of reinforced concrete structures is presented. In particular, the values of the fundamental period which have been analytically determined are presented, taking into account the majority of the involved parameters. This database can be extremely valuable for the development of new code proposals for the estimation of the fundamental period of reinforced concrete structures fully or partially infilled with masonry walls.

  19. MetalS(3), a database-mining tool for the identification of structurally similar metal sites.

    Science.gov (United States)

    Valasatava, Yana; Rosato, Antonio; Cavallaro, Gabriele; Andreini, Claudia

    2014-08-01

    We have developed a database search tool to identify metal sites having structural similarity to a query metal site structure within the MetalPDB database of minimal functional sites (MFSs) contained in metal-binding biological macromolecules. MFSs describe the local environment around the metal(s) independently of the larger context of the macromolecular structure. Such a local environment has a determinant role in tuning the chemical reactivity of the metal, ultimately contributing to the functional properties of the whole system. The database search tool, which we called MetalS(3) (Metal Sites Similarity Search), can be accessed through a Web interface at http://metalweb.cerm.unifi.it/tools/metals3/ . MetalS(3) uses a suitably adapted version of an algorithm that we previously developed to systematically compare the structure of the query metal site with each MFS in MetalPDB. For each MFS, the best superposition is kept. All these superpositions are then ranked according to the MetalS(3) scoring function and are presented to the user in tabular form. The user can interact with the output Web page to visualize the structural alignment or the sequence alignment derived from it. Options to filter the results are available. Test calculations show that the MetalS(3) output correlates well with expectations from protein homology considerations. Furthermore, we describe some usage scenarios that highlight the usefulness of MetalS(3) to obtain mechanistic and functional hints regardless of homology.

  20. Improving decoy databases for protein folding algorithms

    KAUST Repository

    Lindsey, Aaron

    2014-01-01

    Copyright © 2014 ACM. Predicting protein structures and simulating protein folding are two of the most important problems in computational biology today. Simulation methods rely on a scoring function to distinguish the native structure (the most energetically stable) from non-native structures. Decoy databases are collections of non-native structures used to test and verify these functions. We present a method to evaluate and improve the quality of decoy databases by adding novel structures and removing redundant structures. We test our approach on 17 different decoy databases of varying size and type and show significant improvement across a variety of metrics. We also test our improved databases on a popular modern scoring function and show that they contain a greater number of native-like structures than the original databases, thereby producing a more rigorous database for testing scoring functions.
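The redundancy-removal step can be illustrated with a greedy distance filter (a simplified stand-in for the paper's method; in practice `dist` would be a structural metric such as RMSD between decoy conformations):

```python
def prune_redundant(structures, dist, threshold):
    """Greedy redundancy removal: keep a structure only if it is at
    least `threshold` away from every structure already kept."""
    kept = []
    for s in structures:
        if all(dist(s, k) >= threshold for k in kept):
            kept.append(s)
    return kept

# 1-D toy "structures" with absolute difference as the distance metric
vals = [0.0, 0.1, 1.0, 1.05, 2.0]
print(prune_redundant(vals, lambda a, b: abs(a - b), 0.5))
```

The complementary step described in the abstract, adding novel structures, amounts to the same test in reverse: a candidate is novel, and worth adding, exactly when it passes the distance check against the existing database.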

  1. Fiftieth Anniversary of the Cambridge Structural Database and Thirty Years of Its Use in Croatia

    Directory of Open Access Journals (Sweden)

    Kojić-Prodić B.

    2015-07-01

    This article is dedicated to the memory of Dr. F. H. Allen and the 50th anniversary of the Cambridge Crystallographic Data Centre (CCDC), the world-renowned centre for the deposition and curation of crystallographic data, including the atomic coordinates that define the three-dimensional structures of organic molecules and metal complexes containing organic ligands. The mission is clearly stated on the web site (http://www.ccdc.cam.ac.uk): "The Cambridge Crystallographic Data Centre (CCDC) is dedicated to the advancement of chemistry and crystallography for the public benefit through providing high quality information, software and services." The Cambridge Structural Database (CSD), one of the first established electronic databases, is nowadays one of the most significant crystallographic databases in the world. In the International Year of Crystallography 2014, the CSD announced in December over 750,000 deposited structures. The use of this extensive and rapidly growing database requires sophisticated and efficient software for checking, searching, analysing, and visualising structural data. The seminal role of the CSD in research related to crystallography, chemistry, materials science, solid state physics and chemistry, (bio)technology, life sciences, and pharmacology is widely known. Important concerns of the CCDC are the accuracy of deposited data and the development of software for checking the data. Therefore, the Crystallographic Information File (CIF) was introduced as the standard text file format for representing crystallographic information. Among the most important software for users is ConQuest, which enables searching all the CSD information fields, together with its web implementation, WebCSD. Mercury is available for the visualisation of crystal structures and crystal morphology, including intra- and intermolecular interactions with graph-set notations of hydrogen bonds, and for the analysis of geometrical parameters. The CCDC gives even

  2. Applications of the Cambridge Structural Database in chemical education

    Science.gov (United States)

    Battle, Gary M.; Ferrence, Gregory M.; Allen, Frank H.

    2010-01-01

    The Cambridge Structural Database (CSD) is a vast and ever growing compendium of accurate three-dimensional structures that has massive chemical diversity across organic and metal–organic compounds. For these reasons, the CSD is finding significant uses in chemical education, and these applications are reviewed. As part of the teaching initiative of the Cambridge Crystallographic Data Centre (CCDC), a teaching subset of more than 500 CSD structures has been created that illustrate key chemical concepts, and a number of teaching modules have been devised that make use of this subset in a teaching environment. All of this material is freely available from the CCDC website, and the subset can be freely viewed and interrogated using WebCSD, an internet application for searching and displaying CSD information content. In some cases, however, the complete CSD System is required for specific educational applications, and some examples of these more extensive teaching modules are also discussed. The educational value of visualizing real three-dimensional structures, and of handling real experimental results, is stressed throughout. PMID:20877495

  3. DOE Order 5480.28 Hanford facilities database

    Energy Technology Data Exchange (ETDEWEB)

    Hayenga, J.L., Westinghouse Hanford

    1996-09-01

    This document describes the development of a database of DOE and/or leased Hanford Site facilities. The completed database will consist of structure/facility parameters essential to the prioritization of these structures for natural phenomena hazard vulnerability in compliance with DOE Order 5480.28, 'Natural Phenomena Hazards Mitigation'. The prioritization process will be based upon the structure/facility vulnerability to natural phenomena hazards. The ACCESS-based database, 'Hanford Facilities Site Database', is generated from current Hanford Site information and databases.

  4. Logical database design principles

    CERN Document Server

    Garmany, John; Clark, Terry

    2005-01-01

    INTRODUCTION TO LOGICAL DATABASE DESIGN: Understanding a Database; Database Architectures; Relational Databases; Creating the Database; System Development Life Cycle (SDLC); Systems Planning: Assessment and Feasibility; System Analysis: Requirements; System Analysis: Requirements Checklist; Models Tracking and Schedules; Design Modeling; Functional Decomposition Diagram; Data Flow Diagrams; Data Dictionary; Logical Structures and Decision Trees; System Design: Logical. SYSTEM DESIGN AND IMPLEMENTATION: The ER Approach; Entities and Entity Types; Attribute Domains; Attributes; Set-Valued Attributes; Weak Entities; Constraint

  5. Searching the protein structure database for ligand-binding site similarities using CPASS v.2

    Directory of Open Access Journals (Sweden)

    Caprez Adam

    2011-01-01

    Background: A recent analysis of protein sequences deposited in the NCBI RefSeq database indicates that ~8.5 million protein sequences are encoded in prokaryotic and eukaryotic genomes, of which ~30% are explicitly annotated as "hypothetical" or "uncharacterized" proteins. Our Comparison of Protein Active-Site Structures (CPASS) v.2 database and software compares the sequence and structural characteristics of experimentally determined ligand-binding sites to infer a functional relationship in the absence of global sequence or structure similarity. CPASS is an important component of our Functional Annotation Screening Technology by NMR (FAST-NMR) protocol and has been successfully applied to aid the annotation of a number of proteins of unknown function. Findings: We report a major upgrade to our CPASS software and database that significantly improves its broad utility. CPASS v.2 is designed with a layered architecture to increase flexibility and portability, and also enables job distribution over the Open Science Grid (OSG) to increase speed. Similarly, the CPASS interface was enhanced to provide more user flexibility in submitting a CPASS query. CPASS v.2 now allows both automatic and manual definition of ligand-binding sites and permits pair-wise, one-versus-all, one-versus-list, or list-versus-list comparisons. Solvent-accessible surface area, ligand root-mean-square difference, and Cβ distances have been incorporated into the CPASS similarity function to improve the quality of the results. The CPASS database has also been updated. Conclusions: CPASS v.2 is more than an order of magnitude faster than the original implementation and allows multiple simultaneous job submissions. Similarly, the CPASS database of ligand-defined binding sites has grown by ~38%, dramatically increasing the likelihood of a positive search result. The modification to the CPASS similarity function is effective in reducing CPASS similarity scores.

  6. A database paradigm for the management of DICOM-RT structure sets using a geographic information system

    International Nuclear Information System (INIS)

    Shao, Weber; Kupelian, Patrick A; Wang, Jason; Low, Daniel A; Ruan, Dan

    2014-01-01

    We devise a paradigm for representing the DICOM-RT structure sets in a database management system, in such a way that secondary calculations of geometric information can be performed quickly from the existing contour definitions. The implementation of this paradigm is achieved using the PostgreSQL database system and the PostGIS extension, a geographic information system commonly used for encoding geographical map data. The proposed paradigm eliminates the overhead of retrieving large data records from the database, as well as the need to implement various numerical and data parsing routines, when additional information related to the geometry of the anatomy is desired.
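The "secondary calculations of geometric information" mentioned above are the kind of thing PostGIS functions such as ST_Area provide once contours are stored as polygon geometries. As a minimal illustration of the underlying computation only — not the paper's implementation — here is the shoelace-formula area of a closed contour in pure Python (the function name and sample contour are invented):

```python
# Hypothetical sketch: computing the planar area of a closed contour,
# the sort of secondary calculation PostGIS performs in-database.

def contour_area(points):
    """Absolute area of a closed planar contour via the shoelace formula."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the contour
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# A 10 mm x 10 mm square, as a DICOM-RT contour slice might be sampled
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(contour_area(square))  # 100.0
```

Storing the contour as a geometry type lets the database answer such queries directly, which is exactly the overhead the paradigm removes from client code.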

  7. A database paradigm for the management of DICOM-RT structure sets using a geographic information system

    Science.gov (United States)

    Shao, Weber; Kupelian, Patrick A.; Wang, Jason; Low, Daniel A.; Ruan, Dan

    2014-03-01

    We devise a paradigm for representing the DICOM-RT structure sets in a database management system, in such a way that secondary calculations of geometric information can be performed quickly from the existing contour definitions. The implementation of this paradigm is achieved using the PostgreSQL database system and the PostGIS extension, a geographic information system commonly used for encoding geographical map data. The proposed paradigm eliminates the overhead of retrieving large data records from the database, as well as the need to implement various numerical and data parsing routines, when additional information related to the geometry of the anatomy is desired.

  8. Information structure design for databases a practical guide to data modelling

    CERN Document Server

    Mortimer, Andrew J

    2014-01-01

    Computer Weekly Professional Series: Information Structure Design for Databases: A Practical Guide to Data Modeling focuses on practical data modeling covering business and information systems. The publication first offers information on data and information, business analysis, and entity relationship model basics. Discussions cover degree-of-relationship symbols, relationship rules, membership markers, types of information systems, data-driven systems, the cost and value of information, the importance of data modeling, and the quality of information. The book then takes a look at the entity relationship model

  9. Dictionary as Database.

    Science.gov (United States)

    Painter, Derrick

    1996-01-01

    Discussion of dictionaries as databases focuses on the digitizing of The Oxford English Dictionary (OED) and the use of Standard Generalized Markup Language (SGML). Topics include the creation of a consortium to digitize the OED, document structure, relational databases, text forms, sequence, and discourse. (LRW)

  10. Agricultural Conservation Planning Framework: 3. Land Use and Field Boundary Database Development and Structure.

    Science.gov (United States)

    Tomer, Mark D; James, David E; Sandoval-Green, Claudette M J

    2017-05-01

    Conservation planning information is important for identifying options for watershed water quality improvement and can be developed for use at field, farm, and watershed scales. Translation across scales is a key issue impeding progress at watershed scales because watershed improvement goals must be connected with implementation of farm- and field-level conservation practices to demonstrate success. This is particularly true when examining alternatives for "trap and treat" practices implemented at agricultural-field edges to control (or influence) water flows through fields, landscapes, and riparian corridors within agricultural watersheds. We propose that database structures used in developing conservation planning information can achieve translation across conservation-planning scales, and we developed the Agricultural Conservation Planning Framework (ACPF) to enable practical planning applications. The ACPF comprises a planning concept, a database to facilitate field-level and watershed-scale analyses, and an ArcGIS toolbox with Python scripts to identify specific options for placement of conservation practices. This paper appends two prior publications and describes the structure of the ACPF database, which contains land use, crop history, and soils information and is available for download for 6091 HUC12 watersheds located across Iowa, Illinois, Minnesota, and parts of Kansas, Missouri, Nebraska, and Wisconsin and comprises information on 2.74 × 10 agricultural fields (available through /). Sample results examining land use trends across Iowa and Illinois are presented here to demonstrate potential uses of the database. While designed for use with the ACPF toolbox, users are welcome to use the ACPF watershed data in a variety of planning and modeling approaches. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  11. An Investigation of the Fine Spatial Structure of Meteor Streams Using the Relational Database ``Meteor''

    Science.gov (United States)

    Karpov, A. V.; Yumagulov, E. Z.

    2003-05-01

    We have restored and ordered the archive of meteor observations carried out with a meteor radar complex ``KGU-M5'' since 1986. A relational database has been formed under the control of the Database Management System (DBMS) Oracle 8. We also improved and tested a statistical method for studying the fine spatial structure of meteor streams with allowance for the specific features of application of the DBMS. Statistical analysis of the results of observations made it possible to obtain information about the substance distribution in the Quadrantid, Geminid, and Perseid meteor streams.

  12. Migration from relational to NoSQL database

    Science.gov (United States)

    Ghotiya, Sunita; Mandal, Juhi; Kandasamy, Saravanakumar

    2017-11-01

    Data generated by real-time applications, social networking sites, and sensor devices is huge in volume and unstructured, which makes it difficult for relational database management systems to handle. Data is a precious component of any application and needs to be analysed after arranging it in some structure. Relational databases can only deal with structured data, so there is a need for NoSQL database management systems that can also deal with semi-structured data. Relational databases provide the easiest way to manage data, but as the use of NoSQL increases, it is becoming necessary to migrate data from relational to NoSQL databases. Various frameworks have been proposed previously that provide mechanisms for migrating data stored in SQL warehouses, as well as middle-layer solutions that allow data which is not structured to be stored in NoSQL databases. This paper provides a literature review of some of the recent approaches proposed by various researchers to migrate data from relational to NoSQL databases. Some researchers have proposed mechanisms for the co-existence of NoSQL and relational databases. The paper summarises mechanisms that can be used for mapping data stored in relational databases to NoSQL databases, along with various techniques for data transformation and middle-layer solutions.
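One mapping mechanism such migration frameworks commonly describe is folding a one-to-many relation into nested documents. A minimal sketch of that transformation, using invented customer/order tables rather than any surveyed framework's schema:

```python
# Hypothetical sketch: denormalizing relational rows into document-style
# records, as a SQL-to-NoSQL migration step might. Table contents invented.
customers = [(1, "Ada"), (2, "Lin")]                 # (id, name)
orders = [(10, 1, "book"), (11, 1, "pen"), (12, 2, "ink")]  # (id, cust_id, item)

def to_documents(customers, orders):
    # One document per customer, with that customer's orders embedded
    docs = {cid: {"_id": cid, "name": name, "orders": []}
            for cid, name in customers}
    for oid, cid, item in orders:
        docs[cid]["orders"].append({"order_id": oid, "item": item})
    return list(docs.values())

docs = to_documents(customers, orders)
print(docs[0])
```

The join that a relational query would perform at read time is paid once at migration time, which is the usual trade-off these approaches discuss.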

  13. MINIZOO in de Benelux : Structure and use of a database of skin irritating organisms

    NARCIS (Netherlands)

    Bronswijk, van J.E.M.H.; Reichl, E.R.

    1986-01-01

    The MINIZOO database is structured within the standard software package SIRv2 (Scientific Information Retrieval version 2). This flexible program is installed on the university mainframe (a CYBER 180). The program dBASE II, employed on a microcomputer (MICROSOL), can be used for part of data entry and

  14. Human Performance Event Database

    International Nuclear Information System (INIS)

    Trager, E. A.

    1998-01-01

    The purpose of this paper is to describe several aspects of a Human Performance Event Database (HPED) that is being developed by the Nuclear Regulatory Commission. These include the background, the database structure and the basis for that structure, the process for coding and entering event records, the results of preliminary analyses of information in the database, and plans for the future. In 1992, the Office for Analysis and Evaluation of Operational Data (AEOD) within the NRC decided to develop a database of information on human performance during operating events. The database was needed to help classify and categorize the information and to feed operating experience information back to licensees and others. An NRC interoffice working group prepared a list of human performance information that should be reported for events; the list was based on the Human Performance Investigation Process (HPIP) that had been developed by the NRC as an aid in investigating events. The structure of the HPED was based on that list. The HPED currently includes data on events described in augmented inspection team (AIT) and incident investigation team (IIT) reports from 1990 through 1996, AEOD human performance studies from 1990 through 1993, recent NRR special team inspections, and licensee event reports (LERs) that were prepared for the events. (author)

  15. Integrating protein structures and precomputed genealogies in the Magnum database: Examples with cellular retinoid binding proteins

    Directory of Open Access Journals (Sweden)

    Bradley Michael E

    2006-02-01

    Background: When accurate models for the divergent evolution of protein sequences are integrated with complementary biological information, such as folded protein structures, analyses of the combined data often lead to new hypotheses about molecular physiology. This represents an excellent example of how bioinformatics can be used to guide experimental research. However, progress in this direction has been slowed by the lack of a publicly available resource suitable for general use. Results: The precomputed Magnum database offers a solution to this problem for ca. 1,800 full-length protein families with at least one crystal structure. The Magnum deliverables include (1) multiple sequence alignments, (2) mapping of alignment sites to crystal structure sites, (3) phylogenetic trees, (4) inferred ancestral sequences at internal tree nodes, and (5) amino acid replacements along tree branches. Comprehensive evaluations revealed that the automated procedures used to construct Magnum produced accurate models of how proteins divergently evolve, or genealogies, and correctly integrated these with the structural data. To demonstrate Magnum's capabilities, we asked for amino acid replacements requiring three nucleotide substitutions, located at internal protein structure sites, and occurring on short phylogenetic tree branches. In the cellular retinoid-binding protein family, a site that potentially modulates ligand-binding affinity was discovered. Recruitment of cellular retinol-binding protein to function as a lens crystallin in the diurnal gecko afforded another opportunity to showcase the predictive value of a browsable database containing branch replacement patterns integrated with protein structures. Conclusion: We integrated two areas of protein science, evolution and structure, on a large scale and created a precomputed database, known as Magnum, which is the first freely available resource of its kind. Magnum provides evolutionary and structural

  16. The IAEA Illicit Trafficking Database Programme: Operations and Structure

    International Nuclear Information System (INIS)

    2010-01-01

    The IAEA ITDB currently has 90 States participating voluntarily in the database. Information on about 827 incidents, of which 500 involved radioactive sources, has been reported. States provide information by submitting an Information Notification Form. Each incident is assigned an identification number and entered in the database. Information from open sources is collected daily and reviewed; if the information warrants it, a new incident is created in the database.

  17. An XCT image database system

    International Nuclear Information System (INIS)

    Komori, Masaru; Minato, Kotaro; Koide, Harutoshi; Hirakawa, Akina; Nakano, Yoshihisa; Itoh, Harumi; Torizuka, Kanji; Yamasaki, Tetsuo; Kuwahara, Michiyoshi.

    1984-01-01

    In this paper, an expansion of the X-ray CT (XCT) examination history database into an XCT image database is discussed. The XCT examination history database has been constructed and used for daily examination and investigation in our hospital. This database consists of alphanumeric information (locations, diagnoses, and so on) for more than 15,000 cases, and for some of them we add tree-structured image data, which offers flexibility for various types of image data. This database system is written in the MUMPS database manipulation language. (author)

  18. Improving decoy databases for protein folding algorithms

    KAUST Repository

    Lindsey, Aaron; Yeh, Hsin-Yi (Cindy); Wu, Chih-Peng; Thomas, Shawna; Amato, Nancy M.

    2014-01-01

    energetically stable) from non-native structures. Decoy databases are collections of non-native structures used to test and verify these functions. We present a method to evaluate and improve the quality of decoy databases by adding novel structures and removing

  19. Moving Observer Support for Databases

    DEFF Research Database (Denmark)

    Bukauskas, Linas

    Interactive visual data explorations impose rigid requirements on database and visualization systems. Systems that visualize huge amounts of data tend to request large amounts of memory resources and heavily use the CPU to process and visualize data. Current systems employ a loosely coupled … architecture to exchange data between database and visualization. Thus, the interaction of the visualizer and the database is kept to a minimum, which most often leads to superfluous data being passed from database to visualizer. This Ph.D. thesis presents a novel tight coupling of database and visualizer. … The thesis discusses the VR-tree, an extension of the R-tree that enables observer-relative data extraction. To support incremental extraction of data relative to the observer position, the thesis proposes the Volatile Access Structure (VAST). VAST is a main-memory structure that caches nodes of the VR-tree. VAST …
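The observer-relative extraction idea can be caricatured in a few lines: return only the objects within the observer's visibility range, and, as the observer moves, only the increments. This sketch uses invented names and a brute-force distance filter, not the VR-tree/VAST structures themselves:

```python
import math

# Hypothetical sketch of observer-relative, incremental data extraction:
# the visualizer receives only objects near the observer, and on movement
# only the delta (newly visible / newly invisible objects). Data invented.

def visible(objects, observer, radius):
    """IDs of objects within `radius` of the observer position."""
    ox, oy = observer
    return {oid for oid, (x, y) in objects.items()
            if math.hypot(x - ox, y - oy) <= radius}

objects = {1: (0, 0), 2: (5, 0), 3: (40, 40)}
old = visible(objects, (0, 0), 10)    # objects 1 and 2
new = visible(objects, (30, 30), 20)  # object 3
delta_in, delta_out = new - old, old - new  # incremental update to ship
print(delta_in, delta_out)
```

A real index avoids the full scan, but the contract is the same: the database hands the visualizer deltas rather than the whole dataset.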

  20. New Path Based Index Structure for Processing CAS Queries over XML Database

    Directory of Open Access Journals (Sweden)

    Krishna Asawa

    2017-01-01

    Querying nested data has become one of the most challenging issues in retrieving desired information from the Web. Today, diverse applications generate tremendous amounts of data in different formats, and the data and information exchanged on the Web are commonly expressed in nested representations such as XML or JSON. Unlike traditional database systems, these do not have a rigid schema. In general, nested data is managed by storing the data and its structure separately, which significantly reduces retrieval performance, so ensuring efficient processing of queries that locate the exact positions of elements has become a major challenge. Different indexing structures have been proposed in the literature to improve the performance of query processing on nested structures, but most past research concentrates on the structure alone. This paper proposes a new index structure that combines the siblings of terminal nodes into one path, so that twig queries are processed efficiently with fewer lookups and joins. The proposed approach is compared with some existing approaches, and the results show that queries are processed with better performance than the existing ones.
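A path-based index of the general kind described can be sketched in a few lines: map every root-to-leaf path to the leaf values beneath it, so a content-and-structure (CAS) lookup becomes a dictionary access instead of a tree traversal. This is a simplified illustration with an invented document, not the paper's actual structure (which additionally merges sibling terminal nodes into one path):

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch of a path index over nested (XML) data.
# The sample document and tag names are invented for illustration.
doc = ("<lib><book><title>XML</title><year>2017</year></book>"
       "<book><title>JSON</title><year>2015</year></book></lib>")

def build_path_index(xml_text):
    """Map each root-to-leaf tag path to the list of leaf values under it."""
    index = {}
    def walk(node, path):
        path = path + "/" + node.tag
        if len(node) == 0:  # leaf: record its text under the full path
            index.setdefault(path, []).append(node.text)
        for child in node:
            walk(child, path)
    walk(ET.fromstring(xml_text), "")
    return index

index = build_path_index(doc)
print(index["/lib/book/title"])  # ['XML', 'JSON']
```

A CAS query such as "/lib/book/title = 'XML'" then reduces to one lookup plus a value test, which is the performance win path indexes aim for.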

  1. Discovery of a Red-Emitting Li3RbGe8O18:Mn4+ Phosphor in the Alkali-Germanate System: Structural Determination and Electronic Calculations.

    Science.gov (United States)

    Singh, Satendra Pal; Kim, Minseuk; Park, Woon Bae; Lee, Jin-Woong; Sohn, Kee-Sun

    2016-10-17

    A solid-state combinatorial chemistry approach, which used the A-Ge-O (A = Li, K, Rb) system doped with a small amount of Mn4+ as an activator, was adopted in a search for novel red-emitting phosphors. The A site may have been composed of either a single alkali metal ion or a combination of them. This approach led to the discovery of a novel phosphor in the above system with the chemical formula Li3RbGe8O18:Mn4+. The crystal structure of this novel phosphor was solved via direct methods, and subsequent Rietveld refinement revealed a trigonal structure in the P3̅1m space group. The discovered phosphor is believed to be novel in the sense that neither the crystal structure nor the chemical formula matches any of the prototype structures available in the crystallographic information databases (ICDD or ICSD). The measured photoluminescence intensity, which peaked at a wavelength of 667 nm, was found to be much higher than the best intensity obtained among all the existing A2Ge4O9 (A = Li, K, Rb) compounds in the alkali-germanate system. An ab initio calculation based on density functional theory (DFT) was conducted to verify the crystal structure model and to compare the calculated optical band gap with the experimental results. The optical band gaps obtained from diffuse reflectance measurement (5.26 eV) and DFT calculation (4.64 eV) were in very good agreement. The emission wavelength of this phosphor, which lies in the deep red region of the electromagnetic spectrum, may be very useful for increasing the color gamut of LED-based display devices such as ultrahigh-definition television (UHDTV) per the ITU-R BT.2020-2 recommendations, and also for down-converter phosphors used in solar-cell applications.

  2. Fine-structure resolved rotational transitions and database for CN+H2 collisions

    Science.gov (United States)

    Burton, Hannah; Mysliwiec, Ryan; Forrey, Robert C.; Yang, B. H.; Stancil, P. C.; Balakrishnan, N.

    2018-06-01

    Cross sections and rate coefficients for CN+H2 collisions are calculated using the coupled states (CS) approximation. The calculations are benchmarked against more accurate close-coupling (CC) calculations for transitions between low-lying rotational states. Comparisons are made between the two formulations for collision energies greater than 10 cm-1. The CS approximation is used to construct a database which includes highly excited rotational states that are beyond the practical limitations of the CC method. The database includes fine-structure resolved rotational quenching transitions for v = 0 and j ≤ 40, where v and j are the vibrational and rotational quantum numbers of the initial state of the CN molecule. Rate coefficients are computed for both para-H2 and ortho-H2 colliders. The results are shown to be in good agreement with previous calculations, however, the rates are substantially different from mass-scaled CN+He rates that are often used in astrophysical models.

  3. Categorical database generalization in GIS

    NARCIS (Netherlands)

    Liu, Y.

    2002-01-01

    Key words: Categorical database, categorical database generalization, Formal data structure, constraints, transformation unit, classification hierarchy, aggregation hierarchy, semantic similarity, data model,

  4. DOE Order 5480.28 natural phenomena hazards mitigation system, structure, component database

    International Nuclear Information System (INIS)

    Conrads, T.J.

    1997-01-01

    This document describes the Prioritization Phase Database that was prepared for the Project Hanford Management Contractors to support the implementation of DOE Order 5480.28. Included within this document are three appendices which contain the prioritized list of applicable Project Hanford Management Contractors Systems, Structures, and Components. These appendices include those assets that comply with the requirements of DOE Order 5480.28, assets for which a waiver will be recommended, and assets requiring additional information before compliance can be ascertained

  5. Database theory and SQL practice using Access

    International Nuclear Information System (INIS)

    Kim, Gyeong Min; Lee, Myeong Jin

    2001-01-01

    This book introduces database theory and SQL practice using Access. It comprises seven chapters, covering: basic database concepts and DBMSs; relational databases, with examples; building database tables and entering data using Access 2000; an introduction to Structured Query Language; managing data and building complex queries with SQL; advanced SQL commands, including the concepts of joins and virtual tables; and the design of a database for an online bookstore in six steps, together with building the application (its functions, structure, and components, the principles behind its operation, and a review of the programming source for the application menu).
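The join and virtual-table (view) ideas covered in the book can be sketched with the standard-library sqlite3 module in place of Access; the bookstore tables here are invented for illustration:

```python
import sqlite3

# Hypothetical sketch: a join wrapped in a view ("virtual table"),
# the core SQL concepts the book walks through. Schema/data invented.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE authors(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books(id INTEGER PRIMARY KEY, title TEXT,
                       author_id INTEGER REFERENCES authors(id));
    INSERT INTO authors VALUES (1, 'Date'), (2, 'Codd');
    INSERT INTO books VALUES (10, 'SQL Primer', 1), (11, 'Relations', 2);
    -- The view acts as a virtual table built on top of the join
    CREATE VIEW book_list AS
        SELECT b.title, a.name AS author
        FROM books b JOIN authors a ON a.id = b.author_id;
""")
rows = con.execute("SELECT title, author FROM book_list ORDER BY title").fetchall()
print(rows)  # [('Relations', 'Codd'), ('SQL Primer', 'Date')]
```

Once the view exists, client queries read from it like an ordinary table, hiding the join — the same pattern Access exposes as a saved query.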

  6. Protein structure determination by exhaustive search of Protein Data Bank derived databases.

    Science.gov (United States)

    Stokes-Rees, Ian; Sliz, Piotr

    2010-12-14

    Parallel sequence and structure alignment tools have become ubiquitous and invaluable at all levels in the study of biological systems. We demonstrate the application and utility of this same parallel search paradigm to the process of protein structure determination, benefiting from the large and growing corpus of known structures. Such searches were previously computationally intractable. Through the method of Wide Search Molecular Replacement, developed here, they can be completed in a few hours with the aid of national-scale federated cyberinfrastructure. By dramatically expanding the range of models considered for structure determination, we show that small (less than 12% structural coverage) and low sequence identity (less than 20% identity) template structures can be identified through multidimensional template scoring metrics and used for structure determination. Many new macromolecular complexes can benefit significantly from such a technique due to the lack of known homologous protein folds or sequences. We demonstrate the effectiveness of the method by determining the structure of a full-length p97 homologue from Trichoplusia ni. Example cases with the MHC/T-cell receptor complex and the EmoB protein provide systematic estimates of the minimum sequence identity, structure coverage, and structural similarity required for this method to succeed. We describe how this structure-search approach and other novel computationally intensive workflows are made tractable through integration with the US national computational cyberinfrastructure, allowing, for example, rapid processing of the entire Structural Classification of Proteins protein fragment database.

  7. DSSTOX STRUCTURE-SEARCHABLE PUBLIC TOXICITY DATABASE NETWORK: CURRENT PROGRESS AND NEW INITIATIVES TO IMPROVE CHEMO-BIOINFORMATICS CAPABILITIES

    Science.gov (United States)

    The EPA DSSTox website (http://www.epa.gov/nheerl/dsstox) publishes standardized, structure-annotated toxicity databases covering a broad range of toxicity disciplines. Each DSSTox database features documentation written in collaboration with the source authors and toxicity expe...

  8. OECD Structural Analysis Databases: Sectoral Principles in the Study of Markets for Goods and Services

    Directory of Open Access Journals (Sweden)

    Marina D. Simonova

    2015-01-01

    This study focuses on the characteristics of the information database of OECD structural business statistics in the analysis of markets for goods and services and of macroeconomic trends. The system of indicators of structural statistics is presented in OECD publications, with on-line access for a wide range of users. The data collected by the OECD offices are based on submissions from the national statistical offices of member countries, Russia, and the BRICS. Data on the development of economic sectors are calculated according to the methodologies of individual countries and to regional and international standards: annual national accounts, annual industry and business surveys, the methodology of short-term indicators, and statistics of international trade in goods. Data are aggregated on the basis of composite indicators drawn from enterprise questionnaires and business surveys. The information system of structural statistics, which is accessible and continuously updated, has certain distinctive features. It is composed of several subsystems: Structural Statistics on Industry and Services, EU entrepreneurship statistics, Indicators of Industry and Services, and International Trade in Commodities Statistics. The grouping of industries is based on the International Standard Industrial Classification of all economic activities (ISIC), and foreign trade flows are classified in accordance with the Harmonized System of description and coding of goods. The structural statistics databases comprise four classes of industry grouping according to technology intensity. The paper discusses the main reasons for the non-comparability of data between the subsystems in certain time intervals.

  9. A Preliminary Study on the Multiple Mapping Structure of Classification Systems for Heterogeneous Databases

    Directory of Open Access Journals (Sweden)

    Seok-Hyoung Lee

    2012-06-01

    As science and technology information service portals and heterogeneous databases produced in Korea and other countries are integrated, methods of connecting the unique classification systems applied to each database have been studied. The outputs of research, such as journal articles, patent specifications, and research reports, are organically related to each other; if the most basic and meaningful classification systems are not connected, it is difficult to achieve interoperability of the information, and thus not easy to implement meaningful science and technology information services through information convergence. This study addresses this issue by analyzing mapping systems between classification systems in order to design a structure that connects the variety of classification systems used in the academic information database of the Korea Institute of Science and Technology Information, which provides a science and technology information portal service. The study also designs a mapping system for the classification systems to be applied to actual science and technology information services and information management systems.

  10. CamMedNP: building the Cameroonian 3D structural natural products database for virtual screening.

    Science.gov (United States)

    Ntie-Kang, Fidele; Mbah, James A; Mbaze, Luc Meva'a; Lifongo, Lydia L; Scharfe, Michael; Hanna, Joelle Ngo; Cho-Ngwa, Fidelis; Onguéné, Pascal Amoa; Owono Owono, Luc C; Megnassan, Eugene; Sippl, Wolfgang; Efange, Simon M N

    2013-04-16

    Computer-aided drug design (CADD) often involves virtual screening (VS) of large compound datasets, and the availability of such datasets is vital for drug discovery protocols. We present CamMedNP, a new database beginning with more than 2,500 compounds of natural origin, along with some of their derivatives obtained through hemisynthesis. These are pure compounds which have been previously isolated and characterized using modern spectroscopic methods and published by several research teams spread across Cameroon. In the present study, 224 distinct medicinal plant species belonging to 55 plant families from the Cameroonian flora have been considered. About 80% of these have been previously published and/or referenced in internationally recognized journals. For each compound, the optimized 3D structure, drug-like properties, plant source, collection site and currently known biological activities are given, as well as literature references. We have evaluated the "drug-likeness" of this database using Lipinski's "Rule of Five". A diversity analysis has been carried out in comparison with the ChemBridge diverse database. CamMedNP could be highly useful for database screening and natural product lead generation programs.

  11. OECD-FIRE PR02. OECD-FIRE database record structure

    International Nuclear Information System (INIS)

    Kolar, L.

    2005-12-01

    In the coding guidelines, the scope, format, and details of any record required to input a real fire event at a nuclear reactor unit to the international OECD-FIRE database are described in detail. The database was set up in the OECD-FIRE-PR02 code.

  12. Applications of GIS and database technologies to manage a Karst Feature Database

    Science.gov (United States)

    Gao, Y.; Tipping, R.G.; Alexander, E.C.

    2006-01-01

    This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications in both GIS and Database Management System (DBMS) have been developed for the KFD of Minnesota. These applications were used to manage and to enhance the usability of the KFD. Structured Query Language (SQL) was used to manipulate transactions of the database and to facilitate the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology of designing this DBMS is applicable to develop GIS-based databases to analyze and manage geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst and the long-term goal is to expand this database to manage and study karst features at national and global scales.
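The abstract's mention of SQL transactions and database consistency can be illustrated with a small sketch. The table name and columns below are invented for illustration only; the actual KFD schema is not given in the abstract.

```python
import sqlite3

# Hypothetical schema for illustration; the real Karst Feature Database
# tables are not described in the abstract above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE karst_features (
        feature_id   INTEGER PRIMARY KEY,
        feature_type TEXT NOT NULL,   -- e.g. 'sinkhole', 'spring', 'cave'
        county       TEXT,
        utm_easting  REAL,
        utm_northing REAL
    )
""")

# Wrap the inserts in a transaction so the database stays consistent
# if any single insert fails.
with conn:  # commits on success, rolls back on exception
    conn.execute(
        "INSERT INTO karst_features VALUES (?, ?, ?, ?, ?)",
        (1, "sinkhole", "Fillmore", 567200.0, 4837100.0),
    )
    conn.execute(
        "INSERT INTO karst_features VALUES (?, ?, ?, ?, ?)",
        (2, "spring", "Olmsted", 551800.0, 4868400.0),
    )

n = conn.execute(
    "SELECT COUNT(*) FROM karst_features WHERE feature_type = 'sinkhole'"
).fetchone()[0]
print(n)
```

The same transactional pattern extends to updates and deletes, which is what keeps a multi-user feature database recoverable.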

  13. Database citation in full text biomedical articles.

    Science.gov (United States)

    Kafkas, Şenay; Kim, Jee-Hyub; McEntyre, Johanna R

    2013-01-01

    Molecular biology and literature databases represent essential infrastructure for life science research. Effective integration of these data resources requires that there are structured cross-references at the level of individual articles and biological records. Here, we describe the current patterns of how database entries are cited in research articles, based on analysis of the full text Open Access articles available from Europe PMC. Focusing on citation of entries in the European Nucleotide Archive (ENA), UniProt and Protein Data Bank, Europe (PDBe), we demonstrate that text mining doubles the number of structured annotations of database record citations supplied in journal articles by publishers. Many thousands of new literature-database relationships are found by text mining, since these relationships are also not present in the set of articles cited by database records. We recommend that structured annotation of database records in articles is extended to other databases, such as ArrayExpress and Pfam, entries from which are also cited widely in the literature. The very high precision and high-throughput of this text-mining pipeline makes this activity possible both accurately and at low cost, which will allow the development of new integrated data services.
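The accession-citation mining described above can be sketched with regular expressions. The patterns below are drastically simplified illustrations, not the actual Europe PMC pipeline, which uses much richer rules, dictionaries and context filtering.

```python
import re

# Illustrative patterns only -- real text mining of database citations
# needs far more context checking than these two regexes.
ACCESSION_PATTERNS = {
    # PDB codes cited with an explicit "PDB" cue word, e.g. "PDB entry 1TUP"
    "PDBe": re.compile(r"\bPDB\s+(?:entry\s+)?([1-9][A-Za-z][A-Za-z0-9]{2})", re.I),
    # Classic 6-character UniProt accessions, e.g. "P04637"
    "UniProt": re.compile(r"\b([OPQ][0-9][A-Z0-9]{3}[0-9])\b"),
}

def mine_database_citations(text):
    """Return {database: sorted accessions found in the text}."""
    hits = {}
    for db, pattern in ACCESSION_PATTERNS.items():
        found = sorted({m.group(1).upper() for m in pattern.finditer(text)})
        if found:
            hits[db] = found
    return hits

sentence = ("The p53 core domain structure (PDB entry 1TUP) was compared "
            "with the UniProt record P04637 for the same protein.")
print(mine_database_citations(sentence))
```

Collecting such hits per article is what produces the structured literature-database links the abstract recommends extending to other resources.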

  14. Database on wind characteristics. Contents of database bank

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Hansen, K.S.

    2001-01-01

    for the available data in the established database bank and part three is the Users Manual describing the various ways to access and analyse the data. The present report constitutes the second part of the Annex XVII reporting. Basically, the database bank contains three categories of data, i.e. i) high sampled wind...... field time series; ii) high sampled wind turbine structural response time series; and iii) wind resource data. The main emphasis, however, is on category i). The available data, within each of the three categories, are described in detail. The description embraces site characteristics, terrain type......

  15. JICST Factual Database(2)

    Science.gov (United States)

    Araki, Keisuke

    A computer programme that builds atom-bond connection tables from nomenclature has been developed. Chemical substances are input with their nomenclature and a variety of trivial names or experimental code numbers. The chemical structures in the database are stored stereospecifically and can be searched and displayed according to stereochemistry. Source data come from the laws and regulations of Japan, the RTECS of the US, and so on. The database plays a central role within the integrated fact database service of JICST and makes interrelational retrieval possible.

  16. Hazard Analysis Database Report

    Energy Technology Data Exchange (ETDEWEB)

    GAULT, G.W.

    1999-10-13

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for the Tank Waste Remediation System (TWRS) Final Safety Analysis Report (FSAR). The FSAR is part of the approved TWRS Authorization Basis (AB). This document describes, identifies, and defines the contents and structure of the TWRS FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The TWRS Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The database supports the preparation of Chapters 3, 4, and 5 of the TWRS FSAR and the USQ process and consists of two major, interrelated data sets: (1) Hazard Evaluation Database--Data from the results of the hazard evaluations; and (2) Hazard Topography Database--Data from the system familiarization and hazard identification.

  17. Recommendations for ionization chamber smoke detectors for commercial and industrial fire protection systems (1988)

    International Nuclear Information System (INIS)

    1989-01-01

    Ionization chamber smoke detectors (ICSDs) utilising a radioactive substance as the source of ionization are used to detect the presence of smoke and hence give early warning of a fire. These recommendations are intended to ensure that the use of ICSDs incorporating radium-226 and americium-241 in commercial/industrial fire protection systems does not give rise to any unnecessary radiation exposure

  18. A database of new zeolite-like materials.

    Science.gov (United States)

    Pophale, Ramdas; Cheeseman, Phillip A; Deem, Michael W

    2011-07-21

    We here describe a database of computationally predicted zeolite-like materials. These crystals were discovered by a Monte Carlo search for zeolite-like materials. Positions of Si atoms as well as unit cell, space group, density, and number of crystallographically unique atoms were explored in the construction of this database. The database contains over 2.6 M unique structures. Roughly 15% of these are within +30 kJ mol⁻¹ Si of α-quartz, the band in which most of the known zeolites lie. These structures have topological, geometrical, and diffraction characteristics that are similar to those of known zeolites. The database is the result of refinement by two interatomic potentials that both satisfy the Pauli exclusion principle. The database has been deposited in the publicly available PCOD database and in www.hypotheticalzeolites.net/database/deem/. This journal is © the Owner Societies 2011.

  19. Database thinking development in Context of School Education

    OpenAIRE

    Panský, Mikoláš

    2011-01-01

    The term database thinking is understood as the group of competencies that enables working with a database system. Database thinking development is a targeted educational intervention whose expected outcome is the student's ability to work with a database system. The thesis focuses on the purposes, content and methods of database thinking development. The experimental part proposes quantitative metrics for database thinking development. KEYWORDS: Education, Database, Database thinking, Structured Query...

  20. Non-Price Competition and the Structure of the Online Information Industry: Q-Analysis of Medical Databases and Hosts.

    Science.gov (United States)

    Davies, Roy

    1987-01-01

    Discussion of the online information industry emphasizes the effects of non-price competition on its structure and the firms involved. Q-analysis is applied to data on medical databases and hosts, changes over a three-year period are identified, and an optimum structure for the industry based on economic theory is considered. (Author/LRW)

  1. Carotenoids Database: structures, chemical fingerprints and distribution among organisms.

    Science.gov (United States)

    Yabuzaki, Junko

    2017-01-01

    To promote understanding of how organisms are related via carotenoids, either evolutionarily or symbiotically, or in food chains through natural histories, we built the Carotenoids Database. This provides chemical information on 1117 natural carotenoids with 683 source organisms. For extracting organisms closely related through the biosynthesis of carotenoids, we offer a new similarity search system 'Search similar carotenoids' using our original chemical fingerprint 'Carotenoid DB Chemical Fingerprints'. These Carotenoid DB Chemical Fingerprints describe the chemical substructure and the modification details based upon International Union of Pure and Applied Chemistry (IUPAC) semi-systematic names of the carotenoids. The fingerprints also allow (i) easier prediction of six biological functions of carotenoids: provitamin A, membrane stabilizers, odorous substances, allelochemicals, antiproliferative activity and reverse MDR activity against cancer cells, (ii) easier classification of carotenoid structures, (iii) partial and exact structure searching and (iv) easier extraction of structural isomers and stereoisomers. We believe this to be the first attempt to establish fingerprints using the IUPAC semi-systematic names. For extracting close profiled organisms, we provide a new tool 'Search similar profiled organisms'. Our current statistics show some insights into natural history: carotenoids seem to have been spread largely by bacteria, as they produce C30, C40, C45 and C50 carotenoids, with the widest range of end groups, and they share a small portion of C40 carotenoids with eukaryotes. Archaea share an even smaller portion with eukaryotes. Eukaryotes then have evolved a considerable variety of C40 carotenoids. Considering carotenoids, eukaryotes seem more closely related to bacteria than to archaea aside from 16S rRNA lineage analysis. Database URL: http://carotenoiddb.jp. © The Author(s) 2017. Published by Oxford University Press.
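A fingerprint-based similarity search of the kind offered by 'Search similar carotenoids' can be sketched with the Tanimoto coefficient. The bit names and example fingerprints below are invented for illustration; they are not the database's actual fingerprint definitions.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprints given as sets of on-bits."""
    if not fp_a and not fp_b:
        return 0.0
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)

# Invented structural keys -- real fingerprints encode substructures and
# modifications derived from IUPAC semi-systematic names.
fingerprints = {
    "beta-carotene": {"C40", "beta-ring", "beta-ring2", "conjugated-11"},
    "zeaxanthin":    {"C40", "beta-ring", "beta-ring2", "conjugated-11", "3-OH", "3'-OH"},
    "lycopene":      {"C40", "acyclic", "conjugated-11"},
}

# Rank the other compounds by similarity to the query carotenoid.
query = fingerprints["beta-carotene"]
ranked = sorted(
    ((name, tanimoto(query, fp))
     for name, fp in fingerprints.items() if name != "beta-carotene"),
    key=lambda t: -t[1],
)
for name, score in ranked:
    print(f"{name}: {score:.2f}")
```

Here zeaxanthin (the hydroxylated analogue) ranks above the acyclic lycopene, which is the kind of neighborhood such a search surfaces.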

  2. Report on the achievements in the Sunshine Project in fiscal 1986. Surveys on systems to structure a coal liquefaction database; 1986 nendo sekitan ekika database kochiku no tame no system chosa seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1987-03-01

    Surveys are carried out on the current status of information control systems for development projects being performed or planned in developing coal liquefaction technologies. The conception for structuring a coal liquefaction database (CLDB) is made clear to manage comprehensively and utilize effectively the information in the systems. Section 3 investigated and analyzed the current status of data processing for experimental plants. The data for each experimental plant are processed individually; it is therefore preferable that the CLDB provide separate locations to receive the respective data. Section 4 put into order the flows of operation and information in the coal liquefaction research, and depicted an overall configuration diagram for the system. Section 5 discusses problems in structuring this system. A large number of problems remain to be discussed, not only in the technological aspect, but also in analyzing the organizational roles of NEDO and the commissioned business entities, and the needs of users. The last section summarizes the steps and schedule for developing this system. The development steps should preferably be implemented stepwise along the progress of the experimental plants, in the order of the fundamental database, the analysis database and the engineering database. (NEDO)

  3. Hazard Analysis Database Report

    CERN Document Server

    Grams, W H

    2000-01-01

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: Data from t...

  4. An integrated photogrammetric and spatial database management system for producing fully structured data using aerial and remote sensing images.

    Science.gov (United States)

    Ahmadi, Farshid Farnood; Ebadi, Hamid

    2009-01-01

    3D spatial data acquired from aerial and remote sensing images by photogrammetric techniques is one of the most accurate and economic data sources for GIS, map production, and spatial data updating. However, there are still many problems concerning storage, structuring and appropriate management of spatial data obtained using these techniques. According to the capabilities of spatial database management systems (SDBMSs); direct integration of photogrammetric and spatial database management systems can save time and cost of producing and updating digital maps. This integration is accomplished by replacing digital maps with a single spatial database. Applying spatial databases overcomes the problem of managing spatial and attributes data in a coupled approach. This management approach is one of the main problems in GISs for using map products of photogrammetric workstations. Also by the means of these integrated systems, providing structured spatial data, based on OGC (Open GIS Consortium) standards and topological relations between different feature classes, is possible at the time of feature digitizing process. In this paper, the integration of photogrammetric systems and SDBMSs is evaluated. Then, different levels of integration are described. Finally design, implementation and test of a software package called Integrated Photogrammetric and Oracle Spatial Systems (IPOSS) is presented.

  5. An Integrated Photogrammetric and Spatial Database Management System for Producing Fully Structured Data Using Aerial and Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Farshid Farnood Ahmadi

    2009-03-01

    3D spatial data acquired from aerial and remote sensing images by photogrammetric techniques is one of the most accurate and economic data sources for GIS, map production, and spatial data updating. However, there are still many problems concerning storage, structuring and appropriate management of spatial data obtained using these techniques. According to the capabilities of spatial database management systems (SDBMSs); direct integration of photogrammetric and spatial database management systems can save time and cost of producing and updating digital maps. This integration is accomplished by replacing digital maps with a single spatial database. Applying spatial databases overcomes the problem of managing spatial and attributes data in a coupled approach. This management approach is one of the main problems in GISs for using map products of photogrammetric workstations. Also by the means of these integrated systems, providing structured spatial data, based on OGC (Open GIS Consortium) standards and topological relations between different feature classes, is possible at the time of feature digitizing process. In this paper, the integration of photogrammetric systems and SDBMSs is evaluated. Then, different levels of integration are described. Finally design, implementation and test of a software package called Integrated Photogrammetric and Oracle Spatial Systems (IPOSS) is presented.

  6. FDA toxicity databases and real-time data entry

    International Nuclear Information System (INIS)

    Arvidson, Kirk B.

    2008-01-01

    Structure-searchable electronic databases are valuable new tools that are assisting the FDA in its mission to promptly and efficiently review incoming submissions for regulatory approval of new food additives and food contact substances. The Center for Food Safety and Applied Nutrition's Office of Food Additive Safety (CFSAN/OFAS), in collaboration with Leadscope, Inc., is consolidating genetic toxicity data submitted in food additive petitions from the 1960s to the present day. The Center for Drug Evaluation and Research, Office of Pharmaceutical Science's Informatics and Computational Safety Analysis Staff (CDER/OPS/ICSAS) is separately gathering similar information from their submissions. Presently, these data are distributed in various locations such as paper files, microfiche, and non-standardized toxicology memoranda. The organization of the data into a consistent, searchable format will reduce paperwork, expedite the toxicology review process, and provide valuable information to industry that is currently available only to the FDA. Furthermore, by combining chemical structures with genetic toxicity information, biologically active moieties can be identified and used to develop quantitative structure-activity relationship (QSAR) modeling and testing guidelines. Additionally, chemicals devoid of toxicity data can be compared to known structures, allowing for improved safety review through the identification and analysis of structural analogs. Four database frameworks have been created: bacterial mutagenesis, in vitro chromosome aberration, in vitro mammalian mutagenesis, and in vivo micronucleus. Controlled vocabularies for these databases have been established. The four separate genetic toxicity databases are compiled into a single, structurally-searchable database for easy accessibility of the toxicity information. Beyond the genetic toxicity databases described here, additional databases for subchronic, chronic, and teratogenicity studies have been prepared.

  7. The magnet components database system

    International Nuclear Information System (INIS)

    Baggett, M.J.; Leedy, R.; Saltmarsh, C.; Tompkins, J.C.

    1990-01-01

    The philosophy, structure, and usage of MagCom, the SSC magnet components database, are described. The database has been implemented in Sybase (a powerful relational database management system) on a UNIX-based workstation at the Superconducting Super Collider Laboratory (SSCL); magnet project collaborators can access the database via network connections. The database was designed to contain the specifications and measured values of important properties for major materials, plus configuration information (specifying which individual items were used in each cable, coil, and magnet) and the test results on completed magnets. The data will facilitate the tracking and control of the production process as well as the correlation of magnet performance with the properties of its constituents. 3 refs., 9 figs

  8. The magnet components database system

    International Nuclear Information System (INIS)

    Baggett, M.J.; Leedy, R.; Saltmarsh, C.; Tompkins, J.C.

    1990-01-01

    The philosophy, structure, and usage of MagCom, the SSC magnet components database, are described. The database has been implemented in Sybase (a powerful relational database management system) on a UNIX-based workstation at the Superconducting Super Collider Laboratory (SSCL); magnet project collaborators can access the database via network connections. The database was designed to contain the specifications and measured values of important properties for major materials, plus configuration information (specifying which individual items were used in each cable, coil, and magnet) and the test results on completed magnets. These data will facilitate the tracking and control of the production process as well as the correlation of magnet performance with the properties of its constituents. 3 refs., 10 figs

  9. Structure and contents of a new geomorphological GIS database linked to a geomorphological map — With an example from Liden, central Sweden

    Science.gov (United States)

    Gustavsson, Marcus; Seijmonsbergen, Arie C.; Kolstrup, Else

    2008-03-01

    This paper presents the structure and contents of a standardised geomorphological GIS database that stores comprehensive scientific geomorphological data and constitutes the basis for processing and extracting spatial thematic data. The geodatabase contains spatial information on morphography/morphometry, hydrography, lithology, genesis, processes and age. A unique characteristic of the GIS geodatabase is that it is constructed in parallel with a new comprehensive geomorphological mapping system designed with GIS applications in mind. This close coupling enables easy digitalisation of the information from the geomorphological map into the GIS database for use in both scientific and practical applications. The selected platform, in which the geomorphological vector, raster and tabular data are stored, is the ESRI Personal geodatabase. Additional data such as an image of the original geomorphological map, DEMs or aerial orthographic images are also included in the database. The structure of the geomorphological database presented in this paper is exemplified for a study site around Liden, central Sweden.

  10. RNA FRABASE 2.0: an advanced web-accessible database with the capacity to search the three-dimensional fragments within RNA structures

    Directory of Open Access Journals (Sweden)

    Wasik Szymon

    2010-05-01

    Background: Recent discoveries concerning novel functions of RNA, such as RNA interference, have contributed towards the growing importance of the field. In this respect, a deeper knowledge of complex three-dimensional RNA structures is essential to understand their new biological functions. A number of bioinformatic tools have been proposed to explore two major structural databases (PDB, NDB) in order to analyze various aspects of RNA tertiary structures. One of these tools is RNA FRABASE 1.0, the first web-accessible database with an engine for automatic search of 3D fragments within PDB-derived RNA structures. This search is based upon the user-defined RNA secondary structure pattern. In this paper, we present and discuss RNA FRABASE 2.0. This second version of the system represents a major extension of this tool in terms of providing new data and a wide spectrum of novel functionalities. An intuitively operated web server platform enables very fast user-tailored search of three-dimensional RNA fragments, their multi-parameter conformational analysis and visualization. Description: RNA FRABASE 2.0 has stored information on 1565 PDB-deposited RNA structures, including all NMR models. The RNA FRABASE 2.0 search engine algorithms operate on the database of the RNA sequences and the new library of RNA secondary structures, coded in the dot-bracket format extended to hold multi-stranded structures and to cover residues whose coordinates are missing in the PDB files. The library of RNA secondary structures (and their graphics) is made available. A high level of efficiency of the 3D search has been achieved by introducing novel tools to formulate advanced searching patterns and to screen highly populated tertiary structure elements. RNA FRABASE 2.0 also stores data and conformational parameters in order to provide "on the spot" structural filters to explore the three-dimensional RNA structures. An instant visualization of the 3D RNA
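The dot-bracket encoding mentioned above can be illustrated with a minimal parser. This sketch covers only the plain single-bracket notation; RNA FRABASE 2.0's extended format for multi-stranded structures and missing residues uses additional symbols not handled here.

```python
def dot_bracket_pairs(structure):
    """Return the (i, j) base pairs (0-based) encoded by a dot-bracket string."""
    stack, pairs = [], []
    for i, ch in enumerate(structure):
        if ch == "(":
            stack.append(i)          # remember the opening position
        elif ch == ")":
            if not stack:
                raise ValueError(f"unmatched ')' at position {i}")
            pairs.append((stack.pop(), i))  # close the most recent open bracket
        elif ch != ".":
            raise ValueError(f"unsupported symbol {ch!r}")
    if stack:
        raise ValueError("unmatched '('")
    return sorted(pairs)

# A hairpin: three base pairs closing a 4-nucleotide loop.
print(dot_bracket_pairs("(((....)))"))
```

A search engine like the one described can match a user's secondary-structure pattern against such pair lists before retrieving the corresponding 3D fragments.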

  11. The CATH database

    Directory of Open Access Journals (Sweden)

    Knudsen Michael

    2010-02-01

    The CATH database provides hierarchical classification of protein domains based on their folding patterns. Domains are obtained from protein structures deposited in the Protein Data Bank and both domain identification and subsequent classification use manual as well as automated procedures. The accompanying website http://www.cathdb.info provides an easy-to-use entry to the classification, allowing for both browsing and downloading of data. Here, we give a brief review of the database, its corresponding website and some related tools.

  12. Critical evaluation of the effect of valerian extract on sleep structure and sleep quality.

    Science.gov (United States)

    Donath, F; Quispe, S; Diefenbach, K; Maurer, A; Fietze, I; Roots, I

    2000-03-01

    A carefully designed study assessed the short-term (single dose) and long-term (14 days with multiple dosage) effects of a valerian extract on both objective and subjective sleep parameters. The investigation was performed as a randomised, double-blind, placebo-controlled, cross-over study. Sixteen patients (4 male, 12 female) with previously established psychophysiological insomnia (ICSD-code 1.A.1.), and with a median age of 49 (range: 22 to 55), were included in the study. The main inclusion criteria were reported primary insomnia according to ICSD criteria, which was confirmed by polysomnographic recording, and the absence of acute diseases. During the study, the patients underwent 8 polysomnographic recordings: i.e., 2 recordings (baseline and study night) at each time point at which the short and long-term effects of placebo and valerian were tested. The target variable of the study was sleep efficiency. Other parameters describing objective sleep structure were the usual features of sleep-stage analysis, based on the rules of Rechtschaffen and Kales (1968), and the arousal index (scored according to ASDA criteria, 1992) as a sleep microstructure parameter. Subjective parameters such as sleep quality, morning feeling, daytime performance, subjectively perceived duration of sleep latency, and sleep period time were assessed by means of questionnaires. After a single dose of valerian, no effects on sleep structure and subjective sleep assessment were observed. After multiple-dose treatment, sleep efficiency showed a significant increase for both the placebo and the valerian condition in comparison with baseline polysomnography. We confirmed significant differences between valerian and placebo for parameters describing slow-wave sleep. In comparison with the placebo, slow-wave sleep latency was reduced after administration of valerian (21.3 vs. 13.5 min respectively, p<0.05). The SWS percentage of time in bed (TIB) was increased after long-term valerian

  13. Database Systems - Present and Future

    Directory of Open Access Journals (Sweden)

    2009-01-01

    Database systems nowadays have an increasingly important role in the knowledge-based society, in which computers have penetrated all fields of activity and the Internet tends to develop worldwide. In the current informatics context, the development of applications with databases is the work of specialists. Using databases, accessing a database from various applications, and related concepts have become accessible to all categories of IT users. This paper aims to summarize the curricular area regarding the fundamental database systems issues, which are necessary in order to train specialists in economic informatics higher education. Database systems integrate and interact with several informatics technologies and are therefore more difficult to understand and use. Thus, students should already know a set of minimum, mandatory concepts and their practical implementation: computer systems, programming techniques, programming languages, data structures. The article also presents the actual trends in the evolution of database systems, in the context of economic informatics.

  14. Performance Enhancements for Advanced Database Management Systems

    OpenAIRE

    Helmer, Sven

    2000-01-01

    New applications have emerged, demanding database management systems with enhanced functionality. However, high performance is a necessary precondition for the acceptance of such systems by end users. In this context we developed, implemented, and tested algorithms and index structures for improving the performance of advanced database management systems. We focused on index structures and join algorithms for set-valued attributes.
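One of the operations such join algorithms for set-valued attributes accelerate is the set-containment join: find the pairs (r, s) where the set attribute of r is contained in that of s. The relations below are toy data, and a real system would replace the nested loop with signature files or inverted indexes rather than this direct sketch.

```python
# Two relations with set-valued attributes (invented example data).
R = {"r1": {"a", "b"}, "r2": {"c"}}
S = {"s1": {"a", "b", "c"}, "s2": {"b", "c"}}

# Naive set-containment join: emit (r, s) whenever R[r] is a subset of S[s].
result = sorted(
    (r, s)
    for r, rset in R.items()
    for s, sset in S.items()
    if rset <= sset  # subset test on the set-valued attribute
)
print(result)
```

The index structures studied in such work exist precisely to avoid comparing every pair of sets as this nested loop does.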

  15. HCSD: the human cancer secretome database

    DEFF Research Database (Denmark)

    Feizi, Amir; Banaei-Esfahani, Amir; Nielsen, Jens

    2015-01-01

    The human cancer secretome database (HCSD) is a comprehensive database for human cancer secretome data. The cancer secretome describes proteins secreted by cancer cells, and structuring information about the cancer secretome will enable further analysis of how this is related to tumor biology...... database is limiting the ability to query the increasing community knowledge. We therefore developed the Human Cancer Secretome Database (HCSD) to fill this gap. HCSD contains >80 000 measurements for about 7000 nonredundant human proteins collected from up to 35 high-throughput studies on 17 cancer...

  16. Relational databases for SSC design and control

    International Nuclear Information System (INIS)

    Barr, E.; Peggs, S.; Saltmarsh, C.

    1989-01-01

    Most people agree that a database is A Good Thing, but there is much confusion in the jargon used, and in what jobs a database management system and its peripheral software can and cannot do. During the life cycle of an enormous project like the SSC, from conceptual and theoretical design, through research and development, to construction, commissioning and operation, an enormous amount of data will be generated. Some of these data, originating in the early parts of the project, will be needed during commissioning or operation, many years in the future. Two of these pressing data management needs, from the magnet research and industrialization programs and from the lattice design, have prompted work on understanding and adapting commercial database practices for scientific projects. Modern relational database management systems (rDBMSs) cope naturally with a large proportion of the requirements of data structures, like the SSC database structure built for the superconducting cable supplies, uses, and properties. This application is similar to the commercial applications for which these database systems were developed. The SSC application has further requirements not immediately satisfied by the commercial systems. These derive from the diversity of the data structures to be managed, the changing emphases and uses during the project lifetime, and the large amount of scientific data processing to be expected. 4 refs., 5 figs

  17. Ionizing chamber smoke detectors in implementation of radiation protection standards

    International Nuclear Information System (INIS)

    1977-01-01

    In 1977 the NEA Steering Committee adopted a series of Recommendations for Ionizing Chamber Smoke Detectors (ICSDs) in Implementation of Radiation Protection Standards. The purpose of these recommendations is to permit the adoption of a harmonized policy by the competent national authorities concerning the issue of licenses for the manufacture, import, use and disposal of ICSDs, while ensuring that individual and collective exposure doses are kept as low as is reasonably achievable.

  18. De-identifying an EHR Database

    DEFF Research Database (Denmark)

    Lauesen, Søren; Pantazos, Kostas; Lippert, Søren

    2011-01-01

    We de-identified a Danish EHR database with 437,164 patients. The goal was to generate a version with real medical records, but related to artificial persons. We developed a de-identification algorithm that uses lists of named entities, simple language analysis, and special rules. Our algorithm consists of 3 steps: collect...... lists of identifiers from the database and external resources, define a replacement for each identifier, and replace identifiers in structured data and free text. Some patient records could not be safely de-identified, so the de-identified database has 323,122 patient records with an acceptable degree...... of anonymity, readability and correctness (F-measure of 95%). The algorithm has to be adjusted for each culture, language and database....
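The three-step scheme - collect identifiers, define a replacement for each, substitute in free text - can be illustrated with a toy sketch. This is not the authors' algorithm (which additionally uses language analysis and special rules); the names and replacement tags are invented:

```python
import re

# Illustrative sketch of the three de-identification steps.
def build_replacements(identifiers):
    # Step 2: map each real identifier to a stable artificial one,
    # so the same person gets the same pseudonym everywhere.
    return {ident: f"PERSON{i}" for i, ident in enumerate(sorted(identifiers), 1)}

def deidentify(text, replacements):
    # Step 3: whole-word replacement, so a surname never survives in free text.
    for real, fake in replacements.items():
        text = re.sub(rf"\b{re.escape(real)}\b", fake, text)
    return text

# Step 1: in the real system, identifiers come from the database itself
# and from external name lists; here they are hard-coded.
identifiers = {"Hansen", "Jensen"}
repl = build_replacements(identifiers)
note = "Patient Hansen seen by Dr. Jensen."
print(deidentify(note, repl))  # Patient PERSON1 seen by Dr. PERSON2.
```

A consistent identifier-to-pseudonym map is what keeps the de-identified records internally coherent - the same artificial person recurs across visits.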

  19. The Design of Lexical Database for Indonesian Language

    Science.gov (United States)

    Gunawan, D.; Amalia, A.

    2017-03-01

    Kamus Besar Bahasa Indonesia (KBBI), an official dictionary for the Indonesian language, provides lists of words with their meanings. The online version can be accessed via the Internet. Another online dictionary is Kateglo. KBBI online and Kateglo only provide an interface for humans; a machine cannot easily retrieve data from the dictionary without advanced techniques, yet lexical information about words is required in research and application development related to natural language processing, text mining, information retrieval or sentiment analysis. To address this requirement, we need to build a lexical database which provides well-defined, structured information about words. A well-known lexical database is WordNet, which provides the relations among words in English. This paper proposes the design of a lexical database for the Indonesian language based on the combination of the KBBI 4th edition, Kateglo and the WordNet structure. Knowledge representation utilizing semantic networks depicts the relations among words and provides the new structure of a lexical database for the Indonesian language. The result of this design can be used as the foundation to build the lexical database for the Indonesian language.
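A WordNet-style relational structure of the kind proposed might look like the following minimal sketch; the schema, relation type and example lemmas are illustrative assumptions, not the paper's actual design:

```python
import sqlite3

# Hypothetical minimal schema: words plus typed relations between them
# (synonym, hypernym, ...), as in WordNet-style semantic networks.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE word     (word_id INTEGER PRIMARY KEY, lemma TEXT UNIQUE);
CREATE TABLE relation (source_id INTEGER, target_id INTEGER, rel_type TEXT);
""")
for wid, lemma in [(1, "kucing"), (2, "hewan")]:   # 'cat', 'animal'
    cur.execute("INSERT INTO word VALUES (?, ?)", (wid, lemma))
cur.execute("INSERT INTO relation VALUES (1, 2, 'hypernym')")

# The machine-friendly query a dictionary web page cannot answer:
# what is 'kucing' a kind of?
hypernym = cur.execute("""
    SELECT w2.lemma
    FROM relation r
    JOIN word w1 ON w1.word_id = r.source_id
    JOIN word w2 ON w2.word_id = r.target_id
    WHERE w1.lemma = 'kucing' AND r.rel_type = 'hypernym'
""").fetchone()[0]
print(hypernym)  # hewan
```

Storing relations as typed edges is what turns word lists into a semantic network that programs can traverse.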

  20. Databases for INDUS-1 and INDUS-2

    International Nuclear Information System (INIS)

    Merh, Bhavna N.; Fatnani, Pravin

    2003-01-01

    The databases for Indus are relational databases designed to store various categories of data related to the accelerator. The data archiving and retrieving system in Indus is based on a client/server model. A general purpose commercial database is used to store parameters and equipment data for the whole machine. The database manages configuration, on-line and historical databases. On line and off line applications distributed in several systems can store and retrieve the data from the database over the network. This paper describes the structure of databases for Indus-1 and Indus-2 and their integration within the software architecture. The data analysis, design, resulting data-schema and implementation issues are discussed. (author)

  1. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Trypanosomes Database Database Description General information of database Database name Trypanosomes Database...stitute of Genetics Research Organization of Information and Systems Yata 1111, Mishima, Shizuoka 411-8540, JAPAN E mail: Database...y Name: Trypanosoma Taxonomy ID: 5690 Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database description The... Article title: Author name(s): Journal: External Links: Original website information Database maintenance s...DB (Protein Data Bank) KEGG PATHWAY Database DrugPort Entry list Available Query search Available Web servic

  2. Clinical Databases for Chest Physicians.

    Science.gov (United States)

    Courtwright, Andrew M; Gabriel, Peter E

    2018-04-01

    A clinical database is a repository of patient medical and sociodemographic information focused on one or more specific health conditions or exposures. Although clinical databases may be used for research purposes, their primary goal is to collect and track patient data for quality improvement, quality assurance, and/or actual clinical management. This article aims to provide an introduction to and practical advice on the development of small-scale clinical databases for chest physicians and practice groups. Through example projects, we discuss the pros and cons of available technical platforms, including Microsoft Excel and Access, relational database management systems such as Oracle and PostgreSQL, and Research Electronic Data Capture. We consider approaches to deciding the base unit of data collection, creating consensus around variable definitions, and structuring routine clinical care to complement database aims. We conclude with an overview of regulatory and security considerations for clinical databases. Copyright © 2018 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
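Choosing the base unit of data collection - one of the design decisions the article discusses - can be made concrete with a toy schema: one row per encounter rather than per patient, so repeat visits are tracked over time. The tables and the example variable are hypothetical, not from the article:

```python
import sqlite3

# Illustrative sketch: patient-level facts live in one table, while the
# base unit of data collection is the encounter, one row per visit.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY,
    diagnosis  TEXT
);
CREATE TABLE encounter (
    encounter_id INTEGER PRIMARY KEY,
    patient_id   INTEGER REFERENCES patient(patient_id),
    visit_date   TEXT,
    fev1_percent REAL   -- an example variable with an agreed definition
);
""")
cur.execute("INSERT INTO patient VALUES (1, 'COPD')")
cur.executemany("INSERT INTO encounter VALUES (?, 1, ?, ?)",
                [(1, "2018-01-05", 61.0), (2, "2018-06-11", 58.5)])

# A per-patient base unit could hold only one FEV1 value;
# a per-encounter base unit captures the trajectory.
n = cur.execute("SELECT COUNT(*) FROM encounter WHERE patient_id = 1").fetchone()[0]
print(n)  # 2
```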

  3. Database for environmental monitoring in nuclear facilities

    International Nuclear Information System (INIS)

    Raceanu, Mircea; Varlam, Carmen; Iliescu, Mariana; Enache, Adrian; Faurescu, Ionut

    2006-01-01

    To ensure that an assessment can be made of the impact of nuclear facilities on the local environment, a program of environmental monitoring must be established well before nuclear facility commissioning. Enormous amounts of data must be stored and correlated, starting with location and meteorology, through sample characterization from water to different kinds of food, to radioactivity and isotopic measurements (e.g. for C-14 determination, C-13 isotopic correction is a must). Data modelling is a well-known mechanism for describing data structures at a high level of abstraction. Such models are often used to automatically create database structures, and to generate the code structures used to access the databases. This has the disadvantage of losing data constraints that might be specified in data models for data checking. An embodiment of the present system includes a computer-readable memory for storing a definitional data table that defines variable symbols representing the corresponding measurable physical quantities. Developing a database system implies setting up well-established rules for how the data should be stored and accessed, commonly called Relational Database Theory. This consists of guidelines regarding issues such as how to avoid duplicating data using the technique called normalization, and how to identify the unique identifier for a database record. (authors)
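Normalization and unique identifiers, as mentioned above, can be sketched with a minimal example: rather than repeating the sampling location in every measurement row, locations get their own table and a primary key. The tables, units and values below are invented for illustration:

```python
import sqlite3

# Illustrative normalization: the location is stored once and referenced
# by its unique identifier, never duplicated per measurement.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE location (
    location_id INTEGER PRIMARY KEY,   -- the unique identifier
    name TEXT, latitude REAL, longitude REAL
);
CREATE TABLE measurement (
    measurement_id INTEGER PRIMARY KEY,
    location_id INTEGER REFERENCES location(location_id),
    sample_type TEXT,
    c14_activity REAL                  -- hypothetical units, e.g. Bq/kg
);
""")
cur.execute("INSERT INTO location VALUES (1, 'Site A', 45.1, 25.3)")
cur.executemany("INSERT INTO measurement VALUES (?, 1, ?, ?)",
                [(1, 'water', 0.21), (2, 'milk', 0.34)])

# The join reconstitutes the denormalized view on demand.
rows = cur.execute("""
    SELECT l.name, m.sample_type
    FROM measurement m
    JOIN location l ON l.location_id = m.location_id
    ORDER BY m.measurement_id
""").fetchall()
print(rows)  # [('Site A', 'water'), ('Site A', 'milk')]
```

If the site's coordinates are later corrected, the fix happens in exactly one row - the practical payoff of avoiding duplication.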

  4. An Integrated Enterprise Accelerator Database for the SLC Control System

    International Nuclear Information System (INIS)

    2002-01-01

    Since its inception in the early 1980s, the SLC Control System has been driven by a highly structured memory-resident real-time database. While efficient, its rigid structure and file-based sources make it difficult to maintain and extract relevant information. The goal of transforming the sources for this database into a relational form is to enable it to be part of a Control System Enterprise Database that is an integrated central repository for SLC accelerator device and Control System data with links to other associated databases. We have taken the concepts developed for the NLC Enterprise Database and used them to create and load a relational model of the online SLC Control System database. This database contains data and structure to allow querying and reporting on beamline devices, their associations and parameters. In the future this will be extended to allow generation of EPICS and SLC database files, setup of applications and links to other databases such as accelerator maintenance, archive data, financial and personnel records, cabling information, documentation etc. The database is implemented using Oracle 8i. In the short term it will be updated daily in batch from the online SLC database. In the longer term, it will serve as the primary source for Control System static data, an R and D platform for the NLC, and contribute to SLC Control System operations

  5. Profiling of Escherichia coli Chromosome database.

    Science.gov (United States)

    Yamazaki, Yukiko; Niki, Hironori; Kato, Jun-ichi

    2008-01-01

    The Profiling of Escherichia coli Chromosome (PEC) database (http://www.shigen.nig.ac.jp/ecoli/pec/) is designed to allow E. coli researchers to efficiently access information from functional genomics studies. The database contains two principal types of data: gene essentiality and a large collection of E. coli genetic research resources. The essentiality data are based on data compilation from published single-gene essentiality studies and on cell growth studies of large-deletion mutants. Using the circular and linear viewers for both whole genomes and the minimal genome, users can not only gain an overview of the genome structure but also retrieve information on contigs, gene products, mutants, deletions, and so forth. In particular, genome-wide exhaustive mutants are an essential resource for studying E. coli gene functions. Although the genomic database was constructed independently from the genetic resources database, users may seamlessly access both types of data. In addition to these data, the PEC database also provides a summary of homologous genes of other bacterial genomes and of protein structure information, with a comprehensive interface. The PEC is thus a convenient and useful platform for contemporary E. coli researchers.

  6. Design and implementation of typical target image database system

    International Nuclear Information System (INIS)

    Qin Kai; Zhao Yingjun

    2010-01-01

    It is necessary to provide essential background data and thematic data in a timely manner during image processing and application. In fact, application is a procedure of integrating and analyzing different kinds of data. In this paper, the authors describe the design and structure of an image database system which classifies, stores, manages and analyzes databases of different types, such as image databases, vector databases, spatial databases and spatial target characteristics databases. (authors)

  7. Respiratory cancer database: An open access database of respiratory cancer gene and miRNA.

    Science.gov (United States)

    Choubey, Jyotsna; Choudhari, Jyoti Kant; Patel, Ashish; Verma, Mukesh Kumar

    2017-01-01

    Respiratory cancer database (RespCanDB) is a genomic and proteomic database of cancers of the respiratory organs. It also includes information on medicinal plants used for the treatment of various respiratory cancers, with the structures of their active constituents, as well as pharmacological and chemical information on drugs associated with various respiratory cancers. Data in RespCanDB have been manually collected from published research articles and from other databases. Data have been integrated using MySQL, a relational database management system. MySQL manages all data in the back-end and provides commands to retrieve and store the data into the database. The web interface of the database has been built in ASP. RespCanDB is expected to contribute to the scientific community's understanding of respiratory cancer biology as well as to the development of new ways of diagnosing and treating respiratory cancer. Currently, the database consists of the oncogenomic information of lung cancer, laryngeal cancer, and nasopharyngeal cancer. Data for other cancers, such as oral and tracheal cancers, will be added in the near future. The URL of RespCanDB is http://ridb.subdic-bioinformatics-nitrr.in/.

  8. Database Description - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us SKIP Stemcell Database Database Description General information of database Database name SKIP Stemcell Database...rsity Journal Search: Contact address http://www.skip.med.keio.ac.jp/en/contact/ Database classification Human Genes and Diseases Dat...abase classification Stemcell Article Organism Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database...ks: Original website information Database maintenance site Center for Medical Genetics, School of medicine, ...lable Web services Not available URL of Web services - Need for user registration Not available About This Database Database

  9. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Arabidopsis Phenome Database Database Description General information of database Database n... BioResource Center Hiroshi Masuya Database classification Plant databases - Arabidopsis thaliana Organism T...axonomy Name: Arabidopsis thaliana Taxonomy ID: 3702 Database description The Arabidopsis thaliana phenome i...heir effective application. We developed the new Arabidopsis Phenome Database integrating two novel database...seful materials for their experimental research. The other, the “Database of Curated Plant Phenome” focusing

  10. Search for 5'-leader regulatory RNA structures based on gene annotation aided by the RiboGap database.

    Science.gov (United States)

    Naghdi, Mohammad Reza; Smail, Katia; Wang, Joy X; Wade, Fallou; Breaker, Ronald R; Perreault, Jonathan

    2017-03-15

    The discovery of noncoding RNAs (ncRNAs) and their importance for gene regulation led us to develop bioinformatics tools to pursue the discovery of novel ncRNAs. Finding ncRNAs de novo is challenging, first due to the difficulty of retrieving large numbers of sequences for given gene activities, and second due to exponential demands on calculation needed for comparative genomics on a large scale. Recently, several tools for the prediction of conserved RNA secondary structure were developed, but many of them are not designed to uncover new ncRNAs, or are too slow for conducting analyses on a large scale. Here we present various approaches using the database RiboGap as a primary tool for finding known ncRNAs and for uncovering simple sequence motifs with regulatory roles. This database also can be used to easily extract intergenic sequences of eubacteria and archaea to find conserved RNA structures upstream of given genes. We also show how to extend analysis further to choose the best candidate ncRNAs for experimental validation. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Data collection for improved follow-up of operating experiences. SKI damage database. Contents and aims with database

    International Nuclear Information System (INIS)

    Gott, Karen

    1997-01-01

    The Stryk database is presented and discussed in conjunction with the Swedish regulations concerning structural components in nuclear installations. The database acts as a reference library for reported cracks and degradation and can be used to retrieve information about individual events or for compiling statistics and performing trend analyses

  12. YMDB: the Yeast Metabolome Database

    Science.gov (United States)

    Jewison, Timothy; Knox, Craig; Neveu, Vanessa; Djoumbou, Yannick; Guo, An Chi; Lee, Jacqueline; Liu, Philip; Mandal, Rupasri; Krishnamurthy, Ram; Sinelnikov, Igor; Wilson, Michael; Wishart, David S.

    2012-01-01

    The Yeast Metabolome Database (YMDB, http://www.ymdb.ca) is a richly annotated ‘metabolomic’ database containing detailed information about the metabolome of Saccharomyces cerevisiae. Modeled closely after the Human Metabolome Database, the YMDB contains >2000 metabolites with links to 995 different genes/proteins, including enzymes and transporters. The information in YMDB has been gathered from hundreds of books, journal articles and electronic databases. In addition to its comprehensive literature-derived data, the YMDB also contains an extensive collection of experimental intracellular and extracellular metabolite concentration data compiled from detailed Mass Spectrometry (MS) and Nuclear Magnetic Resonance (NMR) metabolomic analyses performed in our lab. This is further supplemented with thousands of NMR and MS spectra collected on pure, reference yeast metabolites. Each metabolite entry in the YMDB contains an average of 80 separate data fields including comprehensive compound description, names and synonyms, structural information, physico-chemical data, reference NMR and MS spectra, intracellular/extracellular concentrations, growth conditions and substrates, pathway information, enzyme data, gene/protein sequence data, as well as numerous hyperlinks to images, references and other public databases. Extensive searching, relational querying and data browsing tools are also provided that support text, chemical structure, spectral, molecular weight and gene/protein sequence queries. Because of S. cerevisiae's importance as a model organism for biologists and as a biofactory for industry, we believe this kind of database could have considerable appeal not only to metabolomics researchers, but also to yeast biologists, systems biologists, the industrial fermentation industry, as well as the beer, wine and spirit industry. PMID:22064855

  13. JT-60 database system, 1

    International Nuclear Information System (INIS)

    Kurihara, Kenichi; Kimura, Toyoaki; Itoh, Yasuhiro.

    1987-07-01

    A suitable software environment naturally makes it possible to analyse the discharge result data effectively. JT-60 discharge result data, collected by the supervisor, are transferred to the general purpose computer through the new linkage channel and converted into a ''database''. The datafile in the database was designed to be surrounded by various interfaces. This structure preserves the reliability of the datafile and does not require users to know the datafile structure. In addition, a support system for graphic processing was developed so that the user may easily obtain figures with some calculations. This paper reports on the basic concept and system design. (author)

  14. Why Save Your Course as a Relational Database?

    Science.gov (United States)

    Hamilton, Gregory C.; Katz, David L.; Davis, James E.

    2000-01-01

    Describes a system that stores course materials for computer-based training programs in a relational database called Of Course! Outlines the basic structure of the databases; explains distinctions between Of Course! and other authoring languages; and describes how data is retrieved from the database and presented to the student. (Author/LRW)

  15. Palingol: a declarative programming language to describe nucleic acids' secondary structures and to scan sequence database.

    Science.gov (United States)

    Billoud, B; Kontic, M; Viari, A

    1996-01-01

    At the DNA/RNA level, biological signals are defined by a combination of spatial structures and sequence motifs. Until now, few attempts have been made at writing general purpose search programs that take into account both sequence and structure criteria. Indeed, the most successful structure scanning programs are usually dedicated to particular structures and are written using general purpose programming languages through a complex and time-consuming process where the biological problem of defining the structure and the computer engineering problem of looking for it are intimately intertwined. In this paper, we describe a general representation of structures, suitable for database scanning, together with a programming language, Palingol, designed to manipulate it. Palingol has specific data types, corresponding to structural elements - basically helices - that can be arranged in any way to form a complex structure. As a consequence of the declarative approach used in Palingol, the user should only focus on 'what to search for' while the language engine takes care of 'how to look for it'. Therefore, it becomes simpler to write a scanning program and the structural constraints that define the required structure are more clearly identified. PMID:8628670
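The declarative "what, not how" idea can be imitated in a few lines of ordinary Python. This is not Palingol syntax - just a sketch in which the user states what a structure is (here a hairpin: a 4-base stem, a 3-5 base loop) and a generic engine finds where it occurs:

```python
# Toy structure scanner: declare a hairpin by its constraints, let the
# engine enumerate positions where the constraints hold.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def revcomp(s):
    return "".join(COMPLEMENT[b] for b in reversed(s))

def find_hairpins(seq, stem=4, loop_min=3, loop_max=5):
    """Return (start, loop_length) for every hairpin satisfying the constraints."""
    hits = []
    for i in range(len(seq) - 2 * stem - loop_min + 1):
        left = seq[i:i + stem]
        for lp in range(loop_min, loop_max + 1):
            j = i + stem + lp
            # The right arm must base-pair with the left arm.
            if j + stem <= len(seq) and seq[j:j + stem] == revcomp(left):
                hits.append((i, lp))
    return hits

print(find_hairpins("GGGGAAACCCC"))  # [(0, 3)]
```

A language like Palingol generalizes this: helices become first-class data types, and arbitrary combinations of them can be declared without rewriting the search engine each time.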

  16. Beyond the ENDF format: A modern nuclear database structure. SG38 meeting, NEA Headquarters, 20-22 May 2015

    International Nuclear Information System (INIS)

    Brown, David; Jouanne, Cedric; Malvagi, Fausto; Coste-Delclaux, Mireille; Hawari, Ayman I.; Mattoon, Caleb; Zerkin, Viktor; ); Cornock, Mark; Conlin, Jeremy; Lloydge, Zhigang; Oleynik, Dmitry S.; Cabellos, Oscar; Mills, Robert W.; Kim, Do Heon; Leal, Luiz Carlos; White, Morgan C.; Dunn, Michael; Kahler, Albert C.; Mcnabb, Dennis P.; Roubtsov, Danila; Cho, Young-Sik; Beck, Bret; Haeck, Wim

    2015-05-01

    WPEC subgroup 38 (SG38) was formed to develop a new structure for storing nuclear reaction data, that is meant to eventually replace ENDF-6 as the standard way to store and share evaluations. The work of SG38 covers the following tasks: Designing flexible, general-purpose data containers; Determining a logical and easy-to-understand top-level hierarchy for storing evaluated nuclear reaction data; Creating a particle database for storing particles, masses and level schemes; Specifying the infrastructure (plotting, processing, etc.) that must accompany the new structure; Developing an Application Programming Interface or API to allow other codes to access data stored in the new structure; Specifying what tests need to be implemented for quality assurance of the new structure and associated infrastructure; Ensuring documentation and governance of the structure and associated infrastructure. This document is the proceedings of the SG38 meeting which took place at the NEA Headquarters, on 20-22 May 2015. It comprises all the available presentations (slides) given by the participants: - 1) Recent updates to the top-level hierarchy, review draft requirements document (D. Brown); - 2) Properties of Particles (PoPs) database (C.M. Mattoon); - 3) Proposed General Purpose Data Containers (B. Beck); - 4) Feedback on GND Specifications (C. Jouanne); - 5) Using XML in the IAEA-NDS: status, feedback and proposals (V. Zerkin); - 6) Observations Related to Thermal Neutron Scattering Law Data (A.I. Hawari); - 7) Meeting Notes (D.P. Mcnabb)

  17. Prediction methods and databases within chemoinformatics

    DEFF Research Database (Denmark)

    Jónsdóttir, Svava Osk; Jørgensen, Flemming Steen; Brunak, Søren

    2005-01-01

    MOTIVATION: To gather information about available databases and chemoinformatics methods for prediction of properties relevant to the drug discovery and optimization process. RESULTS: We present an overview of the most important databases with 2-dimensional and 3-dimensional structural information...... about drugs and drug candidates, and of databases with relevant properties. Access to experimental data and numerical methods for selecting and utilizing these data is crucial for developing accurate predictive in silico models. Many interesting predictive methods for classifying the suitability...

  18. Geroprotectors.org: a new, structured and curated database of current therapeutic interventions in aging and age-related disease

    Science.gov (United States)

    Moskalev, Alexey; Chernyagina, Elizaveta; de Magalhães, João Pedro; Barardo, Diogo; Thoppil, Harikrishnan; Shaposhnikov, Mikhail; Budovsky, Arie; Fraifeld, Vadim E.; Garazha, Andrew; Tsvetkov, Vasily; Bronovitsky, Evgeny; Bogomolov, Vladislav; Scerbacov, Alexei; Kuryan, Oleg; Gurinovich, Roman; Jellen, Leslie C.; Kennedy, Brian; Mamoshina, Polina; Dobrovolskaya, Evgeniya; Aliper, Alex; Kaminsky, Dmitry; Zhavoronkov, Alex

    2015-01-01

    As the level of interest in aging research increases, there is a growing number of geroprotectors, or therapeutic interventions that aim to extend the healthy lifespan and repair or reduce aging-related damage in model organisms and, eventually, in humans. There is a clear need for a manually-curated database of geroprotectors to compile and index their effects on aging and age-related diseases and link these effects to relevant studies and multiple biochemical and drug databases. Here, we introduce the first such resource, Geroprotectors (http://geroprotectors.org). Geroprotectors is a public, rapidly explorable database that catalogs over 250 experiments involving over 200 known or candidate geroprotectors that extend lifespan in model organisms. Each compound has a comprehensive profile complete with biochemistry, mechanisms, and lifespan effects in various model organisms, along with information ranging from chemical structure, side effects, and toxicity to FDA drug status. These are presented in a visually intuitive, efficient framework fit for casual browsing or in-depth research alike. Data are linked to the source studies or databases, providing quick and convenient access to original data. The Geroprotectors database facilitates cross-study, cross-organism, and cross-discipline analysis and saves countless hours of inefficient literature and web searching. Geroprotectors is a one-stop, knowledge-sharing, time-saving resource for researchers seeking healthy aging solutions. PMID:26342919

  19. Observational database for studies of nearby universe

    Science.gov (United States)

    Kaisina, E. I.; Makarov, D. I.; Karachentsev, I. D.; Kaisin, S. S.

    2012-01-01

    We present the description of a database of galaxies of the Local Volume (LVG), located within 10 Mpc around the Milky Way. It contains more than 800 objects. Based on an analysis of functional capabilities, we used the PostgreSQL DBMS as a management system for our LVG database. Applying semantic modelling methods, we developed a physical ER-model of the database. We describe the developed architecture of the database table structure, and the implemented web-access, available at http://www.sao.ru/lv/lvgdb.

  20. An online database of nuclear electromagnetic moments

    International Nuclear Information System (INIS)

    Mertzimekis, T.J.; Stamou, K.; Psaltis, A.

    2016-01-01

    Measurements of nuclear magnetic dipole and electric quadrupole moments are considered quite important for the understanding of nuclear structure both near and far from the valley of stability. The recent advent of radioactive beams has resulted in a plethora of new, continuously flowing, experimental data on nuclear structure – including nuclear moments – which hinders information management. A new, dedicated, public and user-friendly online database (http://magneticmoments.info) has been created comprising experimental data of nuclear electromagnetic moments. The present database supersedes existing printed compilations, including also non-evaluated series of data and relevant meta-data, while putting strong emphasis on bimonthly updates. The scope, features and extensions of the database are reported.

  1. The Xeno-glycomics database (XDB): a relational database of qualitative and quantitative pig glycome repertoire.

    Science.gov (United States)

    Park, Hae-Min; Park, Ju-Hyeong; Kim, Yoon-Woo; Kim, Kyoung-Jin; Jeong, Hee-Jin; Jang, Kyoung-Soon; Kim, Byung-Gee; Kim, Yun-Gon

    2013-11-15

    In recent years, the improvement of mass spectrometry-based glycomics techniques (i.e. highly sensitive, quantitative and high-throughput analytical tools) has enabled us to obtain large datasets of glycans. Here we present a database named the Xeno-glycomics database (XDB) that contains cell- or tissue-specific pig glycomes analyzed with mass spectrometry-based techniques, including comprehensive pig glycan information on chemical structures, mass values, types and relative quantities. It was designed as a user-friendly web-based interface that allows users to query the database according to pig tissue/cell types or glycan masses. This database will contribute to providing qualitative and quantitative information on glycomes characterized from various pig cells/organs in xenotransplantation and might eventually provide new targets in the era of α1,3-galactosyltransferase gene-knockout pigs. The database can be accessed on the web at http://bioinformatics.snu.ac.kr/xdb.

  2. Synthesis of MoVTeNb Oxide Catalysts with Tunable Particle Dimensions

    DEFF Research Database (Denmark)

    Kolenko, Yury V.; Zhang, Wei; d'Alnoncourt, Raoul Naumann

    2011-01-01

    Reliable procedures for the controlled synthesis of phase-pure MoVTeNb mixed oxides with M1 structure (ICSD 55097) and tunable crystal dimensions were developed to study the structure sensitivity of the selective oxidation of propane to acrylic acid. A series of powdered M1 catalysts...... catalysts were studied in the selective oxidation of propane to acrylic acid, revealing that active sites appear on the entire M1 surface and illustrating the high sensitivity of catalyst performance on the catalyst synthesis method....

  3. UET: a database of evolutionarily-predicted functional determinants of protein sequences that cluster as functional sites in protein structures.

    Science.gov (United States)

    Lua, Rhonald C; Wilson, Stephen J; Konecki, Daniel M; Wilkins, Angela D; Venner, Eric; Morgan, Daniel H; Lichtarge, Olivier

    2016-01-04

    The structure and function of proteins underlie most aspects of biology and their mutational perturbations often cause disease. To identify the molecular determinants of function as well as targets for drugs, it is central to characterize the important residues and how they cluster to form functional sites. The Evolutionary Trace (ET) achieves this by ranking the functional and structural importance of the protein sequence positions. ET uses evolutionary distances to estimate functional distances and correlates genotype variations with those in the fitness phenotype. Thus, ET ranks are worse for sequence positions that vary among evolutionarily closer homologs but better for positions that vary mostly among distant homologs. This approach identifies functional determinants, predicts function, guides the mutational redesign of functional and allosteric specificity, and interprets the action of coding sequence variations in proteins, people and populations. Now, the UET database offers pre-computed ET analyses for the protein structure databank, and on-the-fly analysis of any protein sequence. A web interface retrieves ET rankings of sequence positions and maps results to a structure to identify functionally important regions. This UET database integrates several ways of viewing the results on the protein sequence or structure and can be found at http://mammoth.bcm.tmc.edu/uet/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  4. A Study on Graph Storage Database of NOSQL

    OpenAIRE

    Smita Agrawal; Atul Patel

    2016-01-01

    Big Data refers to huge volumes of both structured and unstructured data that are so large they are hard to process using current/traditional database tools and software technologies. The goal of Big Data storage management is to ensure a high level of data quality and availability for business intelligence and big data analytics applications. A graph database is not yet the most popular kind of NoSQL database compared to relational databases, but it is a powerful NoSQL database which can handle...
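The property-graph model behind graph databases can be sketched minimally as nodes plus typed edges queried by traversal rather than by joins. The toy class below is illustrative only - it shows the data model, not how any real graph database is implemented:

```python
from collections import defaultdict

# Minimal property-graph sketch: nodes carry properties, edges carry a
# relationship type, and queries walk edges instead of joining tables.
class PropertyGraph:
    def __init__(self):
        self.nodes = {}                   # node_id -> property dict
        self.edges = defaultdict(list)    # node_id -> [(rel_type, target_id)]

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def add_edge(self, src, rel_type, dst):
        self.edges[src].append((rel_type, dst))

    def neighbors(self, node_id, rel_type):
        # Traversal: follow only edges of the requested type.
        return [dst for rel, dst in self.edges[node_id] if rel == rel_type]

g = PropertyGraph()
g.add_node("alice", kind="person")
g.add_node("bob", kind="person")
g.add_node("acme", kind="company")
g.add_edge("alice", "KNOWS", "bob")
g.add_edge("alice", "WORKS_AT", "acme")

print(g.neighbors("alice", "KNOWS"))  # ['bob']
```

Because relationships are stored directly on the nodes, multi-hop queries (friends of friends, etc.) cost a walk over adjacent edges rather than repeated relational joins - the usual argument for graph storage of highly connected data.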

  5. Computational Screening of Materials for Water Splitting Applications

    DEFF Research Database (Denmark)

    Castelli, Ivano Eligio

Designing new materials for energy production in a photoelectrochemical cell, where water is split into hydrogen and oxygen by solar light, is one possible solution to the problem of increasing energy demand and storage. A screening procedure based on ab-initio density functional theory calculations...... Project database, which is based on the experimental ICSD database, and the bandgaps were calculated with focus on finding materials with potential as light harvesters. 24 materials have been proposed for the one-photon water splitting and 23 for the two-photon mechanism. Another method to obtain energy...... from the Sun is using a photovoltaic cell that converts solar light into electricity. The absorption spectra of 70 experimentally known compounds, which are expected to be useful for light-to-electricity generation, have been calculated. 17 materials have been predicted to be promising for a single...

  6. Translation from the collaborative OSM database to cartography

    Science.gov (United States)

    Hayat, Flora

    2018-05-01

The OpenStreetMap (OSM) database includes original items very useful for geographical analysis and for creating thematic maps. Contributors record in the open database various themes regarding amenities, leisure, transport, buildings and boundaries. The Michelin mapping department develops map prototypes to test the feasibility of mapping based on OSM. A research project is in development to translate the OSM database structure into a database structure that fits Michelin graphic guidelines; it aims at defining the right structure for Michelin's uses. The project relies on the analysis of semantic and geometric heterogeneities in OSM data. To that end, Michelin implements methods to transform the input geographical database into a cartographic image dedicated to specific uses (routing and tourist maps). The paper focuses on the mapping tools available to produce a personalised spatial database. Based on the processed data, paper and Web maps can be displayed. Two prototypes are described in this article: a vector-tile web map and a mapping method to produce paper maps on a regional scale. The vector-tile mapping method offers easy navigation within the map and within graphic and thematic guidelines. Paper maps can be partly automatically drawn: drawing automation and data management are part of the map creation, as is the final hand-drawing phase. Both prototypes have been set up using the OSM technical ecosystem.

  7. LoopX: A Graphical User Interface-Based Database for Comprehensive Analysis and Comparative Evaluation of Loops from Protein Structures.

    Science.gov (United States)

    Kadumuri, Rajashekar Varma; Vadrevu, Ramakrishna

    2017-10-01

Due to their crucial role in function, folding, and stability, protein loops are being targeted for grafting and design to create novel functionality, alter existing functionality, and improve stability and foldability. To facilitate thorough analysis and effective search options for extracting and comparing loops for sequence and structural compatibility, we developed LoopX, a comprehensively compiled library of sequence and conformational features of ∼700,000 loops from protein structures. The database, equipped with a graphical user interface, is empowered with diverse query tools and search algorithms and various rendering options to visualize the sequence- and structural-level information, along with hydrogen-bonding patterns and backbone φ, ψ dihedral angles of both the target and candidate loops. Two new features, (i) conservation of the polar/nonpolar environment and (ii) conservation of sequence and conformation of specific residues within the loops, have also been incorporated into the search and retrieval of compatible loops for a chosen target loop. Thus, the LoopX server not only serves as a database and visualization tool for sequence and structural analysis of protein loops but also aids in extracting and comparing candidate loops for a given target loop based on user-defined search options.

  8. Database Description - TP Atlas | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available tform for Drug Discovery, Informatics, and Structural Life Science Research Organization of Information and ...3(3):145-54. External Links: Original website information Database maintenance site National Institute of Genetics, Research Organization of Information and Systems (ROIS) URL of the original website http://www.tanpa

  9. Structure and function design for nuclear facilities decommissioning information database

    International Nuclear Information System (INIS)

    Liu Yongkuo; Song Yi; Wu Xiaotian; Liu Zhen

    2014-01-01

The decommissioning of nuclear facilities is a radioactive, high-risk project that has to consider the effects of radiation and nuclear waste disposal, so an information system for the decommissioning project must be established to ensure its safety. In this study, decommissioning activity data were collected to establish a decommissioning database, and on this basis the decommissioning information database (DID) was developed. The DID can perform basic operations such as input, deletion, modification and query of the decommissioning information data, and, in accordance with the processing characteristics of the various types of information data, it can also manage information with different function models. On this basis, analysis of the different information data can be carried out. The system helps enhance management of the decommissioning process and optimize the arrangements of the project, and it can also reduce the radiation dose received by workers, so it is quite necessary for safe decommissioning of nuclear facilities. (authors)

  10. Ambiguity of non-systematic chemical identifiers within and between small-molecule databases.

    Science.gov (United States)

    Akhondi, Saber A; Muresan, Sorel; Williams, Antony J; Kors, Jan A

    2015-01-01

    A wide range of chemical compound databases are currently available for pharmaceutical research. To retrieve compound information, including structures, researchers can query these chemical databases using non-systematic identifiers. These are source-dependent identifiers (e.g., brand names, generic names), which are usually assigned to the compound at the point of registration. The correctness of non-systematic identifiers (i.e., whether an identifier matches the associated structure) can only be assessed manually, which is cumbersome, but it is possible to automatically check their ambiguity (i.e., whether an identifier matches more than one structure). In this study we have quantified the ambiguity of non-systematic identifiers within and between eight widely used chemical databases. We also studied the effect of chemical structure standardization on reducing the ambiguity of non-systematic identifiers. The ambiguity of non-systematic identifiers within databases varied from 0.1 to 15.2 % (median 2.5 %). Standardization reduced the ambiguity only to a small extent for most databases. A wide range of ambiguity existed for non-systematic identifiers that are shared between databases (17.7-60.2 %, median of 40.3 %). Removing stereochemistry information provided the largest reduction in ambiguity across databases (median reduction 13.7 percentage points). Ambiguity of non-systematic identifiers within chemical databases is generally low, but ambiguity of non-systematic identifiers that are shared between databases, is high. Chemical structure standardization reduces the ambiguity to a limited extent. Our findings can help to improve database integration, curation, and maintenance.
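The within-database ambiguity measured here — the fraction of identifiers that map to more than one structure — can be sketched as a simple grouping computation. The identifier/structure pairs below are invented for illustration, not drawn from the eight databases studied:

```python
from collections import defaultdict

def ambiguity(records):
    """Percentage of identifiers mapping to more than one structure key.

    records: iterable of (identifier, structure_key) pairs, where the
    structure key could be e.g. a standardized InChIKey.
    """
    structures = defaultdict(set)
    for ident, struct in records:
        structures[ident].add(struct)
    ambiguous = sum(1 for s in structures.values() if len(s) > 1)
    return 100.0 * ambiguous / len(structures)

# Hypothetical data: "aspirin" maps to one structure, "drugX" to two.
records = [("aspirin", "BSYNRYMUTXBXSQ"), ("aspirin", "BSYNRYMUTXBXSQ"),
           ("drugX", "AAA"), ("drugX", "BBB")]
print(ambiguity(records))  # → 50.0
```

Structure standardization, as in the study, would normalize the structure keys (e.g. strip stereochemistry) before this grouping step, potentially collapsing some ambiguous sets into one.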

  11. GOBASE: an organelle genome database

    OpenAIRE

O'Brien, Emmet A.; Zhang, Yue; Wang, Eric; Marie, Veronique; Badejoko, Wole; Lang, B. Franz; Burger, Gertraud

    2008-01-01

The organelle genome database GOBASE, now in its 21st release (June 2008), contains all published mitochondrion-encoded sequences (∼913 000) and chloroplast-encoded sequences (∼250 000) from a wide range of eukaryotic taxa. For all sequences, information on related genes, exons, introns, gene products and taxonomy is available, as well as selected genome maps and RNA secondary structures. Recent major enhancements to database functionality include: (i) addition of an interface for RNA editing...

  12. A manufacturing database of advanced materials used in spacecraft structures

    Science.gov (United States)

    Bao, Han P.

    1994-12-01

Cost savings opportunities over the life cycle of a product are highest in the early exploratory phase, when different design alternatives are evaluated not only for their performance characteristics but also for their methods of fabrication, which really control the ultimate manufacturing costs of the product. In the past, Design-To-Cost methodologies for spacecraft design concentrated on the sizing and weight issues more than anything else at the early so-called 'Vehicle Level' (Ref: DOD/NASA Advanced Composites Design Guide). Given the impact of manufacturing cost, the objective of this study is to identify the principal cost drivers for each materials technology and propose a quantitative approach to incorporating these cost drivers into the family of optimization tools used by the Vehicle Analysis Branch of NASA LaRC to assess various conceptual vehicle designs. The advanced materials being considered include aluminum-lithium alloys, thermoplastic graphite-polyether-etherketone composites, graphite-bismaleimide composites, graphite-polyimide composites, and carbon-carbon composites. Two conventional materials are added to the study to serve as baselines against which the other materials are compared: aircraft aluminum alloys series 2000 and series 7000, and graphite-epoxy composites T-300/934. The following information is available in the database. For each material type, the mechanical, physical, thermal, and environmental properties are first listed. Next, the principal manufacturing processes are described. Whenever possible, guidelines for optimum processing conditions for specific applications are provided. Finally, six categories of cost drivers are discussed: design features affecting processing, tooling, materials, fabrication, joining/assembly, and quality assurance issues. It should be emphasized that this is not an exhaustive database. Its primary use is to make the vehicle designer

  13. Design of SMART alarm system using main memory database

    International Nuclear Information System (INIS)

    Jang, Kue Sook; Seo, Yong Seok; Park, Keun Oak; Lee, Jong Bok; Kim, Dong Hoon

    2001-01-01

To achieve the design goals of the SMART alarm system, we first of all have to decide how to handle and manage alarm information and how to use a database. This paper therefore analyzes the concepts and deficiencies of main-memory databases applied in real-time systems. It then sets up the structure and processing principles of a main-memory database using nonvolatile memory such as flash memory, and develops a recovery strategy and process-board structures based on these. The paper thus shows that the design of the SMART alarm system meets its functional requirements.

  14. XML databases and the semantic web

    CERN Document Server

    Thuraisingham, Bhavani

    2002-01-01

Efficient access to data, sharing data, extracting information from data, and making use of the information have become urgent needs for today's corporations. With so much data on the Web, managing it with conventional tools is becoming almost impossible. New tools and techniques are necessary to provide interoperability as well as warehousing between multiple data sources and systems, and to extract information from the databases. XML Databases and the Semantic Web focuses on critical and new Web technologies needed for organizations to carry out transactions on the Web, to understand how to use the Web effectively, and to exchange complex documents on the Web. This reference for database administrators, database designers, and Web designers working in tandem with database technologists covers three emerging technologies of significant impact for electronic business: Extensible Markup Language (XML), semi-structured databases, and the semantic Web. The first two parts of the book explore these emerging techn...

  15. BIOSPIDA: A Relational Database Translator for NCBI.

    Science.gov (United States)

    Hagen, Matthew S; Lee, Eva K

    2010-11-13

As the volume and availability of biological databases continue their widespread growth, it has become increasingly difficult for research scientists to identify all relevant information for biological entities of interest. Details of nucleotide sequences, gene expression, molecular interactions, and three-dimensional structures are maintained across many different databases. Retrieving all necessary information requires an integrated system that can query multiple databases with minimal overhead. This paper introduces a universal parser and relational schema translator that can be utilized for all NCBI databases in Abstract Syntax Notation (ASN.1). The data models for OMIM, Entrez-Gene, Pubmed, MMDB and GenBank have been successfully converted into relational databases, and all are easily linkable, helping to answer complex biological questions. These tools allow research scientists to integrate NCBI databases locally without significant workload or development time.

  16. BioMagResBank databases DOCR and FRED containing converted and filtered sets of experimental NMR restraints and coordinates from over 500 protein PDB structures

    Energy Technology Data Exchange (ETDEWEB)

    Doreleijers, Jurgen F. [University of Wisconsin-Madison, BioMagResBank, Department of Biochemistry (United States); Nederveen, Aart J. [Utrecht University, Bijvoet Center for Biomolecular Research (Netherlands); Vranken, Wim [European Bioinformatics Institute, Macromolecular Structure Database group (United Kingdom); Lin Jundong [University of Wisconsin-Madison, BioMagResBank, Department of Biochemistry (United States); Bonvin, Alexandre M.J.J.; Kaptein, Robert [Utrecht University, Bijvoet Center for Biomolecular Research (Netherlands); Markley, John L.; Ulrich, Eldon L. [University of Wisconsin-Madison, BioMagResBank, Department of Biochemistry (United States)], E-mail: elu@bmrb.wisc.edu

    2005-05-15

    We present two new databases of NMR-derived distance and dihedral angle restraints: the Database Of Converted Restraints (DOCR) and the Filtered Restraints Database (FRED). These databases currently correspond to 545 proteins with NMR structures deposited in the Protein Databank (PDB). The criteria for inclusion were that these should be unique, monomeric proteins with author-provided experimental NMR data and coordinates available from the PDB capable of being parsed and prepared in a consistent manner. The Wattos program was used to parse the files, and the CcpNmr FormatConverter program was used to prepare them semi-automatically. New modules, including a new implementation of Aqua in the BioMagResBank (BMRB) software Wattos were used to analyze the sets of distance restraints (DRs) for inconsistencies, redundancies, NOE completeness, classification and violations with respect to the original coordinates. Restraints that could not be associated with a known nomenclature were flagged. The coordinates of hydrogen atoms were recalculated from the positions of heavy atoms to allow for a full restraint analysis. The DOCR database contains restraint and coordinate data that is made consistent with each other and with IUPAC conventions. The FRED database is based on the DOCR data but is filtered for use by test calculation protocols and longitudinal analyses and validations. These two databases are available from websites of the BMRB and the Macromolecular Structure Database (MSD) in various formats: NMR-STAR, CCPN XML, and in formats suitable for direct use in the software packages CNS and CYANA.

  18. Electronic database of arterial aneurysms

    Directory of Open Access Journals (Sweden)

    Fabiano Luiz Erzinger

    2014-12-01

Full Text Available Background: The creation of an electronic database facilitates the storage of information and streamlines the exchange of data, making the exchange of knowledge for future research easier. Objective: To construct an electronic database containing comprehensive and up-to-date clinical and surgical data on the most common arterial aneurysms, to help advance scientific research. Methods: The most important specialist textbooks and articles found in journals and on internet databases were reviewed in order to define the basic structure of the protocol. Data were computerized using the SINPE© system for integrated electronic protocols and tested in a pilot study. Results: The data entered onto the system were first used to create a Master protocol, organized into a structure of top-level directories covering a large proportion of the content on vascular diseases, as follows: patient history; physical examination; supplementary tests and examinations; diagnosis; treatment; and clinical course. By selecting items from the Master protocol, Specific protocols were then created for the 22 arterial sites most often involved by aneurysms. The program provides a method for collecting data on patients, including clinical characteristics (patient history and physical examination), supplementary tests and examinations, treatments received, and follow-up care after treatment. Any information of interest on these patients that is contained in the protocol can then be used to query the database and select data for studies. Conclusions: It proved possible to construct a database of clinical and surgical data on the arterial aneurysms of greatest interest and, by adapting the data to specific software, the database was integrated into the SINPE© system, thereby providing a standardized method for collecting data on these patients and tools for retrieving this information in an organized manner for use in scientific studies.

  19. Secure Distributed Databases Using Cryptography

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2006-01-01

Full Text Available Computational encryption is used intensively by database management systems to ensure the privacy and integrity of information physically stored in files. The information is also sent over networks and replicated on different distributed systems. It is shown that a satisfactory level of security is achieved if the rows and columns of tables are encrypted independently of the table or computer that holds the data. It is also very important that SQL (Structured Query Language) query requests and responses be encrypted over the network connection between the client and the database server. All of these techniques and methods must be implemented by database administrators, designers and developers within a consistent security policy.
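Independent per-cell encryption of the kind described can be sketched with a toy stream cipher built from standard-library HMAC. This is for illustration only — a production system would use a vetted authenticated cipher such as AES-GCM — and the key and nonce values are invented:

```python
import hashlib
import hmac

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from HMAC-SHA256 in counter mode."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def crypt_cell(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR the cell with the keystream; the same call encrypts and decrypts."""
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"per-table-secret-key"          # hypothetical table key
nonce = b"row42:salary"                # unique per (row, column) cell
ciphertext = crypt_cell(key, nonce, b"75000")
print(crypt_cell(key, nonce, ciphertext))  # → b'75000'
```

Because each cell's nonce combines its row and column identity, cells can be encrypted and decrypted independently of the table or machine holding them, which is the property the abstract argues for.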

  20. Database Description - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Yeast Interacting Proteins Database Database Description General information of database Database... name Yeast Interacting Proteins Database Alternative name - DOI 10.18908/lsdba.nbdc00742-000 Creator C...-ken 277-8561 Tel: +81-4-7136-3989 FAX: +81-4-7136-3979 E-mail : Database classif...s cerevisiae Taxonomy ID: 4932 Database description Information on interactions and related information obta...l Acad Sci U S A. 2001 Apr 10;98(8):4569-74. Epub 2001 Mar 13. External Links: Original website information Database

  1. PAMDB: a comprehensive Pseudomonas aeruginosa metabolome database.

    Science.gov (United States)

    Huang, Weiliang; Brewer, Luke K; Jones, Jace W; Nguyen, Angela T; Marcu, Ana; Wishart, David S; Oglesby-Sherrouse, Amanda G; Kane, Maureen A; Wilks, Angela

    2018-01-04

The Pseudomonas aeruginosa Metabolome Database (PAMDB, http://pseudomonas.umaryland.edu) is a searchable, richly annotated metabolite database specific to P. aeruginosa. P. aeruginosa is a soil organism and significant opportunistic pathogen that adapts to its environment through a versatile energy metabolism network. Furthermore, P. aeruginosa is a model organism for the study of biofilm formation, quorum sensing, and bioremediation processes, each of which depends on unique pathways and metabolites. The PAMDB is modelled on the Escherichia coli (ECMDB), yeast (YMDB) and human (HMDB) metabolome databases and contains >4370 metabolites and 938 pathways with links to over 1260 genes and proteins. The database information was compiled from electronic databases, journal articles and mass spectrometry (MS) metabolomic data obtained in our laboratories. For each metabolite entered, we provide detailed compound descriptions, names and synonyms, structural and physicochemical information, nuclear magnetic resonance (NMR) and MS spectra, enzyme and pathway information, as well as gene and protein sequences. The database allows extensive searching via chemical names, structure and molecular weight, together with gene, protein and pathway relationships. The PAMDB and its future iterations will provide a valuable resource to biologists, natural product chemists and clinicians in identifying active compounds, potential biomarkers and clinical diagnostics. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
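A molecular-weight search of the kind the database offers can be sketched as a window filter over metabolite records. The entries and weights below are illustrative placeholders, not values taken from PAMDB:

```python
# Hypothetical metabolite records (name, monoisotopic-ish weight in Da).
metabolites = [
    {"name": "pyocyanin", "mw": 210.23},
    {"name": "pyoverdine", "mw": 1365.4},
    {"name": "alginate monomer", "mw": 194.14},
]

def search_by_mw(entries, target, tolerance=20.0):
    """Return names of metabolites within +/- tolerance Da of target."""
    return [m["name"] for m in entries if abs(m["mw"] - target) <= tolerance]

print(search_by_mw(metabolites, 200.0))  # → ['pyocyanin', 'alginate monomer']
```

A real metabolome database would combine such a mass window with name, structure, and pathway filters in a single query.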

  2. Computerized nuclear material database management system for power reactors

    International Nuclear Information System (INIS)

    Cheng Binghao; Zhu Rongbao; Liu Daming; Cao Bin; Liu Ling; Tan Yajun; Jiang Jincai

    1994-01-01

The software packages for nuclear material database management for power reactors are described. The database structure, data flow and model for management of the database are analysed. Also mentioned are the main functions and characteristics of the software packages, which have been successfully installed and used at both the Daya Bay Nuclear Power Plant and the Qinshan Nuclear Power Plant for the purpose of handling the nuclear material database automatically

  3. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available List Contact us Trypanosomes Database Update History of This Database Date Update contents 2014/05/07 The contact information is corrected. The features and manner of utilization of the database are corrected. 2014/02/04 Trypanosomes Database English archive site is opened. 2011/04/04 Trypanosomes Database ( http://www.tanpaku.org/tdb/ ) is opened. About This Database Database Description Download License Update History of This Database Site Policy | Contact Us Update History of This Database - Trypanosomes Database | LSDB Archive ...

  4. A Database Interface for Complex Objects

    NARCIS (Netherlands)

    Holsheimer, Marcel; de By, Rolf A.; de By, R.A.; Ait-Kaci, Hassan

    We describe a formal design for a logical query language using psi-terms as data structures to interact effectively and efficiently with a relational database. The structure of psi-terms provides an adequate representation for so-called complex objects. They generalize conventional terms used in

  5. Uses and limitations of registry and academic databases.

    Science.gov (United States)

    Williams, William G

    2010-01-01

A database is simply a structured collection of information. A clinical database may be a Registry (a limited amount of data for every patient undergoing heart surgery) or Academic (an organized and extensive dataset on an inception cohort of carefully selected patients). A registry and an academic database have different purposes and costs. The data to be collected for a database are defined by its purpose and the output reports required for achieving that purpose. A Registry's purpose is to ensure quality care; an Academic Database's, to discover new knowledge through research. A database is only as good as the data it contains. Database personnel must be exceptionally committed and supported by clinical faculty. A system to routinely validate and verify data integrity is essential to ensure database utility. Frequent use of the database improves its accuracy. For congenital heart surgeons, routine use of a Registry Database is an essential component of clinical practice. Copyright (c) 2010 Elsevier Inc. All rights reserved.
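The routine integrity checks the author calls essential can be as simple as re-running rule-based validators over every record. The field names and rules below are invented for illustration, not taken from any registry:

```python
def validate(record):
    """Return a list of integrity-rule violations for one patient record."""
    errors = []
    if not record.get("patient_id"):
        errors.append("missing patient_id")
    if not (0 < record.get("age", -1) < 120):
        errors.append("age out of range")
    # Dates in ISO-8601 strings compare correctly as text.
    if record.get("discharge_date", "") < record.get("admission_date", ""):
        errors.append("discharge before admission")
    return errors

rec = {"patient_id": "P001", "age": 67,
       "admission_date": "2009-03-02", "discharge_date": "2009-03-10"}
print(validate(rec))  # → []
```

Running such validators on every insert, and periodically over the whole dataset, is one concrete way to keep a registry "as good as the data it contains."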

  6. XML: James Webb Space Telescope Database Issues, Lessons, and Status

    Science.gov (United States)

    Detter, Ryan; Mooney, Michael; Fatig, Curtis

    2003-01-01

This paper presents the current concept of using the extensible Markup Language (XML) as the underlying structure for the James Webb Space Telescope (JWST) database. The purpose of using XML is to provide a JWST database that is independent of any portion of the ground system, yet still compatible with the various systems using a variety of different structures. Testing of the JWST Flight Software (FSW) started in 2002, yet the launch is scheduled for 2011 with a planned 5-year mission and a 5-year follow-on option. The initial database and ground system elements, including the commands, telemetry, and ground system tools, will be used for 19 years, plus post-mission activities. During the Integration and Test (I&T) phases of JWST development, 24 distinct laboratories, each geographically dispersed, will have local database tools with an XML database. Each laboratory's database tools will be used for exporting and importing data both locally and to a central database system, inputting data to the database certification process, and providing various reports. A centralized certified database repository will be maintained by the Space Telescope Science Institute (STScI) in Baltimore, Maryland, USA. One of the challenges for the database is to be flexible enough to allow individual items to be upgraded, added or changed without affecting the entire ground system. Using XML should also allow the import and export formats needed by the various elements to be altered, the verification/validation of each database item to be tracked, many organizations to provide database inputs, and the many existing database processes to be merged into one central database structure throughout the JWST program. Many National Aeronautics and Space Administration (NASA) projects have attempted to take advantage of open source and commercial technology. Often this causes a greater reliance on the use of Commercial-Off-The-Shelf (COTS) products, which is often limiting.
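A ground-system-independent XML database of the kind described can be parsed by each laboratory with standard tooling. The sketch below reads a hypothetical telemetry-definition fragment; the element and attribute names are invented, not the actual JWST schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical telemetry-definition fragment, standing in for one
# laboratory's local XML database export.
doc = """
<database version="1.0">
  <telemetry mnemonic="TEMP_A" units="K" type="float"/>
  <telemetry mnemonic="VOLT_B" units="V" type="float"/>
</database>
"""

root = ET.fromstring(doc)
mnemonics = [t.get("mnemonic") for t in root.findall("telemetry")]
print(mnemonics)  # → ['TEMP_A', 'VOLT_B']
```

Because the format is self-describing, each laboratory can add attributes or elements for local needs without breaking tools that only read the fields they know, which is the flexibility argument made above.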

  7. Virtual materials design using databases of calculated materials properties

    International Nuclear Information System (INIS)

    Munter, T R; Landis, D D; Abild-Pedersen, F; Jones, G; Wang, S; Bligaard, T

    2009-01-01

    Materials design is most commonly carried out by experimental trial and error techniques. Current trends indicate that the increased complexity of newly developed materials, the exponential growth of the available computational power, and the constantly improving algorithms for solving the electronic structure problem, will continue to increase the relative importance of computational methods in the design of new materials. One possibility for utilizing electronic structure theory in the design of new materials is to create large databases of materials properties, and subsequently screen these for new potential candidates satisfying given design criteria. We utilize a database of more than 81 000 electronic structure calculations. This alloy database is combined with other published materials properties to form the foundation of a virtual materials design framework (VMDF). The VMDF offers a flexible collection of materials databases, filters, analysis tools and visualization methods, which are particularly useful in the design of new functional materials and surface structures. The applicability of the VMDF is illustrated by two examples. One is the determination of the Pareto-optimal set of binary alloy methanation catalysts with respect to catalytic activity and alloy stability; the other is the search for new alloy mercury absorbers.
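The screening step described — keeping only candidates that no other candidate beats on both catalytic activity and alloy stability — is a Pareto-front filter, sketched below. The alloy tuples are invented, not values from the 81 000-calculation database:

```python
def pareto_front(points):
    """Return points not dominated by any other point (maximize both axes)."""
    return [p for p in points
            if not any(q != p and q[0] >= p[0] and q[1] >= p[1]
                       for q in points)]

# Hypothetical (catalytic activity, alloy stability) scores per candidate.
alloys = [(0.9, 0.2), (0.6, 0.6), (0.3, 0.9), (0.5, 0.5)]
print(pareto_front(alloys))  # → [(0.9, 0.2), (0.6, 0.6), (0.3, 0.9)]
```

Here (0.5, 0.5) is dropped because (0.6, 0.6) is at least as good on both criteria; the survivors are the design candidates worth closer inspection.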

  8. Assessment of the structural and functional impact of in-frame mutations of the DMD gene, using the tools included in the eDystrophin online database

    Directory of Open Access Journals (Sweden)

    Nicolas Aurélie

    2012-07-01

Full Text Available Abstract Background: Dystrophin is a large essential protein of skeletal and heart muscle. It is a filamentous scaffolding protein with numerous binding domains. Mutations in the DMD gene, which encodes dystrophin, mostly result in the deletion of one or several exons and cause Duchenne (DMD and Becker (BMD muscular dystrophies. The most common DMD mutations are frameshift mutations resulting in an absence of dystrophin from tissues. In-frame DMD mutations are less frequent and result in a protein with partial wild-type dystrophin function. The aim of this study was to highlight structural and functional modifications of dystrophin caused by in-frame mutations. Methods and results: We developed a dedicated database for dystrophin, the eDystrophin database. It contains 209 different non-frame-shifting mutations found in 945 patients from a French cohort and previous studies. Bioinformatics tools provide models of the three-dimensional structure of the protein at deletion sites, making it possible to determine whether the mutated protein retains the typical filamentous structure of dystrophin. An analysis of the structure of mutated dystrophin molecules showed that hybrid repeats were reconstituted at the deletion site in some cases. These hybrid repeats harbored the typical triple coiled-coil structure of native repeats, which may be correlated with better function in muscle cells. Conclusion: This new database focuses on the dystrophin protein and its modification due to in-frame deletions in BMD patients. The observation of hybrid repeat reconstitution in some cases provides insight into phenotype-genotype correlations in dystrophin diseases and possible strategies for gene therapy. The eDystrophin database is freely available: http://edystrophin.genouest.org/.

  9. Software Engineering Laboratory (SEL) database organization and user's guide

    Science.gov (United States)

    So, Maria; Heller, Gerard; Steinberg, Sandra; Spiegel, Douglas

    1989-01-01

    The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base tables is described. In addition, techniques for accessing the database, through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL), are discussed.
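The base-table/view organization described above can be sketched with a toy example. This is an illustrative sketch only, using SQLite in place of the SEL's ORACLE system; the table, view, and column names are invented, not taken from the SEL database.

```python
# Hypothetical sketch of the "base tables plus views" pattern described in
# the SEL database guide, using SQLite for illustration. The table "effort",
# the view "project_totals", and all data values are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Base table holding raw measurement records.
cur.execute("CREATE TABLE effort (project TEXT, phase TEXT, hours REAL)")
cur.executemany("INSERT INTO effort VALUES (?, ?, ?)",
                [("SEL-1", "design", 120.0),
                 ("SEL-1", "coding", 200.0),
                 ("SEL-2", "design", 80.0)])

# A view presents an aggregated mapping over the base table, analogous to
# the SEL views layered on top of base tables.
cur.execute("""CREATE VIEW project_totals AS
               SELECT project, SUM(hours) AS total_hours
               FROM effort GROUP BY project""")

for project, total in cur.execute(
        "SELECT project, total_hours FROM project_totals ORDER BY project"):
    print(project, total)
```

Queries go through the view exactly as through a table, which is the property such a layered organization relies on.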

  10. Solid Waste Projection Model: Database (Version 1.3)

    International Nuclear Information System (INIS)

    Blackburn, C.L.

    1991-11-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.3 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement.

  11. Database Description - BodyParts3D | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Alternative name: 3D structure database for anatomical concepts. DOI: 10.18908/lsdba.nbdc00837-000. Creator name: Kousaku Okubo. Reference: …da K, Tamura T, Kawamoto S, Takagi T, Okubo K. Journal: Nucleic Acids Res. 2008 Oct 3. External Links: Original website information.

  12. Toward An Unstructured Mesh Database

    Science.gov (United States)

    Rezaei Mahdiraji, Alireza; Baumann, Peter Peter

    2014-05-01

    Unstructured meshes are used in several application domains, such as the earth sciences (e.g., seismology), medicine, oceanography, climate modeling, and GIS, as approximate representations of physical objects. Meshes subdivide a domain into smaller geometric elements (called cells) which are glued together by incidence relationships. The subdivision of a domain allows computational manipulation of complicated physical structures. For instance, seismologists model earthquakes using elastic wave propagation solvers on hexahedral meshes. Such a hexahedral mesh contains several hundred million grid points and millions of hexahedral cells, and each vertex node stores a multitude of data fields. To run simulations on such meshes, one needs to iterate over all the cells, iterate over the cells incident to a given cell, retrieve coordinates of cells, assign data values to cells, etc. Although meshes are used in many application domains, to the best of our knowledge there is no database vendor that supports unstructured mesh features. Currently, the main tools for querying and manipulating unstructured meshes are mesh libraries, e.g., CGAL and GRAL. Mesh libraries are dedicated libraries which include mesh algorithms and can be run on mesh representations. The libraries do not scale with dataset size, do not have a declarative query language, and need deep C++ knowledge for query implementations. Furthermore, due to high coupling between the implementations and the input file structure, the implementations are less reusable and costly to maintain. A dedicated mesh database offers the following advantages: 1) declarative querying, 2) ease of maintenance, 3) hiding the mesh storage structure from applications, and 4) transparent query optimization. To design a mesh database, the first challenge is to define a suitable generic data model for unstructured meshes. We proposed the ImG-Complexes data model as a generic topological mesh data model which extends the incidence graph model to multi
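The cell-iteration queries listed in the abstract (iterate over cells, find incident cells, retrieve coordinates) can be sketched minimally as follows; the two-triangle mesh and all names are illustrative and are not part of the ImG-Complexes model.

```python
# Minimal sketch of an incidence-based unstructured-mesh structure, in the
# spirit of the incidence-graph model the abstract extends. The tiny
# two-triangle mesh and the function names are invented for illustration.
vertices = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 1.0), 3: (0.0, 1.0)}
# Each cell lists the ids of its incident vertices (two triangles here).
cells = {"t0": [0, 1, 2], "t1": [0, 2, 3]}

def incident_cells(vertex_id):
    """Cells glued to a given vertex via the incidence relationship."""
    return sorted(c for c, vs in cells.items() if vertex_id in vs)

def cell_coordinates(cell_id):
    """Retrieve the coordinates of a cell's vertices (a typical mesh query)."""
    return [vertices[v] for v in cells[cell_id]]

print(incident_cells(0))        # both triangles touch vertex 0
print(cell_coordinates("t1"))
```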

  13. Quality assurance database for the CBM silicon tracking system

    Energy Technology Data Exchange (ETDEWEB)

    Lymanets, Anton [Physikalisches Institut, Universitaet Tuebingen (Germany); Collaboration: CBM-Collaboration

    2015-07-01

    The Silicon Tracking System is the main tracking device of the CBM experiment at FAIR. Its construction includes the production, quality assurance and assembly of a large number of components, e.g., 106 carbon fiber support structures, 1300 silicon microstrip sensors, 16.6k readout chips, analog microcables, etc. Detector construction is distributed over several production and assembly sites and calls for a database that is extensible and allows tracing the components, integrating the test data, and monitoring the component statuses and data flow. A possible implementation of the above-mentioned requirements is being developed at GSI (Darmstadt) based on the FAIR DB Virtual Database Library, which provides connectivity to common SQL database engines (PostgreSQL, Oracle, etc.). The data structure and database architecture, as well as the status of the implementation, are discussed.

  14. DAD - Distributed Adamo Database system at Hermes

    International Nuclear Information System (INIS)

    Wander, W.; Dueren, M.; Ferstl, M.; Green, P.; Potterveld, D.; Welch, P.

    1996-01-01

    Software development for the HERMES experiment faces the challenges of many other experiments in modern high energy physics: complex data structures and relationships have to be processed at high I/O rates. Experimental control and data analysis are done in a distributed environment of CPUs with various operating systems and require access to different time-dependent databases, such as calibration and geometry. Slow control and experimental control need flexible inter-process communication. Program development is done in different programming languages, and interfaces to the libraries should not restrict the capabilities of the language. The need to handle complex data structures is fulfilled by the ADAMO entity-relationship model. Mixed-language programming can be provided using the CFORTRAN package. DAD, the Distributed ADAMO Database library, was developed to provide the required I/O and database functionality. (author)

  15. JT-60 database system, 2

    International Nuclear Information System (INIS)

    Itoh, Yasuhiro; Kurihara, Kenichi; Kimura, Toyoaki.

    1987-07-01

    The JT-60 central control system, ''ZENKEI'', collects the control and instrumentation data relevant to discharge, together with device status data for plant monitoring. The former, the engineering data, amounts to about 3 Mbytes per shot of discharge. The ''ZENKEI'' control system, which consists of seven minicomputers for on-line real-time control, has little capacity for handling such a large amount of data for physical and engineering analysis. In order to solve this problem, it was planned to establish the experimental database on the Front-end Processor (FEP) of the general-purpose large computer in the JAERI Computer Center. The database management system (DBMS), therefore, has been developed for creating the database during the shot interval. The engineering data are transferred from ''ZENKEI'' to the FEP through the dedicated communication line after each shot. A hierarchical data model has been adopted in this database, which consists of data files in a tree structure under the three keys of system, discharge type and shot number. The JT-60 DBMS provides data handling packages of subroutines for interfacing the database with users' application programs. Subroutine packages for supporting graphic processing and an access control function for database security are also prepared in this DBMS. (author)
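The tree structure with three keys (system, discharge type, shot number) can be sketched as nested mappings; the system name reuse, discharge types, and data values below are invented for illustration and are not JT-60 data.

```python
# Sketch of a three-key tree (system, discharge type, shot number) like the
# hierarchical data model described for the JT-60 database. Keys and the
# stored engineering values are invented for this example.
from collections import defaultdict

def tree():
    # A self-nesting mapping: missing keys create a new subtree on demand.
    return defaultdict(tree)

db = tree()
db["ZENKEI"]["ohmic"][10512] = {"plasma_current_MA": 1.2}
db["ZENKEI"]["NBI"][10513] = {"plasma_current_MA": 1.8}

def lookup(system, discharge_type, shot):
    """Access-path analogue of the DBMS data-handling subroutines."""
    return db[system][discharge_type].get(shot)

print(lookup("ZENKEI", "ohmic", 10512))
```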

  16. SSC lattice database and graphical interface

    International Nuclear Information System (INIS)

    Trahern, C.G.; Zhou, J.

    1991-11-01

    When completed, the Superconducting Super Collider will be the world's largest accelerator complex. In order to build this system on schedule, the use of database technologies will be essential. In this paper we discuss one of the database efforts underway at the SSC, the lattice database. The SSC lattice database provides a centralized source for the design of each major component of the accelerator complex. This includes the two collider rings, the High Energy Booster, Medium Energy Booster, Low Energy Booster, and the LINAC, as well as transfer and test beam lines. These designs have been created using a menagerie of programs such as SYNCH, DIMAD, MAD, TRANSPORT, MAGIC, TRACE3D, and TEAPOT. However, once a design has been completed, it is entered into a uniform database schema in the database system. In this paper we discuss the reasons for creating the lattice database and its implementation via the commercial database system SYBASE. Each lattice in the lattice database is composed of a set of tables whose data structure can describe any of the SSC accelerator lattices. In order to allow the user community access to the databases, a programmatic interface known as dbsf (for database to several formats) has been written. Dbsf creates ASCII input files appropriate to the above-mentioned accelerator design programs. In addition, it has a binary dataset output using the Self Describing Standard data discipline provided with the Integrated Scientific Tool Kit software tools. Finally we discuss the graphical interfaces to the lattice database. The primary interface, known as OZ, is a simulation environment as well as a database browser.

  17. Beyond the ENDF format: A modern nuclear database structure. SG38 meeting, OECD Conference Centre, 9-11 May 2016

    International Nuclear Information System (INIS)

    Herman, Michal Wladyslaw; Cabellos De Francisco, Oscar; Beck, Bret; Trkov, Andrej; Conlin, Jeremy Lloyd; Mcnabb, Dennis P.; Malvagi, Fausto; Grudzevich, Oleg T.; Mattoon, Caleb; Wiarda, Dorothea; Brown, David; Chadwick, Mark; Roubtsov, Danila; Iwamoto, Osamu; Yokoyama, Kenji; White, Morgan C.; Coste-Delclaux, Mireille; Fiorito, Luca; Haeck, Wim; Dunn, Michael; Jouanne, Cedric

    2016-05-01

    WPEC subgroup 38 (SG38) was formed to develop a new structure for storing nuclear reaction data, that is meant to eventually replace ENDF-6 as the standard way to store and share evaluations. The work of SG38 covers the following tasks: Designing flexible, general-purpose data containers; Determining a logical and easy-to-understand top-level hierarchy for storing evaluated nuclear reaction data; Creating a particle database for storing particles, masses and level schemes; Specifying the infrastructure (plotting, processing, etc.) that must accompany the new structure; Developing an Application Programming Interface or API to allow other codes to access data stored in the new structure; Specifying what tests need to be implemented for quality assurance of the new structure and associated infrastructure; Ensuring documentation and governance of the structure and associated infrastructure. This document is the proceedings of the SG38 meeting which took place at the OECD Conference Centre, on 9-11 May 2016. It comprises all the available presentations (slides) given by the participants: - 1) Beyond the ENDF format: Working toward the first specifications (Dennis P. McNabb); - 2) Summary of LLNL/LANL/ORNL/BNL discussions on SG38 (Caleb M. Mattoon); - 3) Status of Top Level Hierarchy Requirements Document (D.A. Brown); - 4) GND: Storing multiple representations of a quantity using forms, components and styles (Bret Beck); - 5) Particle Database update (Caleb M. Mattoon); - 6) General Purpose Data Containers (Jeremy Lloyd Conlin); - 7) Functional data containers (Bret Beck); - 8) Top-level hierarchy specifications (D.A. Brown)

  18. Database Application Schema Forensics

    Directory of Open Access Journals (Sweden)

    Hector Quintus Beyers

    2014-12-01

    Full Text Available The application schema layer of a Database Management System (DBMS) can be modified to deliver results that may warrant a forensic investigation. Table structures can be corrupted by changing the metadata of a database, or operators of the database can be altered to deliver incorrect results when used in queries. This paper discusses categories of possibilities that exist to alter the application schema, with some practical examples. Two forensic environments in which a forensic investigation can take place are introduced, and arguments are provided for why these environments are important. Methods are presented for how these environments can be achieved for the application schema layer of a DBMS. A process is proposed for how forensic evidence should be extracted from the application schema layer of a DBMS. The application schema forensic evidence identification process can be applied to a wide range of forensic settings.

  19. Database Vs Data Warehouse

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Data warehouse technology includes a set of concepts and methods that offer users useful information for decision making. The necessity to build a data warehouse arises from the necessity to improve the quality of information in the organization. The data, originating from different sources and having a variety of forms - both structured and unstructured - are filtered according to business rules and integrated into a single large data collection. Using informatics solutions, managers have understood that data stored in operational systems - including databases - are an informational gold mine that must be exploited. Data warehouses have been developed to answer the increasing demands for complex analysis, which could not be properly achieved with operational databases. The present paper emphasizes some of the criteria that information application developers can use in order to choose between a database solution and a data warehouse solution.

  20. Update History of This Database - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history of this database: 2017/02/27 - Arabidopsis Phenome Database English archive site is opened. Arabidopsis Phenome Database (http://jphenome.info/?page_id=95) is opened.

  1. Database for environmental monitoring at nuclear facilities

    International Nuclear Information System (INIS)

    Raceanu, M.; Varlam, C.; Enache, A.; Faurescu, I.

    2006-01-01

    To ensure that an assessment can be made of the impact of nuclear facilities on the local environment, a program of environmental monitoring must be established well in advance of facility operation. Enormous amounts of data must be stored and correlated, starting with location, meteorology, sample type characterization (from water to different kinds of food), radioactivity measurement and isotopic measurement (e.g., for C-14 determination, the C-13 isotopic correction is a must). Data modelling is a well-known mechanism for describing data structures at a high level of abstraction. Such models are often used to automatically create database structures and to generate code structures used to access the databases. This has the disadvantage of losing data constraints that might be specified in data models for data checking. An embodiment of the system described here includes a computer-readable memory for storing a definitional data table that defines variable symbols representing respective measurable physical phenomena. The definitional data table uniquely defines the variable symbols by relating them to respective data domains for the respective phenomena represented by the symbols. Well-established rules for how the data should be stored and accessed are given in relational database theory. The theory comprises guidelines such as the avoidance of duplicated data using a technique called normalization, and how to identify the unique identifier for a database record. (author)
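The definitional data table idea (variable symbols tied to data domains used for checking) can be sketched as follows; the symbols, units, and domain bounds are invented for illustration and are not taken from the monitoring program.

```python
# Hedged sketch of a "definitional data table": each variable symbol is
# defined once, together with the data domain used to check incoming
# measurements. Symbols, units, and bounds are invented for this example.
definitional_table = {
    "C14_activity": {"unit": "Bq/kg", "domain": (0.0, 1.0e6)},
    "air_temp":     {"unit": "degC",  "domain": (-60.0, 60.0)},
}

def check(symbol, value):
    """Reject values outside the domain defined for the symbol."""
    lo, hi = definitional_table[symbol]["domain"]
    return lo <= value <= hi

print(check("air_temp", 21.5))    # within the defined domain
print(check("C14_activity", -3))  # activity cannot be negative
```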

  2. A trending database for human performance events

    International Nuclear Information System (INIS)

    Harrison, D.

    1993-01-01

    An effective operations experience program includes a standardized methodology for the investigation of unplanned events and a tool capable of retaining investigation data for the purpose of trending analysis. A database used, in conjunction with a formalized investigation procedure, for trending unplanned event data is described. The database follows the structure of INPO's Human Performance Enhancement System (HPES) for investigations, and its screens duplicate on-line the HPES evaluation forms. All information pertaining to investigations is collected, retained and entered into the database using these forms. The database will be used for trending analysis to determine whether any significant patterns exist, for tracking progress over time both within AECL and against industry standards, and for evaluating the success of corrective actions. Trending information will be used to help prevent similar occurrences.
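Trending analysis over investigation records can be sketched as a simple frequency count; the records and cause categories below are invented and do not come from the HPES database.

```python
# Illustrative sketch of trending over event-investigation records: count
# causal-factor categories to surface recurring patterns. The records and
# the HPES-style category names are invented for this example.
from collections import Counter

events = [
    {"year": 1991, "cause": "procedure"},
    {"year": 1991, "cause": "training"},
    {"year": 1992, "cause": "procedure"},
    {"year": 1992, "cause": "procedure"},
]

by_cause = Counter(e["cause"] for e in events)
print(by_cause.most_common(1))  # dominant causal factor across the period
```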

  3. A subsoil compaction database: its development, structure and content

    NARCIS (Netherlands)

    Trautner, A.; Akker, van den J.J.H.; Fleige, H.; Arvidsson, J.; Horn, R.

    2003-01-01

    A database which holds results of field and laboratory experiments on the impact of subsoil compaction on physical and mechanical soil parameters and on crop yields and environmental impact is being developed within the EU-sponsored concerted action (CA) project "Experiences with the impact of

  4. Design research of uranium mine borehole database

    International Nuclear Information System (INIS)

    Xie Huaming; Hu Guangdao; Zhu Xianglin; Chen Dehua; Chen Miaoshun

    2008-01-01

    With energy sources in short supply, uranium exploration has been stepped up, but the storage, analysis and use of uranium exploration data are currently not highly computerized in China; the data are poorly shared and used, and cannot meet the needs of production and research. These tasks would be handled far better if the data were stored and managed in a database system. The conceptual structure design, logical structure design and data integrity checks are discussed according to the demands of applications and an analysis of uranium exploration data. An application of the database is illustrated finally. (authors)

  5. Update History of This Database - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history of this database: 2017/03/13 - SKIP Stemcell Database English archive site is opened. 2013/03/29 - SKIP Stemcell Database (https://www.skip.med.keio.ac.jp/SKIPSearch/top?lang=en) is opened.

  6. Modeling livestock population structure: a geospatial database for Ontario swine farms.

    Science.gov (United States)

    Khan, Salah Uddin; O'Sullivan, Terri L; Poljak, Zvonimir; Alsop, Janet; Greer, Amy L

    2018-01-30

    Infectious diseases in farmed animals have economic, social, and health consequences. Foreign animal diseases (FADs) of swine are of significant concern. Mathematical and simulation models are often used to simulate FAD outbreaks and best practices for control. However, simulation outcomes are sensitive to the population structure used. Within Canada, access to individual swine farm population data with which to parameterize models is a challenge because of privacy concerns. Our objective was to develop a methodology to model the farmed swine population in Ontario, Canada that could represent the existing population structure and improve the efficacy of simulation models. We developed a swine population model based on factors such as facilities supporting farm infrastructure, land availability, zoning and local regulations, and natural geographic barriers that could affect swine farming in Ontario. Assigned farm locations matched the swine farm density described in the 2011 Canadian Census of Agriculture. Farms were then randomly assigned to farm types proportional to the existing swine herd types. We compared the swine population model with a known database of swine farm locations in Ontario and found that the modeled population was representative of farm locations with high accuracy (AUC: 0.91, standard deviation: 0.02), suggesting that our algorithm generated a reasonable approximation of farm locations in Ontario. In the absence of a readily accessible dataset providing details of the relative locations of swine farms in Ontario, development of a model livestock population that captures key characteristics of the true population structure while protecting privacy concerns is an important methodological advancement. This methodology will be useful for individuals interested in modeling the spread of pathogens between farms across a landscape and using these models to evaluate disease control strategies.

  7. A Support Database System for Integrated System Health Management (ISHM)

    Science.gov (United States)

    Schmalzel, John; Figueroa, Jorge F.; Turowski, Mark; Morris, John

    2007-01-01

    The development, deployment, operation and maintenance of Integrated Systems Health Management (ISHM) applications require the storage and processing of tremendous amounts of low-level data. This data must be shared in a secure and cost-effective manner between developers, and processed within several heterogeneous architectures. Modern database technology allows this data to be organized efficiently, while ensuring the integrity and security of the data. The extensibility and interoperability of the current database technologies also allows for the creation of an associated support database system. A support database system provides additional capabilities by building applications on top of the database structure. These applications can then be used to support the various technologies in an ISHM architecture. This presentation and paper propose a detailed structure and application description for a support database system, called the Health Assessment Database System (HADS). The HADS provides a shared context for organizing and distributing data as well as a definition of the applications that provide the required data-driven support to ISHM. This approach provides another powerful tool for ISHM developers, while also enabling novel functionality. This functionality includes: automated firmware updating and deployment, algorithm development assistance and electronic datasheet generation. The architecture for the HADS has been developed as part of the ISHM toolset at Stennis Space Center for rocket engine testing. A detailed implementation has begun for the Methane Thruster Testbed Project (MTTP) in order to assist in developing health assessment and anomaly detection algorithms for ISHM. The structure of this implementation is shown in Figure 1. The database structure consists of three primary components: the system hierarchy model, the historical data archive and the firmware codebase. The system hierarchy model replicates the physical relationships between

  8. Utilizing Structural Knowledge for Information Retrieval in XML Databases

    NARCIS (Netherlands)

    Mihajlovic, V.; Hiemstra, Djoerd; Blok, H.E.; Apers, Peter M.G.

    In this paper we address the problem of immediate translation of eXtensible Mark-up Language (XML) information retrieval (IR) queries to relational database expressions and stress the benefits of using an intermediate XML-specific algebra over relational algebra. We show how adding an XML-specific

  9. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with creation of database design for a standard kindergarten, installation of the designed database into the database system Oracle Database 10g Express Edition and demonstration of the administration tasks in this database system. The verification of the database was proved by a developed access application.

  10. An approach in building a chemical compound search engine in oracle database.

    Science.gov (United States)

    Wang, H; Volarath, P; Harrison, R

    2005-01-01

    Searching for and identifying chemical compounds are important processes in drug design and chemistry research. An efficient search engine involves a close coupling of the search algorithm and the database implementation. The database must process chemical structures, which demands approaches to represent, store, and retrieve structures in a database system. In this paper, a general database framework for a chemical compound search engine in an Oracle database is described. The framework is devoted to eliminating data-type constraints for potential search algorithms, which is a crucial step toward building a domain-specific query language on top of SQL. A search engine implementation based on the database framework is also demonstrated. The convenience of the implementation emphasizes the efficiency and simplicity of the framework.
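At the core of such a search engine is substructure matching. Below is a much-simplified sketch: plain backtracking on small labeled graphs, without the screening filters and canonical codes a production Oracle-based engine would use. The molecules and all names are invented.

```python
# Simplified sketch of the core of a substructure search: match a small
# query graph against a stored compound graph by backtracking over atom
# assignments. Real engines add screening and canonical indexing; the
# graph encoding and example molecules here are invented.
def matches(query, target):
    """query/target: {atom_id: (element, set_of_neighbor_ids)}."""
    qids = list(query)

    def extend(mapping):
        if len(mapping) == len(qids):
            return True
        q = qids[len(mapping)]
        q_elem, q_nbrs = query[q]
        for t, (t_elem, t_nbrs) in target.items():
            if t in mapping.values() or t_elem != q_elem:
                continue
            # every already-mapped query neighbor must map to a target neighbor
            if all(mapping[n] in t_nbrs for n in q_nbrs if n in mapping):
                if extend({**mapping, q: t}):
                    return True
        return False

    return extend({})

# Ethanol-like target C-C-O; the query is the C-O fragment.
target = {0: ("C", {1}), 1: ("C", {0, 2}), 2: ("O", {1})}
query = {0: ("C", {1}), 1: ("O", {0})}
print(matches(query, target))
```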

  11. Locating relevant patient information in electronic health record data using representations of clinical concepts and database structures.

    Science.gov (United States)

    Pan, Xuequn; Cimino, James J

    2014-01-01

    Clinicians and clinical researchers often seek information in electronic health records (EHRs) that are relevant to some concept of interest, such as a disease or finding. The heterogeneous nature of EHRs can complicate retrieval, risking incomplete results. We frame this problem as the presence of two gaps: 1) a gap between clinical concepts and their representations in EHR data and 2) a gap between data representations and their locations within EHR data structures. We bridge these gaps with a knowledge structure that comprises relationships among clinical concepts (including concepts of interest and concepts that may be instantiated in EHR data) and relationships between clinical concepts and the database structures. We make use of available knowledge resources to develop a reproducible, scalable process for creating a knowledge base that can support automated query expansion from a clinical concept to all relevant EHR data.
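The two bridged gaps (concept to representation, and representation to location in the database structures) can be sketched as a small knowledge base; the concepts, codes, and table locations below are invented, not drawn from the paper's knowledge resources.

```python
# Toy sketch of the two bridged gaps: concepts of interest are linked to
# narrower concepts, and concepts are linked to locations within the EHR
# database structures. Concept names, codes, and table/column locations
# are all invented for this example.
narrower = {
    "diabetes": ["type_1_diabetes", "type_2_diabetes"],
}
locations = {  # concept -> (table, column, code) within a hypothetical schema
    "type_1_diabetes": ("diagnoses", "icd_code", "E10"),
    "type_2_diabetes": ("diagnoses", "icd_code", "E11"),
}

def expand(concept):
    """Expand a concept of interest to all relevant EHR data locations."""
    found, stack = [], [concept]
    while stack:
        c = stack.pop()
        stack.extend(narrower.get(c, []))
        if c in locations:
            found.append(locations[c])
    return sorted(found)

print(expand("diabetes"))
```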

  12. Examples how to use atomic and molecular databases

    International Nuclear Information System (INIS)

    Murakami, Izumi

    2012-01-01

    As examples of how to use atomic and molecular databases, the atomic spectra database (ASD) and molecular chemical kinetics database of the National Institute of Standards and Technology (NIST), the collision cross sections of the National Institute for Fusion Science (NIFS), Open-ADAS (the open Atomic Data and Analysis Structure) and the chemical reaction rate coefficients of GRI-Mech were presented. The sorting method differs in each database, and several options are provided. Atomic wavelengths/transition probabilities and electron-collision ionization, excitation and recombination cross sections/rate coefficients can be searched simply by specifying the atom or ion using the general internet search engine (GENIE) of the IAEA. (T. Tanaka)

  13. Adaptive Constrained Optimal Control Design for Data-Based Nonlinear Discrete-Time Systems With Critic-Only Structure.

    Science.gov (United States)

    Luo, Biao; Liu, Derong; Wu, Huai-Ning

    2018-06-01

    Reinforcement learning has proved to be a powerful tool for solving optimal control problems over the past few years. However, the data-based constrained optimal control problem of nonaffine nonlinear discrete-time systems has rarely been studied. To solve this problem, an adaptive optimal control approach is developed by using value iteration-based Q-learning (VIQL) with a critic-only structure. Most existing constrained control methods require the use of a certain performance index and suit only linear or affine nonlinear systems, which is unreasonable in practice. To overcome this problem, the system transformation is first introduced with the general performance index. Then, the constrained optimal control problem is converted to an unconstrained optimal control problem. By introducing the action-state value function, i.e., the Q-function, the VIQL algorithm is proposed to learn the optimal Q-function of the data-based unconstrained optimal control problem. The convergence results of the VIQL algorithm are established with an easy-to-realize initial condition. To implement the VIQL algorithm, the critic-only structure is developed, where only one neural network is required to approximate the Q-function. The converged Q-function obtained from the critic-only VIQL method is employed to design the adaptive constrained optimal controller based on the gradient descent scheme. Finally, the effectiveness of the developed adaptive control method is tested on three examples with computer simulation.
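The value-iteration flavor of Q-learning can be illustrated on a tiny tabular problem. This sketch omits the paper's constraint handling and neural-network critic, and the two-state system and cost values are invented.

```python
# Minimal tabular sketch of value-iteration-style Q-learning: iterate
# Q <- cost + gamma * min_b Q(next_state, b) on a tiny deterministic
# two-state system. The system, costs, and gamma are invented; the paper
# additionally handles input constraints and approximates Q with a
# critic neural network.
gamma = 0.9
states, actions = [0, 1], [0, 1]

def step(s, a):   # deterministic transition used to generate data tuples
    return (s + a) % 2

def cost(s, a):   # stage cost to be minimized
    return 1.0 if s == 0 else 0.2 + 0.1 * a

Q = {(s, a): 0.0 for s in states for a in actions}   # easy initial condition
for _ in range(200):  # value iteration until numerical convergence
    Q = {(s, a): cost(s, a) + gamma * min(Q[step(s, a), b] for b in actions)
         for s in states for a in actions}

# Greedy (cost-minimizing) policy extracted from the converged Q-function.
policy = {s: min(actions, key=lambda a: Q[s, a]) for s in states}
print(policy)
```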

  14. Linking genotypes database with locus-specific database and genotype-phenotype correlation in phenylketonuria.

    Science.gov (United States)

    Wettstein, Sarah; Underhaug, Jarl; Perez, Belen; Marsden, Brian D; Yue, Wyatt W; Martinez, Aurora; Blau, Nenad

    2015-03-01

    The wide range of metabolic phenotypes in phenylketonuria is due to a large number of variants causing variable impairment of phenylalanine hydroxylase function. A total of 834 phenylalanine hydroxylase gene variants from the locus-specific database PAHvdb and the genotypes of 4181 phenylketonuria patients from the BIOPKU database were characterized using the FoldX, SIFT Blink, PolyPhen-2 and SNPs3D algorithms. The data obtained were correlated with residual enzyme activity, patients' phenotype and tetrahydrobiopterin responsiveness. A descriptive analysis of both databases was compiled, and an interactive viewer for structure visualization of missense variants was implemented in the PAHvdb database. We found a quantitative relationship between phenylalanine hydroxylase protein stability and enzyme activity (r(s) = 0.479), between protein stability and allelic phenotype (r(s) = -0.458), as well as between enzyme activity and allelic phenotype (r(s) = 0.799). Enzyme stability algorithms (FoldX and SNPs3D), allelic phenotype and enzyme activity were the most powerful predictors of patients' phenotype and tetrahydrobiopterin response. Phenotype prediction was most accurate for deleterious genotypes (≈ 100%), followed by homozygous (92.9%), hemizygous (94.8%), and compound heterozygous genotypes (77.9%), while tetrahydrobiopterin response was correctly predicted in 71.0% of all cases. To our knowledge this is the largest study using algorithms for the prediction of patients' phenotype and tetrahydrobiopterin responsiveness in phenylketonuria patients, using data from locus-specific and genotype databases.
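The reported r(s) values are Spearman rank correlations. A minimal pure-Python Spearman computation (no tie handling), with hypothetical stability and activity values chosen to be perfectly monotone:

```python
# Minimal Spearman rank correlation, the statistic r(s) reported in the
# abstract. This simple form assumes no tied values; the stability and
# activity numbers below are hypothetical, not data from PAHvdb/BIOPKU.
def rank(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman(x, y):
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

stability = [0.1, 0.4, 0.2, 0.9, 0.7]   # hypothetical FoldX-style scores
activity  = [10, 60, 30, 95, 80]        # hypothetical residual activity, %
print(spearman(stability, activity))    # perfectly monotone by construction
```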

  15. Structure and contents of a new geomorphological GIS database linked to a geomorphological map — With an example from Liden, central Sweden

    NARCIS (Netherlands)

    Gustavsson, M.; Seijmonsbergen, A.C.; Kolstrup, E.

    2008-01-01

    This paper presents the structure and contents of a standardised geomorphological GIS database that stores comprehensive scientific geomorphological data and constitutes the basis for processing and extracting spatial thematic data. The geodatabase contains spatial information on

  16. Database Description - Open TG-GATEs Pathological Image Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Open TG-GATEs Pathological Image Database Database Description General information of database Database... name Open TG-GATEs Pathological Image Database Alternative name - DOI 10.18908/lsdba.nbdc00954-0...iomedical Innovation 7-6-8, Saito-asagi, Ibaraki-city, Osaka 567-0085, Japan TEL:81-72-641-9826 Email: Database... classification Toxicogenomics Database Organism Taxonomy Name: Rattus norvegi... Article title: Author name(s): Journal: External Links: Original website information Database

  17. Visual Querying in Chemical Databases using SMARTS Patterns

    OpenAIRE

    Šípek, Vojtěch

    2014-01-01

    The purpose of this thesis is to create a framework for visual querying of chemical databases, implemented as a web application. Using a graphical editor on the client side, the user creates queries that are translated into the chemical query language SMARTS. The query is parsed on the application server, which is connected to the chemical database. The framework also contains tooling for creating the database and the index structure above it.
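
    Such a translation step can be illustrated with a minimal sketch, assuming a toy representation of the drawn fragment (an atom dictionary plus a bond dictionary); the function name and graph format are illustrative, not the thesis's actual API:

```python
# Minimal sketch: serialize a drawn fragment (atoms + bonds) into a SMARTS
# pattern string via depth-first traversal. The graph format and helper
# names are illustrative assumptions, not the thesis's actual interface.

BOND_SMARTS = {1: "-", 2: "=", 3: "#", "ar": ":"}

def fragment_to_smarts(atoms, bonds, start=0):
    """atoms: {idx: atomic symbol}; bonds: {(i, j): order} with i < j."""
    adj = {i: [] for i in atoms}
    for (i, j), order in bonds.items():
        adj[i].append((j, order))
        adj[j].append((i, order))

    visited = set()

    def dfs(i):
        visited.add(i)
        out = f"[{atoms[i]}]"
        branches = []
        for j, order in adj[i]:
            if j not in visited:
                branches.append(BOND_SMARTS[order] + dfs(j))
        # All but the last neighbour become parenthesized branches.
        for b in branches[:-1]:
            out += f"({b})"
        if branches:
            out += branches[-1]
        return out

    return dfs(start)

# A drawn carboxylic-acid fragment: a carbon double-bonded to one oxygen
# and single-bonded to another.
atoms = {0: "C", 1: "O", 2: "O"}
bonds = {(0, 1): 2, (0, 2): 1}
print(fragment_to_smarts(atoms, bonds))  # → [C](=[O])-[O]
```

    The resulting SMARTS string is what would then be handed to the server-side parser for substructure search against the chemical database.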

  18. Sting_RDB: a relational database of structural parameters for protein analysis with support for data warehousing and data mining.

    Science.gov (United States)

    Oliveira, S R M; Almeida, G V; Souza, K R R; Rodrigues, D N; Kuser-Falcão, P R; Yamagishi, M E B; Santos, E H; Vieira, F D; Jardine, J G; Neshich, G

    2007-10-05

    An effective strategy for managing protein databases is to provide mechanisms to transform raw data into consistent, accurate and reliable information. Such mechanisms will greatly reduce operational inefficiencies and improve one's ability to better handle scientific objectives and interpret the research results. To achieve this challenging goal for the STING project, we introduce Sting_RDB, a relational database of structural parameters for protein analysis with support for data warehousing and data mining. In this article, we highlight the main features of Sting_RDB and show how a user can explore it for efficient and biologically relevant queries. Considering its importance for molecular biologists, effort has been made to advance Sting_RDB toward data quality assessment. To the best of our knowledge, Sting_RDB is one of the most comprehensive data repositories for protein analysis, now also capable of providing its users with a data quality indicator. This paper differs from our previous study in many aspects. First, we introduce Sting_RDB, a relational database with mechanisms for efficient and relevant queries using SQL. Sting_RDB evolved from the earlier, text (flat file)-based database, in which data consistency and integrity were not guaranteed. Second, we provide support for data warehousing and mining. Third, the data quality indicator was introduced. Finally, and probably most importantly, complex queries that could not be posed on a text-based database are now easily implemented. Further details are accessible at the Sting_RDB demo web page: http://www.cbi.cnptia.embrapa.br/StingRDB.

  19. A database for transmutation of nuclear materials on internet

    Energy Technology Data Exchange (ETDEWEB)

    Fujita, Mitsutane; Utsumi, Misako; Noda, Tetsuji [National Research Inst. for Metals, Tsukuba, Ibaraki (Japan)

    1998-03-01

    A database system for the design and selection of nuclear materials used in various reactors has been developed on the Internet at the NRIM site of 'Data-Free-Way'. To retrieve and maintain the database, a user interface for data retrieval was developed that requires no special knowledge of database handling or machine structure from the end-user. Using the database, the nuclides and radioactivity expected in a material can be easily retrieved, although the evaluation is qualitative. (author)

  20. The NAGRA/PSI thermochemical database: new developments

    International Nuclear Information System (INIS)

    Hummel, W.; Berner, U.; Thoenen, T.; Pearson, F.J.Jr.

    2000-01-01

    The development of a high quality thermochemical database for performance assessment is a scientifically fascinating and demanding task, and is not simply collecting and recording numbers. The final product can be visualised as a complex building with different storeys representing different levels of complexity. The present status report illustrates the various building blocks which we believe are integral to such a database structure. (authors)

  1. The NAGRA/PSI thermochemical database: new developments

    Energy Technology Data Exchange (ETDEWEB)

    Hummel, W.; Berner, U.; Thoenen, T. [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Pearson, F.J.Jr. [Ground-Water Geochemistry, New Bern, NC (United States)

    2000-07-01

    The development of a high quality thermochemical database for performance assessment is a scientifically fascinating and demanding task, and is not simply collecting and recording numbers. The final product can be visualised as a complex building with different storeys representing different levels of complexity. The present status report illustrates the various building blocks which we believe are integral to such a database structure. (authors)

  2. Study of developing a database of energy statistics

    Energy Technology Data Exchange (ETDEWEB)

    Park, T.S. [Korea Energy Economics Institute, Euiwang (Korea, Republic of)

    1997-08-01

    An integrated energy database should be prepared in advance so that energy statistics can be managed comprehensively. However, since developing an integrated energy database requires much manpower and budget, it is difficult to establish one within a short period of time. Therefore, as a first stage toward the energy database, this study aims to derive methods for analyzing existing statistical data lists and consolidating insufficient data, and at the same time to analyze general concepts and the data structure of the database. I also studied the data content and items of energy databases operated by international energy-related organizations such as the IEA and APEC and in Japan and the USA as overseas cases, as well as the domestic situation of energy databases and the hardware operating systems of the Japanese databases. I analyzed how Korean energy databases are compiled, discussed the KEDB system, which is representative of total energy databases, and present design concepts for new energy databases. In addition, I present directions for establishing future Korean energy databases and their contents, the data that should be collected as supply and demand statistics, and the organization of data collection, based on an analysis of Korean energy statistical data and a comparison with the OECD/IEA system. 26 refs., 15 figs., 11 tabs.

  3. Update History of This Database - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Yeast Interacting Proteins Database Update History of This Database Date Update contents 201...0/03/29 Yeast Interacting Proteins Database English archive site is opened. 2000/12/4 Yeast Interacting Proteins Database...( http://itolab.cb.k.u-tokyo.ac.jp/Y2H/ ) is released. About This Database Database Description... Download License Update History of This Database Site Policy | Contact Us Update History of This Database... - Yeast Interacting Proteins Database | LSDB Archive ...

  4. Management system of instrument database

    International Nuclear Information System (INIS)

    Zhang Xin

    1997-01-01

    The author introduces a management system for an instrument database. The system has been developed using FoxPro on a network. It features a clear structure, easy operation, and flexible and convenient querying, as well as data safety and reliability.

  5. IAEA/NDS requirements related to database software

    International Nuclear Information System (INIS)

    Pronyaev, V.; Zerkin, V.

    2001-01-01

    Full text: The Nuclear Data Section of the IAEA disseminates data to the NDS users through the Internet or on CD-ROMs and diskettes. The OSU Web-server on DEC Alpha with Open VMS and Oracle/DEC DBMS provides, via CGI scripts and FORTRAN retrieval programs, access to the main nuclear databases supported by the networks of Nuclear Reactions Data Centres and Nuclear Structure and Decay Data Centres (CINDA, EXFOR, ENDF, NSR, ENSDF). For Web-access to data from other libraries and files, hyper-links to the files stored in ASCII text or other formats are used. Databases on CD-ROM are usually provided with some retrieval system. They are distributed in the run-time mode and comply with all license requirements for software used in their development. Although major development work is now done on PCs with MS-Windows and Linux, NDS may not at present, due to some institutional conditions, use these platforms for organizing Web access to the data. Starting at the end of 1999, the NDS, in co-operation with other data centers, began to work out a strategy for migrating the main network nuclear databases onto platforms other than DEC Alpha/Open VMS/DBMS. Because the different co-operating centers have their own preferences for hardware and software, the requirement to provide maximum platform independence for nuclear databases is the most important and desirable feature. This requirement determined some standards for the nuclear database software development. Taking into account the present state and future development, these standards can be formulated as follows: 1. All numerical data (experimental, evaluated, recommended values and their uncertainties) prepared for inclusion in the IAEA/NDS nuclear database should be submitted in the form of ASCII text files and will be kept at NDS as a master file. 2. Databases with complex structure should be submitted in the form of files with standard SQL statements describing all their components. All extensions of standard SQL

  6. Modelling antibody side chain conformations using heuristic database search.

    Science.gov (United States)

    Ritchie, D W; Kemp, G J

    1997-01-01

    We have developed a knowledge-based system which models the side chain conformations of residues in the variable domains of antibody Fv fragments. The system is written in Prolog and uses an object-oriented database of aligned antibody structures in conjunction with a side chain rotamer library. The antibody database provides 3-dimensional clusters of side chain conformations which can be copied en masse into the model structure. The object-oriented database architecture facilitates a navigational style of database access, necessary to assemble side chain clusters. Around 60% of the model is built using side chain clusters and this eliminates much of the combinatorial complexity associated with many other side chain placement algorithms. Construction and placement of side chain clusters is guided by a heuristic cost function based on a simple model of side chain packing interactions. Even with a simple model, we find that a large proportion of side chain conformations are modelled accurately. We expect our approach could be used with other homologous protein families, in addition to antibodies, both to improve the quality of model structures and to give a "smart start" to the side chain placement problem.
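
    The general idea of scoring candidate rotamers with a packing cost and keeping the cheapest choice can be sketched as follows. This is an illustrative toy, not the authors' Prolog system: rotamers are reduced to a single pseudo-atom position, the residue names and clash radius are invented, and selection is a simple greedy pass:

```python
# Greedy side-chain placement sketch: each residue's candidate rotamers are
# scored against already-placed atoms with a soft-clash cost, and the
# cheapest candidate is kept. All coordinates and names are toy values.
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def packing_cost(pos, placed, clash_radius=3.0):
    """Penalize candidate positions that come too close to placed atoms."""
    return sum(max(0.0, clash_radius - dist(pos, p)) for p in placed)

def place_side_chains(rotamer_library, backbone_atoms):
    placed = list(backbone_atoms)
    chosen = {}
    for residue, candidates in rotamer_library.items():
        best = min(candidates, key=lambda pos: packing_cost(pos, placed))
        chosen[residue] = best
        placed.append(best)
    return chosen

backbone = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0)]
library = {
    "LEU15": [(1.0, 1.0, 0.0), (1.0, 4.0, 0.0)],
    "PHE42": [(4.5, 1.0, 0.0), (4.5, 4.5, 0.0)],
}
print(place_side_chains(library, backbone))
```

    Copying whole pre-packed clusters, as the paper describes, replaces much of this per-residue search with a single database lookup, which is where the combinatorial savings come from.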

  7. Report on Approaches to Database Translation. Final Report.

    Science.gov (United States)

    Gallagher, Leonard; Salazar, Sandra

    This report describes approaches to database translation (i.e., transferring data and data definitions from a source, either a database management system (DBMS) or a batch file, to a target DBMS), and recommends a method for representing the data structures of newly-proposed network and relational data models in a form suitable for database…

  8. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name RMOS Alternative nam...arch Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice Microarray Data and other Gene Expression Database...s Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description The Ric...19&lang=en Whole data download - Referenced database Rice Expression Database (RED) Rice full-length cDNA Database... (KOME) Rice Genome Integrated Map Database (INE) Rice Mutant Panel Database (Tos17) Rice Genome Annotation Database

  9. A coordination language for databases

    DEFF Research Database (Denmark)

    Li, Ximeng; Wu, Xi; Lluch Lafuente, Alberto

    2017-01-01

    We present a coordination language for the modeling of distributed database applications. The language, baptized Klaim-DB, borrows the concepts of localities and nets of the coordination language Klaim but re-incarnates the tuple spaces of Klaim as databases. It provides high-level abstractions...... and primitives for the access and manipulation of structured data, with integrity and atomicity considerations. We present the formal semantics of Klaim-DB and develop a type system that avoids potential runtime errors such as certain evaluation errors and mismatches of data format in tables, which are monitored...... in the semantics. The use of the language is illustrated in a scenario where the sales from different branches of a chain of department stores are aggregated from their local databases. Raising the abstraction level and encapsulating integrity checks in the language primitives have benefited the modeling task...
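
    The core idea, tuple spaces re-incarnated as databases hosted at localities, can be sketched in a few lines. The class and method names below are illustrative assumptions, not Klaim-DB's actual syntax or semantics:

```python
# Toy sketch of the Klaim-DB idea: localities host named tables, and
# operations insert rows or select them by simple pattern matching.
# The department-store aggregation mirrors the scenario in the abstract.

class Locality:
    def __init__(self, name):
        self.name = name
        self.tables = {}

    def insert(self, table, row):
        self.tables.setdefault(table, []).append(row)

    def select(self, table, pattern):
        """pattern: dict of column -> required value."""
        return [r for r in self.tables.get(table, [])
                if all(r.get(k) == v for k, v in pattern.items())]

def aggregate_sales(localities, product):
    """Sum sales of one product across the branches' local databases."""
    return sum(row["qty"]
               for loc in localities
               for row in loc.select("sales", {"product": product}))

north = Locality("north")
south = Locality("south")
north.insert("sales", {"product": "lamp", "qty": 3})
south.insert("sales", {"product": "lamp", "qty": 5})
south.insert("sales", {"product": "sofa", "qty": 2})
print(aggregate_sales([north, south], "lamp"))  # → 8
```

    In the actual language, such integrity checks and data-format constraints are encapsulated in the primitives and enforced by the type system rather than left to the application code.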

  10. International Ventilation Cooling Application Database

    DEFF Research Database (Denmark)

    Holzer, Peter; Psomas, Theofanis Ch.; OSullivan, Paul

    2016-01-01

    The currently running International Energy Agency, Energy and Conservation in Buildings, Annex 62 Ventilative Cooling (VC) project, is coordinating research towards extended use of VC. Within this Annex 62 the joint research activity of International VC Application Database has been carried out...... and locations, using VC as a means of indoor comfort improvement. The building-spreadsheet highlights distributions of technologies and strategies, such as the following. (Numbers in % refer to the sample of the database’s 91 buildings.) It may be concluded that Ventilative Cooling is applied in temporary......, systematically investigating the distribution of technologies and strategies within VC. The database is structured as both a ticking-list-like building-spreadsheet and a collection of building-datasheets. The content of both closely follows the Annex 62 State-Of-The-Art Report. The database has been filled, based

  11. Olap Cube Representation For Objectoriented Database

    OpenAIRE

    Vipin Saxena; Ajay Pratap

    2012-01-01

    In the current scenario, the size of the database of any organization is rapidly increasing, and with the evolution of the object-oriented approach, many software industries are converting software based on the old structured approach into object-oriented software. Therefore, for large amounts of data, it is necessary to study faster retrieval systems such as On-Line Analytical Processing (OLAP), introduced by E. F. Codd in 1993. The present paper is an attempt in t...

  12. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Kwon, Young Min; Lee, Young Bum; Chang, Won Pyo; Hahn, Do Hee

    2001-10-01

    The KALIMER Database is an advanced database for the integrated management of Liquid Metal Reactor design technology development using web applications. The KALIMER design database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation system, and Reserved Documents. The Results Database holds the research results from phase II of Liquid Metal Reactor Design Technology Development within the mid- and long-term nuclear R and D program. IOC is a linkage control system between sub-projects to share and integrate the research results for KALIMER. The 3D CAD database provides a schematic design overview of KALIMER. The Team Cooperation system informs team members of research cooperation and meetings. Finally, KALIMER Reserved Documents was developed to manage data and documents collected since the project's accomplishment. This report describes the features of the hardware and software and the database design methodology for KALIMER.

  13. The 2002 RPA Plot Summary database users manual

    Science.gov (United States)

    Patrick D. Miles; John S. Vissage; W. Brad Smith

    2004-01-01

    Describes the structure of the RPA 2002 Plot Summary database and provides information on generating estimates of forest statistics from these data. The RPA 2002 Plot Summary database provides a consistent framework for storing forest inventory data across all ownerships in the entire United States. The data represent the best available data as of October 2001....

  14. Analysis of large databases in vascular surgery.

    Science.gov (United States)

    Nguyen, Louis L; Barshes, Neal R

    2010-09-01

    Large databases can be a rich source of clinical and administrative information on broad populations. These datasets are characterized by demographic and clinical data for over 1000 patients from multiple institutions. Since they are often collected and funded for other purposes, their use for secondary analysis increases their utility at relatively low costs. Advantages of large databases as a source include the very large numbers of available patients and their related medical information. Disadvantages include lack of detailed clinical information and absence of causal descriptions. Researchers working with large databases should also be mindful of data structure design and inherent limitations to large databases, such as treatment bias and systemic sampling errors. Notwithstanding these limitations, several important studies have been published in vascular care using large databases. They represent timely, "real-world" analyses of questions that may be too difficult or costly to address using prospective randomized methods. Large databases will be an increasingly important analytical resource as we focus on improving national health care efficacy in the setting of limited resources.

  15. Research on Construction of Road Network Database Based on Video Retrieval Technology

    Directory of Open Access Journals (Sweden)

    Wang Fengling

    2017-01-01

    Full Text Available Based on the characteristics of video databases, the basic structure of the video database, and several typical video data models, a segmentation-based multi-level data model is used to describe the landscape information video database, the network database model, and the road network management database system. The detailed design and implementation of the landscape information management system are then presented.

  16. AMYPdb: A database dedicated to amyloid precursor proteins

    Directory of Open Access Journals (Sweden)

    Delamarche Christian

    2008-06-01

    Full Text Available Abstract Background Misfolding and aggregation of proteins into ordered fibrillar structures is associated with a number of severe pathologies, including Alzheimer's disease, prion diseases, and type II diabetes. The rapid accumulation of knowledge about the sequences and structures of these proteins allows the use of in silico methods to investigate the molecular mechanisms of their abnormal conformational changes and assembly. However, such an approach requires the collection of accurate data, which are inconveniently dispersed among several generalist databases. Results We therefore created a free online knowledge database (AMYPdb dedicated to amyloid precursor proteins and we have performed large scale sequence analysis of the included data. Currently, AMYPdb integrates data on 31 families, including 1,705 proteins from nearly 600 organisms. It displays links to more than 2,300 bibliographic references and 1,200 3D-structures. A Wiki system is available to insert data into the database, providing a sharing and collaboration environment. We generated and analyzed 3,621 amino acid sequence patterns, reporting highly specific patterns for each amyloid family, along with patterns likely to be involved in protein misfolding and aggregation. Conclusion AMYPdb is a comprehensive online database aiming at the centralization of bioinformatic data regarding all amyloid proteins and their precursors. Our sequence pattern discovery and analysis approach unveiled protein regions of significant interest. AMYPdb is freely accessible.

  17. BioWarehouse: a bioinformatics database warehouse toolkit.

    Science.gov (United States)

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D

    2006-03-23

    This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
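
    The value of warehousing the component databases in one relational schema is that a question like "how many enzyme activities lack any sequence?" becomes a single SQL query. A minimal sketch with an invented two-table schema (the real BioWarehouse schema is far richer) illustrates the pattern:

```python
# Sketch of a cross-database warehouse query: enzyme activities and
# sequence entries live in one relational store, so orphan activities
# (EC numbers with no sequence) fall out of a single NOT EXISTS query.
# The schema and rows here are invented toy data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE enzyme_activity (ec TEXT PRIMARY KEY, name TEXT);
CREATE TABLE sequence_entry (id INTEGER PRIMARY KEY, ec TEXT);
INSERT INTO enzyme_activity VALUES
  ('1.1.1.1', 'alcohol dehydrogenase'),
  ('4.2.1.20', 'tryptophan synthase'),
  ('9.9.9.9', 'orphan activity with no known sequence');
INSERT INTO sequence_entry (ec) VALUES ('1.1.1.1'), ('4.2.1.20');
""")

orphans = conn.execute("""
    SELECT COUNT(*) FROM enzyme_activity a
    WHERE NOT EXISTS (SELECT 1 FROM sequence_entry s WHERE s.ec = a.ec)
""").fetchone()[0]
total = conn.execute("SELECT COUNT(*) FROM enzyme_activity").fetchone()[0]
print(f"{orphans}/{total} enzyme activities lack a sequence")
```

    Against the real warehouse, the same kind of join over the loaded ENZYME and sequence databases yields the article's 36% figure.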

  18. Analysis of Patent Databases Using VxInsight

    Energy Technology Data Exchange (ETDEWEB)

    BOYACK,KEVIN W.; WYLIE,BRIAN N.; DAVIDSON,GEORGE S.; JOHNSON,DAVID K.

    2000-12-12

    We present the application of a new knowledge visualization tool, VxInsight, to the mapping and analysis of patent databases. Patent data are mined and placed in a database, relationships between the patents are identified, primarily using the citation and classification structures, then the patents are clustered using a proprietary force-directed placement algorithm. Related patents cluster together to produce a 3-D landscape view of the tens of thousands of patents. The user can navigate the landscape by zooming into or out of regions of interest. Querying the underlying database places a colored marker on each patent matching the query. Automatically generated labels, showing landscape content, update continually upon zooming. Optionally, citation links between patents may be shown on the landscape. The combination of these features enables powerful analyses of patent databases.

  19. The Immune Epitope Database 2.0

    DEFF Research Database (Denmark)

    Hoof, Ilka; Vita, R; Zarebski, L

    2010-01-01

    The Immune Epitope Database (IEDB, www.iedb.org) provides a catalog of experimentally characterized B and T cell epitopes, as well as data on Major Histocompatibility Complex (MHC) binding and MHC ligand elution experiments. The database represents the molecular structures recognized by adaptive...... immune receptors and the experimental contexts in which these molecules were determined to be immune epitopes. Epitopes recognized in humans, nonhuman primates, rodents, pigs, cats and all other tested species are included. Both positive and negative experimental results are captured. Over the course...

  20. Designing a Lexical Database for a Combined Use of Corpus Annotation and Dictionary Editing

    DEFF Research Database (Denmark)

    Kristoffersen, Jette Hedegaard; Troelsgård, Thomas; Langer, Gabriele

    2016-01-01

    In a combined corpus-dictionary project, you would need one lexical database that could serve as a shared “backbone” for both corpus annotation and dictionary editing, but it is not that easy to define a database structure that applies satisfactorily to both these purposes. In this paper, we...... will exemplify the problem and present ideas on how to model structures in a lexical database that facilitate corpus annotation as well as dictionary editing. The paper is a joint work between the DGS Corpus Project and the DTS Dictionary Project. The two projects come from opposite sides of the spectrum (one...... adjusting a lexical database grown from dictionary making for corpus annotating, one building a lexical database in parallel with corpus annotation and editing a corpus-based dictionary), and we will consider requirements and feasible structures for a database that can serve both corpus and dictionary....

  1. Characteristic conformation of Mosher's amide elucidated using the cambridge structural database.

    Science.gov (United States)

    Ichikawa, Akio; Ono, Hiroshi; Mikata, Yuji

    2015-07-16

    Conformations of the crystalline 3,3,3-trifluoro-2-methoxy-2-phenylpropanamide derivatives (MTPA amides) deposited in the Cambridge Structural Database (CSD) were examined statistically as R(acid)-enantiomers. The majority of the dihedral angles (48/58, ca. 83%) between the amide carbonyl groups and the trifluoromethyl groups ranged from -30° to 0°, with an average angle θ1 of -13°. The other conformational properties were also clarified: (1) one of the fluorine atoms was antiperiplanar (ap) to the amide carbonyl group, forming a staggered conformation; (2) the MTPA amides prepared from primary amines showed a Z form in the amide moieties; (3) in the case of the MTPA amides prepared from primary amines possessing secondary alkyl groups (i.e., Mosher-type MTPA amides), the dihedral angles between the methine groups and the carbonyl groups were syn and indicative of moderate conformational flexibility; (4) the phenyl plane was inclined from the O-C(chiral) bond of the methoxy moiety with an average dihedral angle θ2 of +21°; (5) the methyl group of the methoxy moiety was ap to the ipso-carbon atom of the phenyl group.
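
    When summarizing torsion angles such as the θ1 values above, a naive arithmetic mean fails near the ±180° wrap, so the circular (vector) mean is the safer statistic. A minimal sketch, with toy angles chosen inside the reported -30° to 0° range:

```python
# Circular mean of dihedral angles: average the unit vectors, then take
# the angle of the resultant. Input angles are in degrees.
import math

def circular_mean_deg(angles):
    s = sum(math.sin(math.radians(a)) for a in angles)
    c = sum(math.cos(math.radians(a)) for a in angles)
    return math.degrees(math.atan2(s, c))

torsions = [-25.0, -13.0, -1.0]  # toy values inside the -30..0 range
print(round(circular_mean_deg(torsions), 1))  # → -13.0
```

    For angles near 0°, circular and arithmetic means agree; the difference matters for torsions clustered around ±180°, where the arithmetic mean would wrongly land near 0°.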

  2. Significance of FIZ Technik Databases in nuclear safety and environmental protection

    International Nuclear Information System (INIS)

    Das, N.K.

    1993-01-01

    The language of the abstracts of the FIZ Technik databases is primarily German (e.g. DOMA 80%, SDIM 70%). Furthermore FIZ Technik offers licence databases on engineering and technology, management, manufacturers, products, contacts, standards and specifications, geosciences and natural resources. The contents and structure of the databases are described in the FIZ Technik bluesheets and the database news. With some examples the significance of the FIZ Technik databases DOMA, ZDEE, SDIM, SILI and MEDI in nuclear safety and environmental protection is shown. (orig.)

  3. The ATLAS conditions database architecture for the Muon spectrometer

    International Nuclear Information System (INIS)

    Verducci, Monica

    2010-01-01

    The Muon System, facing the challenging requirements of conditions data storage, has extensively started to use the conditions database project 'COOL' as the basis for all of its conditions data storage, both at CERN and throughout the worldwide collaboration, as decided by the ATLAS Collaboration. The management of the Muon COOL conditions database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates and in terms of the variety of data stored. The Muon conditions database is responsible for almost all of the 'non-event' data and detector quality flags storage needed for debugging the detector operations and for performing reconstruction and analysis. The COOL database allows database applications to be written independently of the underlying database technology and ensures long-term compatibility with the entire ATLAS software. COOL implements an interval-of-validity database, i.e. objects stored or referenced in COOL have an associated start and end time between which they are valid; the data are stored in folders, which are themselves arranged in a hierarchical structure of folder sets. The structure is simple and mainly optimized to store and retrieve the object(s) associated with a particular time. In this work, an overview of the entire Muon conditions database architecture is given, including the different sources of the data and the storage model used. In addition, the software interfaces used to access the conditions data are described, with more emphasis given to the Offline Reconstruction framework ATHENA and the services developed to provide the conditions data to the reconstruction.

  4. The ATLAS conditions database architecture for the Muon spectrometer

    Science.gov (United States)

    Verducci, Monica; ATLAS Muon Collaboration

    2010-04-01

    The Muon System, facing the challenging requirements of conditions data storage, has extensively started to use the conditions database project 'COOL' as the basis for all of its conditions data storage, both at CERN and throughout the worldwide collaboration, as decided by the ATLAS Collaboration. The management of the Muon COOL conditions database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates and in terms of the variety of data stored. The Muon conditions database is responsible for almost all of the 'non-event' data and detector quality flags storage needed for debugging the detector operations and for performing reconstruction and analysis. The COOL database allows database applications to be written independently of the underlying database technology and ensures long-term compatibility with the entire ATLAS software. COOL implements an interval-of-validity database, i.e. objects stored or referenced in COOL have an associated start and end time between which they are valid; the data are stored in folders, which are themselves arranged in a hierarchical structure of folder sets. The structure is simple and mainly optimized to store and retrieve the object(s) associated with a particular time. In this work, an overview of the entire Muon conditions database architecture is given, including the different sources of the data and the storage model used. In addition, the software interfaces used to access the conditions data are described, with more emphasis given to the Offline Reconstruction framework ATHENA and the services developed to provide the conditions data to the reconstruction.
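
    The interval-of-validity idea described above can be sketched in a few lines. The class and method names are invented for illustration and are not the real COOL API: each folder stores payloads valid on [since, until), and a lookup returns the payload covering a given time via binary search:

```python
# Sketch of an interval-of-validity folder: payloads are valid on
# [since, until), stored sorted by start time, and retrieved for a given
# time with a binary search over the start times.
import bisect

class IovFolder:
    def __init__(self):
        self._since = []    # sorted start times
        self._entries = []  # (since, until, payload), parallel to _since

    def store(self, since, until, payload):
        i = bisect.bisect(self._since, since)
        self._since.insert(i, since)
        self._entries.insert(i, (since, until, payload))

    def retrieve(self, t):
        i = bisect.bisect(self._since, t) - 1
        if i >= 0:
            since, until, payload = self._entries[i]
            if since <= t < until:
                return payload
        return None

folder = IovFolder()
folder.store(0, 100, {"hv": 3500})
folder.store(100, 250, {"hv": 3600})
print(folder.retrieve(42))   # → {'hv': 3500}
print(folder.retrieve(100))  # → {'hv': 3600}
print(folder.retrieve(300))  # → None
```

    In COOL itself, folders like this are arranged into hierarchical folder sets and backed by a relational database, but the retrieve-by-time contract is the same.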

  5. On a Fuzzy Algebra for Querying Graph Databases

    OpenAIRE

    Pivert , Olivier; Thion , Virginie; Jaudoin , Hélène; Smits , Grégory

    2014-01-01

    International audience; This paper proposes a notion of fuzzy graph database and describes a fuzzy query algebra that makes it possible to handle such databases, which may be fuzzy or not, in a flexible way. The algebra, based on fuzzy set theory and the concept of a fuzzy graph, is composed of a set of operators that can be used to express preference queries on fuzzy graph databases. The preferences concern i) the content of the vertices of the graph and ii) the structure of the graph. In a s...
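    The flavour of such an algebra can be illustrated with a minimal sketch (not the paper's actual operators): vertices satisfy a fuzzy predicate to a degree in [0, 1], edges carry a membership degree, and a min-based fuzzy conjunction combines content and structure preferences. The graph, the attribute values and the 'young' predicate are all invented for illustration.

```python
# Toy fuzzy graph: vertices carry attributes, edges carry a membership
# degree in [0, 1]. All names and numbers are invented for illustration.
graph = {
    "vertices": {"a": {"age": 25}, "b": {"age": 28}, "c": {"age": 31}},
    "edges": {("a", "b"): 0.9, ("b", "c"): 0.4},
}

def young(age, full=25, zero=45):
    """Fuzzy predicate: degree 1 up to 'full', falling linearly to 0 at 'zero'."""
    if age <= full:
        return 1.0
    if age >= zero:
        return 0.0
    return (zero - age) / (zero - full)

def select_vertices(g, predicate):
    """Fuzzy selection: degree to which each vertex satisfies the predicate."""
    return {v: predicate(attrs["age"]) for v, attrs in g["vertices"].items()}

def strong_edges(g, degrees, threshold=0.5):
    """Fuzzy conjunction (min) of edge membership and endpoint degrees."""
    return [((u, v), min(mu, degrees[u], degrees[v]))
            for (u, v), mu in g["edges"].items()
            if min(mu, degrees[u], degrees[v]) >= threshold]

degrees = select_vertices(graph, young)
print(strong_edges(graph, degrees))  # -> [(('a', 'b'), 0.85)]
```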

  6. A59 waste repackaging database (AWARD)

    International Nuclear Information System (INIS)

    Keel, A.

    1993-06-01

    This document describes the data structures to be implemented to provide the A59 Waste Repackaging Database (AWARD), a computer system for the in-cave Bertha waste sorting and LLW repackaging operations in A59. (Author)

  7. The RMS program system and database

    International Nuclear Information System (INIS)

    Fisher, S.M.; Peach, K.J.

    1982-08-01

    This report describes the program system developed for the data reduction and analysis of data obtained with the Rutherford Multiparticle Spectrometer (RMS), with particular emphasis on the utility of a well structured central data-base. (author)

  8. Research on the establishment of the database system for R and D on the innovative technology for the earth; Chikyu kankyo sangyo gijutsu kenkyu kaihatsuyo database system ni kansuru chosa

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-03-01

    To structure a database system of technical information on earth environmental issues, the 'database system for R and D on earth environmental industrial technology' was evaluated in operation, and studies were made on opening it to users and on building a prototype database. The operational evaluation pointed out that, in the present state, usage remains low because of users' lack of UNIX experience, the absence of system managers, and the shortage of usable articles listed, so the database is not being updated as intended. Study was therefore made of introducing tools usable by the initiators and of opening an information access terminal to the researchers at headquarters via the Internet. To let earth environment-related researchers obtain information easily, a prototype database was built to support research exchange. The tasks to be addressed in selecting the fields of research and in compiling common thesauri in Japanese, Western and other languages were clarified. 28 figs., 16 tabs.

  9. Alternatives to relational database: comparison of NoSQL and XML approaches for clinical data storage.

    Science.gov (United States)

    Lee, Ken Ka-Yin; Tang, Wai-Choi; Choi, Kup-Sze

    2013-04-01

    Clinical data are dynamic in nature, often arranged hierarchically and stored as free text and numbers. Effective management of clinical data and the transformation of the data into structured format for data analysis are therefore challenging issues in electronic health records development. Despite the popularity of relational databases, the scalability of the NoSQL database model and the document-centric data structure of XML databases appear to be promising features for effective clinical data management. In this paper, three database approaches--NoSQL, XML-enabled and native XML--are investigated to evaluate their suitability for structured clinical data. The database query performance is reported, together with our experience in the databases development. The results show that the NoSQL database is the best choice for query speed, whereas XML databases are advantageous in terms of scalability, flexibility and extensibility, which are essential to cope with the characteristics of clinical data. While NoSQL and XML technologies are relatively new compared to the conventional relational database, both of them demonstrate potential to become a key database technology for clinical data management as the technology further advances. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
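    The contrast between the two document-centric models can be seen by expressing one hierarchical clinical record both ways. The record fields here are invented, and the snippet uses only Python's standard library as a stand-in rather than an actual NoSQL or native XML database.

```python
import json
import xml.etree.ElementTree as ET

# The same hierarchical clinical record in two document-centric forms
# (patient id, encounter fields and values are invented).
record = {
    "patient": {
        "id": "P001",
        "encounters": [
            {"date": "2012-03-01", "note": "routine visit", "bp": "120/80"},
            {"date": "2012-06-15", "note": "follow-up", "bp": "118/76"},
        ],
    }
}

# A document (NoSQL) store keeps this nested shape natively, e.g. as JSON:
doc = json.dumps(record)

# A native XML database would store an equivalent tree:
patient = ET.Element("patient", id="P001")
for enc in record["patient"]["encounters"]:
    e = ET.SubElement(patient, "encounter", date=enc["date"])
    ET.SubElement(e, "note").text = enc["note"]
    ET.SubElement(e, "bp").text = enc["bp"]

# A path-style query ("all blood-pressure readings") against the XML tree:
readings = [bp.text for bp in patient.findall("./encounter/bp")]
print(readings)  # -> ['120/80', '118/76']
```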

  10. Tank Characterization Database (TCD) Data Dictionary: Version 4.0

    International Nuclear Information System (INIS)

    1996-04-01

    This document is the data dictionary for the Tank Characterization Database (TCD) system and contains information on the data model and SYBASE database structure. The first two parts of this document are subject areas based on the two different areas of the TCD database: sample analysis and waste inventory. Within each subject area is an alphabetical list of all the database tables contained in the subject area. Within each table definition is a brief description of the table and a list of field names and attributes. The third part, Field Descriptions, lists all field names in the database alphabetically
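    TCD's dictionary documented a SYBASE schema; the kind of information such a dictionary records (tables, field names, types, nullability) can be illustrated by generating a small data dictionary from a database catalogue. The table and columns below are hypothetical, and SQLite stands in for SYBASE.

```python
import sqlite3

# A hypothetical table from the "sample analysis" subject area.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE sample_analysis (
    sample_id  TEXT PRIMARY KEY,
    tank_id    TEXT NOT NULL,
    analyte    TEXT,
    result     REAL
)""")

# Build a tiny data dictionary: table -> [(field, type, nullable)],
# read straight from the database's own catalogue.
dictionary = {}
for (table,) in con.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"):
    cols = con.execute(f"PRAGMA table_info({table})").fetchall()
    dictionary[table] = [(name, ctype, not notnull)
                         for _, name, ctype, notnull, _, _ in cols]

print(dictionary["sample_analysis"][1])  # -> ('tank_id', 'TEXT', False)
```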

  11. Database Description - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ase Description General information of database Database name RPD Alternative name Rice Proteome Database...titute of Crop Science, National Agriculture and Food Research Organization Setsuko Komatsu E-mail: Database... classification Proteomics Resources Plant databases - Rice Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database... description Rice Proteome Database contains information on protei...and entered in the Rice Proteome Database. The database is searchable by keyword,

  12. Database Description - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name PLACE Alternative name A Database...Kannondai, Tsukuba, Ibaraki 305-8602, Japan National Institute of Agrobiological Sciences E-mail : Databas...e classification Plant databases Organism Taxonomy Name: Tracheophyta Taxonomy ID: 58023 Database...99, Vol.27, No.1 :297-300 External Links: Original website information Database maintenance site National In...- Need for user registration Not available About This Database Database Descripti

  13. ProCarDB: a database of bacterial carotenoids.

    Science.gov (United States)

    Nupur, L N U; Vats, Asheema; Dhanda, Sandeep Kumar; Raghava, Gajendra P S; Pinnaka, Anil Kumar; Kumar, Ashwani

    2016-05-26

    Carotenoids have important functions in bacteria, ranging from harvesting light energy to neutralizing oxidants and acting as virulence factors. However, information pertaining to the carotenoids is scattered throughout the literature. Furthermore, information about the genes/proteins involved in the biosynthesis of carotenoids has tremendously increased in the post-genomic era. A web server providing the information about microbial carotenoids in a structured manner is required and will be a valuable resource for the scientific community working with microbial carotenoids. Here, we have created a manually curated, open access, comprehensive compilation of bacterial carotenoids named as ProCarDB- Prokaryotic Carotenoid Database. ProCarDB includes 304 unique carotenoids arising from 50 biosynthetic pathways distributed among 611 prokaryotes. ProCarDB provides important information on carotenoids, such as 2D and 3D structures, molecular weight, molecular formula, SMILES, InChI, InChIKey, IUPAC name, KEGG Id, PubChem Id, and ChEBI Id. The database also provides NMR data, UV-vis absorption data, IR data, MS data and HPLC data that play key roles in the identification of carotenoids. An important feature of this database is the extension of biosynthetic pathways from the literature and through the presence of the genes/enzymes in different organisms. The information contained in the database was mined from published literature and databases such as KEGG, PubChem, ChEBI, LipidBank, LPSN, and Uniprot. The database integrates user-friendly browsing and searching with carotenoid analysis tools to help the user. We believe that this database will serve as a major information centre for researchers working on bacterial carotenoids.

  14. Validation and extraction of molecular-geometry information from small-molecule databases.

    Science.gov (United States)

    Long, Fei; Nicholls, Robert A; Emsley, Paul; Gražulis, Saulius; Merkys, Andrius; Vaitkus, Antanas; Murshudov, Garib N

    2017-02-01

    A freely available small-molecule structure database, the Crystallography Open Database (COD), is used for the extraction of molecular-geometry information on small-molecule compounds. The results are used for the generation of new ligand descriptions, which are subsequently used by macromolecular model-building and structure-refinement software. To increase the reliability of the derived data, and therefore the new ligand descriptions, the entries from this database were subjected to very strict validation. The selection criteria made sure that the crystal structures used to derive atom types, bond and angle classes are of sufficiently high quality. Any suspicious entries at a crystal or molecular level were removed from further consideration. The selection criteria included (i) the resolution of the data used for refinement (entries solved at 0.84 Å resolution or higher) and (ii) the structure-solution method (structures must be from a single-crystal experiment and all atoms of generated molecules must have full occupancies), as well as basic sanity checks such as (iii) consistency between the valences and the number of connections between atoms, (iv) acceptable bond-length deviations from the expected values and (v) detection of atomic collisions. The derived atom types and bond classes were then validated using high-order moment-based statistical techniques. The results of the statistical analyses were fed back to fine-tune the atom typing. The developed procedure was repeated four times, resulting in fine-grained atom typing, bond and angle classes. The procedure will be repeated in the future as and when new entries are deposited in the COD. The whole procedure can also be applied to any source of small-molecule structures, including the Cambridge Structural Database and the ZINC database.

  15. Database Description - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name JSNP Alternative nam...n Science and Technology Agency Creator Affiliation: Contact address E-mail : Database...sapiens Taxonomy ID: 9606 Database description A database of about 197,000 polymorphisms in Japanese populat...1):605-610 External Links: Original website information Database maintenance site Institute of Medical Scien...er registration Not available About This Database Database Description Download License Update History of This Database

  16. Database Description - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ase Description General information of database Database name RED Alternative name Rice Expression Database...enome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice Database classifi...cation Microarray, Gene Expression Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database descripti... Article title: Rice Expression Database: the gateway to rice functional genomics...nt Science (2002) Dec 7 (12):563-564 External Links: Original website information Database maintenance site

  17. Solid Waste Projection Model: Database (Version 1.4)

    International Nuclear Information System (INIS)

    Blackburn, C.; Cillan, T.

    1993-09-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.4 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement. Those interested in using the SWPM database should refer to the SWPM Database User's Guide. This document is available from the PNL Task M Project Manager (D. L. Stiles, 509-372-4358), the PNL Task L Project Manager (L. L. Armacost, 509-372-4304), the WHC Restoration Projects Section Manager (509-372-1443), or the WHC Waste Characterization Manager (509-372-1193)

  18. A C programmer's view of a relational database

    International Nuclear Information System (INIS)

    Clifford, T.; Katz, R.; Griffiths, C.

    1989-01-01

    The AGS Distributed Control System (AGSDCS) uses a relational database (Interbase) for the storage of all data on the host system network. This includes the static data which describes the components of the accelerator complex, as well as data for application program setup and data records that are used in analysis. By creating a mapping of each relation in the database to a C record and providing general tools for relation (record) access, all the data in the database is available in a natural fashion (in structures) to all the C programs on any of the nodes of the control system. In this paper the correspondence between the Interbase relations and the C structures is detailed with examples of C typedefs and relation definitions. It is also shown how the relations can be put into memory and linked (related) together when fast access is needed by programs. 1 ref., 2 tabs
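    The paper's tooling generated C typedefs, one per relation. An analogous sketch (here in Python rather than C, with an invented 'magnet' relation and SQLite standing in for Interbase) shows the core idea: derive one record type per relation from the relation's own column list, so query results arrive as typed structures.

```python
import sqlite3
from collections import namedtuple

# Invented stand-in for an AGSDCS relation (names and values hypothetical).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE magnet (name TEXT, current REAL, node TEXT)")
con.execute("INSERT INTO magnet VALUES ('Q1', 12.5, 'ags1')")

def record_type(con, table):
    """One record type per relation, generated from the relation's columns --
    the analogue of the paper's generated C typedefs."""
    cols = [c[1] for c in con.execute(f"PRAGMA table_info({table})")]
    return namedtuple(table.capitalize(), cols)

Magnet = record_type(con, "magnet")
rows = [Magnet(*r) for r in con.execute("SELECT * FROM magnet")]
print(rows[0].current)  # -> 12.5
```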

  19. A taxonomy of spatio-temporal databases with moving objects. Theme review

    Directory of Open Access Journals (Sweden)

    Sergio Alejandro Rojas Barbosa

    2018-01-01

    Full Text Available Context: In the last decade, databases have evolved to the point that we no longer speak only of spatial databases but of spatio-temporal databases: an event or record carries a spatial (localization) variable and a temporal variable, which allows the previously stored record to be updated. Method: This paper presents a bibliographic review of concepts and spatio-temporal data models, specifically models for moving-object data. Results: Taxonomic considerations for queries over moving-object data models are presented, according to the persistence of the query (time, location, movement, object and patterns), as well as the different proposals for indexes and structures. Conclusions: Implementing the proposed models, indexes and structures can lead to standardization problems, which is why work should follow the standards of the OGC (Open Geospatial Consortium).

  20. BioWarehouse: a bioinformatics database warehouse toolkit

    Directory of Open Access Journals (Sweden)

    Stringer-Calvert David WJ

    2006-03-01

    Full Text Available Abstract Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the
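    The enzyme-coverage question quoted above is a typical warehouse query: once the component databases are loaded into one schema, a single SQL join answers it. A toy version, with invented EC numbers and SQLite standing in for the MySQL/Oracle back-end:

```python
import sqlite3

# Two component "databases" loaded into one warehouse schema (toy data:
# an enzyme table and a protein-sequence table sharing EC numbers).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE enzyme (ec TEXT PRIMARY KEY, name TEXT);
CREATE TABLE protein_seq (id INTEGER PRIMARY KEY, ec TEXT);
INSERT INTO enzyme VALUES ('1.1.1.1', 'alcohol dehydrogenase'),
                          ('2.7.1.1', 'hexokinase'),
                          ('4.2.1.20', 'tryptophan synthase');
INSERT INTO protein_seq VALUES (1, '1.1.1.1'), (2, '2.7.1.1');
""")

# Warehouse-style question: which enzyme activities have no sequence at all?
missing = con.execute("""
    SELECT e.ec FROM enzyme e
    LEFT JOIN protein_seq p ON p.ec = e.ec
    WHERE p.id IS NULL
""").fetchall()
print(missing)  # -> [('4.2.1.20',)]
```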

  1. A trait database for marine copepods

    DEFF Research Database (Denmark)

    Brun, Philipp Georg; Payne, Mark R.; Kiørboe, Thomas

    2016-01-01

    , and organised the data into a structured database. We collected 9345 records for 14 functional traits. Particular attention was given to body size, feeding mode, egg size, spawning strategy, respiration rate and myelination (presence of nerve sheathing). Most records were reported on the species level, but some... Information was more limited for quantitative traits related to reproduction and physiology. The database may be used to investigate relationships between traits, to produce trait biogeographies, or to inform and validate trait-based marine ecosystem models. The data can be downloaded from PANGAEA, doi:10.1594/PANGAEA......

  2. A trait database for marine copepods

    DEFF Research Database (Denmark)

    Brun, Philipp Georg; Payne, Mark; Kiørboe, Thomas

    2017-01-01

    , and organized the data into a structured database. We collected 9306 records for 14 functional traits. Particular attention was given to body size, feeding mode, egg size, spawning strategy, respiration rate, and myelination (presence of nerve sheathing). Most records were reported at the species level..., while information was more limited for quantitative traits related to reproduction and physiology. The database may be used to investigate relationships between traits, to produce trait biogeographies, or to inform and validate trait-based marine ecosystem models. The data can be downloaded from PANGAEA......

  3. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

    Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs).The book first takes a look at ANSI database standards and DBMS applications and components. Discussion focus on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders on logical database, interrogation, and phy

  4. Alkamid database: Chemistry, occurrence and functionality of plant N-alkylamides.

    Science.gov (United States)

    Boonen, Jente; Bronselaer, Antoon; Nielandt, Joachim; Veryser, Lieselotte; De Tré, Guy; De Spiegeleer, Bart

    2012-08-01

    N-Alkylamides (NAAs) are a promising group of bioactive compounds, which are anticipated to act as important lead compounds for plant protection and biocidal products, functional food, cosmeceuticals and drugs in the next decennia. These molecules, currently found in more than 25 plant families and with a wide structural diversity, exert a variety of biological-pharmacological effects and are of high ethnopharmacological importance. However, information is scattered in literature, with different, often unstandardized, pharmacological methodologies being used. Therefore, a comprehensive NAA database (acronym: Alkamid) was constructed to collect the available structural and functional NAA data, linked to their occurrence in plants (family, tribe, species, genus). For loading information in the database, literature data was gathered over the period 1950-2010, by using several search engines. In order to represent the collected information about NAAs, the plants in which they occur and the functionalities for which they have been examined, a relational database is constructed and implemented on a MySQL back-end. The database is supported by describing the NAA plant-, functional- and chemical-space. The chemical space includes a NAA classification, according to their fatty acid and amine structures. The Alkamid database (publicly available on the website http://alkamid.ugent.be/) is not only a central information point, but can also function as a useful tool to prioritize the NAA choice in the evaluation of their functionality, to perform data mining leading to quantitative structure-property relationships (QSPRs), functionality comparisons, clustering, plant biochemistry and taxonomic evaluations. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  5. View discovery in OLAP databases through statistical combinatorial optimization

    Energy Technology Data Exchange (ETDEWEB)

    Hengartner, Nick W [Los Alamos National Laboratory; Burke, John [PNNL; Critchlow, Terence [PNNL; Joslyn, Cliff [PNNL; Hogan, Emilie [PNNL

    2009-01-01

    OnLine Analytical Processing (OLAP) is a relational database technology providing users with rapid access to summary, aggregated views of a single large database, and is widely recognized for knowledge representation and discovery in high-dimensional relational databases. OLAP technologies provide intuitive and graphical access to the massively complex set of possible summary views available in large relational (SQL) structured data repositories. The capability of OLAP database software systems to handle data complexity comes at a high price for analysts, presenting them a combinatorially vast space of views of a relational database. We respond to the need to deploy technologies sufficient to allow users to guide themselves to areas of local structure by casting the space of 'views' of an OLAP database as a combinatorial object of all projections and subsets, and 'view discovery' as a search process over that lattice. We equip the view lattice with statistical information-theoretic measures sufficient to support a combinatorial optimization process. We outline 'hop-chaining' as a particular view discovery algorithm over this object, wherein users are guided across a permutation of the dimensions by searching for successive two-dimensional views, pushing seen dimensions into an increasingly large background filter in a 'spiraling' search process. We illustrate this work in the context of data cubes recording summary statistics for radiation portal monitors at US ports.
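    The core step, scoring candidate two-dimensional views of a cube with an information measure, can be sketched as follows. The fact table, the dimension names and the choice of Shannon entropy as the measure are illustrative only, not the paper's exact hop-chaining algorithm.

```python
from collections import Counter
from itertools import combinations
from math import log2

# Toy fact table: one row per event, three categorical dimensions (invented).
rows = [
    {"port": "A", "vehicle": "car",   "alarm": "no"},
    {"port": "A", "vehicle": "truck", "alarm": "yes"},
    {"port": "B", "vehicle": "car",   "alarm": "no"},
    {"port": "B", "vehicle": "car",   "alarm": "no"},
    {"port": "B", "vehicle": "truck", "alarm": "yes"},
]

def entropy(counts):
    """Shannon entropy of a cell-count distribution."""
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c)

def score_view(rows, dims):
    """Score the 2D view over 'dims' by the entropy of its cell counts."""
    cells = Counter(tuple(r[d] for d in dims) for r in rows)
    return entropy(cells.values())

# Enumerate all 2D projections and pick the most structured (here: highest
# entropy of the cell counts; other measures are possible).
dims = ["port", "vehicle", "alarm"]
views = {pair: score_view(rows, pair) for pair in combinations(dims, 2)}
best = max(views, key=views.get)
print(best, round(views[best], 3))
```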

  6. A performance evaluation of in-memory databases

    Directory of Open Access Journals (Sweden)

    Abdullah Talha Kabakus

    2017-10-01

    Full Text Available The popularity of NoSQL databases has increased due to the need for (1) processing vast amounts of data faster than relational database management systems by taking advantage of highly scalable architectures, (2) flexible (schema-free) data structures, and (3) low latency and high performance. Although memory usage is not a major criterion for evaluating the performance of algorithms, since these databases serve data from memory, their memory usage is also measured in this paper alongside the time taken to complete each operation, to reveal which one uses memory most efficiently. There currently exist over 225 NoSQL databases that provide different features and characteristics, so it is necessary to reveal which one provides better performance for different data operations. In this paper, we benchmark the widely used in-memory databases to measure their performance in terms of (1) the time taken to complete operations, and (2) how efficiently they use memory during operations. As per the results reported in this paper, there is no database that provides the best performance for all data operations. It is also shown that even though an RDBMS stores its data in memory, its overall performance is worse than that of NoSQL databases.
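    The two quantities measured, elapsed time and memory used during an operation, can be captured in plain Python with `time.perf_counter` and `tracemalloc`. The dictionary workload below merely stands in for an in-memory database's insert operation; it is not a benchmark of any actual database.

```python
import time
import tracemalloc

def benchmark(op, n=100_000):
    """Run op(n), returning elapsed seconds, peak bytes allocated, and result."""
    tracemalloc.start()
    t0 = time.perf_counter()
    result = op(n)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed, peak, result

def dict_inserts(n):
    # Stand-in for an in-memory key-value store's SET operation.
    store = {}
    for i in range(n):
        store[f"key:{i}"] = i
    return store

elapsed, peak, store = benchmark(dict_inserts)
print(f"{len(store)} inserts: {elapsed:.3f} s, peak {peak / 1e6:.1f} MB")
```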

  7. Software Engineering Laboratory (SEL) database organization and user's guide, revision 2

    Science.gov (United States)

    Morusiewicz, Linda; Bristow, John

    1992-01-01

    The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base table is described. In addition, techniques for accessing the database through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL) are discussed.

  8. Worldwide databases in marine geology: A review

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    such as image capturing, multimedia and geographic information system (GIS) should be utilized. Information managers need to collaborate with subject experts in order to maintain the high quality of the databases. 1. Introduction With the advent of computer... coordination between the information providers and management centres. Within the databases there is no uniformity in the structure, storage and operating systems. Every producer...

  9. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ase Description General information of database Database name RMG Alternative name ...raki 305-8602, Japan National Institute of Agrobiological Sciences E-mail : Database... classification Nucleotide Sequence Databases Organism Taxonomy Name: Oryza sativa Japonica Group Taxonomy ID: 39947 Database...rnal: Mol Genet Genomics (2002) 268: 434–445 External Links: Original website information Database...available URL of Web services - Need for user registration Not available About This Database Database Descri

  10. Disbiome database: linking the microbiome to disease.

    Science.gov (United States)

    Janssens, Yorick; Nielandt, Joachim; Bronselaer, Antoon; Debunne, Nathan; Verbeke, Frederick; Wynendaele, Evelien; Van Immerseel, Filip; Vandewynckel, Yves-Paul; De Tré, Guy; De Spiegeleer, Bart

    2018-06-04

    Recent research has provided fascinating indications and evidence that the host health is linked to its microbial inhabitants. Due to the development of high-throughput sequencing technologies, more and more data covering microbial composition changes in different disease types are emerging. However, this information is dispersed over a wide variety of medical and biomedical disciplines. Disbiome is a database which collects and presents published microbiota-disease information in a standardized way. The diseases are classified using the MedDRA classification system and the micro-organisms are linked to their NCBI and SILVA taxonomy. Finally, each study included in the Disbiome database is assessed for its reporting quality using a standardized questionnaire. Disbiome is the first database giving a clear, concise and up-to-date overview of microbial composition differences in diseases, together with the relevant information of the studies published. The strength of this database lies in the combination of references to other databases, which enables both specific and diverse search strategies within the Disbiome database, and human annotation, which ensures a simple and structured presentation of the available data.

  11. Analysing and Rationalising Molecular and Materials Databases Using Machine-Learning

    Science.gov (United States)

    de, Sandip; Ceriotti, Michele

    Computational materials design promises to greatly accelerate the process of discovering new or more performant materials. Several collaborative efforts are contributing to this goal by building databases of structures, containing between thousands and millions of distinct hypothetical compounds, whose properties are computed by high-throughput electronic-structure calculations. The complexity and sheer amount of information has made manual exploration, interpretation and maintenance of these databases a formidable challenge, making it necessary to resort to automatic analysis tools. Here we will demonstrate how, starting from a measure of (dis)similarity between database items built from a combination of local environment descriptors, it is possible to apply hierarchical clustering algorithms, as well as dimensionality reduction methods such as sketchmap, to analyse, classify and interpret trends in molecular and materials databases, as well as to detect inconsistencies and errors. Thanks to the agnostic and flexible nature of the underlying metric, we will show how our framework can be applied transparently to different kinds of systems ranging from organic molecules and oligopeptides to inorganic crystal structures as well as molecular crystals. Funded by National Center for Computational Design and Discovery of Novel Materials (MARVEL) and Swiss National Science Foundation.
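    One of the ingredients mentioned, hierarchical clustering on top of a (dis)similarity measure between database entries, can be sketched with a pure-Python single-linkage routine. The five 'entries' and their two-dimensional coordinates are invented stand-ins; in the work described, the distances come from local-environment descriptors rather than raw coordinates.

```python
import math

# Invented database entries with 2D "descriptor" coordinates: three similar
# molecules and two similar crystal structures.
points = {"mol_a": (0.0, 0.1), "mol_b": (0.2, 0.0), "mol_c": (0.1, 0.2),
          "xtl_a": (5.0, 5.1), "xtl_b": (5.2, 4.9)}

def dist(p, q):
    return math.dist(points[p], points[q])

def single_linkage(items, k):
    """Agglomerative clustering: repeatedly merge the two clusters whose
    closest members are nearest, until k clusters remain."""
    clusters = [{i} for i in items]
    while len(clusters) > k:
        i, j = min(
            ((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
            key=lambda ab: min(dist(p, q)
                               for p in clusters[ab[0]]
                               for q in clusters[ab[1]]),
        )
        clusters[i] |= clusters.pop(j)
    return clusters

for c in single_linkage(sorted(points), 2):
    print(sorted(c))
```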

  12. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  13. The ATLAS conditions database architecture for the Muon spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Verducci, Monica, E-mail: monica.verducci@cern.c [University of Wuerzburg Am Hubland, 97074, Wuerzburg (Germany)

    2010-04-01

    The Muon System, facing the challenging requirements of conditions data storage, has started extensive use of the conditions database project 'COOL' as the basis for all its conditions data storage, both at CERN and throughout the worldwide collaboration, as decided by the ATLAS Collaboration. The management of the Muon COOL conditions database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates and in terms of the variety of data stored. The Muon conditions database is responsible for almost all of the 'non-event' data and detector quality flags needed for debugging the detector operations and for performing reconstruction and analysis. COOL allows database applications to be written independently of the underlying database technology and ensures long-term compatibility with the entire ATLAS software. COOL implements an interval-of-validity database: objects stored or referenced in COOL have an associated start and end time between which they are valid. The data are stored in folders, which are themselves arranged in a hierarchical structure of folder sets. The structure is simple and optimized mainly to store and retrieve the object(s) associated with a particular time. In this work, an overview of the entire Muon conditions database architecture is given, including the different sources of the data and the storage model used. In addition, the software interfaces used to access the conditions data are described, with emphasis on the offline reconstruction framework ATHENA and the services developed to provide the conditions data to the reconstruction.

  14. An algorithm to transform natural language into SQL queries for relational databases

    Directory of Open Access Journals (Sweden)

    Garima Singh

    2016-09-01

    Full Text Available An intelligent interface that supports efficient interaction between users and databases is a pressing need for database applications. However, not every user is familiar with Structured Query Language (SQL) queries, as they may not know the structure of the database and would otherwise have to learn SQL. Non-expert users therefore need a system that lets them interact with relational databases in a natural language such as English. For this, the Database Management System (DBMS) must be able to understand Natural Language (NL). In this research, an intelligent interface is developed using a semantic matching technique that translates a natural language query into SQL using a set of production rules and a data dictionary. The data dictionary consists of semantic sets for relations and attributes. A series of steps, including lower-case conversion, tokenization, part-of-speech tagging, and database and SQL element extraction, is used to convert a Natural Language Query (NLQ) into an SQL query. The transformed query is executed and the results are returned to the user.
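The semantic-matching step can be sketched as follows (the table, column names, and word sets are invented for illustration; the paper's actual data dictionary and production rules are richer):

```python
import re

# Toy data dictionary: semantic sets mapping NL words to schema elements.
TABLES = {"students": {"student", "students", "pupil", "pupils"}}
COLUMNS = {"name": {"name", "names"}, "age": {"age", "ages"}}

def nl_to_sql(query):
    """Translate a natural language query into SQL by matching tokens
    against the semantic sets of relations and attributes."""
    tokens = set(re.findall(r"[a-z]+", query.lower()))  # lower-case + tokenize
    table = next((t for t, words in TABLES.items() if words & tokens), None)
    cols = [c for c, words in COLUMNS.items() if words & tokens] or ["*"]
    return f"SELECT {', '.join(cols)} FROM {table}"

print(nl_to_sql("show the names of all students"))
# SELECT name FROM students
```

A production-rule system would additionally recognize conditions ("older than 20") and map them to WHERE clauses; this sketch covers only table and column extraction.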

  15. Database Description - DGBY | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information: Database name: DGBY. Alternative name: … Contact: TEL: +81-29-838-8066; E-mail: … Database classification: Microarray Data and other Gene Expression Databases. Organism: Taxonomy Name: Saccharomyces cerevisiae; Taxonomy ID: 4932. Database description: …(so-called phenomics). We uploaded these data on this website, which is designated DGBY (Database for Gene expres…). Reference: …ma J, Ando A, Takagi H. Journal: Yeast. 2008 Mar;25(3):179-90. External Links: …

  16. Database Description - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information: Database name: KOME. Alternative name: … Creator: … Sciences, Plant Genome Research Unit, Shoshi Kikuchi. E-mail: … Database classification: Plant databases - Rice. Organism: Taxonomy Name: Oryza sativa; Taxonomy ID: 4530. Database description: Information about approximately … Reference: …Hayashizaki Y, Kikuchi S. Journal: PLoS One. 2007 Nov 28;2(11):e1235. External Links: … Related databases: …OS) Rice mutant panel database (Tos17); A Database of Plant Cis-acting Regulatory …

  17. Design database for quantitative trait loci (QTL) data warehouse, data mining, and meta-analysis.

    Science.gov (United States)

    Hu, Zhi-Liang; Reecy, James M; Wu, Xiao-Lin

    2012-01-01

    A database can be used to warehouse quantitative trait loci (QTL) data from multiple sources for comparison, genomic data mining, and meta-analysis. A robust database design involves sound data structure logistics, meaningful data transformations, normalization, and proper user interface designs. This chapter starts with a brief review of relational database basics and concentrates on issues associated with curation of QTL data into a relational database, with emphasis on the principles of data normalization and structure optimization. In addition, some simple examples of QTL data mining and meta-analysis are included. These examples are provided to help readers better understand the potential and importance of sound database design.
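The normalization principles discussed above can be illustrated with a toy schema (the tables and columns here are hypothetical, not the actual design of any QTL database), using SQLite: traits and publications are factored into their own tables so each fact is stored once, and QTL rows reference them by key.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Normalized layout: repeated trait names and citations would violate
# normal forms if stored inline in every QTL row.
cur.executescript("""
CREATE TABLE trait(trait_id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE publication(pub_id INTEGER PRIMARY KEY, citation TEXT UNIQUE);
CREATE TABLE qtl(
    qtl_id INTEGER PRIMARY KEY,
    chromosome TEXT, start_cm REAL, end_cm REAL,
    trait_id INTEGER REFERENCES trait(trait_id),
    pub_id INTEGER REFERENCES publication(pub_id)
);
""")
cur.execute("INSERT INTO trait(name) VALUES ('backfat depth')")
cur.execute("INSERT INTO publication(citation) VALUES ('Smith 2001')")
cur.execute("INSERT INTO qtl(chromosome, start_cm, end_cm, trait_id, pub_id) "
            "VALUES ('7', 52.0, 68.0, 1, 1)")

# A simple meta-analysis-style query: join QTL positions back to trait names.
row = cur.execute("""
    SELECT t.name, q.chromosome FROM qtl q
    JOIN trait t ON t.trait_id = q.trait_id
""").fetchone()
assert row == ('backfat depth', '7')
```

Curation then reduces to mapping incoming records onto these keys, which is what makes cross-source comparison and data mining tractable.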

  18. VaProS: a database-integration approach for protein/genome information retrieval

    KAUST Repository

    Gojobori, Takashi; Ikeo, Kazuho; Katayama, Yukie; Kawabata, Takeshi; Kinjo, Akira R.; Kinoshita, Kengo; Kwon, Yeondae; Migita, Ohsuke; Mizutani, Hisashi; Muraoka, Masafumi; Nagata, Koji; Omori, Satoshi; Sugawara, Hideaki; Yamada, Daichi; Yura, Kei

    2016-01-01

    Life science research now heavily relies on all sorts of databases for genome sequences, transcription, protein three-dimensional (3D) structures, protein–protein interactions, phenotypes and so forth. The knowledge accumulated by all the omics research is so vast that a computer-aided search of data is now a prerequisite for starting a new study. In addition, a combinatory search throughout these databases has a chance to extract new ideas and new hypotheses that can be examined by wet-lab experiments. By virtually integrating the related databases on the Internet, we have built a new web application that helps life science researchers retrieve the expert knowledge stored in the databases and build new hypotheses about their research targets. This web application, named VaProS, emphasizes the interconnection between the functional information of genome sequences and protein 3D structures, such as the structural effect of a gene mutation. In this manuscript, we present the notion of VaProS, the databases and tools that can be accessed without any knowledge of database locations and data formats, and the power of its search, exemplified by a quest for the molecular mechanisms of lysosomal storage disease. VaProS can be freely accessed at http://p4d-info.nig.ac.jp/vapros/.

  19. VaProS: a database-integration approach for protein/genome information retrieval

    KAUST Repository

    Gojobori, Takashi

    2016-12-24

    Life science research now heavily relies on all sorts of databases for genome sequences, transcription, protein three-dimensional (3D) structures, protein–protein interactions, phenotypes and so forth. The knowledge accumulated by all the omics research is so vast that a computer-aided search of data is now a prerequisite for starting a new study. In addition, a combinatory search throughout these databases has a chance to extract new ideas and new hypotheses that can be examined by wet-lab experiments. By virtually integrating the related databases on the Internet, we have built a new web application that helps life science researchers retrieve the expert knowledge stored in the databases and build new hypotheses about their research targets. This web application, named VaProS, emphasizes the interconnection between the functional information of genome sequences and protein 3D structures, such as the structural effect of a gene mutation. In this manuscript, we present the notion of VaProS, the databases and tools that can be accessed without any knowledge of database locations and data formats, and the power of its search, exemplified by a quest for the molecular mechanisms of lysosomal storage disease. VaProS can be freely accessed at http://p4d-info.nig.ac.jp/vapros/.

  20. LRRML: a conformational database and an XML description of leucine-rich repeats (LRRs).

    Science.gov (United States)

    Wei, Tiandi; Gong, Jing; Jamitzky, Ferdinand; Heckl, Wolfgang M; Stark, Robert W; Rössle, Shaila C

    2008-11-05

    Leucine-rich repeats (LRRs) are present in more than 6000 proteins. They are found in organisms ranging from viruses to eukaryotes and play an important role in protein-ligand interactions. To date, more than one hundred crystal structures of LRR containing proteins have been determined. This knowledge has increased our ability to use the crystal structures as templates to model LRR proteins with unknown structures. Since the individual three-dimensional LRR structures are not directly available from the established databases and since there are only a few detailed annotations for them, a conformational LRR database useful for homology modeling of LRR proteins is desirable. We developed LRRML, a conformational database and an extensible markup language (XML) description of LRRs. The release 0.2 contains 1261 individual LRR structures, which were identified from 112 PDB structures and annotated manually. An XML structure was defined to exchange and store the LRRs. LRRML provides a source for homology modeling and structural analysis of LRR proteins. In order to demonstrate the capabilities of the database we modeled the mouse Toll-like receptor 3 (TLR3) by multiple templates homology modeling and compared the result with the crystal structure. LRRML is an information source for investigators involved in both theoretical and applied research on LRR proteins. It is available at http://zeus.krist.geo.uni-muenchen.de/~lrrml.
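An XML description of individual repeats, as LRRML provides, can be consumed programmatically. The fragment below is illustrative only: the element and attribute names are assumptions in the spirit of such a schema, not LRRML's actual format.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML fragment describing two LRRs of a protein chain.
doc = """<lrrProtein pdb="1ZIW">
  <lrr index="1" start="27" end="50" sequence="LRELDLSNNNLTELP"/>
  <lrr index="2" start="51" end="74" sequence="LKSLDLSYNQLSKIP"/>
</lrrProtein>"""

root = ET.fromstring(doc)
# Collect the residue ranges of each repeat, e.g. to cut template
# fragments for multiple-template homology modeling.
repeats = [(int(r.get("start")), int(r.get("end"))) for r in root.iter("lrr")]
assert repeats == [(27, 50), (51, 74)]
```

Storing repeats as structured XML rather than free text is what makes this kind of template extraction scriptable.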

  1. LRRML: a conformational database and an XML description of leucine-rich repeats (LRRs)

    Directory of Open Access Journals (Sweden)

    Stark Robert W

    2008-11-01

    Full Text Available Abstract Background Leucine-rich repeats (LRRs) are present in more than 6000 proteins. They are found in organisms ranging from viruses to eukaryotes and play an important role in protein-ligand interactions. To date, more than one hundred crystal structures of LRR containing proteins have been determined. This knowledge has increased our ability to use the crystal structures as templates to model LRR proteins with unknown structures. Since the individual three-dimensional LRR structures are not directly available from the established databases and since there are only a few detailed annotations for them, a conformational LRR database useful for homology modeling of LRR proteins is desirable. Description We developed LRRML, a conformational database and an extensible markup language (XML) description of LRRs. The release 0.2 contains 1261 individual LRR structures, which were identified from 112 PDB structures and annotated manually. An XML structure was defined to exchange and store the LRRs. LRRML provides a source for homology modeling and structural analysis of LRR proteins. In order to demonstrate the capabilities of the database we modeled the mouse Toll-like receptor 3 (TLR3) by multiple templates homology modeling and compared the result with the crystal structure. Conclusion LRRML is an information source for investigators involved in both theoretical and applied research on LRR proteins. It is available at http://zeus.krist.geo.uni-muenchen.de/~lrrml.

  2. The Effects of Normalisation on the Satisfaction of Novice End-Users Querying Databases

    Directory of Open Access Journals (Sweden)

    Conrad Benedict

    1997-05-01

    Full Text Available This paper reports the results of an experiment that investigated the effects that different structural characteristics of relational databases have on the information satisfaction of end-users querying databases. The results show that unnormalised tables adversely affect end-user satisfaction, an effect attributable primarily to the use of non-atomic data. In this study, the effect of repeating fields on end-user satisfaction was not significant. The study contributes to the further development of theories of individual adjustment to information technology in the workplace by alerting organisations and, in particular, database designers to the ways in which the structural characteristics of relational databases may affect end-user satisfaction. More importantly, the results suggest that database designers need to clearly identify the domains for each item appearing in their databases. These issues are of increasing importance because of the growth in the amount of data available to end-users in relational databases.

  3. Enhanced Fire Events Database to Support Fire PRA

    International Nuclear Information System (INIS)

    Baranowsky, Patrick; Canavan, Ken; St. Germain, Shawn

    2010-01-01

    This paper provides a description of the updated and enhanced Fire Events Data Base (FEDB) developed by the Electric Power Research Institute (EPRI) in cooperation with the U.S. Nuclear Regulatory Commission (NRC). The FEDB is the principal source of fire incident operational data for use in fire PRAs. It provides a comprehensive and consolidated source of fire incident information for nuclear power plants operating in the U.S. The database classification scheme identifies important attributes of fire incidents to characterize their nature, causal factors, and severity consistent with available data. The database provides sufficient detail to delineate important plant specific attributes of the incidents to the extent practical. A significant enhancement to the updated FEDB is the reorganization and refinement of the database structure and data fields and fire characterization details added to more rigorously capture the nature and magnitude of the fire and damage to the ignition source and nearby equipment and structures.

  4. Requirements and specifications for a particle database

    International Nuclear Information System (INIS)

    2015-01-01

    One of the tasks of WPEC Subgroup 38 (SG38) is to design a database structure for storing the particle information needed for nuclear reaction databases and transport codes. Since the same particle may appear many times in a reaction database (produced by many different reactions on different targets), one of the long-term goals for SG38 is to move towards a central database of particle information to reduce redundancy and ensure consistency among evaluations. The database structure must be general enough to describe all relevant particles and their properties, including mass, charge, spin and parity, half-life, decay properties, and so on. Furthermore, it must be broad enough to handle not only excited nuclear states but also excited atomic states that can de-excite through atomic relaxation. Databases built with this hierarchy will serve as central repositories for particle information that can be linked to from codes and other databases. It is hoped that the final product is general enough for use in other projects besides SG38. While this is called a 'particle database', the definition of a particle (as described in Section 2) is very broad. The database must describe nucleons, nuclei, excited nuclear states (and possibly atomic states) in addition to fundamental particles like photons, electrons, muons, etc. Under this definition the list of possible particles becomes quite large. To help organize them the database will need a way of grouping related particles (e.g., all the isotopes of an element, or all the excited levels of an isotope) together into particle 'groups'. The database will also need a way to classify particles that belong to the same 'family' (such as 'leptons', 'baryons', etc.). Each family of particles may have special requirements as to what properties are required. One important function of the particle database will be to provide an easy way for codes and external databases to look up any particle stored inside. 
In order to make access as
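The particle/group/family hierarchy described above can be sketched with simple data classes (the field and family names are illustrative assumptions, not the SG38 specification):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Particle:
    name: str
    family: str                          # e.g. 'lepton', 'baryon', 'nucleus'
    mass_amu: float
    charge: int
    spin: Optional[float] = None
    half_life_s: Optional[float] = None  # None means stable

@dataclass
class ParticleGroup:
    """Groups related particles, e.g. all isotopes of an element."""
    name: str
    members: List[str] = field(default_factory=list)

registry = {}

def register(p: Particle) -> None:
    registry[p.name] = p                 # central repository keyed by name

register(Particle("n", "baryon", 1.00866, 0, spin=0.5, half_life_s=611.0))
register(Particle("O16", "nucleus", 15.99491, 8, spin=0.0))
register(Particle("O17", "nucleus", 16.99913, 8))

oxygen = ParticleGroup("oxygen isotopes", ["O16", "O17"])
assert all(registry[m].family == "nucleus" for m in oxygen.members)
```

A central registry like this is what lets many reaction evaluations reference one shared definition of each particle instead of duplicating masses and level schemes.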

  5. Beyond the ENDF format: A modern nuclear database structure. SG38 meeting, NEA Headquarters, 21-22 May 2013

    International Nuclear Information System (INIS)

    McNabb, D.; Ishikawa, M.; White, M.; Zerkin, V.; Mattoon, C.; Koning, A.; Brown, D.; Badikov, S.; Beck, B.; Haeck, W.; Dunn, M.; Sublet, J.-C.; Sinitsa, V.; Shmakov, V.; Dupont, E.

    2013-05-01

    WPEC subgroup 38 (SG38) was formed to develop a new structure for storing nuclear reaction data, that is meant to eventually replace ENDF-6 as the standard way to store and share evaluations. The work of SG38 covers the following tasks: Designing flexible, general-purpose data containers; Determining a logical and easy-to-understand top-level hierarchy for storing evaluated nuclear reaction data; Creating a particle database for storing particles, masses and level schemes; Specifying the infrastructure (plotting, processing, etc.) that must accompany the new structure; Developing an Application Programming Interface or API to allow other codes to access data stored in the new structure; Specifying what tests need to be implemented for quality assurance of the new structure and associated infrastructure; Ensuring documentation and governance of the structure and associated infrastructure. This document is the proceedings of the SG38 meeting, held at the NEA Headquarters on 21-22 May 2013. It comprises all the available presentations (slides) given by the participants: A - Introduction: - Developing a plan to meet our requirements (D. McNabb); - Possible option for next meeting date and place (M. Ishikawa); B - Task 1 - Designing low-level data containers for the new structure: - Low-Level Data Containers (M. White); - Low-level data structures in EXFOR and associated software - Development of EXFOR-XML (V. Zerkin); - Data containers in GND (C. Mattoon); C - Task 2 - Designing high-level hierarchy for nuclear reaction data: - Nuclear reaction data in a new structure (A. Koning); - Bibliographic and documentation issues (D. Brown); - Designing a high-level hierarchy for nuclear reaction data (D. Brown); - The ENDF-6 Format for the Evaluated Covariances of Discrete Radiation Spectrum Data (S. Badikov); D - Task 3 - Designing an API for reading/writing new data

  6. Hmrbase: a database of hormones and their receptors

    Science.gov (United States)

    Rashid, Mamoon; Singla, Deepak; Sharma, Arun; Kumar, Manish; Raghava, Gajendra PS

    2009-01-01

    Background Hormones are signaling molecules that play vital roles in various life processes, like growth and differentiation, physiology, and reproduction. These molecules are mostly secreted by endocrine glands, and transported to target organs through the bloodstream. Deficient, or excessive, levels of hormones are associated with several diseases such as cancer, osteoporosis, diabetes etc. Thus, it is important to collect and compile information about hormones and their receptors. Description This manuscript describes a database called Hmrbase which has been developed for managing information about hormones and their receptors. It is a highly curated database for which information has been collected from the literature and the public databases. The current version of Hmrbase contains comprehensive information about ~2000 hormones, e.g., about their function, source organism, receptors, mature sequences, structures etc. Hmrbase also contains information about ~3000 hormone receptors, in terms of amino acid sequences, subcellular localizations, ligands, and post-translational modifications etc. One of the major features of this database is that it provides data about ~4100 hormone-receptor pairs. A number of online tools have been integrated into the database, to provide the facilities like keyword search, structure-based search, mapping of a given peptide(s) on the hormone/receptor sequence, sequence similarity search. This database also provides a number of external links to other resources/databases in order to help in the retrieving of further related information. Conclusion Owing to the high impact of endocrine research in the biomedical sciences, the Hmrbase could become a leading data portal for researchers. The salient features of Hmrbase are hormone-receptor pair-related information, mapping of peptide stretches on the protein sequences of hormones and receptors, Pfam domain annotations, categorical browsing options, online data submission, Drug
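The peptide-mapping facility mentioned above, locating a peptide stretch on a hormone or receptor sequence, can be sketched naively as a substring scan (the sequence below is an invented example, and this is only the basic idea, not Hmrbase's implementation):

```python
def map_peptide(peptide, sequence):
    """Return every 0-based position at which a peptide stretch occurs
    within a protein sequence (overlapping hits included)."""
    hits, i = [], sequence.find(peptide)
    while i != -1:
        hits.append(i)
        i = sequence.find(peptide, i + 1)
    return hits

# Hypothetical receptor fragment.
seq = "MKTLLLTLVVVTIVCLDLGYTRICYNHQSTTRATTKSCEENSCYKK"
assert map_peptide("TTK", seq) == [33]
```

A production service would typically also allow mismatches or use an index over all stored sequences, but exact matching already covers the common case of placing a known mature peptide within its precursor.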

  7. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  8. International shock-wave database project : report of the requirements workshop.

    Energy Technology Data Exchange (ETDEWEB)

    Aidun, John Bahram (Institute of Problems of chemical Physics of Russian Academy of Sciences); Lomonosov, Igor V. (Institute of Problems of chemical Physics of Russian Academy of Sciences); Levashov, Pavel R. (Joint Institute for High Temperatures of Russian Academy of Sciences)

    2012-03-01

    We report on the requirements workshop for a new project, the International Shock-Wave database (ISWdb), which was held October 31 - November 2, 2011, at GSI, Darmstadt, Germany. Participants considered the idea of this database, its structure, technical requirements, content, and principles of operation. This report presents the consensus conclusions from the workshop, key discussion points, and the goals and plan for near-term and intermediate-term development of the ISWdb. The main points of consensus from the workshop were: (1) This international database is of interest and of practical use for the shock-wave and high pressure physics communities; (2) Intermediate state information and off-Hugoniot information is important and should be included in ISWdb; (3) Other relevant high pressure and auxiliary data should be included in the database in the future; (4) Information on the ISWdb needs to be communicated, broadly, to the research community; and (5) The operating structure will consist of an Advisory Board, subject-matter expert Moderators to vet submitted data, and the database Project Team. This brief report is intended to inform the shock-wave research community and interested funding agencies about the project, as its success, ultimately, depends on both of these groups finding sufficient value in the database to use it, contribute to it, and support it.

  9. NVST Data Archiving System Based On FastBit NoSQL Database

    Science.gov (United States)

    Liu, Ying-bo; Wang, Feng; Ji, Kai-fan; Deng, Hui; Dai, Wei; Liang, Bo

    2014-06-01

    The New Vacuum Solar Telescope (NVST) is a 1-meter vacuum solar telescope that aims to observe the fine structures of active regions on the Sun. The main tasks of the NVST are high resolution imaging and spectral observations, including the measurements of the solar magnetic field. The NVST has collected more than 20 million FITS files since it began routine observations in 2012 and produces up to 120 thousand observational records per day. Given the large number of files, effective archiving and retrieval of files becomes a critical and urgent problem. In this study, we implement a new data archiving system for the NVST based on the FastBit Not Only Structured Query Language (NoSQL) database. Compared with a relational database (i.e., MySQL; My Structured Query Language), the FastBit database shows distinct advantages in indexing and querying performance. In a large-scale database of 40 million records, the multi-field combined query response time of the FastBit database is about 15 times faster, fully meeting the requirements of the NVST. Our study brings a new idea for massive astronomical data archiving and would contribute to the design of data management systems for other astronomical telescopes.
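The reason multi-field queries are fast in FastBit is its bitmap indexing: for each distinct value of a column, a bit is kept per record, so a combined query reduces to bitwise AND instead of a row-by-row scan. A minimal sketch of that idea (plain Python, invented field names, not FastBit's actual compressed-bitmap implementation):

```python
# Toy observation records; 'band' and 'mode' are hypothetical metadata fields.
records = [
    {"band": "Halpha", "mode": "imaging"},
    {"band": "TiO",    "mode": "imaging"},
    {"band": "Halpha", "mode": "spectral"},
    {"band": "Halpha", "mode": "imaging"},
]

def build_index(rows, column):
    """One bitmap (stored as a Python int) per distinct column value."""
    index = {}
    for i, row in enumerate(rows):
        index.setdefault(row[column], 0)
        index[row[column]] |= 1 << i        # set bit i for this value
    return index

band_idx = build_index(records, "band")
mode_idx = build_index(records, "mode")

# Combined query: band == 'Halpha' AND mode == 'imaging'.
hits = band_idx["Halpha"] & mode_idx["imaging"]
matches = [i for i in range(len(records)) if hits >> i & 1]
assert matches == [0, 3]
```

With millions of records the bitmaps are compressed (FastBit uses word-aligned hybrid compression), but the query still boils down to these bitwise operations.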

  10. [Establishment of a regional pelvic trauma database in Hunan Province].

    Science.gov (United States)

    Cheng, Liang; Zhu, Yong; Long, Haitao; Yang, Junxiao; Sun, Buhua; Li, Kanghua

    2017-04-28

    To establish a database for pelvic trauma in Hunan Province, and to start the work of a multicenter pelvic trauma registry.
 Methods: To establish the database, the literature relevant to pelvic trauma was screened, experience from established trauma databases in China and abroad was drawn upon, and the actual conditions of pelvic trauma rescue in Hunan Province were considered. The database for pelvic trauma was built on PostgreSQL and the programming language Java 1.6.
 Results: The complex procedure of pelvic trauma rescue was described structurally. The database covers general patient information, injury condition, prehospital rescue, condition on admission, in-hospital treatment, status at discharge, diagnosis, classification, complications, trauma scoring, and therapeutic effect. The database can be accessed through the internet via a browser/server architecture. Its functions include patient information management, data export, history query, progress reporting, video-image management, and personal information management.
 Conclusion: A whole-life-cycle pelvic trauma database has been successfully established for the first time in China. It is scientific, functional, practical, and user-friendly.

  11. International nuclear safety center database on material properties

    International Nuclear Information System (INIS)

    Fink, J.K.

    1996-01-01

    The International Nuclear Safety Center database of material properties is described, covering fuel, cladding, absorbers, moderators, structural materials, coolants, concretes, liquid mixtures, and uranium dioxide.

  12. Database on wind characteristics. Users manual

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, G.C.; Hansen, K.S.

    2001-11-01

    The main objective of IEA R and D Wind Annex XVII - Database on Wind Characteristics - is to provide wind energy planners and designers, as well as the international wind engineering community in general, with easy access to quality controlled measured wind field time series observed in a wide range of environments. The project partners are Sweden, Norway, U.S.A., The Netherlands, Japan and Denmark, with Denmark as the Operating Agent. The reporting of IEA R and D Annex XVII falls into three separate parts. Part one deals with the overall structure and philosophy behind the database (including the applied data quality control procedures), part two accounts in detail for the available data in the established database bank, and part three is the Users Manual describing the various ways to access and analyse the data. The present report constitutes part three of the Annex XVII reporting and contains a thorough description of the available online facilities for identifying, selecting, downloading and handling measured wind field time series and resource data from 'Database on Wind Characteristics'. (au)

  13. Database Description - SSBD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information: Database name: SSBD. Alternative name: … Contact: 2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe 650-0047, Japan; RIKEN Quantitative Biology Center, Shuichi Onami; E-mail: … Database classification: Other Molecular Biology Databases; Dynamic databa… Organisms: Taxonomy Name: Caenorhabditis elegans, Taxonomy ID: 6239; Taxonomy Name: Escherichia coli, Taxonomy ID: 562. Database description: Systems Scie… Reference: …i Onami. Journal: Bioinformatics, April 2015, Volume 31, Issue 7. External Links: …

  14. Database Description - GETDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information: Database name: GETDB. Alternative name: Gal4 Enhancer Trap Insertion Database. DOI: 10.18908/lsdba.nbdc00236-000. Creator: Shigeo Haya… Contact: … Chuo-ku, Kobe 650-0047; Tel: +81-78-306-3185; FAX: +81-78-306-3183; E-mail: … Database classification: Expression…; Invertebrate genome database. Organism: Taxonomy Name: Drosophila melanogaster; Taxonomy ID: 7227. Database des… Database maintenance site: Drosophila Genetic Resource…

  15. JICST Factual Database: JICST Chemical Substance Safety Regulation Database

    Science.gov (United States)

    Abe, Atsushi; Sohma, Tohru

    The JICST Chemical Substance Safety Regulation Database is based on the Database of Safety Laws for Chemical Compounds constructed by the Japan Chemical Industry Ecology-Toxicology & Information Center (JETOC), sponsored by the Science and Technology Agency, in 1987. JICST modified the JETOC database system, added data, and started the online service through JOIS-F (JICST Online Information Service - Factual database) in January 1990. The JICST database comprises eighty-three laws and fourteen hundred compounds. The authors outline the database, its data items, files, and search commands. An example of an online session is presented.

  16. Database on Performance of Neutron Irradiated FeCrAl Alloys

    Energy Technology Data Exchange (ETDEWEB)

    Field, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Briggs, Samuel A. [Univ. of Wisconsin, Madison, WI (United States); Littrell, Ken [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Parish, Chad M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Yamamoto, Yukinori [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-08-01

    The present report summarizes and discusses the database on radiation tolerance for Generation I, Generation II, and commercial FeCrAl alloys. This database has been built upon mechanical testing and microstructural characterization on selected alloys irradiated within the High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL) up to doses of 13.8 dpa at temperatures ranging from 200°C to 550°C. The structure and performance of these irradiated alloys were characterized using advanced microstructural characterization techniques and mechanical testing. The primary objective of developing this database is to enhance the rapid development of a mechanistic understanding on the radiation tolerance of FeCrAl alloys, thereby enabling informed decisions on the optimization of composition and microstructure of FeCrAl alloys for application as an accident tolerant fuel (ATF) cladding. This report is structured to provide a brief summary of critical results related to the database on radiation tolerance of FeCrAl alloys.

  17. The design and development of RDGSM isotope database based on J2ME/J2EE

    International Nuclear Information System (INIS)

    Zhou Shumin; Gao Yongping; Sun Yamin

    2006-01-01

    RDGSM (the Regional Database for Geothermal Surface Manifestation) is a distributed database designed for the IAEA and used to manage isotope and hydrology data in the Asian-Pacific region. The data structure of RDGSM is introduced in this paper. A mobile database structure based on J2ME/J2EE is proposed, and the design and development of the RDGSM isotope database are demonstrated in detail. The data synchronization method, the Model-View-Controller design pattern, and the data integrity constraint method are discussed as the means of implementing the mobile database application. (authors)

  18. Development of an Indonesian Tourism Destination Database: Phase I Data Collection and Processing

    Directory of Open Access Journals (Sweden)

    Yosafati Hulu

    2014-12-01

    Full Text Available Considering the increasing need of local governments and communities to develop tourism destinations in the era of autonomy, the need to select appropriate attractions according to their respective criteria, and the need of travel and hotel businesses to offer attractions matched to the needs of potential tourists, it is necessary to develop a database of tourism destinations in Indonesia that serves these needs. The database built is a web-based database that is widely accessible and capable of storing complete information about Indonesian tourism destinations in a holistic, systematic, and structured way. Attractions in the database are classified by attributes: location (island name, province, district), type/tourism product, how to reach the attraction, cost, and various informal information such as the ins and outs of local attractions as reported by local communities or tourists. This study is a continuation of previous work, the second of three planned phases. Phase two focuses on the collection and processing of data as well as testing and refinement of the model design and database structure created in Phase I. The study was conducted in stages: 1) design of the database model and structure, 2) development of a web-based program, 3) installation and hosting, 4) data collection, 5) data processing and data entry, and 6) evaluation and improvement/completion.

  19. IAEA international database on irradiated nuclear graphite properties

    International Nuclear Information System (INIS)

    Burchell, T.D.; Clark, R.E.H.; Stephens, J.A.; Eto, M.; Haag, G.; Hacker, P.; Neighbour, G.B.; Janev, R.K.; Wickham, A.J.

    2000-02-01

    This report describes an IAEA database containing data on the properties of irradiated nuclear graphites. Development and implementation of the graphite database followed initial discussions at an IAEA Specialists' Meeting held in September 1995. The design of the database is based upon developments at the University of Bath (United Kingdom), work which the UK Health and Safety Executive initially supported. The database content and data management policies were determined during two IAEA Consultants' Meetings of nuclear reactor graphite specialists held in 1998 and 1999. The graphite data are relevant to the construction and safety case developments required for new and existing HTR nuclear power plants, and to the development of safety cases for continued operation of existing plants. The database design provides a flexible structure for data archiving and retrieval and employs Microsoft Access 97. An instruction manual is provided within this document for new users, including installation instructions for the database on personal computers running Windows 95/NT 4.0 or higher versions. The data management policies and associated responsibilities are contained in the database Working Arrangement which is included as an Appendix to this report. (author)

  20. The new Scandinavian Donations and Transfusions database (SCANDAT2)

    DEFF Research Database (Denmark)

    Edgren, Gustaf; Rostgaard, Klaus; Vasan, Senthil K

    2015-01-01

    We have previously created the anonymized Scandinavian Donations and Transfusions (SCANDAT) database, containing data on blood donors, blood transfusions, and transfused patients, with complete follow-up of donors and patients for a range of health outcomes. Here we describe the re-creation of SCANDAT with updated, identifiable data. We collected computerized data on blood donations and transfusions from blood banks covering all of Sweden and Denmark. After data cleaning, two structurally identical databases were created, and the entire database was linked with nationwide health outcomes. It is thus possible to create a binational, nationwide database with almost 50 years of follow-up of blood donors and transfused patients for a range of health outcomes. We aim to use this database for further studies of donor health, transfusion-associated risks, and transfusion-transmitted disease.

  1. Heterogeneous Biomedical Database Integration Using a Hybrid Strategy: A p53 Cancer Research Database

    Directory of Open Access Journals (Sweden)

    Vadim Y. Bichutskiy

    2006-01-01

    Complex problems in life science research give rise to multidisciplinary collaboration, and hence, to the need for heterogeneous database integration. The tumor suppressor p53 is mutated in close to 50% of human cancers, and a small drug-like molecule with the ability to restore native function to cancerous p53 mutants is a long-held medical goal of cancer treatment. The Cancer Research DataBase (CRDB) was designed in support of a project to find such small molecules. As a cancer informatics project, the CRDB involved small molecule data, computational docking results, functional assays, and protein structure data. As an example of the hybrid strategy for data integration, it combined the mediation and data warehousing approaches. This paper uses the CRDB to illustrate the hybrid strategy as a viable approach to heterogeneous data integration in biomedicine, and provides a design method for those considering similar systems. More efficient data sharing implies increased productivity, and, hopefully, improved chances of success in cancer research. (Code and database schemas are freely downloadable, http://www.igb.uci.edu/research/research.html.)
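As a hedged sketch of the hybrid strategy the paper describes (mediation plus data warehousing), the toy mediator below serves a query by merging a locally warehoused record with data fetched from a wrapped live source at query time; all names and values are invented, not taken from the CRDB:

```python
# Hedged sketch of hybrid integration: frequently used, stable data is
# warehoused locally, while volatile data is fetched through a mediator
# wrapper at query time. All identifiers and data are illustrative.

warehouse = {  # materialized copy of stable data (e.g., compound structures)
    "cmpd42": {"smiles": "c1ccccc1O"},
}

def assay_source(compound_id):
    """Stands in for a wrapped live source of volatile assay results."""
    return {"cmpd42": {"ic50_uM": 3.2}}.get(compound_id, {})

def mediated_query(compound_id):
    """Combine warehoused and live data into one integrated record."""
    record = dict(warehouse.get(compound_id, {}))
    record.update(assay_source(compound_id))  # wrapper call at query time
    return record

print(mediated_query("cmpd42"))
# merges the structure (warehouse) with the assay value (live source)
```

The design choice mirrors the trade-off in the abstract: warehousing gives speed for stable data, mediation gives freshness for data that changes often.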

  2. TIPdb: A Database of Anticancer, Antiplatelet, and Antituberculosis Phytochemicals from Indigenous Plants in Taiwan

    Directory of Open Access Journals (Sweden)

    Ying-Chi Lin

    2013-01-01

    The rich indigenous and endemic plant species of Taiwan are attributed to its unique geographic features. These plants serve as a resourceful bank of biologically active phytochemicals. Given that these plant-derived chemicals are prototypes of potential drugs for diseases, databases connecting the chemical structures and pharmacological activities may facilitate drug development. To enhance the utility of the data, it is desirable to develop a database of chemical compounds and corresponding activities from indigenous plants in Taiwan. A database of anticancer, antiplatelet, and antituberculosis phytochemicals from indigenous plants in Taiwan was constructed. The database, TIPdb, is composed of a standardized format of published anticancer, antiplatelet, and antituberculosis phytochemicals from indigenous plants in Taiwan. A browse function was implemented for users to browse the database in a taxonomy-based manner. Search functions can be utilized to filter records of interest by botanical name, part, chemical class, or compound name. The structured and searchable database TIPdb was constructed to serve as a comprehensive and standardized resource for anticancer, antiplatelet, and antituberculosis compound searches. The manually curated chemical structures and activities provide a great opportunity to develop quantitative structure-activity relationship models for the high-throughput screening of potential anticancer, antiplatelet, and antituberculosis drugs.

  3. TIPdb: a database of anticancer, antiplatelet, and antituberculosis phytochemicals from indigenous plants in Taiwan.

    Science.gov (United States)

    Lin, Ying-Chi; Wang, Chia-Chi; Chen, Ih-Sheng; Jheng, Jhao-Liang; Li, Jih-Heng; Tung, Chun-Wei

    2013-01-01

    The rich indigenous and endemic plant species of Taiwan are attributed to its unique geographic features. These plants serve as a resourceful bank of biologically active phytochemicals. Given that these plant-derived chemicals are prototypes of potential drugs for diseases, databases connecting the chemical structures and pharmacological activities may facilitate drug development. To enhance the utility of the data, it is desirable to develop a database of chemical compounds and corresponding activities from indigenous plants in Taiwan. A database of anticancer, antiplatelet, and antituberculosis phytochemicals from indigenous plants in Taiwan was constructed. The database, TIPdb, is composed of a standardized format of published anticancer, antiplatelet, and antituberculosis phytochemicals from indigenous plants in Taiwan. A browse function was implemented for users to browse the database in a taxonomy-based manner. Search functions can be utilized to filter records of interest by botanical name, part, chemical class, or compound name. The structured and searchable database TIPdb was constructed to serve as a comprehensive and standardized resource for anticancer, antiplatelet, and antituberculosis compound searches. The manually curated chemical structures and activities provide a great opportunity to develop quantitative structure-activity relationship models for the high-throughput screening of potential anticancer, antiplatelet, and antituberculosis drugs.

  4. Ontological interpretation of biomedical database content.

    Science.gov (United States)

    Santana da Silva, Filipe; Jansen, Ludger; Freitas, Fred; Schulz, Stefan

    2017-06-26

    addressed by a method that identifies implicit knowledge behind semantic annotations in biological databases and grounds it in an expressive upper-level ontology. The result is a seamless representation of database structure, content and annotations as OWL models.

  5. Database Description - KAIKOcDNA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    General information: database name KAIKOcDNA; maintainer: National Institute of Agrobiological Sciences (contact: Akiya Jouraku); database classification: Nucleotide Sequence Databases; organism: Bombyx mori (Taxonomy ID: 7091); journal: G3 (Bethesda), Sep. 2013, vol. 9; Web services: not available; need for user registration: not available.

  6. Download - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    First of all, please read the license of this database. Simple search and download are provided, as well as download via FTP. The FTP server is sometimes jammed; if it is, access [here].

  7. License - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    License to Use This Database (last updated: 2017/02/27). This license specifies the terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution-Share Alike 4.0 International. If you use data from this database, please be sure to attribute this database. A summary of the Creative Commons Attribution-Share Alike 4.0 International license is found here.

  8. The EPOS Vision for the Open Science Cloud

    Science.gov (United States)

    Jeffery, Keith; Harrison, Matt; Cocco, Massimo

    2016-04-01

    Cloud computing offers dynamic elastic scalability for data processing on demand. For much research activity, demand for computing is uneven over time and so CLOUD computing offers both cost-effectiveness and capacity advantages. However, as reported repeatedly by the EC Cloud Expert Group, there are barriers to the uptake of Cloud Computing: (1) security and privacy; (2) interoperability (avoidance of lock-in); (3) lack of appropriate systems development environments for application programmers to characterise their applications to allow CLOUD middleware to optimize their deployment and execution. From CERN, the Helix-Nebula group has proposed the architecture for the European Open Science Cloud. They are discussing with other e-Infrastructure groups such as EGI (GRIDs), EUDAT (data curation), AARC (network authentication and authorisation) and also with the EIROFORUM group of 'international treaty' RIs (Research Infrastructures) and the ESFRI (European Strategic Forum for Research Infrastructures) RIs including EPOS. Many of these RIs are either e-RIs (electronic-RIs) or have an e-RI interface for access and use. The EPOS architecture is centred on a portal: ICS (Integrated Core Services). The architectural design already allows for access to e-RIs (which may include any or all of data, software, users and resources such as computers or instruments). Those within any one domain (subject area) of EPOS are considered within the TCS (Thematic Core Services). Those outside, or available across multiple domains of EPOS, are ICS-d (Integrated Core Services-Distributed) since the intention is that they will be used by any or all of the TCS via the ICS. Another such service type is CES (Computational Earth Science); effectively an ICS-d specializing in high performance computation, analytics, simulation or visualization offered by a TCS for others to use. Already discussions are underway between EPOS and EGI, EUDAT, AARC and Helix-Nebula for those offerings to be

  9. Database automation of accelerator operation

    International Nuclear Information System (INIS)

    Casstevens, B.J.; Ludemann, C.A.

    1982-01-01

    The Oak Ridge Isochronous Cyclotron (ORIC) is a variable energy, multiparticle accelerator that produces beams of energetic heavy ions which are used as probes to study the structure of the atomic nucleus. To accelerate and transmit a particular ion at a specified energy to an experimenter's apparatus, the electrical currents in up to 82 magnetic field producing coils must be established to accuracies of from 0.1 to 0.001 percent. Mechanical elements must also be positioned by means of motors or pneumatic drives. A mathematical model of this complex system provides a good approximation of operating parameters required to produce an ion beam. However, manual tuning of the system must be performed to optimize the beam quality. The database system was implemented as an on-line query and retrieval system running at a priority lower than the cyclotron real-time software. It was designed for matching beams recorded in the database with beams specified for experiments. The database is relational and permits searching on ranges of any subset of the eleven beam categorizing attributes. A beam file selected from the database is transmitted to the cyclotron general control software which handles the automatic slewing of power supply currents and motor positions to the file values, thereby replicating the desired parameters
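The range search over beam attributes described above can be illustrated with a modern SQL sketch; the table, columns and values below are invented for the example (the ORIC system itself predates this tooling):

```python
# Sketch of the kind of range search described: find stored beams whose
# attributes fall inside requested windows. Table and column names are
# invented for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE beam (ion TEXT, energy_mev REAL, charge INTEGER)")
con.executemany("INSERT INTO beam VALUES (?, ?, ?)",
                [("C", 120.0, 4), ("O", 96.5, 5), ("Ne", 160.0, 7)])

# Search on ranges of any subset of attributes: here, energy only.
rows = con.execute(
    "SELECT ion, energy_mev FROM beam "
    "WHERE energy_mev BETWEEN ? AND ? ORDER BY energy_mev",
    (90.0, 130.0),
).fetchall()
print(rows)  # [('O', 96.5), ('C', 120.0)]
```

A matched beam record would then be handed to the control software, as the abstract describes, to slew power supplies and motors to the stored values.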

  10. The new ENSDF search system NESSY: IBM/PC nuclear spectroscopy database

    International Nuclear Information System (INIS)

    Boboshin, I.N.; Varlamov, V.V.

    1996-01-01

    The universal relational nuclear structure and decay database NESSY (New ENSDF Search SYstem) developed for the IBM/PC and compatible PCs, and based on the international file ENSDF (Evaluated Nuclear Structure Data File), is described. The NESSY provides the possibility of high efficiency processing (the search and retrieval of any kind of physical data) of the information from ENSDF. The principles of the database development are described and examples of applications are presented. (orig.)

  11. 16th East-European Conference on Advances in Databases and Information Systems (ADBIS 2012)

    CERN Document Server

    Härder, Theo; Wrembel, Robert; Advances in Databases and Information Systems

    2013-01-01

    This volume is the second of two from the 16th East-European Conference on Advances in Databases and Information Systems (ADBIS 2012), held on September 18-21, 2012, in Poznań, Poland; the first was published in the LNCS series. This volume includes 27 research contributions, selected out of 90. The contributions cover a wide spectrum of topics in the database and information systems field, including: database foundations and theory, data modeling and database design, business process modeling, query optimization in relational and object databases, materialized view selection algorithms, index data structures, distributed systems, system and data integration, semi-structured data and databases, semantic data management, information retrieval, data mining techniques, data stream processing, trust and reputation in the Internet, and social networks. Thus, the content of this volume spans the research area from the fundamentals of databases through still-hot research topics (e.g., data mining, XML ...

  12. A study on relational ENSDF databases and online services

    International Nuclear Information System (INIS)

    Fan Tieshuan; Song Xiangxiang; Ye Weiguo; Liu Wenlong; Feng Yuqing; Chen Jinxiang; Tang Guoyou; Shi Zhaoming; Guo Zhiyu; Huang Xiaolong; Liu Tingjin; China Inst. of Atomic Energy, Beijing

    2007-01-01

    A relational ENSDF library software is designed and released. Using relational databases, object-oriented programming and web-based technology, this software offers online data services of a centralized repository of data, including international ENSDF files for nuclear structure and decay data. The software can easily reconstruct nuclear data in original ENSDF format from the relational database. The computer programs providing support for database management and online data services via the Internet are based on the Linux implementation of PHP and the MySQL software, and platform independent in a wider sense. (authors)
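Reconstructing the original fixed-width ENSDF card images from relational rows amounts to padding each field back to its column position. The sketch below shows the idea only; the column layout is invented and is not the actual ENSDF card definition:

```python
# Illustrative sketch of regenerating a fixed-width, card-image record
# format from relational rows. The column widths below are invented for
# the example and are NOT the actual ENSDF card definition.

rows = [  # (nuclide, record_type, energy_kev, uncertainty)
    ("60CO", "L", "0.0", ""),
    ("60CO", "L", "58.603", "7"),
]

def to_card(nuclide, rtype, energy, unc):
    # pad each field to a fixed width, as a card-image format requires
    return f"{nuclide:<5s} {rtype:<1s} {energy:>10s} {unc:>2s}"

cards = [to_card(*r) for r in rows]
for c in cards:
    print(c)
```

Because every card has the same layout, the relational rows and the flat file are interconvertible, which is the property the abstract relies on.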

  13. The Danish (Q)SAR Database Update Project

    DEFF Research Database (Denmark)

    Nikolov, Nikolai Georgiev; Dybdahl, Marianne; Abildgaard Rosenberg, Sine

    2013-01-01

    The Danish (Q)SAR Database is a collection of predictions from quantitative structure-activity relationship ((Q)SAR) models for over 70 environmental and human health-related endpoints (covering biodegradation, metabolism, allergy, irritation, endocrine disruption, teratogenicity, mutagenicity, carcinogenicity and others), each of them available for 185,000 organic substances. The database has been available online since 2005 (http://qsar.food.dtu.dk). A major update project for the Danish (Q)SAR Database is under way, with a new online release planned for the beginning of 2015. The updated version will contain more than 600,000 discrete organic structures and new, more precise predictions for all endpoints, derived by consensus algorithms from a number of state-of-the-art individual predictions.
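The abstract says the new predictions are derived "by consensus algorithms from a number of state-of-the-art individual predictions" without specifying the algorithm; as an assumed illustration only, one minimal such scheme is a majority vote over the individual model calls:

```python
# Hedged sketch of a consensus scheme over individual (Q)SAR model
# predictions. The actual Danish (Q)SAR consensus rules are not given
# in the abstract; majority voting is an assumed, minimal stand-in.

def consensus(predictions):
    """predictions: list of 'POS'/'NEG'/None (None = model inconclusive).
    Returns the majority call, or 'INCONCLUSIVE' on a tie or no data."""
    votes = [p for p in predictions if p is not None]
    pos, neg = votes.count("POS"), votes.count("NEG")
    if pos > neg:
        return "POS"
    if neg > pos:
        return "NEG"
    return "INCONCLUSIVE"

print(consensus(["POS", "POS", "NEG", None]))  # POS
```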

  14. Klaim-DB: A Modeling Language for Distributed Database Applications

    DEFF Research Database (Denmark)

    Wu, Xi; Li, Ximeng; Lluch Lafuente, Alberto

    2015-01-01

    We present the modelling language Klaim-DB for distributed database applications. Klaim-DB borrows the distributed nets of the coordination language Klaim but essentially re-incarnates the tuple spaces of Klaim as databases, and provides high-level language abstractions for the access and manipulation of structured data, with integrity and atomicity considerations. We present the formal semantics of Klaim-DB and illustrate the use of the language in a scenario where the sales from different branches of a chain of department stores are aggregated from their local databases. It can be seen that raising the abstraction level and encapsulating integrity checks (concerning the schema of tables, etc.) in the language primitives for database operations benefits the modelling task considerably.

  15. The detector of BES III muon constructs with the quality control database

    International Nuclear Information System (INIS)

    Yao Ning; Chinese Academy of Sciences, Beijing; Zheng Guoheng; Yang Lei; Zhang Jiawen; Han Jifeng; Xie Yuguang; Zhao Jianbing; Chen Jin

    2006-01-01

    The authors used MySQL, PHP and Apache, chosen for their characteristics, to construct the quality control database. The structure of the BES III muon detector is shown and the reasons for constructing the database are explained, together with the results the database can present. Users can access the system through its web site, which retrieves data on request from the database and displays the results in dynamically created images. The database is the transparent technical support platform for the maintenance of the detector. (authors)

  16. Database Description - AcEST | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    General information: database name AcEST, a database of EST sequences of Adiantum capillus-veneris (Taxonomy ID: 13818); contact: Tel. +81-42-677-1111 (ext. 3654); database maintenance site: Plant Environmental Res...

  17. License - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    License to Use This Database (last updated: 2017/03/13). This license specifies the terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution-Share Alike 4.0 International. If you use data from this database, please be sure to attribute this database. A summary of the Creative Commons Attribution-Share Alike 4.0 International license is found here.

  18. Energy confinement scaling from the international stellarator database

    Energy Technology Data Exchange (ETDEWEB)

    Stroth, U [Max-Planck-Institut fuer Plasmaphysik, Garching (Germany); Murakami, M; Dory, R A; Yamada, H; Okamura, S; Sano, F; Obiki, T

    1995-09-01

    An international stellarator database on global energy confinement is presented comprising data from the ATF, CHS and Heliotron E heliotron/torsatrons and the W7-A and W7-AS shearless stellarators. Regression expressions for the energy confinement time are given for the individual devices and the combined dataset. A comparison with tokamak L mode confinement is discussed on the basis of various scaling expressions. In order to make this database available to interested colleagues, the structure of the database and the parameter list are explained in detail. More recent confinement results incorporating data from enhanced confinement regimes such as H mode are reported elsewhere. (author).
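Global confinement scalings of this kind are power laws obtained by linear regression in log space. The sketch below fits a single-variable power law to synthetic data; real stellarator scalings regress on several engineering parameters at once, so this is an illustration of the method only:

```python
# Power-law regression in log space, the standard form of confinement
# scalings: tau_E = a * P^b. Data here are synthetic, for illustration.
import math

x = [0.2, 0.5, 1.0, 2.0]          # e.g. heating power (arbitrary units)
y = [0.090, 0.047, 0.030, 0.019]  # energy confinement time (synthetic)

lx = [math.log(v) for v in x]
ly = [math.log(v) for v in y]
n = len(x)
mx, my = sum(lx) / n, sum(ly) / n
# least-squares slope/intercept in log space: ln y = ln a + b * ln x
b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
    sum((u - mx) ** 2 for u in lx)
a = math.exp(my - b * mx)
print(f"tau_E ~ {a:.3f} * P^{b:.2f}")  # negative exponent: tau falls with P
```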

  19. A Bayesian Network Schema for Lessening Database Inference

    National Research Council Canada - National Science Library

    Chang, LiWu; Moskowitz, Ira S

    2001-01-01

    The authors introduce a formal schema for database inference analysis, based upon a Bayesian network structure, which identifies critical parameters involved in the inference problem and represents...

  20. An Experimental Investigation of Complexity in Database Query Formulation Tasks

    Science.gov (United States)

    Casterella, Gretchen Irwin; Vijayasarathy, Leo

    2013-01-01

    Information Technology professionals and other knowledge workers rely on their ability to extract data from organizational databases to respond to business questions and support decision making. Structured query language (SQL) is the standard programming language for querying data in relational databases, and SQL skills are in high demand and are…
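A typical query-formulation task of the kind studied, answering a business question that requires a join plus aggregation, can be sketched as follows; the schema and data are invented for illustration:

```python
# Sketch of a query-formulation task: a business question ("total order
# value per region") that needs a join plus aggregation in SQL.
# Schema and data are invented for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (customer_id INTEGER, amount REAL);
    INSERT INTO customer VALUES (1, 'EU'), (2, 'US');
    INSERT INTO orders VALUES (1, 10.0), (1, 5.0), (2, 7.5);
""")

rows = con.execute("""
    SELECT c.region, SUM(o.amount)
    FROM customer c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.region ORDER BY c.region
""").fetchall()
print(rows)  # [('EU', 15.0), ('US', 7.5)]
```

The complexity studied in such experiments comes precisely from combining constructs (joins, grouping, filtering) rather than from any single clause.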

  1. Database Description - FANTOM5 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    General information: database name FANTOM5; organisms include Rattus norvegicus (Taxonomy ID: 10116) and Macaca mulatta (Taxonomy ID: 9544); database maintenance site: RIKEN Center for Life Science Technologies; Web services: not available; need for user registration: not available.

  2. Structure alerts for carcinogenicity, and the Salmonella assay system: a novel insight through the chemical relational databases technology.

    Science.gov (United States)

    Benigni, Romualdo; Bossa, Cecilia

    2008-01-01

    In the past decades, chemical carcinogenicity has been the object of mechanistic studies that have been translated into valuable experimental (e.g., the Salmonella assay system) and theoretical (e.g., compilations of structure alerts (SAs) for chemical carcinogenicity) models. These findings remain the basis of the science and regulation of mutagens and carcinogens. Recent advances in the organization and treatment of large databases consisting of both biological and chemical information nowadays allow a much easier and more refined view of the data. This paper reviews recent analyses of the predictive performance of various lists of structure alerts, including a new compilation of alerts that combines previous work in an optimized form for computer implementation. The revised compilation is part of the Toxtree 1.50 software (freely available from the European Chemicals Bureau website). The use of structural alerts for the chemical-biological profiling of a large database of Salmonella mutagenicity results is also reported. Besides being a repository of the science on the chemical-biological interactions underlying chemical carcinogenicity, the SAs have a crucial role in practical applications for risk assessment: (a) description of sets of chemicals; (b) preliminary hazard characterization; (c) formation of categories for, e.g., regulatory purposes; (d) generation of subsets of congeneric chemicals to be analyzed subsequently with QSAR methods; (e) priority setting. An important aspect of SAs as predictive toxicity tools is that they derive directly from mechanistic knowledge. The crucial role of mechanistic knowledge in applying (Q)SAR considerations to risk assessment should be strongly emphasized. Mechanistic knowledge provides a ground for interaction and dialogue between model developers, toxicologists and regulators, and permits the integration of (Q)SAR results into a wider regulatory framework, where different types of

  3. NoSQL databases

    OpenAIRE

    Mrozek, Jakub

    2012-01-01

    This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...

  4. Design and Establishment of Database on the AtoN Properties for AtoN Simulator

    Directory of Open Access Journals (Sweden)

    Ji-Min Yeo

    2015-12-01

    This study investigated the design and establishment of a database of AtoN (Aids to Navigation) properties for an AtoN simulator. In recent years, IALA (International Association of Lighthouse Authorities) has raised the need for a system that can verify whether an AtoN design and placement plan is appropriate. An AtoN simulator provides a simulated environment including the topographical and environmental characteristics of a primary harbor, the characteristics of a navigating ship, and the maritime traffic. In the present study, we developed an AtoN simulator system, covering the integrated system design and the construction of an AtoN database, together with an AtoN Manager, a virtual AtoN tool for managing the simulation. Because the data and materials are otherwise difficult to manage, the AtoN Manager and the simulation system require an effectively managed AtoN properties database. For the database structure design, we analyzed database design methodology and the AtoN properties themselves; the properties were classified into 19 categories. Status and properties databases for the AtoN installed in 14 major ports were built and applied to the AtoN Manager. The design of the properties database defines the English terms and abbreviations used, to reduce confusion, and maps them onto the properties database structure for each AtoN. Before structuring the database, the AtoN property data for each harbor were organized in Excel and then converted into the AtoN properties database used by the AtoN Manager. The AtoN simulator was implemented with the AtoN Manager applied to this database.

  5. PETROGRAPHY AND APPLICATION OF THE RIETVELD METHOD TO THE QUANTITATIVE ANALYSIS OF PHASES OF NATURAL CLINKER GENERATED BY COAL SPONTANEOUS COMBUSTION

    Directory of Open Access Journals (Sweden)

    Pinilla A. Jesús Andelfo

    2010-06-01

    Fine-grained, mainly reddish, compact, slightly brecciated and vesicular pyrometamorphic rocks (natural clinker) are associated with the spontaneous combustion of coal seams of the Cerrejón Formation, exploited by Carbones del Cerrejón Limited in the La Guajira Peninsula (Caribbean region of Colombia). These rocks are the remaining inorganic materials derived from claystones, mudstones and sandstones originally associated with the coal, and are essentially a complex mixture of various amorphous and crystalline inorganic constituents. In this paper, a petrographic characterization of the natural clinker, as well as a quantitative analysis of its mineral phases by the X-ray diffraction (Rietveld) method, was carried out. The RIQAS program was used for the refinement of the X-ray powder diffraction profiles, analyzing the importance of using the correct isostructural models for each of the phases present, which were obtained from the Inorganic Crystal Structure Database (ICSD). The results obtained in this investigation show that the Rietveld method can be used as a powerful tool for the quantitative analysis of phases in polycrystalline samples, which has been a traditional problem in geology.

  6. Random Walks and Diffusions on Graphs and Databases An Introduction

    CERN Document Server

    Blanchard, Philippe

    2011-01-01

    Most networks and databases that humans have to deal with contain a large, albeit finite, number of units. Their structure, for maintaining functional consistency of the components, is essentially not random and calls for a precise quantitative description of relations between nodes (or data units) and all network components. This book is an introduction, for both graduate students and newcomers to the field, to the theory of graphs and random walks on such graphs. The methods based on random walks and diffusions for exploring the structure of finite connected graphs and databases are reviewed (Markov chain analysis). This provides the necessary basis for consistently discussing a number of applications as diverse as electric resistance networks, estimation of land prices, urban planning, linguistic databases, music, and gene expression regulatory networks.
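A basic instance of the book's topic: on a finite undirected connected graph, a simple random walk visits each node with long-run frequency proportional to its degree (the stationary law of the Markov chain). A small simulation checks this:

```python
# Random walk on a small undirected graph: long-run visit frequency at
# each node should approach degree / (sum of all degrees).
import random

graph = {"a": ["b", "c"], "b": ["a", "c", "d"], "c": ["a", "b"], "d": ["b"]}

random.seed(1)
visits = {v: 0 for v in graph}
node = "a"
steps = 200_000
for _ in range(steps):
    node = random.choice(graph[node])  # step to a uniformly random neighbor
    visits[node] += 1

total_degree = sum(len(nb) for nb in graph.values())  # = 8 here
for v in sorted(graph):
    print(v, round(visits[v] / steps, 3),
          "expected", round(len(graph[v]) / total_degree, 3))
```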

  7. Structuring osteosarcoma knowledge: an osteosarcoma-gene association database based on literature mining and manual annotation.

    Science.gov (United States)

    Poos, Kathrin; Smida, Jan; Nathrath, Michaela; Maugg, Doris; Baumhoer, Daniel; Neumann, Anna; Korsching, Eberhard

    2014-01-01

    Osteosarcoma (OS) is the most common primary bone cancer exhibiting high genomic instability. This genomic instability affects multiple genes and microRNAs to a varying extent depending on patient and tumor subtype. Massive research is ongoing to identify genes including their gene products and microRNAs that correlate with disease progression and might be used as biomarkers for OS. However, the genomic complexity hampers the identification of reliable biomarkers. Up to now, clinico-pathological factors are the key determinants to guide prognosis and therapeutic treatments. Each day, new studies about OS are published and complicate the acquisition of information to support biomarker discovery and therapeutic improvements. Thus, it is necessary to provide a structured and annotated view on the current OS knowledge that is quick and easily accessible to researchers of the field. Therefore, we developed a publicly available database and Web interface that serves as resource for OS-associated genes and microRNAs. Genes and microRNAs were collected using an automated dictionary-based gene recognition procedure followed by manual review and annotation by experts of the field. In total, 911 genes and 81 microRNAs related to 1331 PubMed abstracts were collected (last update: 29 October 2013). Users can evaluate genes and microRNAs according to their potential prognostic and therapeutic impact, the experimental procedures, the sample types, the biological contexts and microRNA target gene interactions. Additionally, a pathway enrichment analysis of the collected genes highlights different aspects of OS progression. OS requires pathways commonly deregulated in cancer but also features OS-specific alterations like deregulated osteoclast differentiation. To our knowledge, this is the first effort of an OS database containing manual reviewed and annotated up-to-date OS knowledge. It might be a useful resource especially for the bone tumor research community, as specific

  8. The Development of a Combined Search for a Heterogeneous Chemistry Database

    Directory of Open Access Journals (Sweden)

    Lulu Jiang

    2015-05-01

    Full Text Available A combined search, which joins a slow molecule structure search with a fast compound property search, results in more accurate search results and has been applied in several chemistry databases. However, the problems of search speed differences and combining the two separate search results are two major challenges. In this paper, two kinds of search strategies, synchronous search and asynchronous search, are proposed to solve these problems in the heterogeneous structure database and the property database found in ChemDB, a chemistry database owned by the Institute of Process Engineering, CAS. Their advantages and disadvantages under different conditions are discussed in detail. Furthermore, we applied these two searches to ChemDB and used them to screen for potential molecules that can work as CO2 absorbents. The results reveal that this combined search discovers reasonable target molecules within an acceptable time frame.
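The two strategies can be illustrated with a toy sketch in Python. Plain threads stand in for ChemDB's heterogeneous back ends; all function names, molecule records and thresholds below are invented for the example, not ChemDB's actual API:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def property_search(db, max_boiling_point):
    """Fast search over scalar compound properties."""
    return {name for name, rec in db.items() if rec["bp"] <= max_boiling_point}

def structure_search(db, substructure):
    """Slow substructure match (delay simulates graph matching)."""
    time.sleep(0.05)
    return {name for name, rec in db.items() if substructure in rec["smiles"]}

def synchronous_combined(db, substructure, max_bp):
    """Synchronous strategy: wait for both searches, then intersect."""
    with ThreadPoolExecutor() as pool:
        f_prop = pool.submit(property_search, db, max_bp)
        f_struct = pool.submit(structure_search, db, substructure)
        return f_prop.result() & f_struct.result()

def asynchronous_combined(db, substructure, max_bp):
    """Asynchronous strategy: yield the fast property hits at once,
    then refine them when the slow structure search completes."""
    with ThreadPoolExecutor() as pool:
        f_struct = pool.submit(structure_search, db, substructure)
        candidates = property_search(db, max_bp)  # available immediately
        yield candidates                          # provisional answer
        yield candidates & f_struct.result()      # final answer

# Invented mini-database of candidate CO2 absorbents.
db = {
    "MEA":  {"smiles": "NCCO", "bp": 170},
    "DEA":  {"smiles": "OCCNCCO", "bp": 271},
    "MDEA": {"smiles": "OCCN(C)CCO", "bp": 247},
}
provisional, final = asynchronous_combined(db, "NCCO", 250)
print(sorted(final))  # molecules passing both searches
```

The synchronous variant gives one consistent answer at the cost of waiting for the slowest search; the asynchronous variant trades consistency for responsiveness, which matches the trade-off the paper discusses.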

  9. Search extension transforms Wiki into a relational system: a case for flavonoid metabolite database.

    Science.gov (United States)

    Arita, Masanori; Suwa, Kazuhiro

    2008-09-17

    In computer science, database systems are based on the relational model founded by Edgar Codd in 1970. On the other hand, in the area of biology the word 'database' often refers to loosely formatted, very large text files. Although such bio-databases may describe conflicts or ambiguities (e.g. a protein pair that both does and does not interact, or unknown parameters) in a positive sense, the flexibility of the data format sacrifices a systematic query mechanism equivalent to the widely used SQL. To overcome this disadvantage, we propose embeddable string-search commands on a Wiki-based system and designed a half-formatted database. As proof of principle, a database of flavonoids with 6902 molecular structures from over 1687 plant species was implemented on MediaWiki, the background system of Wikipedia. Registered users can describe any information in an arbitrary format. The structured part is subject to text-string searches to realize relational operations. The system was written in the PHP language as an extension of MediaWiki. All modifications are open-source and publicly available. This scheme benefits from both the free-formatted Wiki style and the concise, structured relational-database style. MediaWiki supports multi-user environments for document management, and the cost of database maintenance is alleviated.
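A minimal sketch of the half-formatted idea: pages mix free text with structured `| field = value` lines, and a text-string search over the structured part behaves like a relational selection. The page format, regular expression and data below are invented for illustration, not the paper's actual MediaWiki extension:

```python
import re

# Invented half-formatted pages: prose anywhere, structured fields on
# lines of the form "| field = value".
PAGES = {
    "Quercetin": """Free-form notes may go anywhere.
| class = flavonol
| species = Allium cepa
More prose here.""",
    "Naringenin": """| class = flavanone
| species = Citrus paradisi""",
}

FIELD = re.compile(r"^\|\s*(\w+)\s*=\s*(.+?)\s*$", re.MULTILINE)

def as_relation(pages):
    """Extract the structured part of each page as one tuple (row)."""
    return {title: dict(FIELD.findall(text)) for title, text in pages.items()}

def select(relation, **conditions):
    """Relational selection realized as exact text-string matches."""
    return [title for title, row in relation.items()
            if all(row.get(k) == v for k, v in conditions.items())]

rel = as_relation(PAGES)
print(select(rel, **{"class": "flavonol"}))
```

Because the free text is simply ignored by the extraction step, users keep the flexibility of the Wiki format while queries see only the structured rows.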

  10. An update of the DEF database of protein fold class predictions

    DEFF Research Database (Denmark)

    Reczko, Martin; Karras, Dimitris; Bohr, Henrik

    1997-01-01

    An update is given on the Database of Expected Fold classes (DEF), which contains a collection of fold-class predictions made from protein sequences and a mail server that provides new predictions for new sequences. For any given sequence, one of 49 fold classes is chosen to classify the structure related to the sequence with high accuracy. The updated prediction system was developed using data from the new version of the 3D-ALI database of aligned protein structures and thus gives more reliable and more detailed predictions than the previous DEF system.

  11. Wireless Sensor Networks Database: Data Management and Implementation

    Directory of Open Access Journals (Sweden)

    Ping Liu

    2014-04-01

    Full Text Available As the core application of wireless sensor network technology, data management and processing have become a research hotspot in new database technology. This article mainly studies data management in wireless sensor networks: in view of the characteristics of the data in wireless sensor networks, it discusses data query and integration technology in depth, proposes a mobile database structure based on wireless sensor networks and carries out the overall design and implementation of the data management system. In order to implement the communication rules of the routing trees described above, the network manager uses a simple routing-tree maintenance algorithm. The system is implemented by designing the ordinary-node end, the server end of the mobile database at the gathering nodes and the mobile client end, with the focus on the design of the query manager, the storage module and the synchronization module at the server end of the mobile database at the gathering nodes.

  12. Database Description - DMPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name DMPD Alternative nam...e Dynamic Macrophage Pathway CSML Database DOI 10.18908/lsdba.nbdc00558-000 Creator Creator Name: Masao Naga...ty of Tokyo 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639 Tel: +81-3-5449-5615 FAX: +83-3-5449-5442 E-mail: Database...606 Taxonomy Name: Mammalia Taxonomy ID: 40674 Database description DMPD collects...e(s) Article title: Author name(s): Journal: External Links: Original website information Database maintenan

  13. Database Dump - fRNAdb | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us fRNAdb Database Dump Data detail Data name Database Dump DOI 10.18908/lsdba.nbdc00452-002 De... data (tab separated text) Data file File name: Database_Dump File URL: ftp://ftp....biosciencedbc.jp/archive/frnadb/LATEST/Database_Dump File size: 673 MB Simple search URL - Data acquisition...s. Data analysis method - Number of data entries 4 files - About This Database Database Description Download... License Update History of This Database Site Policy | Contact Us Database Dump - fRNAdb | LSDB Archive ...

  14. Development of a road materials database: Interim Report: 1st draft

    CSIR Research Space (South Africa)

    Jones, DJ

    2003-03-01

    Full Text Available The data required for a road materials database have been identified and a basic structure formulated. A set of Excel workbooks and associated worksheets has been developed as a foundation for the database and as an interim means of gathering test...

  15. Axiomatic Specification of Database Domain Statics

    NARCIS (Netherlands)

    Wieringa, Roelf J.

    1987-01-01

    In the past ten years, much work has been done to add more structure to database models than what is represented by a mere collection of flat relations (Albano & Cardelli [1985], Albano et al. [1986], Borgida et al. [1984], Brodie [1984], Brodie & Ridjanovic [1984], Brodie & Silva [1982], Codd

  16. Construction of image database for newspaper articles using CTS

    Science.gov (United States)

    Kamio, Tatsuo

    Nihon Keizai Shimbun, Inc. developed a system that builds an image database of articles automatically by use of CTS (Computer Typesetting System). Besides the articles and headlines input into CTS, it reproduces the images of elements such as photographs and graphs for each article in accordance with positional information on the page. So to speak, the computer itself clips the articles out of the newspaper. The image database is accumulated in magnetic and optical files and is output to users' facsimile machines. With the diffusion of CTS, the number of newspaper companies beginning to build article databases is increasing rapidly, and the said system is the first attempt to create such a database automatically. This paper describes the CTS equipment that supports this system and gives an outline of it.

  17. Database on nuclide content of coal and gangue in Chinese coal mines

    International Nuclear Information System (INIS)

    Liu Fudong; Liao Haitao; Wang Chunhong; Chen Ling; Liu Senlin

    2006-01-01

    The design ideas, structure, interface and basic functions of a database of nuclide content in coal and gangue from Chinese coal mines are introduced. The database is built on the Sybase database system and provides functions for keyword queries, classification and statistics, printing and data input, implemented in the PowerBuilder language. At present the database holds radioactivity data for the natural radionuclides of 2043 samples of coal, gangue and other related materials from coal mines all over the country. The database will provide basic data for the environmental impact assessment of Chinese coal energy. (authors)

  18. A Magnetic Petrology Database for Satellite Magnetic Anomaly Interpretations

    Science.gov (United States)

    Nazarova, K.; Wasilewski, P.; Didenko, A.; Genshaft, Y.; Pashkevich, I.

    2002-05-01

    A Magnetic Petrology Database (MPDB) is now being compiled at NASA/Goddard Space Flight Center in cooperation with Russian and Ukrainian institutions. The purpose of this database is to provide the geomagnetic community with a comprehensive and user-friendly method of accessing magnetic petrology data via the Internet for more realistic interpretation of satellite magnetic anomalies. Magnetic petrology data have been accumulated at NASA/Goddard Space Flight Center, the United Institute of Physics of the Earth (Russia) and the Institute of Geophysics (Ukraine) over several decades and now consist of many thousands of records in our archives. The MPDB was, and continues to be, in great demand, especially since the recent launch into near-Earth orbit of the mini-constellation of three satellites, Oersted (1999), Champ (2000) and SAC-C (2000), which will provide lithospheric magnetic maps with better spatial and amplitude resolution (about 1 nT). The MPDB is focused on lower crustal and upper mantle rocks and will include data on mantle xenoliths, serpentinized ultramafic rocks, granulites, iron quartzites and rocks from Archean-Proterozoic metamorphic sequences from all around the world. A substantial amount of data comes from the area of the unique Kursk Magnetic Anomaly and the Kola Deep Borehole (which recovered 12 km of continental crust). A prototype MPDB can be found on the Geodynamics Branch web server of Goddard Space Flight Center at http://core2.gsfc.nasa.gov/terr_mag/magnpetr.html. The MPDB employs a searchable relational design and consists of 7 interrelated tables. The schema of the database is shown at http://core2.gsfc.nasa.gov/terr_mag/doc.html. A MySQL database server was used to implement the MPDB, and SQL (Structured Query Language) is used to query it. To present query results on the Web we used the PHP scripting language and CGI scripts. The prototype MPDB is designed to search the database by major satellite magnetic
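The kind of relational query such an interface issues can be sketched with SQLite standing in for the MySQL server. The table, columns and values below are invented for illustration; the real MPDB uses 7 interrelated tables:

```python
import sqlite3

# One simplified table of rock samples with two magnetic properties.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE sample (
    id INTEGER PRIMARY KEY,
    rock_type TEXT,
    locality TEXT,
    susceptibility REAL,  -- SI volume susceptibility (invented values)
    nrm REAL              -- natural remanent magnetization, A/m
)""")
con.executemany(
    "INSERT INTO sample (rock_type, locality, susceptibility, nrm) "
    "VALUES (?, ?, ?, ?)",
    [("iron quartzite", "Kursk Magnetic Anomaly", 0.9, 40.0),
     ("granulite", "Kola Deep Borehole", 0.02, 1.2),
     ("serpentinized ultramafic", "Kola Deep Borehole", 0.1, 5.0)])

# Typical anomaly-interpretation query: strongly magnetic samples first.
rows = con.execute(
    "SELECT rock_type, locality FROM sample "
    "WHERE nrm > ? ORDER BY nrm DESC", (2.0,)).fetchall()
print(rows)
```

A web front end (PHP in the real system) would render such result sets, but the relational design is what makes the search reproducible and composable.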

  19. DOT Online Database

    Science.gov (United States)

    Online document database of DOT Advisory Circulars with full-text search; document database website provided by MicroSearch.

  20. DPTEdb, an integrative database of transposable elements in dioecious plants.

    Science.gov (United States)

    Li, Shu-Fen; Zhang, Guo-Jun; Zhang, Xue-Jin; Yuan, Jin-Hong; Deng, Chuan-Liang; Gu, Lian-Feng; Gao, Wu-Jun

    2016-01-01

    Dioecious plants usually harbor 'young' sex chromosomes, providing an opportunity to study the early stages of sex chromosome evolution. Transposable elements (TEs) are mobile DNA elements frequently found in plants and are suggested to play important roles in plant sex chromosome evolution. The genomes of several dioecious plants have been sequenced, offering an opportunity to annotate and mine the TE data. However, comprehensive and unified annotation of TEs in these dioecious plants is still lacking. In this study, we constructed a dioecious plant transposable element database (DPTEdb). DPTEdb is a specific, comprehensive and unified relational database and web interface. We used a combination of de novo, structure-based and homology-based approaches to identify TEs from the genome assemblies of previously published data, as well as our own. The database currently integrates eight dioecious plant species and a total of 31 340 TEs along with classification information. DPTEdb provides user-friendly web interfaces to browse, search and download the TE sequences in the database. Users can also use tools, including BLAST, GetORF, HMMER, Cut sequence and JBrowse, to analyze TE data. Given the role of TEs in plant sex chromosome evolution, the database will contribute to the investigation of TEs in the structural, functional and evolutionary dynamics of the genomes of dioecious plants. In addition, the database will supplement research on sex diversification and sex chromosome evolution in dioecious plants. Database URL: http://genedenovoweb.ticp.net:81/DPTEdb/index.php. © The Author(s) 2016. Published by Oxford University Press.

  1. NNDC database migration project

    Energy Technology Data Exchange (ETDEWEB)

    Burrows, Thomas W; Dunford, Charles L [U.S. Department of Energy, Brookhaven Science Associates (United States)

    2004-03-01

    NNDC Database Migration was necessary to replace obsolete hardware and software, to be compatible with the industry standard in relational databases (mature software, a large base of supporting software for administration and dissemination, and replication and synchronization tools) and to improve user access in terms of interface and speed. The Relational Database Management System (RDBMS) consists of a Sybase Adaptive Server Enterprise (ASE), which is relatively easy to move to different RDB systems (e.g., MySQL, MS SQL-Server, or MS Access), the Structured Query Language (SQL) and administrative tools written in Java. Linux or UNIX platforms can be used. The existing ENSDF datasets are often very large and will need to be reworked, and both the CRP (adopted) and CRP (Budapest) datasets give elemental cross sections (not relative Iγ) in the RI field, so it is not immediately obvious which of the old values has been changed. But primary and secondary intensities are now available on the same scale; the intensity normalization has been done for us. We will gain access to a large volume of data from Budapest, and some of those gamma-ray intensity and energy data will be superior to what we already have.

  2. Database on radioecological situation in Semipalatinsk nuclear test site

    International Nuclear Information System (INIS)

    Turkebaev, T.Eh.; Kislitsin, S.B.; Lopuga, A.D.; Kuketaev, A.T.; Kikkarin, S.M.

    1999-01-01

    One of the main objectives of the National Nuclear Center of the Republic of Kazakstan is to characterize the radioecological situation in detail, conduct continuous monitoring and eliminate the consequences of nuclear explosions at the Semipalatinsk nuclear test site. Investigations of the contamination of the Semipalatinsk test site area by radioactive substances, together with remediation activities, are the reasons for developing a computer database on the radioecological situation of the test site area, which will allow arranging and processing the available and incoming information about the radioecological situation and assessing the effect of different testing factors on the environment and on the health of the population of the Semipalatinsk test site area. The described conception of the database on the radioecological situation of the Semipalatinsk test site area cannot be considered final. As new information arrives, the structure and content of the database are updated and optimized. New capabilities and structural elements may be provided if new aspects arise in the study of Semipalatinsk test site area contamination (air environment studies, radionuclide migration).

  3. High Performance Descriptive Semantic Analysis of Semantic Graph Databases

    Energy Technology Data Exchange (ETDEWEB)

    Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan; Feo, John T.; Haglin, David J.; Mackey, Greg E.; Mizell, David W.

    2011-06-02

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization and optimization. But data size and computational complexity make it increasingly necessary to bring high performance computational resources to bear on this task. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing, utilizing the large multi-threaded architecture of the Cray XMT platform, conventional servers and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction and typed paths for the Billion Triple Challenge 2010 dataset.

  4. Database structure for plasma modeling programs

    International Nuclear Information System (INIS)

    Dufresne, M.; Silvester, P.P.

    1993-01-01

    Continuum plasma models often use a finite element (FE) formulation. Another approach is simulation models based on particle-in-cell (PIC) formulation. The model equations generally include four nonlinear differential equations specifying the plasma parameters. In simulation a large number of equations must be integrated iteratively to determine the plasma evolution from an initial state. The complexity of the resulting programs is a combination of the physics involved and the numerical method used. The data structure requirements of plasma programs are stated by defining suitable abstract data types. These abstractions are then reduced to data structures and a group of associated algorithms. These are implemented in an object oriented language (C++) as object classes. Base classes encapsulate data management into a group of common functions such as input-output management, instance variable updating and selection of objects by Boolean operations on their instance variables. Operations are thereby isolated from specific element types and uniformity of treatment is guaranteed. Creation of the data structures and associated functions for a particular plasma model is reduced merely to defining the finite element matrices for each equation, or the equations of motion for PIC models. Changes in numerical method or equation alterations are readily accommodated through the mechanism of inheritance, without modification of the data management software. The central data type is an n-relation implemented as a tuple of variable internal structure. Any finite element program may be described in terms of five relational tables: nodes, boundary conditions, sources, material/particle descriptions, and elements. Equivalently, plasma simulation programs may be described using four relational tables: cells, boundary conditions, sources, and particle descriptions
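The data-structure idea described above can be sketched in Python rather than the paper's C++ (all class and field names here are invented): a base class encapsulates common data management, such as selecting objects by Boolean predicates on their instance variables, so specific record types only declare their fields:

```python
from dataclasses import dataclass

class Record:
    @staticmethod
    def select(table, predicate):
        """Uniform selection by a Boolean predicate on instance
        variables, isolated from the specific record type."""
        return [row for row in table if predicate(row)]

@dataclass
class Node(Record):
    id: int
    x: float
    y: float

@dataclass
class BoundaryCondition(Record):
    node_id: int
    kind: str    # e.g. "dirichlet" or "neumann"
    value: float

# Two of the five relational tables a finite element model is built
# from (nodes, boundary conditions, sources, materials, elements).
nodes = [Node(1, 0.0, 0.0), Node(2, 1.0, 0.0), Node(3, 1.0, 1.0)]
bcs = [BoundaryCondition(1, "dirichlet", 0.0),
       BoundaryCondition(3, "neumann", 2.5)]

fixed = Record.select(bcs, lambda b: b.kind == "dirichlet")
print([b.node_id for b in fixed])  # nodes held at fixed potential
```

Because selection lives in the base class, adding a new equation or element type means defining one more record class, without touching the data management code, which mirrors the inheritance mechanism the abstract describes.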

  5. A database of immunoglobulins with integrated tools: DIGIT.

    KAUST Repository

    Chailyan, Anna; Tramontano, Anna; Marcatili, Paolo

    2011-01-01

    The DIGIT (Database of ImmunoGlobulins with Integrated Tools) database (http://biocomputing.it/digit) is an integrated resource storing sequences of annotated immunoglobulin variable domains and enriched with tools for searching and analyzing them. The annotations in the database include information on the type of antigen, the respective germline sequences and on pairing information between light and heavy chains. Other annotations, such as the identification of the complementarity determining regions, assignment of their structural class and identification of mutations with respect to the germline, are computed on the fly and can also be obtained for user-submitted sequences. The system allows customized BLAST searches and automatic building of 3D models of the domains to be performed.


  7. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  8. Database Description - eSOL | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name eSOL Alternative nam...eator Affiliation: The Research and Development of Biological Databases Project, National Institute of Genet...nology 4259 Nagatsuta-cho, Midori-ku, Yokohama, Kanagawa 226-8501 Japan Email: Tel.: +81-45-924-5785 Database... classification Protein sequence databases - Protein properties Organism Taxonomy Name: Escherichia coli Taxonomy ID: 562 Database...i U S A. 2009 Mar 17;106(11):4201-6. External Links: Original website information Database maintenance site

  9. Applying AN Object-Oriented Database Model to a Scientific Database Problem: Managing Experimental Data at Cebaf.

    Science.gov (United States)

    Ehlmann, Bryon K.

    Current scientific experiments are often characterized by massive amounts of very complex data and the need for complex data analysis software. Object-oriented database (OODB) systems have the potential of improving the description of the structure and semantics of this data and of integrating the analysis software with the data. This dissertation results from research to enhance OODB functionality and methodology to support scientific databases (SDBs) and, more specifically, to support a nuclear physics experiments database for the Continuous Electron Beam Accelerator Facility (CEBAF). This research to date has identified a number of problems related to the practical application of OODB technology to the conceptual design of the CEBAF experiments database and other SDBs: the lack of a generally accepted OODB design methodology, the lack of a standard OODB model, the lack of a clear conceptual level in existing OODB models, and the limited support in existing OODB systems for many common object relationships inherent in SDBs. To address these problems, the dissertation describes an Object-Relationship Diagram (ORD) and an Object-oriented Database Definition Language (ODDL) that provide tools that allow SDB design and development to proceed systematically and independently of existing OODB systems. These tools define multi-level, conceptual data models for SDB design, which incorporate a simple notation for describing common types of relationships that occur in SDBs. ODDL allows these relationships and other desirable SDB capabilities to be supported by an extended OODB system. A conceptual model of the CEBAF experiments database is presented in terms of ORDs and the ODDL to demonstrate their functionality and use and provide a foundation for future development of experimental nuclear physics software using an OODB approach.

  10. "Mr. Database" : Jim Gray and the History of Database Technologies.

    Science.gov (United States)

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e.g. leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  11. Mathematics for Databases

    NARCIS (Netherlands)

    ir. Sander van Laar

    2007-01-01

    A formal description of a database consists of the description of the relations (tables) of the database together with the constraints that must hold on the database. Furthermore the contents of a database can be retrieved using queries. These constraints and queries for databases can very well be
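This view of a database as relations plus constraints plus queries can be made concrete with a toy sketch (the relation, constraint and query below are invented for illustration):

```python
# A relation as a list of tuples (rows).
employees = [
    {"id": 1, "name": "Ada", "dept": "R&D"},
    {"id": 2, "name": "Grace", "dept": "Systems"},
]

def key_constraint(relation, key):
    """Integrity constraint: attribute `key` is unique across tuples.
    A database state is valid only if this predicate holds."""
    values = [row[key] for row in relation]
    return len(values) == len(set(values))

def query(relation, predicate):
    """A query retrieves contents: here a selection followed by a
    projection onto the name attribute."""
    return [row["name"] for row in relation if predicate(row)]

assert key_constraint(employees, "id")  # the constraint holds
print(query(employees, lambda r: r["dept"] == "R&D"))
```

The constraint is a predicate on database states, while the query is a function from states to answers, which is exactly the separation the formal description relies on.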

  12. Automatic treatment planning implementation using a database of previously treated patients

    International Nuclear Information System (INIS)

    Moore, J A; Evans, K; Yang, W; Herman, J; McNutt, T

    2014-01-01

    Purpose: Using a database of previously treated patients, it is possible to predict the dose to critical structures for future patients. Automatic treatment planning speeds the planning process by generating a good initial plan from predicted dose values. Methods: A SQL relational database of previously approved treatment plans is populated via an automated export from Pinnacle³. This script outputs dose and machine information and selected Regions of Interest (ROIs), together with the associated Dose-Volume Histograms (DVHs) and Overlap Volume Histograms (OVHs) with respect to the target structures. Toxicity information is exported from Mosaiq and added to the database for each patient. The SQL query is designed to ask the system for the lowest achievable dose to a specified ROI over all prior patients for whom a given volume of that ROI was as close to the target as, or closer than, in the current patient. Results: The additional time needed to calculate OVHs is approximately 1.5 minutes for a typical patient. Database lookup of planning objectives takes approximately 4 seconds. The combined additional time is less than that of a typical single plan optimization (2.5 minutes). Conclusions: An automatic treatment planning interface has been successfully used by dosimetrists to quickly produce a number of SBRT pancreas treatment plans. The database can be used to compare the dose to individual structures with the toxicity experienced, and to predict toxicities before planning for future patients.
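The lookup described can be sketched as a single SQL query, with SQLite standing in for the production database. The schema, the reduction of the OVH to a single distance-to-target scalar, and the dose values are all invented simplifications for illustration:

```python
import sqlite3

# Each row is one prior approved plan's result for one ROI.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE prior_plan (
    patient TEXT,
    roi TEXT,
    distance_to_target REAL,  -- scalar summary of the OVH, cm
    mean_dose REAL            -- achieved mean dose, Gy
)""")
con.executemany("INSERT INTO prior_plan VALUES (?, ?, ?, ?)", [
    ("p1", "duodenum", 0.4, 18.0),
    ("p2", "duodenum", 0.9, 12.5),
    ("p3", "duodenum", 1.5, 9.0),
])

def predicted_dose(roi, new_patient_distance):
    """Lowest dose achieved in any prior plan whose ROI was as close
    to the target as, or closer than, the new patient's ROI (i.e. a
    geometrically harder or equal case)."""
    row = con.execute(
        "SELECT MIN(mean_dose) FROM prior_plan "
        "WHERE roi = ? AND distance_to_target <= ?",
        (roi, new_patient_distance)).fetchone()
    return row[0]

print(predicted_dose("duodenum", 1.0))
```

If a prior patient with an equal or harder geometry achieved a given dose, that dose should be achievable for the new patient too, which is why the query filters on `distance_to_target <=` before taking the minimum.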

  13. SU-E-T-544: A Radiation Oncology-Specific Multi-Institutional Federated Database: Initial Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Hendrickson, K; Phillips, M; Fishburn, M; Evans, K; Banerian, S; Mayr, N [University of Washington, Seattle, WA (United States); Wong, J; McNutt, T; Moore, J; Robertson, S [Johns Hopkins University, Baltimore, MD (United States)

    2014-06-01

    Purpose: To implement a common database structure and user-friendly web-browser based data collection tools across several medical institutions to better support evidence-based clinical decision making and comparative effectiveness research through shared outcomes data. Methods: A consortium of four academic medical centers agreed to implement a federated database, known as Oncospace. Initial implementation has addressed differences between institutions in workflow and in the types and breadth of structured information captured. This requires coordination of data collection from departmental oncology information systems (OIS), treatment planning systems and hospital electronic medical records in order to include as much as possible of the multi-disciplinary clinical data associated with a patient's care. Results: The original database schema was well-designed and required only minor changes to meet institution-specific data requirements. Mobile browser interfaces for data entry and review for both the OIS and the Oncospace database were tailored to the workflow of individual institutions. Federation of database queries, the ultimate goal of the project, was tested using artificial patient data. The tests serve as proof-of-principle that the system as a whole, from data collection and entry to providing responses to research queries of the federated database, was viable. The resolution of inter-institutional use of patient data for research is still not completed. Conclusions: The migration from unstructured data, mainly in the form of notes and documents, to searchable, structured data is difficult. Making the transition requires the cooperation of many groups within the department and can be greatly facilitated by using the structured data to improve clinical processes and workflow. The original database schema design is critical to providing enough flexibility for multi-institutional use to improve each institution's ability to study outcomes, determine best practices

  14. SU-E-T-544: A Radiation Oncology-Specific Multi-Institutional Federated Database: Initial Implementation

    International Nuclear Information System (INIS)

    Hendrickson, K; Phillips, M; Fishburn, M; Evans, K; Banerian, S; Mayr, N; Wong, J; McNutt, T; Moore, J; Robertson, S

    2014-01-01

Purpose: To implement a common database structure and user-friendly web-browser-based data collection tools across several medical institutions to better support evidence-based clinical decision making and comparative effectiveness research through shared outcomes data. Methods: A consortium of four academic medical centers agreed to implement a federated database, known as Oncospace. Initial implementation has addressed issues of differences between institutions in workflow and in the types and breadth of structured information captured. This requires coordination of data collection from departmental oncology information systems (OIS), treatment planning systems, and hospital electronic medical records in order to include as much as possible of the multi-disciplinary clinical data associated with a patient's care. Results: The original database schema was well designed and required only minor changes to meet institution-specific data requirements. Mobile browser interfaces for data entry and review for both the OIS and the Oncospace database were tailored to the workflow of individual institutions. Federation of database queries--the ultimate goal of the project--was tested using artificial patient data. The tests serve as proof of principle that the system as a whole--from data collection and entry to providing responses to research queries of the federated database--is viable. The resolution of inter-institutional use of patient data for research is still not complete. Conclusions: The migration from unstructured data, mainly in the form of notes and documents, to searchable, structured data is difficult. Making the transition requires the cooperation of many groups within the department and can be greatly facilitated by using the structured data to improve clinical processes and workflow.
The original database schema design is critical to providing enough flexibility for multi-institutional use, allowing each institution to improve its ability to study outcomes, determine best practices
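The federated-query test described in this abstract can be sketched as follows. This is an illustrative stand-in using SQLite in-memory databases as the per-institution stores; the `outcomes` table, its columns, and the site names are hypothetical, not the actual Oncospace schema.

```python
import sqlite3

# Hypothetical common schema shared by all institutions.
SCHEMA = "CREATE TABLE outcomes (patient_id TEXT, site TEXT, toxicity_grade INTEGER)"

def make_site_db(rows):
    """Create an in-memory database standing in for one institution."""
    conn = sqlite3.connect(":memory:")
    conn.execute(SCHEMA)
    conn.executemany("INSERT INTO outcomes VALUES (?, ?, ?)", rows)
    return conn

def federated_query(conns, sql, params=()):
    """Run the same query at every site and concatenate the results."""
    results = []
    for conn in conns:
        results.extend(conn.execute(sql, params).fetchall())
    return results

sites = [
    make_site_db([("a1", "A", 2), ("a2", "A", 3)]),
    make_site_db([("b1", "B", 1)]),
]
high_tox = federated_query(
    sites,
    "SELECT patient_id FROM outcomes WHERE toxicity_grade >= ?", (2,))
print(len(high_tox))  # 2 patients across both institutions
```

Because every site exposes the same schema, the coordinator never needs site-specific query logic; only the connection list differs per deployment.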

  15. [A web-based integrated clinical database for laryngeal cancer].

    Science.gov (United States)

    E, Qimin; Liu, Jialin; Li, Yong; Liang, Chuanyu

    2014-08-01

To establish an integrated database for laryngeal cancer and to provide an information platform for clinical and fundamental research on laryngeal cancer, meeting the needs of both clinical and scientific use. Under the guidance of clinical experts, we constructed a web-based integrated clinical database for laryngeal carcinoma on the basis of clinical data standards, Apache+PHP+MySQL technology, laryngeal cancer specialty characteristics, and tumor genetic information. A web-based integrated clinical database for laryngeal carcinoma was developed. This database has a user-friendly interface, and data can be entered and queried conveniently. In addition, the system uses clinical data standards and exchanges information with the existing electronic medical records system to avoid information silos. Furthermore, the database forms integrate laryngeal cancer specialty characteristics and tumor genetic information. The web-based integrated clinical database for laryngeal carcinoma offers comprehensive specialty information, strong expandability, and high technical feasibility, and conforms to the clinical characteristics of the laryngeal cancer specialty. By using clinical data standards and structured handling of clinical data, the database can better meet the needs of scientific research and facilitate information exchange, and the information collected about tumor patients is highly informative. In addition, users can access and manipulate the database conveniently and swiftly over the Internet.

  16. SPIRE Data-Base Management System

    Science.gov (United States)

    Fuechsel, C. F.

    1984-01-01

    Spacelab Payload Integration and Rocket Experiment (SPIRE) data-base management system (DBMS) based on relational model of data bases. Data bases typically used for engineering and mission analysis tasks and, unlike most commercially available systems, allow data items and data structures stored in forms suitable for direct analytical computation. SPIRE DBMS designed to support data requests from interactive users as well as applications programs.

  17. The GEM Global Active Faults Database: The growth and synthesis of a worldwide database of active structures for PSHA, research, and education

    Science.gov (United States)

    Styron, R. H.; Garcia, J.; Pagani, M.

    2017-12-01

A global catalog of active faults is a resource of value to a wide swath of the geoscience, earthquake engineering, and hazards risk communities. Though construction of such a dataset has been attempted now and again through the past few decades, success has been elusive. The Global Earthquake Model (GEM) Foundation has been working on this problem, as a fundamental step in its goal of making a global seismic hazard model. Progress on the assembly of the database is rapid, with the concatenation of many national-, orogen-, and continental-scale datasets produced by different research groups throughout the years. However, substantial data gaps exist throughout much of the deforming world, requiring new mapping based on existing publications as well as consideration of seismicity, geodesy and remote sensing data. Thus far, new fault datasets have been created for the Caribbean and Central America, North Africa, and northeastern Asia, with Madagascar, Canada and a few other regions in the queue. The second major task, as formidable as the initial data concatenation, is the 'harmonization' of data. This entails the removal or recombination of duplicated structures, reconciliation of contrasting interpretations in areas of overlap, and the synthesis of many different types of attributes or metadata into a consistent whole. In a project of this scale, the methods used in the database construction are as critical to project success as the data themselves. After some experimentation, we have settled on an iterative methodology that involves rapid accumulation of data followed by successive episodes of data revision, and a computer-scripted data assembly using GIS file formats that is flexible, reproducible, and as able as possible to cope with updates to the constituent datasets. We find that this approach of initially maximizing coverage and then increasing resolution is the most robust to regional data problems and the most amenable to continued updates and

  18. The Nuclear Science References (NSR) database and Web Retrieval System

    International Nuclear Information System (INIS)

    Pritychenko, B.; Betak, E.; Kellett, M.A.; Singh, B.; Totans, J.

    2011-01-01

    The Nuclear Science References (NSR) database together with its associated Web interface is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 200,000 articles since the beginning of nuclear science. The weekly updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center (http://www.nndc.bnl.gov/nsr) and the International Atomic Energy Agency (http://www-nds.iaea.org/nsr).

  19. [Design and establishment of modern literature database about acupuncture Deqi].

    Science.gov (United States)

    Guo, Zheng-rong; Qian, Gui-feng; Pan, Qiu-yin; Wang, Yang; Xin, Si-yuan; Li, Jing; Hao, Jie; Hu, Ni-juan; Zhu, Jiang; Ma, Liang-xiao

    2015-02-01

A search on acupuncture Deqi was conducted in four Chinese-language biomedical databases (CNKI, Wan-Fang, VIP and CBM) and the PubMed database, using the keywords "Deqi", "needle sensation", "needling feeling", "needle feel", "obtaining qi", etc. A "Modern Literature Database for Acupuncture Deqi" was then established using Microsoft SQL Server 2005 Express Edition, with defined contents, data types, information structure and logical constraints for the system table fields. From this database, detailed inquiries can be made about general information on clinical trials, acupuncturists' experience, ancient medical works, comprehensive literature, etc. The present databank lays a foundation for subsequent evaluation of literature quality regarding Deqi and for data mining of as-yet-undetected Deqi knowledge.

  20. Database reliability engineering designing and operating resilient database systems

    CERN Document Server

    Campbell, Laine

    2018-01-01

    The infrastructure-as-code revolution in IT is also affecting database administration. With this practical book, developers, system administrators, and junior to mid-level DBAs will learn how the modern practice of site reliability engineering applies to the craft of database architecture and operations. Authors Laine Campbell and Charity Majors provide a framework for professionals looking to join the ranks of today’s database reliability engineers (DBRE). You’ll begin by exploring core operational concepts that DBREs need to master. Then you’ll examine a wide range of database persistence options, including how to implement key technologies to provide resilient, scalable, and performant data storage and retrieval. With a firm foundation in database reliability engineering, you’ll be ready to dive into the architecture and operations of any modern database. This book covers: Service-level requirements and risk management Building and evolving an architecture for operational visibility ...

  1. From document to database: modernizing requirements management

    International Nuclear Information System (INIS)

    Giajnorio, J.; Hamilton, S.

    2007-01-01

The creation, communication, and management of design requirements are central to the successful completion of any large engineering project, both technically and commercially. Design requirements in the Canadian nuclear industry are typically numbered lists in multiple documents created using word processing software. As an alternative, GE Nuclear Products implemented a central requirements management database for a major project at Bruce Power. The database was built by configuring the off-the-shelf software product Telelogic DOORS to GE's requirements structure. This paper describes the advantages realized by this scheme. Examples include traceability from customer requirements through to test procedures, concurrent engineering, and automated change history. (author)

  2. FishPathogens.eu a new database in the research of aquatic animal diseases

    DEFF Research Database (Denmark)

    Jonstrup, Søren Peter; Gray, T.; Olesen, Niels Jørgen

    Virus (KHV). The database design is based on freeware and could easily be implemented to include pathogens relevant for other species than fish. We present the database using the data on the different fish pathogens as example. However if some are interested in the platform we are happy to cooperate...... and share the database structure with other Epizone members....

  3. JAERI Material Performance Database (JMPD); outline of the system

    International Nuclear Information System (INIS)

    Yokoyama, Norio; Tsukada, Takashi; Nakajima, Hajime.

    1991-01-01

JAERI Material Performance Database (JMPD) has been developed since 1986 at JAERI with a view to utilizing the various kinds of characteristic data of nuclear materials efficiently. The relational database management system PLANNER was employed, and supporting systems for data retrieval and output were expanded. JMPD currently serves the following data: (1) data yielded from the research activities of JAERI, including fatigue crack growth data of LWR pressure vessel materials as well as creep and fatigue data of Hastelloy XR, the alloy developed for the High Temperature Gas-cooled Reactor (HTGR); (2) data on environmentally assisted cracking of LWR materials compiled by the Electric Power Research Institute (EPRI), including fatigue crack growth data (3000 tests), stress corrosion data (500 tests) and Slow Strain Rate Technique (SSRT) data (1000 tests). In order to improve the user-friendliness of the retrieval system, menu-selection procedures have been developed so that knowledge of the system and data structures is not required of end-users. In addition, retrieval via database commands in Structured Query Language (SQL) is supported by the relational database management system. In JMPD the retrieved data can be processed readily through supporting systems for graphical and statistical analyses. The present report outlines JMPD and describes procedures for data retrieval and analyses utilizing JMPD. (author)
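The SQL-based retrieval that JMPD supports can be sketched with SQLite standing in for the relational DBMS. The table and column names below are hypothetical illustrations, not JMPD's actual schema; the alloy names are taken from the abstract, the numeric values are invented.

```python
import sqlite3

# A toy materials-test table in the spirit of JMPD's fatigue data.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE fatigue_tests (
    alloy TEXT, environment TEXT, delta_k REAL, growth_rate REAL)""")
conn.executemany(
    "INSERT INTO fatigue_tests VALUES (?, ?, ?, ?)",
    [("Hastelloy XR", "air", 20.0, 1.2e-8),
     ("Hastelloy XR", "helium", 20.0, 0.9e-8),
     ("SA533B", "water", 15.0, 3.4e-8)])

# The kind of query a menu-driven front end might generate for the user:
rows = conn.execute(
    "SELECT environment, growth_rate FROM fatigue_tests "
    "WHERE alloy = ? ORDER BY growth_rate DESC", ("Hastelloy XR",)).fetchall()
print(rows)
```

The menu-selection front end described in the abstract would compose queries like this one on the user's behalf, so no SQL knowledge is needed for routine retrieval.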

  4. Solving Relational Database Problems with ORDBMS in an Advanced Database Course

    Science.gov (United States)

    Wang, Ming

    2011-01-01

    This paper introduces how to use the object-relational database management system (ORDBMS) to solve relational database (RDB) problems in an advanced database course. The purpose of the paper is to provide a guideline for database instructors who desire to incorporate the ORDB technology in their traditional database courses. The paper presents…

  5. Generalized Database Management System Support for Numeric Database Environments.

    Science.gov (United States)

    Dominick, Wayne D.; Weathers, Peggy G.

    1982-01-01

    This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…

  6. Practical use of chemical shift databases for protein solid-state NMR: 2D chemical shift maps and amino-acid assignment with secondary-structure information

    International Nuclear Information System (INIS)

Fritzsching, K. J.; Yang, Y.; Schmidt-Rohr, K.; Hong, Mei

    2013-01-01

We introduce a Python-based program that utilizes the large database of 13C and 15N chemical shifts in the Biological Magnetic Resonance Bank to rapidly predict the amino acid type and secondary structure from correlated chemical shifts. The program, called PACSYlite Unified Query (PLUQ), is designed to help assign peaks obtained from 2D 13C–13C, 15N–13C, or 3D 15N–13C–13C magic-angle-spinning correlation spectra. We show secondary-structure-specific 2D 13C–13C correlation maps of all twenty amino acids, constructed from a chemical shift database of 262,209 residues. The maps reveal interesting conformation-dependent chemical shift distributions and facilitate searching of correlation peaks during amino-acid type assignment. Based on these correlations, PLUQ outputs the most likely amino acid types and the associated secondary structures from inputs of experimental chemical shifts. We test the assignment accuracy using four high-quality protein structures. Based on only the Cα and Cβ chemical shifts, the highest-ranked PLUQ assignments were 40–60% correct in both the amino-acid type and the secondary structure. For three input chemical shifts (CO–Cα–Cβ or N–Cα–Cβ), the first-ranked assignments were correct for 60% of the residues, while within the top three predictions, the correct assignments were found for 80% of the residues. PLUQ and the chemical shift maps are expected to be useful at the first stage of sequential assignment, for combination with automated sequential assignment programs, and for highly disordered proteins for which secondary structure analysis is the main goal of structure determination.
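The PLUQ lookup idea can be illustrated by ranking (residue type, secondary structure) candidates by distance between observed and typical Cα/Cβ shifts. The reference values below are rough illustrative numbers, not the BMRB-derived statistical distributions PLUQ actually uses, and the tiny table covers only two residue types.

```python
# Typical (Calpha, Cbeta) shifts in ppm; illustrative values only.
REFERENCE = {
    ("Ala", "helix"): (54.9, 17.8),
    ("Ala", "sheet"): (50.7, 20.8),
    ("Ser", "helix"): (61.1, 62.6),
    ("Ser", "sheet"): (57.4, 64.8),
}

def rank_assignments(ca, cb):
    """Return candidates sorted by Euclidean distance in (CA, CB) space."""
    def dist(ref):
        rca, rcb = ref
        return ((ca - rca) ** 2 + (cb - rcb) ** 2) ** 0.5
    return sorted(REFERENCE, key=lambda key: dist(REFERENCE[key]))

best = rank_assignments(55.2, 18.1)[0]
print(best)  # ('Ala', 'helix') is the nearest candidate here
```

Returning the full ranked list, rather than only the top hit, mirrors PLUQ's behavior of reporting the most likely types so that the top-N predictions can be checked against sequence information.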

  7. Knowledge Based Engineering for Spatial Database Management and Use

    Science.gov (United States)

    Peuquet, D. (Principal Investigator)

    1984-01-01

    The use of artificial intelligence techniques that are applicable to Geographic Information Systems (GIS) are examined. Questions involving the performance and modification to the database structure, the definition of spectra in quadtree structures and their use in search heuristics, extension of the knowledge base, and learning algorithm concepts are investigated.

  8. License - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available This database is licensed under a Creative Commons Attribution-Share Alike 2.1 Japan license. If you use data from this database, please be sure to attribute it as follows: Trypanosomes Database.

  9. Digital bedrock mapping at the Geological Survey of Norway: BGS SIGMA tool and in-house database structure

    Science.gov (United States)

    Gasser, Deta; Viola, Giulio; Bingen, Bernard

    2016-04-01

Since 2010, the Geological Survey of Norway has been implementing and continuously developing a digital workflow for geological bedrock mapping in Norway, from fieldwork to final product. Our workflow is based on the ESRI ArcGIS platform, and we use rugged Windows computers in the field. Three different hardware solutions have been tested over the past 5 years (2010-2015): (1) Panasonic Toughbook CF-19 (2.3 kg), (2) Panasonic Toughbook CF-H2 Field (1.6 kg) and (3) Motion MC F5t tablet (1.5 kg). For collection of point observations in the field we mainly use the SIGMA Mobile application in ESRI ArcGIS developed by the British Geological Survey, which allows mappers to store georeferenced comments, structural measurements, sample information, photographs, sketches, log information, etc. in a Microsoft Access database. The application is freely downloadable from the BGS website. For line and polygon work we use our in-house database, which is currently under revision. Our line database consists of three feature classes: (1) bedrock boundaries, (2) bedrock lineaments, and (3) bedrock lines, with each feature class having up to 24 different attribute fields. Our polygon database consists of one feature class with 38 attribute fields enabling storage of various information concerning lithology, stratigraphic order, age, metamorphic grade and tectonic subdivision. The polygon and line databases are coupled via topology in ESRI ArcGIS, which allows us to edit them simultaneously. This approach has been applied in two large-scale 1:50 000 bedrock mapping projects, one in the Kongsberg domain of the Sveconorwegian orogen, and the other in the greater Trondheim area (Orkanger) in the Caledonian belt. The mapping projects combined collection of high-resolution geophysical data, digital acquisition of field data, and collection of geochronological, geochemical and petrological data. During the Kongsberg project, some 25000 field observation points were collected by eight

  10. A natural language interface plug-in for cooperative query answering in biological databases.

    Science.gov (United States)

    Jamil, Hasan M

    2012-06-11

One of the many unique features of biological databases is that the mere existence of a ground data item is not always a precondition for a query response. It may be argued that from a biologist's standpoint, queries are not always best posed using a structured language. By this we mean that approximate and flexible responses to natural-language-like queries are well suited for this domain. This is partly due to biologists' tendency to seek simpler interfaces and partly due to the fact that questions in biology involve high-level concepts that are open to interpretations computed using sophisticated tools. In such highly interpretive environments, rigidly structured databases do not always perform well. In this paper, our goal is to propose a semantic correspondence plug-in to aid natural language query processing over an arbitrary biological database schema, with the aim of providing cooperative responses to queries tailored to users' interpretations. Natural language interfaces for databases are generally effective when they are tuned to the underlying database schema and its semantics. Therefore, changes in the database schema become impossible to support, or a substantial reorganization cost must be absorbed to reflect any change. We leverage developments in natural language parsing, rule languages and ontologies, and data integration technologies to assemble a prototype query processor that is able to transform a natural language query into a semantically equivalent structured query over the database. We allow knowledge rules and their frequent modifications as part of the underlying database schema. The approach we adopt in our plug-in overcomes some of the serious limitations of many contemporary natural language interfaces, including support for schema modifications and independence from the underlying database schema. The plug-in introduced in this paper is generic and facilitates connecting user-selected natural language interfaces to arbitrary databases using a

  11. Federal databases

    International Nuclear Information System (INIS)

    Welch, M.J.; Welles, B.W.

    1988-01-01

    Accident statistics on all modes of transportation are available as risk assessment analytical tools through several federal agencies. This paper reports on the examination of the accident databases by personal contact with the federal staff responsible for administration of the database programs. This activity, sponsored by the Department of Energy through Sandia National Laboratories, is an overview of the national accident data on highway, rail, air, and marine shipping. For each mode, the definition or reporting requirements of an accident are determined and the method of entering the accident data into the database is established. Availability of the database to others, ease of access, costs, and who to contact were prime questions to each of the database program managers. Additionally, how the agency uses the accident data was of major interest

  12. Computer-aided visualization of database structural relationships

    International Nuclear Information System (INIS)

    Cahn, D.F.

    1980-04-01

    Interactive computer graphic displays can be extremely useful in augmenting understandability of data structures. In complexly interrelated domains such as bibliographic thesauri and energy information systems, node and link displays represent one such tool. This paper presents examples of data structure representations found useful in these domains and discusses some of their generalizable components. 2 figures

  13. The YH database: the first Asian diploid genome database

    DEFF Research Database (Denmark)

    Li, Guoqing; Ma, Lijia; Song, Chao

    2009-01-01

genome consensus. The YH database is currently one of the three personal genome databases, organizing the original data and analysis results in a user-friendly interface, an endeavor to achieve the fundamental goals of establishing personalized medicine. The database is available at http://yh.genomics.org.cn.

  14. Database Description - tRNADB-CE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available General information of database. Database name: tRNADB-CE. License: CC BY-SA. Background and funding: MEXT Integrated Database Project. References: Nucleic Acids Res. 2009 Jan;37(Database issue):D163-8; tRNADB-CE 2011: tRNA gene database curat...

  15. A Preliminary Study on the Multiple Mapping Structure of Classification Systems for Heterogeneous Databases

    OpenAIRE

    Seok-Hyoung Lee; Hwan-Min Kim; Ho-Seop Choe

    2012-01-01

While science and technology information service portals and heterogeneous databases produced in Korea and other countries are integrated, methods of connecting the unique classification systems applied to each database have been studied. Results of technologists' research, such as journal articles, patent specifications, and research reports, are organically related to each other. In this case, if the most basic and meaningful classification systems are not connected, it is difficult to ach...

  16. SNaX: A Database of Supernova X-Ray Light Curves

    Energy Technology Data Exchange (ETDEWEB)

    Ross, Mathias; Dwarkadas, Vikram V., E-mail: Mathias_Ross@msn.com, E-mail: vikram@oddjob.uchicago.edu [Astronomy and Astrophysics, University of Chicago, 5640 S Ellis Avenue, ERC 569, Chicago, IL 60637 (United States)

    2017-06-01

    We present the Supernova X-ray Database (SNaX), a compilation of the X-ray data from young supernovae (SNe). The database includes the X-ray fluxes and luminosities of young SNe, from days to years after outburst. The original goal and intent of this study was to present a database of Type IIn SNe (SNe IIn), which we have accomplished. Our ongoing goal is to expand the database to include all SNe for which published data are available. The database interface allows one to search for SNe using various criteria, plot all or selected data points, and download both the data and the plot. The plotting facility allows for significant customization. There is also a facility for the user to submit data that can be directly incorporated into the database. We include an option to fit the decay of any given SN light curve with a power-law. The database includes a conversion of most data points to a common 0.3–8 keV band so that SN light curves may be directly compared with each other. A mailing list has been set up to disseminate information about the database. We outline the structure and function of the database, describe its various features, and outline the plans for future expansion.
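The power-law decay fit that SNaX offers, L(t) = A·t^(-alpha), can be reproduced by linear least squares in log-log space. The light-curve points below are synthetic, for illustration only; they are not SNaX data.

```python
import math

def fit_power_law(times, lums):
    """Fit L = A * t**(-alpha) by least squares on (log t, log L)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(l) for l in lums]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    amplitude = math.exp(my - slope * mx)
    return amplitude, -slope  # decay index alpha = -slope

times = [10.0, 100.0, 1000.0]   # days since outburst (synthetic)
lums = [1e40, 1e39, 1e38]       # erg/s, an exact t**-1 decay
a, alpha = fit_power_law(times, lums)
print(round(alpha, 3))  # 1.0 for this synthetic curve
```

Fitting in log space turns the power law into a straight line, which is also why such light curves are conventionally plotted on log-log axes.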

  17. SNaX: A Database of Supernova X-Ray Light Curves

    International Nuclear Information System (INIS)

    Ross, Mathias; Dwarkadas, Vikram V.

    2017-01-01

    We present the Supernova X-ray Database (SNaX), a compilation of the X-ray data from young supernovae (SNe). The database includes the X-ray fluxes and luminosities of young SNe, from days to years after outburst. The original goal and intent of this study was to present a database of Type IIn SNe (SNe IIn), which we have accomplished. Our ongoing goal is to expand the database to include all SNe for which published data are available. The database interface allows one to search for SNe using various criteria, plot all or selected data points, and download both the data and the plot. The plotting facility allows for significant customization. There is also a facility for the user to submit data that can be directly incorporated into the database. We include an option to fit the decay of any given SN light curve with a power-law. The database includes a conversion of most data points to a common 0.3–8 keV band so that SN light curves may be directly compared with each other. A mailing list has been set up to disseminate information about the database. We outline the structure and function of the database, describe its various features, and outline the plans for future expansion.

  18. SWEETLEAD: an in silico database of approved drugs, regulated chemicals, and herbal isolates for computer-aided drug discovery.

    Directory of Open Access Journals (Sweden)

    Paul A Novick

Full Text Available In the face of drastically rising drug discovery costs, strategies promising to reduce development timelines and expenditures are being pursued. Computer-aided virtual screening and repurposing approved drugs are two such strategies that have shown recent success. Herein, we report the creation of a highly curated in silico database of chemical structures representing approved drugs, chemical isolates from traditional medicinal herbs, and regulated chemicals, termed the SWEETLEAD database. The motivation for SWEETLEAD stems from the observation of conflicting information in publicly available chemical databases and the lack of a highly curated database of chemical structures for the globally approved drugs. A consensus-building scheme surveying information from several publicly accessible databases was employed to identify the correct structure for each chemical. Resulting structures are filtered for the active pharmaceutical ingredient, standardized, and differing formulations of the same drug are combined in the final database. The publicly available release of SWEETLEAD (https://simtk.org/home/sweetlead) provides an important tool to enable the successful completion of computer-aided repurposing and drug discovery campaigns.
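The consensus-building scheme described above can be sketched as a majority vote across source databases. The SMILES strings and source-database names below are illustrative, not actual SWEETLEAD data or sources.

```python
from collections import Counter

def consensus_structure(records):
    """records: mapping of source database -> canonical SMILES string.
    Return the majority structure, or None when no structure has a majority."""
    counts = Counter(records.values())
    structure, votes = counts.most_common(1)[0]
    return structure if votes > len(records) / 2 else None

records = {
    "db_a": "CC(=O)OC1=CC=CC=C1C(=O)O",  # aspirin, agreeing entries
    "db_b": "CC(=O)OC1=CC=CC=C1C(=O)O",
    "db_c": "CC(O)OC1=CC=CC=C1C(=O)O",   # a conflicting record
}
print(consensus_structure(records))
```

A real pipeline would canonicalize each structure (e.g. with a cheminformatics toolkit) before voting, so that equivalent representations of the same molecule count as agreement.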

  19. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  20. DATABASES DEVELOPED IN INDIA FOR BIOLOGICAL SCIENCES

    Directory of Open Access Journals (Sweden)

    Gitanjali Yadav

    2017-09-01

Full Text Available The complexity of biological systems requires the use of a variety of experimental methods of ever-increasing sophistication to probe various cellular processes at molecular and atomic resolution. The availability of technologies for determining nucleic acid sequences of genes and atomic-resolution structures of biomolecules prompted the development of major biological databases like GenBank and PDB almost four decades ago. India was one of the few countries to realize early the utility of such databases for progress in modern biology/biotechnology. The Department of Biotechnology (DBT), India, established the Biotechnology Information System (BTIS) network in the late eighties. Starting with the genome sequencing revolution at the turn of the century, application of high-throughput sequencing technologies in biology and medicine for analysis of genomes, transcriptomes, epigenomes and microbiomes has generated massive volumes of sequence data. The BTIS network has not only provided state-of-the-art computational infrastructure to research institutes and universities for utilizing various biological databases developed abroad in their research, it has also actively promoted research and development (R&D) projects in bioinformatics to develop a variety of biological databases in diverse areas. It is encouraging to note that a large number of biological databases and data-driven software tools developed in India have been published in leading peer-reviewed international journals like Nucleic Acids Research, Bioinformatics, Database, BMC, PLoS and NPG series publications. Some of these databases are not only unique, they are also highly accessed, as reflected in their number of citations. Apart from databases developed by individual research groups, BTIS has initiated consortium projects to develop major India-centric databases on Mycobacterium tuberculosis, Rice and Mango, which can potentially have practical applications in health and agriculture. Many of these biological

  1. Development of the Global Earthquake Model’s neotectonic fault database

    Science.gov (United States)

    Christophersen, Annemarie; Litchfield, Nicola; Berryman, Kelvin; Thomas, Richard; Basili, Roberto; Wallace, Laura; Ries, William; Hayes, Gavin P.; Haller, Kathleen M.; Yoshioka, Toshikazu; Koehler, Richard D.; Clark, Dan; Wolfson-Schwehr, Monica; Boettcher, Margaret S.; Villamor, Pilar; Horspool, Nick; Ornthammarath, Teraphan; Zuñiga, Ramon; Langridge, Robert M.; Stirling, Mark W.; Goded, Tatiana; Costa, Carlos; Yeats, Robert

    2015-01-01

The Global Earthquake Model (GEM) aims to develop uniform, openly available standards, datasets and tools for worldwide seismic risk assessment through global collaboration, transparent communication and adapting state-of-the-art science. GEM Faulted Earth (GFE) is one of GEM’s global hazard module projects. This paper describes GFE’s development of a modern neotectonic fault database and a unique graphical interface for the compilation of new fault data. A key design principle is that of an electronic field notebook for capturing observations a geologist would make about a fault. The database is designed to accommodate abundant as well as sparse fault observations. It features two layers, one for capturing neotectonic fault and fold observations, and the other to calculate potential earthquake fault sources from the observations. In order to test the flexibility of the database structure and to start a global compilation, five preexisting databases have been uploaded to the first layer and two to the second. In addition, the GFE project has characterised the world’s approximately 55,000 km of subduction interfaces in a globally consistent manner as a basis for generating earthquake event sets for inclusion in earthquake hazard and risk modelling. Following the subduction interface fault schema and including the trace attributes of the GFE database schema, the 2500-km-long frontal thrust fault system of the Himalaya has also been characterised. We propose the database structure to be used widely, so that neotectonic fault data can make a more complete and beneficial contribution to seismic hazard and risk characterisation globally.

  2. PrimateLit Database

    Science.gov (United States)

    Primate Info Net Related Databases: PrimateLit, a bibliographic database for primatology. The PrimateLit database is no longer being updated. The database was a collaborative project of the Wisconsin Primate Research Center, supported by the National Center for Research Resources, National Institutes of Health.

  3. LogiQL a query language for smart databases

    CERN Document Server

    Halpin, Terry

    2014-01-01

    LogiQL is a new state-of-the-art programming language based on Datalog. It can be used to build applications that combine transactional, analytical, graph, probabilistic, and mathematical programming. LogiQL makes it possible to build hybrid applications that previously required multiple programming languages and databases. In this first book to cover LogiQL, the authors explain how to design, implement, and query deductive databases using this new programming language. LogiQL's declarative approach enables complex data structures and business rules to be simply specified and then automatically executed.
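    Since this record describes LogiQL only abstractly, the following toy sketch (in Python, not LogiQL) illustrates the bottom-up fixed-point evaluation that gives Datalog-style rules such as `path(x, z) <- path(x, y), edge(y, z)` their meaning; all names here are illustrative, not taken from the book.

```python
# Naive bottom-up evaluation of a Datalog-style transitive-closure program:
#   path(x, y) <- edge(x, y).
#   path(x, z) <- path(x, y), edge(y, z).
# This is an illustrative toy, not LogiQL itself.

def transitive_closure(edges):
    """Compute all path facts as the least fixed point of the two rules."""
    path = set(edges)          # rule 1: every edge is a path
    while True:
        # rule 2: join current path facts against the edge relation
        new = {(x, z) for (x, y1) in path for (y2, z) in edges if y1 == y2}
        if new <= path:        # fixed point: no new facts derivable
            return path
        path |= new

facts = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(transitive_closure(facts)))
# -> [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```

    A production Datalog engine would use semi-naive evaluation (joining only newly derived facts each round), but the fixed-point idea is the same.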

  4. The Master Lens Database and The Orphan Lenses Project

    Science.gov (United States)

    Moustakas, Leonidas

    2012-10-01

    Strong gravitational lenses are uniquely suited for the study of dark matter structure and substructure within massive halos of many scales, act as gravitational telescopes for distant faint objects, and can give powerful and competitive cosmological constraints. While hundreds of strong lenses are known to date, spanning five orders of magnitude in mass scale, thousands will be identified this decade. To fully exploit the power of these objects presently, and in the near future, we are creating the Master Lens Database. This is a clearinghouse of all known strong lens systems, with a sophisticated and modern database of uniformly measured and derived observational and lens-model derived quantities, using archival Hubble data across several instruments. This Database enables new science that can be done with a comprehensive sample of strong lenses. The operational goal of this proposal is to develop the process and the code to semi-automatically stage Hubble data of each system, create appropriate masks of the lensing objects and lensing features, and derive gravitational lens models, to provide a uniform and fairly comprehensive information set that is ingested into the Database. The scientific goal for this team is to use the properties of the ensemble of lenses to make a new study of the internal structure of lensing galaxies, and to identify new objects that show evidence of strong substructure lensing, for follow-up study. All data, scripts, masks, model setup files, and derived parameters, will be public, and free. The Database will be accessible online and through a sophisticated smartphone application, which will also be free.

  5. License - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available License to Use This Database. Last updated: 2010/02/15. You may use this database in accordance with the Standard License and the Additional License described below. The Standard License specifies the license terms regarding the use of this database and the requirements you must follow in using this database. The Additional License specifies license terms for this database other than those specified in the Standard License. Standard License: The Standard License for this database is the Creative Commons Attribution-Share Alike 2.1 Japan license. If you use data from this database

  6. Database on wind characteristics - contents of database bank

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, G.C.; Hansen, K.S.

    2004-06-01

    The main objective of IEA R and D Wind Annex XVII - Database on Wind Characteristics - has been to provide wind energy planners, designers and researchers, as well as the international wind engineering community in general, with a source of actual wind field data (time series and resource data) observed in a wide range of different wind climates and terrain types. Connected to an extension of the initial Annex period, the scope of the continuation was widened to also include support to the international wind turbine standardisation efforts. The project partners are Sweden, Norway, U.S.A., The Netherlands and Denmark, with Denmark as the Operating Agent. The reporting of the continuation of Annex XVII falls into two separate parts. Part one accounts in detail for the available data in the established database bank, and part two describes various data analyses performed with the overall purpose of improving the design load cases with relevance to wind turbine structures. The present report constitutes the second part of the Annex XVII reporting. Both fatigue and extreme load aspects are dealt with, however, with the main emphasis on the latter. The work has been supported by The Ministry of Environment and Energy, Danish Energy Agency, The Netherlands Agency for Energy and the Environment (NOVEM), The Norwegian Water Resources and Energy Administration (NVE), The Swedish National Energy Administration (STEM) and The Government of the United States of America. (au)

  7. NoSQL database scaling

    OpenAIRE

    Žardin, Norbert

    2017-01-01

    NoSQL database scaling is a decision where system resources or financial expenses are traded for database performance or other benefits. By scaling a database, database performance and resource usage might increase or decrease, and such changes might have a negative impact on an application that uses the database. In this work it is analyzed how database scaling affects database resource usage and performance. As a result, calculations are acquired, using which database scaling types and differe...
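    As a hedged illustration of one horizontal-scaling technique relevant to this record, the sketch below routes records to shards by hashing a key; the shard count and keys are made-up examples, not drawn from the cited thesis.

```python
import hashlib

def shard_for(key: str, n_shards: int) -> int:
    """Route a record to a shard by hashing its key (illustrative sketch)."""
    digest = hashlib.md5(key.encode()).hexdigest()  # stable, uniform-ish hash
    return int(digest, 16) % n_shards

# Distribute some hypothetical user IDs across 4 shards.
placement = {uid: shard_for(uid, 4) for uid in ("user1", "user2", "user3")}
print(placement)
```

    Note that plain modulo hashing forces mass re-placement of keys whenever the shard count changes; real systems often use consistent hashing to limit that churn.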

  8. SINEBase: a database and tool for SINE analysis.

    Science.gov (United States)

    Vassetzky, Nikita S; Kramerov, Dmitri A

    2013-01-01

    SINEBase (http://sines.eimb.ru) integrates the revisited body of knowledge about short interspersed elements (SINEs). A set of formal definitions concerning SINEs was introduced. All available sequence data were screened through these definitions and the genetic elements misidentified as SINEs were discarded. As a result, 175 SINE families have been recognized in animals, flowering plants and green algae. These families were classified by the modular structure of their nucleotide sequences and the frequencies of different patterns were evaluated. These data formed the basis for the database of SINEs. The SINEBase website can be used in two ways: first, to explore the database of SINE families, and second, to analyse candidate SINE sequences using specifically developed tools. This article presents an overview of the database and the process of SINE identification and analysis.

  9. Integration of first-principles methods and crystallographic database searches for new ferroelectrics: Strategies and explorations

    International Nuclear Information System (INIS)

    Bennett, Joseph W.; Rabe, Karin M.

    2012-01-01

    In this concept paper, the development of strategies for the integration of first-principles methods with crystallographic database mining for the discovery and design of novel ferroelectric materials is discussed, drawing on the results and experience derived from exploratory investigations on three different systems: (1) the double perovskite Sr(Sb₁/₂Mn₁/₂)O₃ as a candidate semiconducting ferroelectric; (2) polar derivatives of schafarzikite MSb₂O₄; and (3) ferroelectric semiconductors with formula M₂P₂(S,Se)₆. A variety of avenues for further research and investigation are suggested, including automated structure type classification, low-symmetry improper ferroelectrics, and high-throughput first-principles searches for additional representatives of structural families with desirable functional properties. - Graphical abstract: Integration of first-principles methods with crystallographic database mining, for the discovery and design of novel ferroelectric materials, could potentially lead to new classes of multifunctional materials. Highlights: ► Integration of first-principles methods and database mining. ► Minor structural families with desirable functional properties. ► Survey of polar entries in the Inorganic Crystal Structure Database.
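    One concrete step in such a database search is screening entries for the ten polar crystallographic point groups, a necessary (though not sufficient) condition for a spontaneous polarization. The sketch below is a minimal illustration with hypothetical entry records, not the authors' actual workflow or an ICSD query.

```python
# The ten polar crystallographic point groups (standard crystallographic fact).
POLAR_POINT_GROUPS = {"1", "2", "m", "mm2", "3", "3m", "4", "4mm", "6", "6mm"}

def polar_candidates(entries):
    """Keep entries whose point group permits a spontaneous polarization."""
    return [e["formula"] for e in entries
            if e["point_group"] in POLAR_POINT_GROUPS]

# Hypothetical placeholder records; a real search would query ICSD entries.
entries = [
    {"formula": "BaTiO3 (tetragonal)", "point_group": "4mm"},   # polar
    {"formula": "SrTiO3 (cubic)",      "point_group": "m-3m"},  # non-polar
    {"formula": "LiNbO3",              "point_group": "3m"},    # polar
]
print(polar_candidates(entries))
# -> ['BaTiO3 (tetragonal)', 'LiNbO3']
```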

  10. 3D Digital Model Database Applied to Conservation and Research of Wooden Construction in China

    Science.gov (United States)

    Zheng, Y.

    2013-07-01

    Protected by the Tai-Hang Mountains, Shanxi Province, located in north central China, is a highly prosperous, densely populated valley and considered to be one of the cradles of Chinese civilization. Its continuous habitation and rich culture have given rise to a large number of temple complexes and pavilions. Among these structures, 153 in the southern Shanxi area can be dated from the Tang dynasty (618-907 C.E.) to the end of the Yuan dynasty (1279-1368 C.E.). The buildings are the best-preserved examples of wooden Chinese architecture in existence, exemplifying historic building technology and displaying highly intricate architectural decoration and detailing. They have survived war, earthquakes, and, in the last hundred years, neglect. In 2005, a decade-long conservation project was initiated by the State Administration of Cultural Heritage of China (SACH) to conserve and document these important buildings. The conservation process requires stabilization, conservation of important features, and, where necessary, partial dismantlement in order to replace unsound structural elements. The project team of CHCC has developed a practical recording system that creates a record of all building components prior to and during the conservation process. We are now establishing a comprehensive database covering all 153 of these early buildings, through which information on the wooden constructions can easily be entered, browsed, and indexed, down to the level of individual components. The database supports comparative studies of these wooden structures and provides important support for the continued conservation of these heritage buildings. For some of the most important wooden structures, we have established three-dimensional models. By connecting the database with these 3D digital models in ArcGIS, we have developed a 3D Digital Model Database for these cherished buildings, which helps us set up an integrated information inventory.

  11. IAEA Coordinated Research Project on the Establishment of a Material Properties Database for Irradiated Core Structural Components for Continued Safe Operation and Lifetime Extension of Ageing Research Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Borio Di Tigliole, A.; Schaaf, Van Der; Barnea, Y.; Bradley, E.; Morris, C.; Rao, D. V. H. [Research Reactor Section, Vienna (Austria)]; Shokr, A. [Research Reactor Safety Section, Vienna (Austria)]; Zeman, A. [International Atomic Energy Agency, Vienna (Austria)]

    2013-07-01

    Today more than 50% of operating Research Reactors (RRs) are over 45 years old. Thus, ageing management is one of the most important issues to face in order to ensure availability (including life extension), reliability and safe operation of these facilities for the future. Management of the ageing process requires, amongst others, predictions of the behavior of structural materials of primary components subjected to irradiation, such as the reactor vessel and core support structures, many of which are extremely difficult or impossible to replace. In fact, age-related material degradation mechanisms have resulted in high-profile, unplanned and lengthy shutdowns and unique regulatory processes of relicensing the facilities in recent years. These could likely have been prevented by utilizing available data for the implementation of appropriate maintenance and surveillance programmes. This IAEA Coordinated Research Project (CRP) will provide an international forum to establish a material properties Database for irradiated core structural materials and components. It is expected that this Database will be used by research reactor operators and regulators to help predict ageing-related degradation. This would be useful to minimize unpredicted outages due to ageing processes of primary components and to mitigate lengthy and costly shutdowns. The Database will be a compilation of data from RR operators' inputs, comprehensive literature reviews and experimental data from RRs. Moreover, the CRP will specify further activities needed to bridge the gaps in the newly created Database, for potential follow-on activities. As of today, 13 Member States (MS) confirmed their agreement to contribute to the development of the Database, covering a wide number of materials and properties. The present publication incorporates two parts: the first part includes details on the pre-CRP Questionnaire, including the conclusions drawn from the answers received from

  12. IAEA Coordinated Research Project on the Establishment of a Material Properties Database for Irradiated Core Structural Components for Continued Safe Operation and Lifetime Extension of Ageing Research Reactors

    International Nuclear Information System (INIS)

    Borio Di Tigliole, A.; Schaaf, Van Der; Barnea, Y.; Bradley, E.; Morris, C.; Rao, D. V. H.; Shokr, A.; Zeman, A.

    2013-01-01

    Today more than 50% of operating Research Reactors (RRs) are over 45 years old. Thus, ageing management is one of the most important issues to face in order to ensure availability (including life extension), reliability and safe operation of these facilities for the future. Management of the ageing process requires, amongst others, predictions of the behavior of structural materials of primary components subjected to irradiation, such as the reactor vessel and core support structures, many of which are extremely difficult or impossible to replace. In fact, age-related material degradation mechanisms have resulted in high-profile, unplanned and lengthy shutdowns and unique regulatory processes of relicensing the facilities in recent years. These could likely have been prevented by utilizing available data for the implementation of appropriate maintenance and surveillance programmes. This IAEA Coordinated Research Project (CRP) will provide an international forum to establish a material properties Database for irradiated core structural materials and components. It is expected that this Database will be used by research reactor operators and regulators to help predict ageing-related degradation. This would be useful to minimize unpredicted outages due to ageing processes of primary components and to mitigate lengthy and costly shutdowns. The Database will be a compilation of data from RR operators' inputs, comprehensive literature reviews and experimental data from RRs. Moreover, the CRP will specify further activities needed to bridge the gaps in the newly created Database, for potential follow-on activities. As of today, 13 Member States (MS) confirmed their agreement to contribute to the development of the Database, covering a wide number of materials and properties. The present publication incorporates two parts: the first part includes details on the pre-CRP Questionnaire, including the conclusions drawn from the answers received from the MS.

  13. InverPep: A database of invertebrate antimicrobial peptides.

    Science.gov (United States)

    Gómez, Esteban A; Giraldo, Paula; Orduz, Sergio

    2017-03-01

    The aim of this work was to construct InverPep, a database specialised in experimentally validated antimicrobial peptides (AMPs) from invertebrates. AMP data contained in InverPep were manually curated from other databases and the scientific literature. MySQL was integrated with the development platform Laravel; this framework allows PHP programming to be integrated with HTML and was used to design the InverPep web page's interface. InverPep contains 18 separate fields, including InverPep code, phylum and species source, peptide name, sequence, peptide length, secondary structure, molar mass, charge, isoelectric point, hydrophobicity, Boman index, aliphatic index and percentage of hydrophobic amino acids. CALCAMPI, an algorithm to calculate the physicochemical properties of multiple peptides simultaneously, was programmed in Perl. To date, InverPep contains 702 experimentally validated AMPs from invertebrate species. All of the peptides contain information associated with their source, physicochemical properties, secondary structure, biological activity and links to external literature. Most AMPs in InverPep have a length between 10 and 50 amino acids, a positive charge, a Boman index between 0 and 2 kcal/mol, and 30-50% hydrophobic amino acids. InverPep includes 33 AMPs not reported in other databases. In addition, CALCAMPI and a statistical analysis of InverPep data are presented. The InverPep database is available in English and Spanish. InverPep is a useful database for studying invertebrate AMPs, and its information could be used for the design of new peptides. The user-friendly interface of InverPep and its information can be freely accessed via a web-based browser at http://ciencias.medellin.unal.edu.co/gruposdeinvestigacion/prospeccionydisenobiomoleculas/InverPep/public/home_en. Copyright © 2016 International Society for Chemotherapy of Infection and Cancer. Published by Elsevier Ltd. All rights reserved.
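    Two of the descriptors listed in this record can be sketched directly. The aliphatic index follows Ikai's standard formula; the net-charge count is a deliberate simplification (it ignores pH, histidine and the termini); and the example sequence is hypothetical, not an InverPep entry. InverPep's own CALCAMPI tool may compute these differently.

```python
def aliphatic_index(seq: str) -> float:
    """Ikai's aliphatic index: X_Ala + 2.9*X_Val + 3.9*(X_Ile + X_Leu),
    where X is the mole percentage of each residue in the sequence."""
    n = len(seq)
    x = lambda aa: 100.0 * seq.count(aa) / n  # mole percent of residue aa
    return x("A") + 2.9 * x("V") + 3.9 * (x("I") + x("L"))

def net_charge(seq: str) -> int:
    """Crude net charge near neutral pH: +1 for K/R, -1 for D/E only."""
    return sum(seq.count(aa) for aa in "KR") - sum(seq.count(aa) for aa in "DE")

peptide = "GIGKFLKKAKKFGKAFVKILKK"  # hypothetical cationic AMP-like sequence
print(round(aliphatic_index(peptide), 1), net_charge(peptide))
```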

  14. cuticleDB: a relational database of Arthropod cuticular proteins

    Directory of Open Access Journals (Sweden)

    Willis Judith H

    2004-09-01

    Full Text Available Abstract. Background: The insect exoskeleton or cuticle is a bi-partite composite of proteins and chitin that provides protective, skeletal and structural functions. Little information is available about the molecular structure of this important complex, which exhibits a helicoidal architecture. Scores of sequences of cuticular proteins have been obtained from direct protein sequencing, from cDNAs, and from genomic analyses. Most of these cuticular protein sequences contain motifs found only in arthropod proteins. Description: cuticleDB is a relational database containing all structural proteins of Arthropod cuticle identified to date. Many come from direct sequencing of proteins isolated from cuticle and from sequences from cDNAs that share common features with these authentic cuticular proteins. It also includes proteins from the Drosophila melanogaster and the Anopheles gambiae genomes that have been predicted to be cuticular proteins, based on a Pfam motif (PF00379) responsible for chitin binding in Arthropod cuticle. The total number of database entries is 445: 370 derive from insects, 60 from Crustacea and 15 from Chelicerata. The database can be accessed from our web server at http://bioinformatics.biol.uoa.gr/cuticleDB. Conclusions: CuticleDB was primarily designed to contain correct and full annotation of cuticular protein data. The database will be of help to future genome annotators. Users will be able to test hypotheses for the existence of known and also of yet unknown motifs in cuticular proteins. An analysis of motifs may contribute to understanding how proteins contribute to the physical properties of cuticle as well as to the precise nature of their interaction with chitin.
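    Pfam PF00379 is a profile HMM, so a faithful scan requires HMMER; purely as a rough illustration of motif screening, the sketch below matches a regular-expression pattern. The pattern is a made-up placeholder, not the real chitin-binding consensus.

```python
import re

# Placeholder Prosite-style pattern ("G, any two residues, G, Y").
# NOT the actual PF00379 chitin-binding motif, which is a profile HMM.
HYPOTHETICAL_MOTIF = re.compile(r"G..GY")

def find_motifs(seq: str):
    """Return (start, matched_text) for each non-overlapping motif hit."""
    return [(m.start(), m.group()) for m in HYPOTHETICAL_MOTIF.finditer(seq)]

print(find_motifs("AAGTAGYAAGXXGYA"))
# -> [(2, 'GTAGY'), (9, 'GXXGY')]
```

    The same scanning loop generalizes to any Prosite-style pattern once it is translated into a regular expression.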

  15. Blending Education and Polymer Science: Semiautomated Creation of a Thermodynamic Property Database

    Science.gov (United States)

    Tchoua, Roselyne B.; Qin, Jian; Audus, Debra J.; Chard, Kyle; Foster, Ian T.; de Pablo, Juan

    2016-01-01

    Structured databases of chemical and physical properties play a central role in the everyday research activities of scientists and engineers. In materials science, researchers and engineers turn to these databases to quickly query, compare, and aggregate various properties, thereby allowing for the development or application of new materials. The…

  16. ProOpDB: Prokaryotic Operon DataBase.

    Science.gov (United States)

    Taboada, Blanca; Ciria, Ricardo; Martinez-Guerrero, Cristian E; Merino, Enrique

    2012-01-01

    The Prokaryotic Operon DataBase (ProOpDB, http://operons.ibt.unam.mx/OperonPredictor) constitutes one of the most precise and complete repositories of operon predictions now available. Using our novel and highly accurate operon identification algorithm, we have predicted the operon structures of more than 1200 prokaryotic genomes. ProOpDB offers diverse alternatives by which a set of operon predictions can be retrieved including: (i) organism name, (ii) metabolic pathways, as defined by the KEGG database, (iii) gene orthology, as defined by the COG database, (iv) conserved protein domains, as defined by the Pfam database, (v) reference gene and (vi) reference operon, among others. In order to limit the operon output to non-redundant organisms, ProOpDB offers an efficient method to select the most representative organisms based on a precompiled phylogenetic distances matrix. In addition, the ProOpDB operon predictions are used directly as the input data of our Gene Context Tool to visualize their genomic context and retrieve the sequence of their corresponding 5' regulatory regions, as well as the nucleotide or amino acid sequences of their genes.
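    ProOpDB's own operon identification algorithm is more sophisticated than this, but the classic baseline such predictors improve upon can be sketched simply: adjacent same-strand genes separated by a short intergenic gap are grouped into one operon. The gene coordinates below are invented for the demo.

```python
# Baseline operon prediction: group same-strand neighbours whose intergenic
# gap is at most max_gap base pairs. Gene tuples: (name, start, end, strand).

def predict_operons(genes, max_gap=50):
    """Partition an ordered gene list into operons by strand and gap."""
    operons = [[genes[0]]]
    for prev, gene in zip(genes, genes[1:]):
        same_strand = prev[3] == gene[3]
        gap = gene[1] - prev[2]          # next start minus previous end
        if same_strand and gap <= max_gap:
            operons[-1].append(gene)     # extend the current operon
        else:
            operons.append([gene])       # start a new operon
    return [[g[0] for g in op] for op in operons]

genes = [("geneA", 100, 400, "+"), ("geneB", 420, 900, "+"),
         ("geneC", 1500, 2000, "+"), ("geneD", 2010, 2400, "-")]
print(predict_operons(genes))
# -> [['geneA', 'geneB'], ['geneC'], ['geneD']]
```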

  17. HIM-herbal ingredients in-vivo metabolism database.

    Science.gov (United States)

    Kang, Hong; Tang, Kailin; Liu, Qi; Sun, Yi; Huang, Qi; Zhu, Ruixin; Gao, Jun; Zhang, Duanfeng; Huang, Chenggang; Cao, Zhiwei

    2013-05-31

    Herbal medicine has long been viewed as a valuable asset for potential new drug discovery, and herbal ingredients' metabolites, especially the in vivo metabolites, were often found to have better pharmacological, pharmacokinetic and even safety profiles than their parent compounds. However, this herbal metabolite information is still scattered and waiting to be collected. The HIM database has manually collected the most comprehensive available in vivo metabolism information for herbal active ingredients, as well as their corresponding bioactivity, organ and/or tissue distribution, toxicity, ADME and clinical research profiles. Currently HIM contains 361 ingredients and 1104 corresponding in vivo metabolites from 673 reputable herbs. Tools for structural similarity, substructure search and Lipinski's Rule of Five are also provided. Various links were made to PubChem, PubMed, TCM-ID (Traditional Chinese Medicine Information Database) and HIT (Herbal Ingredients' Targets database). A curated database, HIM, has thus been set up for the in vivo metabolite information of the active ingredients of Chinese herbs, together with their corresponding bioactivity, toxicity and ADME profiles. HIM is freely accessible to academic researchers at http://www.bioinformatics.org.cn/.
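    The Lipinski Rule-of-Five filter mentioned in this record is standard and easy to sketch: flag compounds with more than one violation of MW ≤ 500 Da, logP ≤ 5, H-bond donors ≤ 5, H-bond acceptors ≤ 10. The property values in the example are illustrative numbers, not HIM data.

```python
# Minimal Lipinski Rule-of-Five check. Property values must come from
# elsewhere (e.g. a compound database); those below are invented.

def passes_rule_of_five(mw, logp, h_donors, h_acceptors, max_violations=1):
    """Return True if the compound violates at most max_violations rules."""
    violations = sum([mw > 500, logp > 5, h_donors > 5, h_acceptors > 10])
    return violations <= max_violations

print(passes_rule_of_five(mw=180.2, logp=1.2, h_donors=1, h_acceptors=4))
# -> True  (drug-like: no violations)
print(passes_rule_of_five(mw=720.0, logp=6.3, h_donors=6, h_acceptors=12))
# -> False (violates all four rules)
```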

  18. Practical use of chemical shift databases for protein solid-state NMR: 2D chemical shift maps and amino-acid assignment with secondary-structure information

    Energy Technology Data Exchange (ETDEWEB)

    Fritzsching, K. J.; Yang, Y.; Schmidt-Rohr, K.; Hong Mei, E-mail: mhong@iastate.edu [Iowa State University, Department of Chemistry (United States)

    2013-06-15

    We introduce a Python-based program that utilizes the large database of ¹³C and ¹⁵N chemical shifts in the Biological Magnetic Resonance Bank to rapidly predict the amino acid type and secondary structure from correlated chemical shifts. The program, called PACSYlite Unified Query (PLUQ), is designed to help assign peaks obtained from 2D ¹³C-¹³C, ¹⁵N-¹³C, or 3D ¹⁵N-¹³C-¹³C magic-angle-spinning correlation spectra. We show secondary-structure specific 2D ¹³C-¹³C correlation maps of all twenty amino acids, constructed from a chemical shift database of 262,209 residues. The maps reveal interesting conformation-dependent chemical shift distributions and facilitate searching of correlation peaks during amino-acid type assignment. Based on these correlations, PLUQ outputs the most likely amino acid types and the associated secondary structures from inputs of experimental chemical shifts. We test the assignment accuracy using four high-quality protein structures. Based on only the Cα and Cβ chemical shifts, the highest-ranked PLUQ assignments were 40-60 % correct in both the amino-acid type and the secondary structure. For three input chemical shifts (CO-Cα-Cβ or N-Cα-Cβ), the first-ranked assignments were correct for 60 % of the residues, while within the top three predictions, the correct assignments were found for 80 % of the residues. PLUQ and the chemical shift maps are expected to be useful at the first stage of sequential assignment, for combination with automated sequential assignment programs, and for highly disordered proteins for which secondary structure analysis is the main goal of structure determination.
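    A minimal sketch of the kind of lookup PLUQ performs: rank residue types by distance between experimental and database-average Cα/Cβ shifts. The reference values below are approximate literature averages, not the BMRB-derived statistics PLUQ actually uses, and the real program also resolves secondary structure.

```python
# Approximate average (Calpha, Cbeta) chemical shifts in ppm for a few
# residue types. Illustrative round numbers, not PLUQ's database values.
AVG_SHIFTS = {
    "Ala": (52.5, 19.0),
    "Ser": (58.4, 63.8),
    "Thr": (62.0, 69.7),
    "Val": (62.5, 32.7),
    "Leu": (55.2, 42.3),
}

def rank_assignments(ca, cb):
    """Rank residue types by Euclidean distance to the query (ca, cb) pair."""
    dist = lambda ref: ((ca - ref[0]) ** 2 + (cb - ref[1]) ** 2) ** 0.5
    return sorted(AVG_SHIFTS, key=lambda aa: dist(AVG_SHIFTS[aa]))

print(rank_assignments(52.8, 18.5))  # nearest type listed first
```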

  19. Database application research in real-time data access of accelerator control system

    International Nuclear Information System (INIS)

    Chen Guanghua; Chen Jianfeng; Wan Tianmin

    2012-01-01

    The control system of the Shanghai Synchrotron Radiation Facility (SSRF) is a large-scale distributed real-time control system that involves many types and large amounts of real-time data access during operation. Database systems have wide application prospects in large-scale accelerator control systems; replacing disparate dedicated data structures with a mature, standardized database system is the future development direction of accelerator control systems. Based on database interface technology, real-time data access testing, and system optimization research, this article discusses the feasibility of applying database systems in accelerators and lays the foundation for the wide-scale application of database systems in the SSRF accelerator control system. (authors)

  20. Development of the structural materials information center

    International Nuclear Information System (INIS)

    Oland, C.B.; Naus, D.J.

    1990-01-01

    The U.S. Nuclear Regulatory Commission has initiated a Structural Aging Program at the Oak Ridge National Laboratory to identify potential structural safety issues related to continued service of nuclear power plants and to establish criteria for evaluating and resolving these issues. One of the tasks in this program focuses on the establishment of a Structural Materials Information Center where data and information on the time variation of concrete and other structural material properties under the influence of pertinent environmental stressors and aging factors are being collected and assembled into a database. This database will be used to assist in the prediction of potential long-term deterioration of critical structural components in nuclear power plants and to establish limits on hostile environmental exposure for these structures and materials. Two complementary database formats have been developed. The Structural Materials Handbook is an expandable, hard copy handbook that contains complete sets of data and information for selected portland cement concrete, metallic reinforcement, prestressing tendon, and structural steel materials. The Structural Materials Electronic Database is accessible by an IBM-compatible personal computer and provides an efficient means for searching the various database files to locate materials with similar properties. The database formats have been developed to accommodate data and information on the time-variation of concrete and other structural material properties. To date, the database includes information on concrete, reinforcement, prestressing, and structural steel materials

  1. The ABC (Analysing Biomolecular Contacts) database

    Directory of Open Access Journals (Sweden)

    Walter Peter

    2007-03-01

    Full Text Available As protein-protein interactions are one of the basic mechanisms in most cellular processes, it is desirable to understand the molecular details of protein-protein contacts and ultimately be able to predict which proteins interact. Interface areas on a protein surface that are involved in protein interactions exhibit certain characteristics. Therefore, several attempts were made to distinguish protein interactions from each other and to categorize them. One such classification distinguishes transient from permanent interactions. Previously, two of the authors analysed several properties of transient complexes, such as the amino acid and secondary structure element composition and pairing preferences. Certainly, interfaces can be characterized by many more possible attributes, and this is a subject of intense ongoing research. Although several freely available online databases exist that illuminate various aspects of protein-protein interactions, we decided to construct a new database collecting all desired interface features and allowing for facile selection of subsets of complexes. MySQL serves as the database server, and the program logic was written in Java. Furthermore, several class extensions and tools such as Jmol were included to visualize the interfaces, and JFreeChart for the representation of diagrams and statistics. The contact data is automatically generated from standard PDB files by a Tcl/Tk script running through the molecular visualization package VMD. Currently the database contains 536 interfaces extracted from 479 PDB files, and it can be queried by various types of parameters. Here, we describe the database design and demonstrate its usefulness with a number of selected features.
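    The contact-extraction step this record describes (VMD plus a Tcl/Tk script in the original) can be approximated as a distance cutoff between residues of different chains; the residue names and coordinates below are fabricated for the demo, and a real implementation would parse PDB files and compare all atoms, not one representative atom per residue.

```python
# Find inter-chain residue contacts by a simple distance cutoff.
# Each chain maps residue label -> (x, y, z) of one representative atom.

def interface_contacts(chain_a, chain_b, cutoff=5.0):
    """Return (res_a, res_b, distance) for pairs within cutoff angstroms."""
    contacts = []
    for res_a, xyz_a in chain_a.items():
        for res_b, xyz_b in chain_b.items():
            d = sum((a - b) ** 2 for a, b in zip(xyz_a, xyz_b)) ** 0.5
            if d <= cutoff:
                contacts.append((res_a, res_b, round(d, 2)))
    return contacts

chain_a = {"A:ARG45": (0.0, 0.0, 0.0), "A:GLU12": (20.0, 0.0, 0.0)}
chain_b = {"B:ASP33": (3.0, 4.0, 0.0)}   # exactly 5.0 A from A:ARG45
print(interface_contacts(chain_a, chain_b))
# -> [('A:ARG45', 'B:ASP33', 5.0)]
```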

  2. Principal Component Analysis Coupled with Artificial Neural Networks—A Combined Technique Classifying Small Molecular Structures Using a Concatenated Spectral Database

    Directory of Open Access Journals (Sweden)

    Mihail Lucian Birsa

    2011-10-01

    Full Text Available In this paper we present several expert systems that predict the class identity of the modeled compounds, based on a preprocessed spectral database. The expert systems were built using Artificial Neural Networks (ANN and are designed to predict if an unknown compound has the toxicological activity of amphetamines (stimulant and hallucinogen, or whether it is a nonamphetamine. In attempts to circumvent the laws controlling drugs of abuse, new chemical structures are very frequently introduced on the black market. They are obtained by slightly modifying the controlled molecular structures by adding or changing substituents at various positions on the banned molecules. As a result, no substance similar to those forming a prohibited class may be used nowadays, even if it has not been specifically listed. Therefore, reliable, fast and accessible systems capable of modeling and then identifying similarities at molecular level, are highly needed for epidemiological, clinical, and forensic purposes. In order to obtain the expert systems, we have preprocessed a concatenated spectral database, representing the GC-FTIR (gas chromatography-Fourier transform infrared spectrometry and GC-MS (gas chromatography-mass spectrometry spectra of 103 forensic compounds. The database was used as input for a Principal Component Analysis (PCA. The scores of the forensic compounds on the main principal components (PCs were then used as inputs for the ANN systems. We have built eight PC-ANN systems (principal component analysis coupled with artificial neural network with a different number of input variables: 15 PCs, 16 PCs, 17 PCs, 18 PCs, 19 PCs, 20 PCs, 21 PCs and 22 PCs. The best expert system was found to be the ANN network built with 18 PCs, which accounts for an explained variance of 77%. 
This expert system has the best sensitivity (a rate of classification C = 100% and a rate of true positives TP = 100%), as well as a good selectivity (a rate of true negatives TN
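The PCA-then-ANN pipeline described above can be sketched in a few lines. This is a hedged illustration, not the authors' system: the data below is synthetic (standing in for concatenated GC-FTIR/GC-MS spectra), and everything except the 18-component count taken from the abstract is invented for the example.

```python
# Sketch of a PCA + neural-network classification pipeline as described above.
# Synthetic data stands in for concatenated spectra; parameters are illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_compounds, n_channels = 103, 600          # 103 compounds, concatenated spectra
X = rng.normal(size=(n_compounds, n_channels))
y = rng.integers(0, 3, size=n_compounds)    # e.g. stimulant / hallucinogen / nonamphetamine
X[y == 0, :200] += 1.5                      # inject class-dependent structure
X[y == 1, 200:400] += 1.5

# Reduce the spectra to 18 principal components, then classify with an ANN.
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=18),
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
)
model.fit(X, y)
explained = model.named_steps["pca"].explained_variance_ratio_.sum()
print(f"explained variance by 18 PCs: {explained:.2f}")
print(f"training accuracy: {model.score(X, y):.2f}")
```

In practice the component count would be chosen, as in the abstract, by comparing several PC-ANN variants on held-out data rather than training accuracy.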

  3. A Study on the Korea Database Industry Promotion Act Legislation

    Directory of Open Access Journals (Sweden)

    Bae, Seoung-Hun

    2013-09-01

Full Text Available The Database Industry Promotion Act was proposed at the National Assembly plenary session on July 26, 2012, and since then it has been in the process of enactment in consultation with all the governmental departments concerned. The recent trend of economic globalization and smart device innovation suggests new opportunities and challenges for all industries. The database industry is also facing a new phase in an era of smart innovation. Korea is in a moment of opportunity to take an innovative approach to promoting the database industry. Korea should set up a national policy to promote the database industry for citizens, government, and research institutions, as well as enterprises. Above all, the Database Industry Promotion Act could play a great role in promoting the social infrastructure to enhance the capacity of small and medium-sized enterprises. This article discusses the background of the development of the Database Industry Promotion Act and its legislative processes in order to clarify its legal characteristics, including the meaning of the act. In addition, this article explains individual items related to the overall structure of the Database Industry Promotion Act. Finally, this article reviews the economic effects of the database industry, now and in the future.

  4. RAACFDb: Rheumatoid arthritis ayurvedic classical formulations database.

    Science.gov (United States)

    Mohamed Thoufic Ali, A M; Agrawal, Aakash; Sajitha Lulu, S; Mohana Priya, A; Vino, S

    2017-02-02

In the past years, the treatment of rheumatoid arthritis (RA) has undergone remarkable changes in all therapeutic modes. The current focus in clinical research is to determine and pursue new tracks toward better treatment options for RA. Recent ethnopharmacological investigations revealed that traditional herbal remedies are the most preferred modality of complementary and alternative medicine (CAM). However, several ayurvedic modes of treatment and formulations for RA are not well studied or documented in the Indian traditional system of medicine. This directed us to develop an integrated database, RAACFDb (acronym: Rheumatoid Arthritis Ayurvedic Classical Formulations Database), by consolidating data from the repository of Vedic Samhita - The Ayurveda, so that the available formulation information can be retrieved easily. Literature data were gathered using several search engines and from ayurvedic practitioners for loading information into the database. In order to represent the collected information about classical ayurvedic formulations, an integrated database was constructed and implemented on a MySQL and PHP back-end. The database describes all the ayurvedic classical formulations for the treatment of rheumatoid arthritis, including composition, usage, plant parts used, active ingredients present in the composition and their structures. The prime objective is to locate ayurvedic formulations proven to be quite successful and highly effective among patients, with reduced side effects. The database (freely available at www.beta.vit.ac.in/raacfdb/index.html) hopefully enables easy access for clinical researchers and students to discover novel leads with reduced side effects. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. The Unified Database for BM@N experiment data handling

    Science.gov (United States)

    Gertsenberger, Konstantin; Rogachevsky, Oleg

    2018-04-01

The article describes the developed Unified Database, designed as a comprehensive relational data storage for the BM@N experiment at the Joint Institute for Nuclear Research in Dubna. The BM@N experiment, one of the main elements of the first stage of the NICA project, is a fixed-target experiment at extracted Nuclotron beams of the Laboratory of High Energy Physics (LHEP JINR). The structure and purposes of the BM@N setup are briefly presented. The article considers the scheme of the Unified Database, its attributes and implemented features in detail. The use of the developed BM@N database provides correct multi-user access to actual information of the experiment for data processing. It stores information on the experiment runs, detectors and their geometries, and the different configuration, calibration and algorithm parameters used in offline data processing. User interfaces, an important part of any database, are also presented.

  6. IAEA Post Irradiation Examination Facilities Database

    International Nuclear Information System (INIS)

    Jenssen, Haakon; Blanc, J.Y.; Dobuisson, P.; Manzel, R.; Egorov, A.A.; Golovanov, V.; Souslov, D.

    2005-01-01

    The number of hot cells in the world in which post irradiation examination (PIE) can be performed has diminished during the last few decades. This creates problems for countries that have nuclear power plants and require PIE for surveillance, safety and fuel development. With this in mind, the IAEA initiated the issue of a catalogue within the framework of a coordinated research program (CRP), started in 1992 and completed in 1995, under the title of ''Examination and Documentation Methodology for Water Reactor Fuel (ED-WARF-II)''. Within this program, a group of technical consultants prepared a questionnaire to be completed by relevant laboratories. From these questionnaires a catalogue was assembled. The catalogue lists the laboratories and PIE possibilities worldwide in order to make it more convenient to arrange and perform contractual PIE within hot cells on water reactor fuels and core components, e.g. structural and absorber materials. This catalogue was published as working material in the Agency in 1996. During 2002 and 2003, the catalogue was converted to a database and updated through questionnaires to the laboratories in the Member States of the Agency. This activity was recommended by the IAEA Technical Working Group on Water Reactor Fuel Performance and Technology (TWGFPT) at its plenary meeting in April 2001. The database consists of five main areas about PIE facilities: acceptance criteria for irradiated components; cell characteristics; PIE techniques; refabrication/instrumentation capabilities; and storage and conditioning capabilities. The content of the database represents the status of the listed laboratories as of 2003. With the database utilizing a uniform format for all laboratories and details of technique, it is hoped that the IAEA Member States will be able to use this catalogue to select laboratories most relevant to their particular needs. The database can also be used to compare the PIE capabilities worldwide with current and future

  7. DataBase on Demand

    International Nuclear Information System (INIS)

    Aparicio, R Gaspar; Gomez, D; Wojcik, D; Coz, I Coterillo

    2012-01-01

At CERN a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle-based database services. Database on Demand (DBoD) empowers the user to perform certain actions that had traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g. presently the open community version of MySQL and a single-instance Oracle database server. This article describes the technology approach taken to face this challenge, the service level agreement (SLA) that the project provides, and an evolution of possible scenarios.

  8. Knowledge discovery in variant databases using inductive logic programming.

    Science.gov (United States)

    Nguyen, Hoan; Luu, Tien-Dao; Poch, Olivier; Thompson, Julie D

    2013-01-01

    Understanding the effects of genetic variation on the phenotype of an individual is a major goal of biomedical research, especially for the development of diagnostics and effective therapeutic solutions. In this work, we describe the use of a recent knowledge discovery from database (KDD) approach using inductive logic programming (ILP) to automatically extract knowledge about human monogenic diseases. We extracted background knowledge from MSV3d, a database of all human missense variants mapped to 3D protein structure. In this study, we identified 8,117 mutations in 805 proteins with known three-dimensional structures that were known to be involved in human monogenic disease. Our results help to improve our understanding of the relationships between structural, functional or evolutionary features and deleterious mutations. Our inferred rules can also be applied to predict the impact of any single amino acid replacement on the function of a protein. The interpretable rules are available at http://decrypthon.igbmc.fr/kd4v/.

  9. Full Data of Yeast Interacting Proteins Database (Original Version) - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Full Data of Yeast Interacting Proteins Database (Original Version). Data name: Full Data of Yeast Interacting Proteins Database (Original Version). DOI: 10.18908/lsdba.nbdc00742-004. Description of data contents: the entire data in the Yeast Interacting Proteins Database and their interactions. Several sources were used, including YPD (Yeast Proteome Database) and the systematic names in the SGD (Saccharomyces Genome Database; http://www.yeastgenome.org/).

  10. Harvesting Covert Networks: The Case Study of the iMiner Database

    DEFF Research Database (Denmark)

    Memon, Nasrullah; Wiil, Uffe Kock; Alhajj, Reda

    2011-01-01

Data collected by intelligence agencies and government organisations is inaccessible to researchers. To counter this information scarcity, we designed and built a database of terrorist-related data and information by harvesting such data from publicly available authenticated websites. The database was incorporated in the iMiner prototype tool, which makes use of investigative data mining techniques to analyse data. This paper will present the developed framework along with the form and structure of the terrorist data in the database. Selected cases will be referenced to highlight the effectiveness of the iMiner tool.

  11. Analysis of functionality free CASE-tools databases design

    Directory of Open Access Journals (Sweden)

    A. V. Gavrilov

    2016-01-01

Full Text Available The introduction of database design CASE-technologies into the educational process requires significant costs for the institution to purchase the software. A possible solution could be the use of free software peers. At the same time, this kind of substitution should be based on a comparison of the functional characteristics and features of operation of these programs. The purpose of the article is a review of free and non-profit CASE-tools for database design, as well as their classification on the basis of an analysis of their functionality. When writing this article, materials from the official websites of the tool developers were used. Evaluation of the functional characteristics of CASE-tools for database design was made exclusively empirically, through direct work with the software products. Analysis of the tools' functionality allows two categories of CASE-tools for database design to be distinguished. The first category includes systems with a basic set of features and tools. The most important basic functions of these systems are: management of connections to database servers, visual tools to create and modify database objects (tables, views, triggers, procedures), the ability to enter and edit data in table mode, user and privilege management tools, an SQL-code editor, and means of exporting/importing data. CASE-systems in the first category can be used to design and develop simple databases and manage data, as well as to administer a database server. A distinctive feature of the second category of CASE-tools for database design (full-featured systems) is the presence of a visual designer, allowing the construction of the database model and automatic creation of the database on the server based on this model. CASE-systems in this category can be used for the design and development of databases of any structural complexity, as well as for database server administration. The article concluded that the

  12. Energy Consumption Database

    Science.gov (United States)

Consumption Database: The California Energy Commission has created this on-line database for informal reporting of energy consumption classifications. The database also provides easy downloading of energy consumption data into Microsoft Excel (XLSX

  13. Using a Semi-Realistic Database to Support a Database Course

    Science.gov (United States)

    Yue, Kwok-Bun

    2013-01-01

    A common problem for university relational database courses is to construct effective databases for instructions and assignments. Highly simplified "toy" databases are easily available for teaching, learning, and practicing. However, they do not reflect the complexity and practical considerations that students encounter in real-world…

  14. PDTD: a web-accessible protein database for drug target identification

    Directory of Open Access Journals (Sweden)

    Gao Zhenting

    2008-02-01

Full Text Available Abstract Background Target identification is important for modern drug discovery. With the advances in the development of molecular docking, potential binding proteins may be discovered by docking a small molecule to a repository of proteins with three-dimensional (3D) structures. To complete this task, a reverse docking program and a drug target database with 3D structures are necessary. To this end, we have developed a web server tool, TarFisDock (Target Fishing Docking; http://www.dddc.ac.cn/tarfisdock), which has been used widely by others. Recently, we have constructed a protein target database, the Potential Drug Target Database (PDTD), and have integrated PDTD with TarFisDock. This combination aims to assist target identification and validation. Description PDTD is a web-accessible protein database for in silico target identification. It currently contains >1100 protein entries with 3D structures presented in the Protein Data Bank. The data are extracted from the literature and several online databases such as TTD, DrugBank and Thomson Pharma. The database covers diverse information on >830 known or potential drug targets, including protein and active site structures in both PDB and mol2 formats, related diseases, biological functions as well as associated regulating (signaling) pathways. Each target is categorized by both nosology and biochemical function. PDTD supports keyword search, e.g. by PDB ID, target name, or disease name. Data sets generated by PDTD can be viewed with molecular visualization tool plug-ins and can also be downloaded freely. Remarkably, PDTD is specially designed for target identification. In conjunction with TarFisDock, PDTD can be used to identify binding proteins for small molecules. The results can be downloaded in the form of a mol2 file with the binding pose of the probe compound and a list of potential binding targets according to their ranking scores. Conclusion PDTD serves as a comprehensive and

  15. Planned and ongoing projects (pop) database: development and results.

    Science.gov (United States)

    Wild, Claudia; Erdös, Judit; Warmuth, Marisa; Hinterreiter, Gerda; Krämer, Peter; Chalon, Patrice

    2014-11-01

    The aim of this study was to present the development, structure and results of a database on planned and ongoing health technology assessment (HTA) projects (POP Database) in Europe. The POP Database (POP DB) was set up in an iterative process from a basic Excel sheet to a multifunctional electronic online database. The functionalities, such as the search terminology, the procedures to fill and update the database, the access rules to enter the database, as well as the maintenance roles, were defined in a multistep participatory feedback loop with EUnetHTA Partners. The POP Database has become an online database that hosts not only the titles and MeSH categorizations, but also some basic information on status and contact details about the listed projects of EUnetHTA Partners. Currently, it stores more than 1,200 planned, ongoing or recently published projects of forty-three EUnetHTA Partners from twenty-four countries. Because the POP Database aims to facilitate collaboration, it also provides a matching system to assist in identifying similar projects. Overall, more than 10 percent of the projects in the database are identical both in terms of pathology (indication or disease) and technology (drug, medical device, intervention). In addition, approximately 30 percent of the projects are similar, meaning that they have at least some overlap in content. Although the POP DB is successful concerning regular updates of most national HTA agencies within EUnetHTA, little is known about its actual effects on collaborations in Europe. Moreover, many non-nationally nominated HTA producing agencies neither have access to the POP DB nor can share their projects.

  16. IAEA nuclear databases for applications

    International Nuclear Information System (INIS)

    Schwerer, Otto

    2003-01-01

    The Nuclear Data Section (NDS) of the International Atomic Energy Agency (IAEA) provides nuclear data services to scientists on a worldwide scale with particular emphasis on developing countries. More than 100 data libraries are made available cost-free by Internet, CD-ROM and other media. These databases are used for practically all areas of nuclear applications as well as basic research. An overview is given of the most important nuclear reaction and nuclear structure databases, such as EXFOR, CINDA, ENDF, NSR, ENSDF, NUDAT, and of selected special purpose libraries such as FENDL, RIPL, RNAL, the IAEA Photonuclear Data Library, and the IAEA charged-particle cross section database for medical radioisotope production. The NDS also coordinates two international nuclear data centre networks and is involved in data development activities (to create new or improve existing data libraries when the available data are inadequate) and in technology transfer to developing countries, e.g. through the installation and support of the mirror web site of the IAEA Nuclear Data Services at IPEN (operational since March 2000) and by organizing nuclear-data related workshops. By encouraging their participation in IAEA Co-ordinated Research Projects and also by compiling their experimental results in databases such as EXFOR, the NDS helps to make developing countries' contributions to nuclear science visible and conveniently available. The web address of the IAEA Nuclear Data Services is http://www.nds.iaea.org and the NDS mirror service at IPEN (Brasil) can be accessed at http://www.nds.ipen.br/ (author)

  17. Fire test database

    International Nuclear Information System (INIS)

    Lee, J.A.

    1989-01-01

    This paper describes a project recently completed for EPRI by Impell. The purpose of the project was to develop a reference database of fire tests performed on non-typical fire rated assemblies. The database is designed for use by utility fire protection engineers to locate test reports for power plant fire rated assemblies. As utilities prepare to respond to Information Notice 88-04, the database will identify utilities, vendors or manufacturers who have specific fire test data. The database contains fire test report summaries for 729 tested configurations. For each summary, a contact is identified from whom a copy of the complete fire test report can be obtained. Five types of configurations are included: doors, dampers, seals, wraps and walls. The database is computerized. One version for IBM; one for Mac. Each database is accessed through user-friendly software which allows adding, deleting, browsing, etc. through the database. There are five major database files. One each for the five types of tested configurations. The contents of each provides significant information regarding the test method and the physical attributes of the tested configuration. 3 figs

  18. Artificial Radionuclides Database in the Pacific Ocean: HAM Database

    Directory of Open Access Journals (Sweden)

    Michio Aoyama

    2004-01-01

Full Text Available The database “Historical Artificial Radionuclides in the Pacific Ocean and its Marginal Seas”, or HAM database, has been created. The database includes 90Sr, 137Cs, and 239,240Pu concentration data from the seawater of the Pacific Ocean and its marginal seas, with measurements from the sea surface to the bottom. The data in the HAM database were collected from about 90 literature citations, which include published papers; annual reports by the Hydrographic Department, Maritime Safety Agency, Japan; and unpublished data provided by individuals. The concentration data for 90Sr, 137Cs, and 239,240Pu cover the period 1957–1998. The present HAM database includes 7737 records for 137Cs concentration data, 3972 records for 90Sr concentration data, and 2666 records for 239,240Pu concentration data. The spatial distribution of sampling stations in the HAM database is heterogeneous; namely, more than 80% of the data for each radionuclide is from the Pacific Ocean and the Sea of Japan, while a relatively small portion of data is from the South Pacific. The HAM database will allow us to use these radionuclides as significant chemical tracers for oceanographic study as well as for the assessment of the environmental effects of anthropogenic radionuclides over these 5 decades. Furthermore, these radionuclides can be used to verify oceanic general circulation models on the time scale of several decades.

  19. PROXiMATE: a database of mutant protein-protein complex thermodynamics and kinetics.

    Science.gov (United States)

    Jemimah, Sherlyn; Yugandhar, K; Michael Gromiha, M

    2017-09-01

    We have developed PROXiMATE, a database of thermodynamic data for more than 6000 missense mutations in 174 heterodimeric protein-protein complexes, supplemented with interaction network data from STRING database, solvent accessibility, sequence, structural and functional information, experimental conditions and literature information. Additional features include complex structure visualization, search and display options, download options and a provision for users to upload their data. The database is freely available at http://www.iitm.ac.in/bioinfo/PROXiMATE/ . The website is implemented in Python, and supports recent versions of major browsers such as IE10, Firefox, Chrome and Opera. gromiha@iitm.ac.in. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  20. Databases and their application

    NARCIS (Netherlands)

    Grimm, E.C.; Bradshaw, R.H.W; Brewer, S.; Flantua, S.; Giesecke, T.; Lézine, A.M.; Takahara, H.; Williams, J.W.,Jr; Elias, S.A.; Mock, C.J.

    2013-01-01

    During the past 20 years, several pollen database cooperatives have been established. These databases are now constituent databases of the Neotoma Paleoecology Database, a public domain, multiproxy, relational database designed for Quaternary-Pliocene fossil data and modern surface samples. The

  1. CCDB: a curated database of genes involved in cervix cancer.

    Science.gov (United States)

    Agarwal, Subhash M; Raghav, Dhwani; Singh, Harinder; Raghava, G P S

    2011-01-01

The Cervical Cancer gene DataBase (CCDB, http://crdd.osdd.net/raghava/ccdb) is a manually curated catalog of experimentally validated genes that are thought, or are known, to be involved in the different stages of cervical carcinogenesis. In spite of the large population of women presently affected by this malignancy, no database previously existed that catalogs information on genes associated with cervical cancer. Therefore, we have compiled 537 genes in CCDB that are linked with cervical cancer causation processes such as methylation, gene amplification, mutation, polymorphism and change in expression level, as evident from the published literature. Each record contains gene details such as architecture (exon-intron structure), location, function, sequences (mRNA/CDS/protein), ontology, interacting partners, homology to other eukaryotic genomes, structure and links to other public databases, thus augmenting CCDB with external data. Also, manually curated literature references have been provided to support the inclusion of each gene in the database and to establish its association with cervix cancer. In addition, CCDB provides information on microRNAs altered in cervical cancer, as well as a search facility for querying, several browse options and an online tool for sequence similarity search, thereby providing researchers with easy access to the latest information on genes involved in cervix cancer.

  2. Multimodality medical image database for temporal lobe epilepsy

    Science.gov (United States)

    Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid; Fotouhi, Farshad A.; Elisevich, Kost

    2003-05-01

This paper presents the development of a human brain multi-modality database for surgical candidacy determination in temporal lobe epilepsy. The focus of the paper is on content-based image management, navigation and retrieval. Several medical image-processing methods, including our newly developed segmentation method, are utilized for information extraction/correlation and indexing. The input data includes T1-, T2-weighted and FLAIR MRI and ictal/interictal SPECT modalities, with associated clinical data and EEG data analysis. The database can answer queries regarding issues such as the correlation between an attribute X of an entity Y and the outcome of temporal lobe epilepsy surgery. The entity Y can be a brain anatomical structure such as the hippocampus. The attribute X can be either a functionality feature of the anatomical structure Y calculated with SPECT modalities, such as signal average, or a volumetric/morphological feature of the entity Y, such as volume or average curvature. The outcome of the surgery can be any surgical assessment, such as the non-verbal Wechsler memory quotient. A determination is made regarding surgical candidacy by analysis of both textual and image data. The current database system suggests a surgical determination for cases with a relatively small hippocampus and a high average FLAIR signal intensity within the hippocampus. This indication matches the neurosurgeons' expectations/observations. Moreover, as the database becomes more populated with patient profiles and individual surgical outcomes, data mining methods may discover partially invisible correlations between the contents of different modalities of data and the outcome of the surgery.
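The kind of candidacy query this abstract describes can be illustrated with a toy relational schema. Everything below (table name, columns, values) is invented for the sketch; it only mirrors the stated rule of a relatively small hippocampus combined with high FLAIR signal intensity.

```python
# Hedged sketch of a surgical-candidacy query over a toy patient table.
# Schema and data are hypothetical, not the paper's actual database.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE hippocampus (
    patient_id INTEGER PRIMARY KEY,
    volume_mm3 REAL,          -- volumetric feature from MRI segmentation
    flair_mean REAL,          -- mean FLAIR signal intensity in the structure
    memory_quotient REAL      -- post-surgical Wechsler memory quotient
)""")
rows = [(1, 2100.0, 95.0, 88.0), (2, 3400.0, 60.0, 102.0),
        (3, 1900.0, 99.0, 84.0), (4, 3600.0, 58.0, 105.0)]
con.executemany("INSERT INTO hippocampus VALUES (?, ?, ?, ?)", rows)

# Flag candidates with a relatively small hippocampus and high FLAIR signal,
# mirroring the candidacy indication sketched in the abstract.
candidates = con.execute("""
    SELECT patient_id FROM hippocampus
    WHERE volume_mm3 < (SELECT AVG(volume_mm3) FROM hippocampus)
      AND flair_mean > (SELECT AVG(flair_mean) FROM hippocampus)
    ORDER BY patient_id
""").fetchall()
print([pid for (pid,) in candidates])  # → [1, 3]
```

A production system would of course compute these features from the imaging pipeline rather than store hand-entered values.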

  3. ADILE: Architecture of a database-supported learning environment

    NARCIS (Netherlands)

    Hiddink, G.W.

    2001-01-01

This article proposes an architecture for distributed learning environments that use databases to store learning material. As the layout of learning material can inhibit reuse, the architecture implements the notion of "separation of layout and structure" using XML technology. Also, the

  4. Efficient Similarity Search Using the Earth Mover's Distance for Large Multimedia Databases

    DEFF Research Database (Denmark)

    Assent, Ira; Wichterich, Marc; Meisen, Tobias

    2008-01-01

Multimedia similarity search in large databases requires efficient query processing. The Earth mover's distance, introduced in computer vision, is successfully used as a similarity model in a number of small-scale applications. Its computational complexity hindered its adoption in large multimedia databases. We enable directly indexing the Earth mover's distance in structures such as the R-tree and the VA-file by providing the accurate 'MinDist' function to any bounding rectangle in the index. We exploit the computational structure of the new MinDist to derive a new lower bound for the EMD Min...
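The filter-and-refine idea behind lower-bounding the EMD can be sketched for the 1D case, where the exact EMD has a closed form and the classic centroid distance is a provable lower bound. This is a hedged illustration of the general principle, not the paper's MinDist construction for index bounding rectangles; histograms and the threshold are invented.

```python
# Filter-and-refine with a cheap EMD lower bound: candidates whose lower bound
# already exceeds the query threshold are pruned without computing the exact EMD.
import numpy as np

def emd_1d(p, q):
    """Exact EMD between two normalized 1D histograms on unit-spaced bins."""
    return float(np.abs(np.cumsum(p - q)).sum())

def centroid_lower_bound(p, q):
    """Centroid distance: a valid lower bound for EMD on equal-mass histograms."""
    bins = np.arange(len(p))
    return float(abs((bins * p).sum() - (bins * q).sum()))

rng = np.random.default_rng(1)
query = rng.random(16); query /= query.sum()
database = [h / h.sum() for h in rng.random((200, 16))]

threshold = 0.5
pruned, survivors = 0, []
for h in database:
    if centroid_lower_bound(query, h) > threshold:
        pruned += 1                      # safe to skip: true EMD is at least the bound
    elif emd_1d(query, h) <= threshold:
        survivors.append(h)

# The lower bound never exceeds the exact distance, so no true match is lost.
assert all(centroid_lower_bound(query, h) <= emd_1d(query, h) + 1e-9 for h in database)
print(f"pruned {pruned} of {len(database)} candidates; {len(survivors)} matches")
```

The same contract (lower bound ≤ exact distance) is what lets an index such as an R-tree discard whole subtrees via a MinDist on bounding regions.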

  5. LCGbase: A Comprehensive Database for Lineage-Based Co-regulated Genes.

    Science.gov (United States)

    Wang, Dapeng; Zhang, Yubin; Fan, Zhonghua; Liu, Guiming; Yu, Jun

    2012-01-01

Animal genes of different lineages, such as vertebrates and arthropods, are well-organized and blended into dynamic chromosomal structures that represent a primary regulatory mechanism for body development and cellular differentiation. The majority of genes in a genome are actually clustered; these clusters are evolutionarily stable to different extents and biologically meaningful when evaluated among genomes within and across lineages. Until now, many questions concerning gene organization, such as what the minimal number of genes in a cluster is and what driving force leads to gene co-regulation, remain to be addressed. Here, we provide a user-friendly database, LCGbase (a comprehensive database for lineage-based co-regulated genes), hosting information on the evolutionary dynamics of gene clustering and ordering within animal kingdoms in two different lineages: vertebrates and arthropods. The database is constructed on a web-based Linux-Apache-MySQL-PHP framework with an effective interactive user-inquiry service. Compared to other gene annotation databases with similar purposes, our database has three comprehensible advantages. First, our database is inclusive, including all high-quality genome assemblies of vertebrates and representative arthropod species. Second, it is human-centric, since we map all gene clusters from other genomes, in an order of lineage-ranks (such as primates, mammals, warm-blooded, and reptiles), onto the human genome, and build the database up from well-defined gene pairs (a minimal cluster where the two adjacent genes are oriented as co-directional, convergent, or divergent pairs) to large gene clusters. Furthermore, users can search for any adjacent genes and their detailed annotations. Third, the database provides flexible parameter definitions, such as the distance of transcription start sites between two adjacent genes, which is extendable to genes flanking the cluster across species. We also provide useful tools for sequence alignment, gene

  6. Database Optimizing Services

    Directory of Open Access Journals (Sweden)

    Adrian GHENCEA

    2010-12-01

Full Text Available Almost every organization has a database at its centre. The database provides support for conducting different activities, whether it is production, sales and marketing or internal operations. Every day, a database is accessed for help in strategic decisions. Meeting such needs therefore requires high-quality security and availability. Those needs can be met using a DBMS (Database Management System), which is, in fact, software for a database. Technically speaking, it is software which uses a standard method of cataloguing, recovering, and running different data queries. A DBMS manages the input data, organizes it, and provides ways of modifying or extracting the data for its users or other programs. Managing the database is an operation that requires periodic updates, optimization and monitoring.

  7. Update History of This Database - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available RED Update History of This Database. Date / Update contents: 2015/12/21 Rice Expression Database English archive site is opened. 2000/10/1 Rice Expression Database ( http://red.dna.affrc.go.jp/RED/ ) is opened.

  8. The Use of Exhaustive Micro-Data Firm Databases for Economic Geography: The Issues of Geocoding and Usability in the Case of the Amadeus Database

    Directory of Open Access Journals (Sweden)

    Moritz Lennert

    2015-01-01

    Full Text Available Economic geography has begun to explore the options offered by micro-data. New databases have become available, and new techniques and an increase in computer power allow their treatment. However, two major issues impede the use of these datasets: the lack of geocoded spatial locations and the lack of exhaustive coverage. In this article, I explore the possibilities of using large micro-scale firm databases for economic geography in Europe. I show that the current evolution in European official spatial data dissemination allows for geocoding of such databases using means that are accessible to researchers with minimal programming knowledge. For the specific case of the Amadeus database of Bureau Van Dijk, I show that its limitations in terms of coverage have to be taken into account, but do not hinder its use for analysis. The resulting maps show how the data allow one to go further than classic databases such as the Eurostat Structural Business Statistics.
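    The geocoding step described above can be sketched as a simple join of firm records against a lookup of area centroids. This is an illustrative stand-in (all names, codes, and coordinates are invented), not the article's actual workflow:

```python
# Hypothetical lookup: postal code -> (lat, lon) centroid, e.g. derived from
# openly published official spatial data.
postal_centroids = {
    "1000": (50.8503, 4.3517),
    "75001": (48.8606, 2.3376),
}

# Hypothetical firm records carrying only an address-level field.
firms = [
    {"name": "Firm A", "postal_code": "1000"},
    {"name": "Firm B", "postal_code": "75001"},
    {"name": "Firm C", "postal_code": "99999"},  # no match -> stays ungeocoded
]

# Attach coordinates where the postal code resolves.
for firm in firms:
    firm["coords"] = postal_centroids.get(firm["postal_code"])

geocoded = [f for f in firms if f["coords"] is not None]
print(len(geocoded), "of", len(firms), "firms geocoded")  # 2 of 3
```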

  9. Computational 2D Materials Database

    DEFF Research Database (Denmark)

    Rasmussen, Filip Anselm; Thygesen, Kristian Sommer

    2015-01-01

    We present a comprehensive first-principles study of the electronic structure of 51 semiconducting monolayer transition-metal dichalcogenides and -oxides in the 2H and 1T hexagonal phases. The quasiparticle (QP) band structures with spin-orbit coupling are calculated in the G(0)W(0) approximation and used as input to a 2D hydrogenic model to estimate exciton binding energies, and comparison is made with different density functional theory descriptions. Pitfalls related to the convergence of GW calculations for two-dimensional (2D) materials are discussed together with possible solutions. The monolayer band edge positions relative to vacuum are used to estimate the band alignment. Throughout the paper we focus on trends and correlations in the electronic structure rather than detailed analysis of specific materials. All the computed data is available in an open database.
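    For orientation, the strictly two-dimensional hydrogenic model mentioned above has the textbook energy levels E_n = mu·Ry/(eps²·(n − 1/2)²), so the ground-state binding energy is four times the 3D Rydberg scaled by reduced mass and screening. The sketch below evaluates that textbook formula; the parameter values are illustrative, not taken from the paper:

```python
RYDBERG_EV = 13.605693  # Rydberg energy in eV

def exciton_binding_2d(mu, eps, n=1):
    """Binding energy (eV) of level n of a strictly 2D hydrogenic exciton.

    mu: reduced exciton mass in units of the free-electron mass;
    eps: effective dielectric constant of the environment.
    """
    return mu * RYDBERG_EV / (eps**2 * (n - 0.5)**2)

# With mu = eps = 1 the 2D ground state binds at 4 Ry (~54.4 eV); plausible
# monolayer-scale parameters (mu ~ 0.25, eps ~ 10) give of order 0.1 eV.
print(round(exciton_binding_2d(1.0, 1.0), 1))
print(round(exciton_binding_2d(0.25, 10.0), 3))
```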

  10. GRIP Database original data - GRIPDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available GRIPDB GRIP Database original data. Data name: GRIP Database original data. DOI: 10.18908/lsdba.nbdc01665-006. Description of data contents: it consists of data tables and sequences. Data file: gripdb_original_data.zip. File URL: ftp://ftp.biosciencedbc.jp/archive/gripdb/LATEST/gri...

  11. 5S ribosomal RNA database Y2K.

    Science.gov (United States)

    Szymanski, M; Barciszewska, M Z; Barciszewski, J; Erdmann, V A

    2000-01-01

    This paper presents the updated version (Y2K) of the database of ribosomal 5S ribonucleic acids (5S rRNA) and their genes (5S rDNA), http://rose.man.poznan.pl/5SData/index.html. This edition of the database contains 1985 primary structures of 5S rRNA and 5S rDNA. They include 60 archaebacterial, 470 eubacterial, 63 plastid, nine mitochondrial and 1383 eukaryotic sequences. The nucleotide sequences of the 5S rRNAs or 5S rDNAs are divided according to the taxonomic position of the source organisms.

  12. Limnological database for Par Pond: 1959 to 1980

    International Nuclear Information System (INIS)

    Tilly, L.J.

    1981-03-01

    A limnological database for Par Pond, a cooling reservoir for hot reactor effluent water at the Savannah River Plant, is described. The data are derived from a combination of research and monitoring efforts on Par Pond since 1959. The approximately 24,000-byte database provides water quality, primary productivity, and flow data from a number of different stations, depths, and times during the 22-year history of the Par Pond impoundment. The data have been organized to permit an interpretation of the effects of twenty years of cooling system operations on the structure and function of an aquatic ecosystem

  13. Super Natural II--a database of natural products.

    Science.gov (United States)

    Banerjee, Priyanka; Erehman, Jevgeni; Gohlke, Björn-Oliver; Wilhelm, Thomas; Preissner, Robert; Dunkel, Mathias

    2015-01-01

    Natural products play a significant role in drug discovery and development. Many topological pharmacophore patterns are common between natural products and commercial drugs. A better understanding of the specific physicochemical and structural features of natural products is important for corresponding drug development. Several encyclopedias of natural compounds have been compiled, but the information remains scattered or not freely available. The first version of the Supernatural database, containing ∼50,000 compounds, was published in 2006 to address these challenges. Here we present a new, updated and expanded version of the natural product database, Super Natural II (http://bioinformatics.charite.de/supernatural), comprising ∼326,000 molecules. It provides all corresponding 2D structures, the most important structural and physicochemical properties, the predicted toxicity class for ∼170,000 compounds and vendor information for the vast majority of compounds. The new version allows a template-based search for similar compounds as well as searches by compound name, vendor, specific physical properties or any substructure. Super Natural II also provides information about the pathways associated with synthesis and degradation of the natural products, as well as their mechanism of action with respect to structurally similar drugs and their target proteins. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
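    Template-based similarity searches of the kind the database offers are typically built on fingerprint comparison with the Tanimoto coefficient. Below is a deliberately toy stand-in that uses character-bigram sets of SMILES strings; real systems use proper chemical fingerprints (e.g. via RDKit), and the molecules here are just well-known examples:

```python
def bigrams(smiles):
    """Crude 'fingerprint': the set of character bigrams of a SMILES string."""
    return {smiles[i:i + 2] for i in range(len(smiles) - 1)}

def tanimoto(a, b):
    """Tanimoto coefficient |A & B| / |A | B| between two fingerprints."""
    fa, fb = bigrams(a), bigrams(b)
    return len(fa & fb) / len(fa | fb)

aspirin = "CC(=O)OC1=CC=CC=C1C(=O)O"
salicylic_acid = "OC(=O)C1=CC=CC=C1O"
ethanol = "CCO"

print(round(tanimoto(aspirin, salicylic_acid), 2))  # structurally related pair
print(round(tanimoto(aspirin, ethanol), 2))         # unrelated pair
```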

  14. Database Aspects of Location-Based Services

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard

    2004-01-01

    Adopting a data management perspective on location-based services, this chapter explores central challenges to data management posed by location-based services. Because service users typically travel in, and are constrained to, transportation infrastructures, such structures must be represented in the databases underlying high-quality services. Several integrated representations, which capture different aspects of the same infrastructure, are needed. Further, all other content that can be related to geographical space must be integrated with the infrastructure representations. The chapter describes the general concepts underlying one approach to data modeling for location-based services. The chapter also covers techniques that are needed to keep a database for location-based services up to date with the reality it models. As part of this, caching is touched upon briefly. The notion of linear referencing......

  15. Flexible network reconstruction from relational databases with Cytoscape and CytoSQL.

    Science.gov (United States)

    Laukens, Kris; Hollunder, Jens; Dang, Thanh Hai; De Jaeger, Geert; Kuiper, Martin; Witters, Erwin; Verschoren, Alain; Van Leemput, Koenraad

    2010-07-01

    Molecular interaction networks can be efficiently studied using network visualization software such as Cytoscape. The relevant nodes, edges and their attributes can be imported into Cytoscape in various file formats, or directly from external databases through specialized third-party plugins. However, molecular data are often stored in relational databases with their own specific structure, for which dedicated plugins do not exist. Therefore, a more generic solution is presented. A new Cytoscape plugin, 'CytoSQL', is developed to connect Cytoscape to any relational database. It allows SQL ('Structured Query Language') queries to be launched from within Cytoscape, with the option to inject node or edge features of an existing network as SQL arguments, and to convert the retrieved data to Cytoscape network components. Supported by a set of case studies, we demonstrate the flexibility and the power of the CytoSQL plugin in converting specific data subsets into meaningful network representations. CytoSQL offers a unified approach to let Cytoscape interact with relational databases. Thanks to the power of the SQL syntax, this tool can rapidly generate and enrich networks according to very complex criteria. The plugin is available at http://www.ptools.ua.ac.be/CytoSQL.
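    The plugin's core idea, running a parameterized SQL query and mapping the returned rows onto network components, can be sketched with Python's built-in sqlite3. The table and scores are invented; this is not CytoSQL's code:

```python
import sqlite3

# Invented relational store of pairwise interactions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE interactions (source TEXT, target TEXT, score REAL)")
conn.executemany("INSERT INTO interactions VALUES (?, ?, ?)",
                 [("geneA", "geneB", 0.9), ("geneB", "geneC", 0.7),
                  ("geneA", "geneC", 0.2)])

# Inject a feature as an SQL argument, analogous to injecting node/edge
# attributes of an existing network into the query.
min_score = 0.5
rows = conn.execute(
    "SELECT source, target, score FROM interactions WHERE score >= ?",
    (min_score,)).fetchall()

# Convert retrieved rows into network components: edges with attributes,
# and the set of nodes they touch.
edges = [(s, t, {"score": w}) for s, t, w in rows]
nodes = sorted({n for s, t, _ in edges for n in (s, t)})
print(nodes)       # ['geneA', 'geneB', 'geneC']
print(len(edges))  # 2
```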

  16. Update History of This Database - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available RPD Update History of This Database. Date / Update contents: 2016/02/02 Rice Proteome Database English archive site is opened. 2003/01/07 Rice Proteome Database ( http://gene64.dna.affrc.go.jp/RPD/ ) is opened.

  17. Secure Distributed Databases Using Cryptography

    OpenAIRE

    Ion IVAN; Cristian TOMA

    2006-01-01

    Computational encryption is used intensively by database management systems to ensure the privacy and integrity of information that is physically stored in files. The information is also sent over networks and replicated on different distributed systems. It is proved that a satisfying level of security is achieved if the rows and columns of tables are encrypted independently of the table or computer that sustains the data. Also, it is very important that the SQL - Structured Que...
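    The structural idea, rows and columns enciphered independently of the table that holds them, can be illustrated by keying each cell's keystream on its (row, column) position. This is a toy sketch of the scheme's shape only, NOT a secure cipher; a real system would use a vetted primitive such as AES from a maintained cryptography library:

```python
import hashlib

def keystream(master_key, row, col, length):
    """Derive a per-cell keystream from the master key and cell position.

    Toy construction (SHA-256 in counter mode); for illustration only.
    """
    out = b""
    counter = 0
    while len(out) < length:
        block = f"{master_key}:{row}:{col}:{counter}".encode()
        out += hashlib.sha256(block).digest()
        counter += 1
    return out[:length]

def encrypt_cell(master_key, row, col, plaintext):
    """XOR the plaintext bytes of one cell with its position-bound keystream."""
    ks = keystream(master_key, row, col, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

decrypt_cell = encrypt_cell  # XOR with the same keystream is its own inverse

ct = encrypt_cell("master-secret", 0, "name", b"Alice")
pt = decrypt_cell("master-secret", 0, "name", ct)
print(ct != b"Alice", pt == b"Alice")  # True True
```

Because the keystream depends on the cell position, the same value stored in a different row or column encrypts differently, which is the independence property the abstract describes.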

  18. Brasilia’s Database Administrators

    Directory of Open Access Journals (Sweden)

    Jane Adriana

    2016-06-01

    Full Text Available Database administration has gained an essential role in the management of new database technologies. Different data models are being created to support enormous data volumes, beyond the traditional relational database. These new models are called NoSQL (Not only SQL) databases. The adoption of best practices and procedures has become essential for the operation of database management systems. Thus, this paper investigates some of the techniques and tools used by database administrators. The study highlights features and particularities of database work within the area of Brasilia, the capital of Brazil. The results point to which new database management technologies are currently the most relevant, as well as the central issues in this area.

  19. GENISES: A GIS Database for the Yucca Mountain Site Characterization Project

    International Nuclear Information System (INIS)

    Beckett, J.

    1991-01-01

    This paper provides a general description of the Geographic Nodal Information Study and Evaluation System (GENISES) database design. The GENISES database is the Geographic Information System (GIS) component of the Yucca Mountain Site Characterization Project Technical Database (TDB). The GENISES database has been developed and is maintained by EG&G Energy Measurements, Inc., Las Vegas, NV (EG&G/EM). As part of the Yucca Mountain Project (YMP) Site Characterization Technical Data Management System, GENISES provides a repository for geographically oriented technical data. The primary objective of the GENISES database is to support the Yucca Mountain Site Characterization Project with an effective tool for describing, analyzing, and archiving geo-referenced data. The database design provides the maximum efficiency in input/output, data analysis, data management and information display. This paper provides the systematic approach or plan for the GENISES database design and operation. The paper also discusses the techniques used for data normalization, or the decomposition of complex data structures, as they apply to a GIS database. ARC/INFO and INGRES files are linked or joined by establishing "relate" fields through common attribute names. Thus, through these keys, ARC can access normalized INGRES files, greatly reducing redundancy and the size of the database.
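    The "relate"-field linkage described above is essentially a relational join through a shared key: spatial features stay in one normalized table, attribute records in another. A minimal sketch with sqlite3 standing in for the ARC/INFO-INGRES pairing (all names and values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Spatial side: one row per mapped feature (stand-in for the ARC/INFO layer).
CREATE TABLE features (feature_id TEXT PRIMARY KEY, x REAL, y REAL);
-- Attribute side: normalized records keyed by the same field (INGRES stand-in).
CREATE TABLE samples  (feature_id TEXT, analyte TEXT, value REAL);
INSERT INTO features VALUES ('BH-01', 551200.0, 4078500.0);
INSERT INTO samples  VALUES ('BH-01', 'uranium', 3.2), ('BH-01', 'thorium', 1.1);
""")

# The common attribute name acts as the relate field: each attribute record
# picks up its coordinates through the join, with no duplication in storage.
rows = conn.execute("""
    SELECT f.feature_id, f.x, f.y, s.analyte, s.value
    FROM features f JOIN samples s ON f.feature_id = s.feature_id
""").fetchall()
print(len(rows))  # 2
```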

  20. OAP- OFFICE AUTOMATION PILOT GRAPHICS DATABASE SYSTEM

    Science.gov (United States)

    Ackerson, T.

    1994-01-01

    The Office Automation Pilot (OAP) Graphics Database system offers the IBM PC user assistance in producing a wide variety of graphs and charts. OAP uses a convenient database system, called a chartbase, for creating and maintaining data associated with the charts, and twelve different graphics packages are available to the OAP user. Each of the graphics capabilities is accessed in a similar manner. The user chooses creation, revision, or chartbase/slide show maintenance options from an initial menu. The user may then enter or modify data displayed on a graphic chart. The cursor moves through the chart in a "circular" fashion to facilitate data entries and changes. Various "help" functions and on-screen instructions are available to aid the user. The user data is used to generate the graphics portion of the chart. Completed charts may be displayed in monotone or color, printed, plotted, or stored in the chartbase on the IBM PC. Once completed, the charts may be put in a vector format and plotted for color viewgraphs. The twelve graphics capabilities are divided into three groups: Forms, Structured Charts, and Block Diagrams. There are eight Forms available: 1) Bar/Line Charts, 2) Pie Charts, 3) Milestone Charts, 4) Resources Charts, 5) Earned Value Analysis Charts, 6) Progress/Effort Charts, 7) Travel/Training Charts, and 8) Trend Analysis Charts. There are three Structured Charts available: 1) Bullet Charts, 2) Organization Charts, and 3) Work Breakdown Structure (WBS) Charts. The Block Diagram available is an N x N Chart. Each graphics capability supports a chartbase. The OAP graphics database system provides the IBM PC user with an effective means of managing data which is best interpreted as a graphic display. The OAP graphics database system is written in IBM PASCAL 2.0 and assembler for interactive execution on an IBM PC or XT with at least 384K of memory, and a color graphics adapter and monitor. Printed charts require an Epson, IBM, OKIDATA, or HP Laser

  1. Development of comprehensive material performance database for nuclear applications

    International Nuclear Information System (INIS)

    Tsuji, Hirokazu; Yokoyama, Norio; Tsukada, Takashi; Nakajima, Hajime

    1993-01-01

    This paper introduces the present status of the comprehensive material performance database for nuclear applications, named the JAERI Material Performance Database (JMPD), and gives examples of its utilization. The JMPD has been developed at JAERI since 1986 with a view to utilizing various kinds of characteristics data of nuclear materials efficiently. The relational database management system PLANNER was employed, and supporting systems for data retrieval and output were expanded. In order to improve the user-friendliness of the retrieval system, menu-selection-type procedures have been developed so that knowledge of the system or the data structures is not required of end-users. As to utilization of the JMPD, two types of data analysis are described: (1) a series of statistical analyses was performed in order to estimate the design values of both the yield strength (Sy) and the tensile strength (Su) for aluminum alloys, which are widely used as structural materials for research reactors; (2) statistical analyses were performed on the cyclic crack growth rate data for nuclear pressure vessel steels, and comparisons were made of the variability and/or reproducibility of data obtained by ΔK-increasing and ΔK-constant type tests. (author)
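    The abstract does not state which statistical convention was used for the design values; one common choice for a lower-bound design value is "mean minus k sample standard deviations". A sketch under that assumption, with invented strength data:

```python
import statistics

def design_value(samples, k=3):
    """Lower-bound design value as mean - k * sample standard deviation.

    The k = 3 convention is an assumption for illustration, not necessarily
    the method used for the JMPD analyses.
    """
    return statistics.mean(samples) - k * statistics.stdev(samples)

# Invented yield-strength measurements (MPa) for an aluminum alloy.
yield_strength_mpa = [268.0, 275.0, 271.0, 266.0, 280.0, 273.0]
print(round(design_value(yield_strength_mpa), 1))
```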

  2. Development of the mouse cochlea database (MCD).

    Science.gov (United States)

    Santi, Peter A; Rapson, Ian; Voie, Arne

    2008-09-01

    The mouse cochlea database (MCD) provides an interactive, image database of the mouse cochlea for learning its anatomy and data mining of its resources. The MCD website is hosted on a centrally maintained, high-speed server at the following URL: (http://mousecochlea.umn.edu). The MCD contains two types of image resources, serial 2D image stacks and 3D reconstructions of cochlear structures. Complete image stacks of the cochlea from two different mouse strains were obtained using orthogonal plane fluorescence optical microscopy (OPFOS). 2D images of the cochlea are presented on the MCD website as: viewable images within a stack, 2D atlas of the cochlea, orthogonal sections, and direct volume renderings combined with isosurface reconstructions. In order to assess cochlear structures quantitatively, "true" cross-sections of the scala media along the length of the basilar membrane were generated by virtual resectioning of a cochlea orthogonal to a cochlear structure, such as the centroid of the basilar membrane or the scala media. 3D images are presented on the MCD website as: direct volume renderings, movies, interactive QuickTime VRs, flythrough, and isosurface 3D reconstructions of different cochlear structures. 3D computer models can also be used for solid model fabrication by rapid prototyping and models from different cochleas can be combined to produce an average 3D model. The MCD is the first comprehensive image resource on the mouse cochlea and is a new paradigm for understanding the anatomy of the cochlea, and establishing morphometric parameters of cochlear structures in normal and mutant mice.

  3. Failure and Maintenance Analysis Using Web-Based Reliability Database System

    International Nuclear Information System (INIS)

    Hwang, Seok Won; Kim, Myoung Su; Seong, Ki Yeoul; Na, Jang Hwan; Jerng, Dong Wook

    2007-01-01

    Korea Hydro and Nuclear Power Company has launched the development of a database system for PSA and Maintenance Rule implementation. It focuses on the easy processing of raw data into a credible and useful database for the risk-informed environment of nuclear power plant operation and maintenance. Even though KHNP had recently completed the PSA for all domestic NPPs as a requirement of the severe accident mitigation strategy, the component failure data were only gathered for quantification purposes within the relevant project, so the data were not sufficient for the Living PSA or other generic purposes. Another reason to build a real-time database is the newly adopted Maintenance Rule, which requests the utility to continuously monitor plant risk based on its operation and maintenance performance. Furthermore, as one of the pre-conditions for Risk Informed Regulation and Application, the nuclear regulatory agency of Korea requests the development and management of a domestic database system. KHNP has been accumulating operation and maintenance data in the Enterprise Resource Planning (ERP) system since it first opened in July 2003. However, so far a systematic review has not been performed to apply the component failure and maintenance history to PSA and other reliability analyses. The data stored in PUMAS before the ERP system was introduced also need to be converted and managed in the new database structure and methodology. This reliability database system is a web-based interface on a UNIX server with an Oracle relational database. It is designed to be applicable to all domestic NPPs with a common database structure and web interfaces, so additional program development would not be necessary for data acquisition and processing in the near future. Categorization standards for systems and components have been implemented to analyze all domestic NPPs.
For example, SysCode (for a system code) and CpCode (for a component code) were newly

  4. THE ACTIVE FAULTS OF EURASIA DATABASE

    Directory of Open Access Journals (Sweden)

    D. M. Bachmanov

    2017-01-01

    Full Text Available This paper describes the technique used to create and maintain the Active Faults of Eurasia Database (AFED) based on the uniform format that ensures integrating the materials accumulated by many researchers, including the authors of the AFED. The AFED includes the data on more than 20 thousand objects: faults, fault zones and associated structural forms that show the signs of latest displacements in the Late Pleistocene and Holocene. The geographical coordinates are given for each object. The AFED scale is 1:500000; the demonstration scale is 1:1000000. For each object, the AFED shows two kinds of characteristics: justification attributes, and estimated attributes. The justification attributes inform the AFED user about an object: the object’s name; morphology; kinematics; the amplitudes of displacement for different periods of time; displacement rates estimated from the amplitudes; the age of the latest recorded signs of activity, seismicity and paleoseismicity; the relationship of the given objects with the parameters of crustal earthquakes; etc. The sources of information are listed in the AFED appendix. The estimated attributes are represented by the system of indices reflecting the fault kinematics according to the classification of the faults by types, as accepted in structural geology, and includes three ranks of the Late Quaternary movements and four degrees of reliability of identifying the structures as active ones. With reference to the indices, the objects can be compared with each other, considering any of the attributes, or with any other digitized information. The comparison can be performed by any GIS software. The AFED is an efficient tool for obtaining the information on the faults and solving general problems, such as thematic mapping, determining the parameters of modern geodynamic processes, estimating seismic and other geodynamic hazards, identifying the tectonic development trends in the Pliocene–Quaternary stage of

  5. HCVpro: Hepatitis C virus protein interaction database

    KAUST Repository

    Kwofie, Samuel K.

    2011-12-01

    It is essential to catalog characterized hepatitis C virus (HCV) protein-protein interaction (PPI) data and the associated plethora of vital functional information to augment the search for therapies, vaccines and diagnostic biomarkers. In furtherance of these goals, we have developed the hepatitis C virus protein interaction database (HCVpro) by integrating manually verified hepatitis C virus-virus and virus-human protein interactions curated from literature and databases. HCVpro is a comprehensive and integrated HCV-specific knowledgebase housing consolidated information on PPIs, functional genomics and molecular data obtained from a variety of virus databases (VirHostNet, VirusMint, HCVdb and euHCVdb), and from BIND and other relevant biology repositories. HCVpro is further populated with information on hepatocellular carcinoma (HCC) related genes that are mapped onto their encoded cellular proteins. Incorporated proteins have been mapped onto Gene Ontologies, canonical pathways, Online Mendelian Inheritance in Man (OMIM) and extensively cross-referenced to other essential annotations. The database is enriched with exhaustive reviews on structure and functions of HCV proteins, current state of drug and vaccine development and links to recommended journal articles. Users can query the database using specific protein identifiers (IDs), chromosomal locations of a gene, interaction detection methods, indexed PubMed sources as well as HCVpro, BIND and VirusMint IDs. The use of HCVpro is free and the resource can be accessed via http://apps.sanbi.ac.za/hcvpro/ or http://cbrc.kaust.edu.sa/hcvpro/. © 2011 Elsevier B.V.

  6. 3D DIGITAL MODEL DATABASE APPLIED TO CONSERVATION AND RESEARCH OF WOODEN CONSTRUCTION IN CHINA

    Directory of Open Access Journals (Sweden)

    Y. Zheng

    2013-07-01

    Full Text Available Protected by the Tai-Hang Mountains, Shanxi Province, located in north central China, is a highly prosperous, densely populated valley and considered to be one of the cradles of Chinese civilization. Its continuous habitation and rich culture have given rise to a large number of temple complexes and pavilions. Among these structures, 153 in the southern Shanxi area can be dated from as early as the Tang dynasty (618–907 C.E.) to the end of the Yuan dynasty (1279–1368 C.E.). The buildings are the best-preserved examples of wooden Chinese architecture in existence, exemplifying historic building technology and displaying highly intricate architectural decoration and detailing. They have survived war, earthquakes, and, in the last hundred years, neglect. In 2005, a decade-long conservation project was initiated by the State Administration of Cultural Heritage of China (SACH) to conserve and document these important buildings. The conservation process requires stabilization, conservation of important features, and, where necessary, partial dismantlement in order to replace unsound structural elements. The CHCC project team has developed a practical recording system that creates a record of all building components prior to and during the conservation process. Building on that, we are establishing a comprehensive database covering all 153 early buildings, through which we can easily enter, browse, and index information on the wooden construction, even down to component details. The database helps us carry out comparative studies of these wooden structures and provides important support for the continued conservation of these heritage buildings. For some of the most important wooden structures, we have established three-dimensional models. Connecting the database with the 3D digital models in ArcGIS, we have developed a 3D Digital Model Database for these cherished buildings. The 3D Digital Model Database helps us set up an integrate

  7. Adding Hierarchical Objects to Relational Database General-Purpose XML-Based Information Managements

    Science.gov (United States)

    Lin, Shu-Chun; Knight, Chris; La, Tracy; Maluf, David; Bell, David; Tran, Khai Peter; Gawdiak, Yuri

    2006-01-01

    NETMARK is a flexible, high-throughput software system for managing, storing, and rapidly searching unstructured and semi-structured documents. NETMARK transforms such documents from their original highly complex, constantly changing, heterogeneous data formats into well-structured, common data formats using Hypertext Markup Language (HTML) and/or Extensible Markup Language (XML). The software implements an object-relational database system that combines the best practices of the relational model utilizing Structured Query Language (SQL) with those of the object-oriented, semantic database model for creating complex data. In particular, NETMARK takes advantage of the Oracle 8i object-relational database model, using physical-address data types for very efficient keyword searches of records across both context and content. NETMARK also supports multiple international standards, such as WebDAV for drag-and-drop file management and SOAP for integrated information management using Web services. The document-organization and -searching capabilities afforded by NETMARK are likely to make this software attractive for use in disciplines as diverse as science, auditing, and law enforcement.
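    The transformation described above, from a semi-structured record to well-formed XML that can be stored and searched uniformly, can be sketched with the standard library (field names and values are invented; this is not NETMARK's code):

```python
import xml.etree.ElementTree as ET

# A hypothetical semi-structured record, e.g. parsed from a heterogeneous
# source document.
record = {"title": "Wind tunnel report", "author": "J. Doe", "status": "draft"}

# Render it as well-formed XML: one child element per field.
doc = ET.Element("document")
for field, value in record.items():
    child = ET.SubElement(doc, field)
    child.text = value

xml_text = ET.tostring(doc, encoding="unicode")
print(xml_text)
```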

  8. NABIC marker database: A molecular markers information network of agricultural crops.

    Science.gov (United States)

    Kim, Chang-Kug; Seol, Young-Joo; Lee, Dong-Jun; Jeong, In-Seon; Yoon, Ung-Han; Lee, Gang-Seob; Hahn, Jang-Ho; Park, Dong-Suk

    2013-01-01

    In 2013, the National Agricultural Biotechnology Information Center (NABIC) reconstructed its molecular marker database for useful genetic resources. The web-based marker database consists of three major functional categories: map viewer, RSN marker, and gene annotation. It provides 7250 marker locations, 3301 RSN marker properties, and 3280 molecular marker annotation records for agricultural plants. Each molecular marker entry provides information such as marker name, expressed sequence tag number, gene definition, and general marker information. This updated marker database provides useful information through a user-friendly web interface that assists in tracing new chromosome structures and positional gene functions using specific molecular markers. The database is available for free at http://nabic.rda.go.kr/gere/rice/molecularMarkers/

  9. NREL: U.S. Life Cycle Inventory Database - About the LCI Database Project

    Science.gov (United States)

    About the LCI Database Project: The U.S. Life Cycle Inventory (LCI) Database is a publicly available database that allows users to objectively review and compare analysis results that are based on a similar source of critically reviewed LCI data. NREL provides this central source of LCI data through its LCI Database Project.

  10. OxDBase: a database of oxygenases involved in biodegradation

    Directory of Open Access Journals (Sweden)

    Raghava Gajendra PS

    2009-04-01

    Full Text Available Abstract Background Oxygenases belong to the oxidoreductive group of enzymes (E.C. Class 1), which oxidize substrates by transferring oxygen from molecular oxygen (O2) and utilize FAD/NADH/NADPH as the co-substrate. Oxygenases can further be grouped into two categories, i.e. monooxygenases and dioxygenases, on the basis of the number of oxygen atoms used for oxidation. They play a key role in the metabolism of organic compounds by increasing their reactivity or water solubility or bringing about cleavage of the aromatic ring. Findings We compiled a database of biodegradative oxygenases (OxDBase) which provides a compilation of oxygenase data, sourced from primary literature, in the form of a web-accessible database. There are two separate search engines for searching the database, i.e. the monooxygenase and dioxygenase databases respectively. Each enzyme entry contains its common name and synonyms, the reaction in which the enzyme is involved, family and subfamily, structure and gene links, and literature citations. The entries are also linked to several external databases including BRENDA, KEGG, ENZYME and UM-BBD, providing wide background information. At present the database contains information on over 235 oxygenases, including both dioxygenases and monooxygenases. The database is freely available online at http://www.imtech.res.in/raghava/oxdbase/. Conclusion OxDBase is the first database dedicated only to oxygenases and provides comprehensive information about them. Given the importance of oxygenases in the chemical synthesis of drug intermediates and the oxidation of xenobiotic compounds, the OxDBase database would be a very useful tool in the fields of synthetic chemistry as well as bioremediation.

  11. CQL: a database in smart card for health care applications.

    Science.gov (United States)

    Paradinas, P C; Dufresnes, E; Vandewalle, J J

    1995-01-01

The CQL-Card is the first smart card in the world to use Database Management System (DBMS) concepts. The CQL-Card is particularly suited to a portable file in health applications where the information is required by many different partners, such as health insurance organizations, emergency services, and general practitioners. All the information required by these different partners can be shared under independent security mechanisms. Database engine functions are carried out by the card, which manages tables, views, and dictionaries. Medical information is stored in tables, and views are logical, dynamic subsets of tables. Owner-partners such as an MIS (Medical Information System) can grant privileges (select, insert, update, and delete on a table or view) to other partners. Furthermore, dictionaries are structures that contain the descriptions needed to adapt the card to different computer environments. Health information held in the CQL-Card is accessed using CQL (Card Query Language), a high-level database query language which is a subset of standard SQL (Structured Query Language). With this language, the CQL-Card can be easily integrated into Medical Information Systems.
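The core ideas above — tables holding the data, views as dynamic subsets, and per-partner privileges — can be sketched in a few lines. This is a minimal illustrative model, not the actual CQL API; all class, partner, and column names are invented.

```python
# Sketch of the CQL-Card concepts: tables, views as dynamic subsets of
# tables, and per-partner privileges.  Names are illustrative only.

class CardDB:
    def __init__(self):
        self.tables = {}   # table name -> list of row dicts
        self.views = {}    # view name -> (base table, row predicate)
        self.grants = {}   # (partner, object) -> set of privileges

    def create_table(self, name):
        self.tables[name] = []

    def create_view(self, name, table, predicate):
        # A view is not a copy: it is re-evaluated on every select.
        self.views[name] = (table, predicate)

    def grant(self, partner, obj, *privileges):
        self.grants.setdefault((partner, obj), set()).update(privileges)

    def _check(self, partner, obj, privilege):
        if privilege not in self.grants.get((partner, obj), set()):
            raise PermissionError(f"{partner} lacks {privilege} on {obj}")

    def insert(self, partner, table, row):
        self._check(partner, table, "insert")
        self.tables[table].append(row)

    def select(self, partner, obj):
        self._check(partner, obj, "select")
        if obj in self.views:
            table, predicate = self.views[obj]
            return [r for r in self.tables[table] if predicate(r)]
        return list(self.tables[obj])

db = CardDB()
db.create_table("medical")
db.grant("MIS", "medical", "select", "insert")
db.insert("MIS", "medical", {"type": "allergy", "value": "penicillin"})
db.insert("MIS", "medical", {"type": "history", "value": "appendectomy"})

# The emergency service sees only the allergy subset, via a view.
db.create_view("emergency_view", "medical", lambda r: r["type"] == "allergy")
db.grant("emergency", "emergency_view", "select")
print(db.select("emergency", "emergency_view"))
```

The point of the design is that the emergency partner never receives a select privilege on the base table, only on the view, so the security mechanisms of the different partners stay independent.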

  12. DisFace: A Database of Human Facial Disorders

    Directory of Open Access Journals (Sweden)

    Paramjit Kaur

    2017-10-01

database lies in the fact that it is the first of its kind. The front end will be developed using HTML (Hyper Text Mark-up Language) and CSS (Cascading Style Sheets). The back end will be developed using PHP (Hypertext Pre-processor). JavaScript will be used as the scripting language, and MySQL (Structured Query Language) will be used for database development as it is the most widely used RDBMS (Relational Database Management System). XAMPP (X (cross-platform), Apache, MySQL, PHP, Perl), an open-source web application stack, will be used as the server.

  13. The standardization of data relational mode in the materials database for nuclear power engineering

    International Nuclear Information System (INIS)

    Wang Xinxuan

    1996-01-01

A relational database needs standard data relationships, which include hierarchical structures and repeating set records. A code database is created, and relational links are established between spare parts, materials, and the properties of those materials. Data relationships that are not standard are eliminated, and all relation modes are made to meet the requirements of 3NF (Third Normal Form).
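The 3NF requirement above can be illustrated with a small sketch. In the flat table below, density depends on the material, not on the spare part itself — a transitive dependency that violates 3NF — so the table is split into a parts relation and a materials relation. All column and part names are invented for the example.

```python
# Flat, non-3NF table: "density" depends on "material", not on "part".
flat = [
    {"part": "valve-01", "material": "steel-316", "density": 7.99},
    {"part": "pipe-07",  "material": "steel-316", "density": 7.99},
    {"part": "seal-03",  "material": "ptfe",      "density": 2.20},
]

# parts: key "part" -> its material; no non-key attribute now depends
# on another non-key attribute.
parts = {row["part"]: row["material"] for row in flat}

# materials: key "material" -> its properties, stored exactly once.
materials = {row["material"]: {"density": row["density"]} for row in flat}

# Reconstructing the original view is simply a join of the two relations:
def density_of(part):
    return materials[parts[part]]["density"]

print(density_of("pipe-07"))  # 7.99
```

After the split, a change to the density of steel-316 touches one row instead of every part made from it, which is exactly the update anomaly 3NF removes.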

  14. NIRS database of the original research database

    International Nuclear Information System (INIS)

    Morita, Kyoko

    1991-01-01

Recently, library staff arranged and compiled the original research papers written by researchers during the 33 years since the National Institute of Radiological Sciences (NIRS) was established. This paper describes how the internal database of original research papers was created. It is a small example of a hand-made database, built up by staff with little prior knowledge of computers or computer programming. (author)

  15. Teaching Case: Adapting the Access Northwind Database to Support a Database Course

    Science.gov (United States)

    Dyer, John N.; Rogers, Camille

    2015-01-01

    A common problem encountered when teaching database courses is that few large illustrative databases exist to support teaching and learning. Most database textbooks have small "toy" databases that are chapter objective specific, and thus do not support application over the complete domain of design, implementation and management concepts…

  16. SoyDB: a knowledge database of soybean transcription factors

    Directory of Open Access Journals (Sweden)

    Valliyodan Babu

    2010-01-01

Full Text Available Abstract Background Transcription factors play the crucial role of regulating gene expression and influence almost all biological processes. Systematically identifying and annotating transcription factors can greatly aid further understanding of their functions and mechanisms. In this article, we present SoyDB, a user-friendly database containing comprehensive knowledge of soybean transcription factors. Description The soybean genome was recently sequenced by the Department of Energy-Joint Genome Institute (DOE-JGI) and is publicly available. Mining of this sequence identified 5,671 soybean genes as putative transcription factors. These genes were comprehensively annotated as an aid to the soybean research community. We developed SoyDB, a knowledge database for all the transcription factors in the soybean genome. The database contains protein sequences, predicted tertiary structures, putative DNA binding sites, domains, homologous templates in the Protein Data Bank (PDB), protein family classifications, multiple sequence alignments, consensus protein sequence motifs, a web logo of each family, and web links to the soybean transcription factor database PlantTFDB, known EST sequences, and other general protein databases including Swiss-Prot, Gene Ontology, KEGG, EMBL, TAIR, InterPro, SMART, PROSITE, NCBI, and Pfam. The database can be accessed via an interactive and convenient web server, which supports full-text search, PSI-BLAST sequence search, database browsing by protein family, and automatic classification of a new protein sequence into one of 64 annotated transcription factor families by hidden Markov models. Conclusions A comprehensive soybean transcription factor database was constructed and made publicly accessible at http://casp.rnet.missouri.edu/soydb/.
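The family-classification step above assigns a new sequence to the family whose profile model scores it best. The toy stand-in below captures that idea with position-specific residue probabilities instead of full HMMs; the two families, their motifs, and all probabilities are invented for illustration and are not SoyDB's actual models.

```python
import math

# Toy stand-in for HMM-based family assignment: each family is a short
# motif of position-specific residue probabilities; a sequence goes to
# the family whose best-scoring motif window has the highest
# log-likelihood.  Families and numbers are invented.

FAMILY_MODELS = {
    "MYB":  [{"W": 0.8, "F": 0.2}, {"R": 0.9, "K": 0.1}, {"W": 1.0}],
    "bZIP": [{"L": 0.9, "I": 0.1}, {"L": 0.9, "V": 0.1}, {"K": 0.8, "R": 0.2}],
}
BACKGROUND = 0.05  # probability for residues absent from a column

def window_score(model, window):
    return sum(math.log(col.get(res, BACKGROUND))
               for col, res in zip(model, window))

def classify(sequence):
    best = None
    for family, model in FAMILY_MODELS.items():
        k = len(model)
        score = max(window_score(model, sequence[i:i + k])
                    for i in range(len(sequence) - k + 1))
        if best is None or score > best[1]:
            best = (family, score)
    return best[0]

print(classify("AAWRWDD"))  # contains the MYB-like W-R-W motif
```

A real profile HMM adds insert and delete states and trained emission probabilities, but the decision rule — maximum likelihood over family models — is the same.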

  17. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases such...

  18. Fund Finder: A case study of database-to-ontology mapping

    OpenAIRE

    Barrasa Rodríguez, Jesús; Corcho, Oscar; Gómez-Pérez, A.

    2003-01-01

The mapping between databases and ontologies is a basic problem when trying to "upgrade" deep web content to the Semantic Web. Our approach suggests the declarative definition of mappings as a way to achieve domain independence and reusability. A specific language (expressive enough to cover some real-world mapping situations, such as lightly structured databases or ones not in first normal form) is defined for this purpose. Along with this mapping description language, the ODEMapster processor is in ...
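The key idea above — a declarative mapping that is pure data, separate from the code that applies it — can be sketched briefly. This is not the actual ODEMapster mapping language; the table, class, and property names are invented.

```python
# A declarative database-to-ontology mapping: because the mapping is
# plain data, it can be reused or swapped without changing the engine
# that applies it.  All names below are illustrative.

MAPPING = {
    "table": "funding_call",
    "class": "ont:FundingOpportunity",
    "subject_key": "call_id",
    "columns": {
        "title":    "ont:hasTitle",
        "deadline": "ont:hasDeadline",
    },
}

def rows_to_triples(mapping, rows):
    """Apply a declarative mapping to relational rows, yielding
    RDF-style (subject, predicate, object) triples."""
    triples = []
    for row in rows:
        subject = f"inst:{mapping['table']}_{row[mapping['subject_key']]}"
        triples.append((subject, "rdf:type", mapping["class"]))
        for column, prop in mapping["columns"].items():
            if column in row:
                triples.append((subject, prop, row[column]))
    return triples

rows = [{"call_id": 7, "title": "SME innovation grant",
         "deadline": "2004-01-31"}]
triples = rows_to_triples(MAPPING, rows)
for t in triples:
    print(t)
```

Swapping in a different `MAPPING` dict retargets the same engine at another table or ontology, which is the domain independence the abstract argues for.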

  19. A comparison of database systems for XML-type data.

    Science.gov (United States)

    Risse, Judith E; Leunissen, Jack A M

    2010-01-01

In the field of bioinformatics, interchangeable data formats based on XML are widely used. XML-type data is also at the core of most web services. With the increasing amount of data stored in XML comes the need for storing and accessing the data. In this paper we analyse the suitability of different database systems for storing and querying large datasets in general and Medline in particular. All reviewed database systems perform well when tested with small to medium-sized datasets; however, when the full Medline dataset is queried, a large variation in query times is observed. There is no one system that is vastly superior to the others in this comparison and, depending on the database size and the query requirements, different systems are most suitable. The best all-round solution is the Oracle 11g database system using the new binary storage option. Alias-i's LingPipe is a more lightweight, customizable and sufficiently fast solution; it does, however, require more initial configuration steps. For data with a changing XML structure, Sedna and BaseX as native XML database systems, or MySQL with an XML-type column, are suitable.
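The kind of query the comparison is about — pulling fields out of citation records by path — can be shown with Python's standard library on a Medline-like fragment. The element names mirror Medline's citation format, but the two records themselves are invented.

```python
import xml.etree.ElementTree as ET

# A Medline-like XML fragment (records invented for illustration).
DOC = """
<MedlineCitationSet>
  <MedlineCitation>
    <PMID>101</PMID>
    <Article><ArticleTitle>XML storage benchmarks</ArticleTitle></Article>
  </MedlineCitation>
  <MedlineCitation>
    <PMID>102</PMID>
    <Article><ArticleTitle>Native XML databases</ArticleTitle></Article>
  </MedlineCitation>
</MedlineCitationSet>
"""

root = ET.fromstring(DOC)

# XPath-style query: the title of every citation in the set.
titles = [e.text
          for e in root.findall("./MedlineCitation/Article/ArticleTitle")]
print(titles)
```

A native XML database evaluates essentially the same path expression (in full XQuery/XPath) over millions of such records, which is where the query-time differences the paper measures come from.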

  20. SPARQLGraph: a web-based platform for graphically querying biological Semantic Web databases.

    Science.gov (United States)

    Schweiger, Dominik; Trajanoski, Zlatko; Pabinger, Stephan

    2014-08-15

The Semantic Web has established itself as a framework for using and sharing data across applications and database boundaries. Here, we present a web-based platform for querying biological Semantic Web databases in a graphical way. SPARQLGraph offers an intuitive drag & drop query builder, which converts the visual graph into a query and executes it on a public endpoint. The tool integrates several publicly available Semantic Web databases, including the databases of the recently released EBI RDF platform. Furthermore, it provides several predefined template queries for answering biological questions. Users can easily create and save new query graphs, which can also be shared with other researchers. This new graphical way of creating queries for biological Semantic Web databases considerably improves usability, as it removes the requirement of knowing specific query languages and database structures. The system is freely available at http://sparqlgraph.i-med.ac.at.
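The central translation step described above — a drawn graph becoming a query — is mechanical: each edge of the visual graph is a triple pattern. The sketch below only builds the SPARQL string rather than sending it to an endpoint, and the predicate names are illustrative, not SPARQLGraph's actual vocabulary.

```python
# Translate a visual query graph (a set of node-edge-node triples)
# into a SPARQL SELECT query, as a graphical builder would.

def graph_to_sparql(edges, select):
    """Turn (subject, predicate, object) edges from a visual graph
    into a SPARQL query string."""
    patterns = "\n".join(f"  {s} {p} {o} ." for s, p, o in edges)
    return f"SELECT {' '.join(select)} WHERE {{\n{patterns}\n}}"

# Visual graph: a gene node labelled "TP53", linked to a pathway node.
edges = [
    ("?gene", "rdfs:label", '"TP53"'),
    ("?gene", "ex:participatesIn", "?pathway"),
]
query = graph_to_sparql(edges, ["?pathway"])
print(query)
```

Because every drawn edge maps one-to-one onto a basic graph pattern, the user never needs to know SPARQL syntax — which is exactly the usability argument the abstract makes.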