WorldWideScience

Sample records for structure database icsd

  1. Interpenetrating metal-organic and inorganic 3D networks: a computer-aided systematic investigation. Part II [1]. Analysis of the Inorganic Crystal Structure Database (ICSD)

    Baburin, I.A.; Blatov, V.A.; Carlucci, L.; Ciani, G.; Proserpio, D.M.

    2005-01-01

    Interpenetration in metal-organic and inorganic networks has been investigated by a systematic analysis of the crystallographic structural databases. We have used a version of TOPOS (a package for multipurpose crystallochemical analysis) adapted for searching for interpenetration and based on the concept of Voronoi-Dirichlet polyhedra and on the representation of a crystal structure as a reduced finite graph. In this paper, we report comprehensive lists of interpenetrating inorganic 3D structures from the Inorganic Crystal Structure Database (ICSD), inclusive of 144 Collection Codes for equivalent interpenetrating nets, analyzed on the basis of their topologies. Distinct Classes, corresponding to the different modes in which individual identical motifs can interpenetrate, have been attributed to the entangled structures. Interpenetrating nets of different nature as well as interpenetrating H-bonded nets were also examined.

  2. Developments in diffraction databases

    Jenkins, R.

    1999-01-01

    Full text: There are a number of databases available to the diffraction community. Two of the more important of these are the Powder Diffraction File (PDF) maintained by the International Centre for Diffraction Data (ICDD), and the Inorganic Crystal Structure Database (ICSD) maintained by Fachinformationszentrum (FIZ, Karlsruhe). In application, the PDF has been used as an indispensable tool in phase identification and identification of unknowns. The ICSD database has extensive and explicit reference to the structures of compounds: atomic coordinates, space group and even thermal vibration parameters. A similar database, but for organic compounds, is maintained by the Cambridge Crystallographic Data Centre. These databases are often used as independent sources of information. However, little thought has been given to how to exploit the combined properties of structural database tools. A recently completed agreement between ICDD and FIZ, plus ICDD and Cambridge, provides a first step in complementary use of the PDF and the ICSD databases. The focus of this paper (as indicated below) is to examine ways of exploiting the combined properties of both databases. In 1996, there were approximately 76,000 entries in the PDF and approximately 43,000 entries in the ICSD database. The ICSD database has now been used to calculate entries in the PDF. Thus, deriving d-spacing and peak-intensity data requires the synthesis of full diffraction patterns, i.e., we use the structural data in the ICSD database and then add instrumental resolution information. The combined data from PDF and ICSD can be effectively used in many ways. For example, we can calculate PDF data for an ideally random crystal distribution and also in the absence of preferred orientation. Again, we can use systematic studies of intermediate members in solid solution series to help produce reliable quantitative phase analyses. In some cases, we can study how solid solution properties vary with composition and
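
    As an illustration of the pattern-synthesis step described above, the sketch below computes d-spacings and Bragg angles for a few reflections of a cubic lattice in Python. The lattice parameter, wavelength and reflection list are assumed example values; real pattern calculation from ICSD entries also needs structure factors and instrumental resolution, which are omitted here.

        # Illustrative sketch: line positions for a cubic lattice via Bragg's law.
        # The lattice parameter and wavelength are assumed example values;
        # intensities (structure factors) and instrumental broadening are omitted.
        import math

        a = 5.4311           # assumed cubic lattice parameter, angstroms
        wavelength = 1.5406  # Cu K-alpha wavelength, angstroms

        def d_spacing_cubic(h, k, l, a):
            """d-spacing of the (hkl) plane in a cubic lattice."""
            return a / math.sqrt(h * h + k * k + l * l)

        def two_theta(d, wavelength):
            """Diffraction angle 2-theta in degrees, or None if the reflection is unreachable."""
            s = wavelength / (2.0 * d)
            if s > 1.0:
                return None
            return 2.0 * math.degrees(math.asin(s))

        for hkl in [(1, 1, 1), (2, 2, 0), (3, 1, 1), (4, 0, 0)]:
            d = d_spacing_cubic(*hkl, a)
            tt = two_theta(d, wavelength)
            print(hkl, round(d, 4), None if tt is None else round(tt, 2))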

  3. HIV Structural Database

    SRD 102 HIV Structural Database (Web, free access)   The HIV Protease Structural Database is an archive of experimentally determined 3-D structures of Human Immunodeficiency Virus 1 (HIV-1), Human Immunodeficiency Virus 2 (HIV-2) and Simian Immunodeficiency Virus (SIV) Proteases and their complexes with inhibitors or products of substrate cleavage.

  4. Spectroscopic databases - A tool for structure elucidation

    Luksch, P [Fachinformationszentrum Karlsruhe, Gesellschaft fuer Wissenschaftlich-Technische Information mbH, Eggenstein-Leopoldshafen (Germany)]

    1990-05-01

    Spectroscopic databases have developed into useful tools in the process of structure elucidation. Besides the conventional library searches, new intelligent programs have been added that are able to predict structural features from measured spectra or to simulate spectra for a given structure. The example of the C13NMR/IR database developed at BASF and available on STN is used to illustrate the present capabilities of online databases. New developments in the field of spectrum simulation and methods for the prediction of complete structures from spectroscopic information are reviewed. (author). 10 refs, 5 figs.

  5. The ICSD-3 and DSM-5 guidelines for diagnosing narcolepsy: clinical relevance and practicality.

    Ruoff, Chad; Rye, David

    2016-07-20

    Narcolepsy is a chronic neurological disease manifesting as difficulty with maintaining continuous wake and sleep. Clinical presentation varies but requires excessive daytime sleepiness (EDS) occurring alone or together with features of rapid-eye movement (REM) sleep dissociation (e.g., cataplexy, hypnagogic/hypnopompic hallucinations, sleep paralysis), and disrupted nighttime sleep. Narcolepsy with cataplexy is associated with reductions of cerebrospinal fluid (CSF) hypocretin due to destruction of hypocretin peptide-producing neurons in the hypothalamus in individuals with a specific genetic predisposition. Updated diagnostic criteria include the Diagnostic and Statistical Manual of Mental Disorders Fifth Edition (DSM-5) and International Classification of Sleep Disorders Third Edition (ICSD-3). DSM-5 criteria require EDS in association with any one of the following: (1) cataplexy; (2) CSF hypocretin deficiency; (3) REM sleep latency ≤15 minutes on nocturnal polysomnography (PSG); or (4) mean sleep latency ≤8 minutes on multiple sleep latency testing (MSLT) with ≥2 sleep-onset REM-sleep periods (SOREMPs). ICSD-3 relies more upon objective data in addition to EDS, somewhat complicating the diagnostic criteria: (1) cataplexy and either positive MSLT/PSG findings or CSF hypocretin deficiency; (2) MSLT criteria similar to DSM-5 except that a SOREMP on PSG may count as one of the SOREMPs required on MSLT; and (3) distinct division of narcolepsy into type 1, which requires the presence of cataplexy or documented CSF hypocretin deficiency, and type 2, where cataplexy is absent, and CSF hypocretin levels are either normal or undocumented. We discuss limitations of these criteria such as variability in clinical presentation of cataplexy, particularly when cataplexy may be ambiguous, as well as by age; multiple and/or invasive CSF diagnostic test requirements; and lack of normative diagnostic test data (e.g., MSLT) in certain populations. While ICSD-3 criteria

  6. Protein structure database search and evolutionary classification.

    Yang, Jinn-Moon; Tung, Chi-Hua

    2006-01-01

    As more protein structures become available and structural genomics efforts provide structural models in a genome-wide strategy, there is a growing need for fast and accurate methods for discovering homologous proteins and evolutionary classifications of newly determined structures. We have developed 3D-BLAST, in part, to address these issues. 3D-BLAST is as fast as BLAST and calculates the statistical significance (E-value) of an alignment to indicate the reliability of the prediction. Using this method, we first identified 23 states of the structural alphabet that represent pattern profiles of the backbone fragments and then used them to represent protein structure databases as structural alphabet sequence databases (SADB). Our method enhanced BLAST as a search method, using a new structural alphabet substitution matrix (SASM) to find the longest common substructures with high-scoring structured segment pairs from an SADB database. Using personal computers with Intel Pentium4 (2.8 GHz) processors, our method searched more than 10 000 protein structures in 1.3 s and achieved a good agreement with search results from detailed structure alignment methods. [3D-BLAST is available at http://3d-blast.life.nctu.edu.tw].
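
    The search strategy described above, encoding backbone geometry as letters of a structural alphabet and then reusing sequence-alignment machinery, can be illustrated with a toy sketch. The alphabet strings, database entries and match/mismatch scores below are invented, with simple scoring standing in for the SASM substitution matrix; 3D-BLAST's actual 23-state alphabet and statistics are not reproduced.

        # Toy sketch: search structural-alphabet strings with Smith-Waterman local
        # alignment; simple match/mismatch scores stand in for the SASM matrix.

        def local_align(query, target, match=2, mismatch=-1, gap=-2):
            """Return the best local alignment score between two strings."""
            rows, cols = len(query) + 1, len(target) + 1
            score = [[0] * cols for _ in range(rows)]
            best = 0
            for i in range(1, rows):
                for j in range(1, cols):
                    s = match if query[i - 1] == target[j - 1] else mismatch
                    score[i][j] = max(0,
                                      score[i - 1][j - 1] + s,
                                      score[i - 1][j] + gap,
                                      score[i][j - 1] + gap)
                    best = max(best, score[i][j])
            return best

        # Hypothetical structural-alphabet encodings of a query and a small database.
        query = "ADKLMQQT"
        database = {"1abcA": "GGADKLMQQTPLV", "2xyzB": "MNPQRSTV"}
        for name, seq in sorted(database.items(),
                                key=lambda kv: local_align(query, kv[1]),
                                reverse=True):
            print(name, local_align(query, seq))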

  7. An image database structure for pediatric radiology

    Mankovich, N.J.

    1987-01-01

    The operation of the Clinical Radiology Imaging System (CRIS) in Pediatric Radiology at UCLA relies on the orderly flow of text and image data among the three basic subsystems including acquisition, storage, and display. CRIS provides the radiologist, clinician, and technician with data at clinical image workstations by maintaining comprehensive database. CRIS is made up of sub-systems, each composed of one more programs or tasks which operate in parallel on a VAX-11/750 microcomputer in Pediatric Radiology. Tasks are coordinated through dynamic data structures that include system event flags and disk-resident queues. This report outlines: (1) the CRIS data model, (2) the flow of information among CRIS components, (3) the underlying database structures which support the acquisition, display, and storage of text and image information, and (4) current database statistics

  8. Fast Structural Search in Phylogenetic Databases

    William H. Piel

    2005-01-01

    As the size of phylogenetic databases grows, the need for efficiently searching these databases arises. Thanks to previous and ongoing research, searching by attribute value and by text has become commonplace in these databases. However, searching by topological or physical structure, especially for large databases and especially for approximate matches, is still an art. We propose structural search techniques that, given a query or pattern tree P and a database of phylogenies D, find trees in D that are sufficiently close to P. The “closeness” is a measure of the topological relationships in P that are found to be the same or similar in a tree in D. We develop a filtering technique that accelerates searches and present algorithms for rooted and unrooted trees where the trees can be weighted or unweighted. Experimental results on comparing the similarity measure with existing tree metrics and on evaluating the efficiency of the search techniques demonstrate that the proposed approach is promising.
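
    One simple, illustrative notion of topological closeness between trees is the number of clades (leaf sets) present in one tree but not the other, in the spirit of the Robinson-Foulds distance. The sketch below uses nested tuples as toy trees; it is not the similarity measure proposed in the paper.

        # Minimal sketch: compare two rooted trees by the clades (leaf sets) they
        # contain, a Robinson-Foulds-style count. Illustrative only.

        def clades(tree):
            """Collect the leaf set of every internal node of a nested-tuple tree."""
            result = set()

            def walk(node):
                if isinstance(node, tuple):           # internal node
                    leaves = frozenset()
                    for child in node:
                        leaves |= walk(child)
                    result.add(leaves)
                    return leaves
                return frozenset([node])              # leaf label

            walk(tree)
            return result

        def clade_distance(t1, t2):
            """Number of clades present in one tree but not the other."""
            return len(clades(t1) ^ clades(t2))

        pattern = (("A", "B"), ("C", ("D", "E")))
        candidate = ((("A", "B"), "C"), ("D", "E"))
        print(clade_distance(pattern, candidate))   # 2 differing clades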

  9. Inference Attacks and Control on Database Structures

    Muhamed Turkanovic

    2015-02-01

    Today’s databases store information with sensitivity levels that range from public to highly sensitive; hence ensuring confidentiality can be highly important, but it also requires costly control. This paper focuses on the inference problem on different database structures. It presents possible threats to privacy in relation to inference, and control methods for mitigating these threats. The paper shows that using only access control, without any inference control, is inadequate, since these models are unable to protect against indirect data access. Furthermore, it covers new inference problems which arise from the dimensions of new technologies like XML, semantics, etc.

  10. Database on wind characteristics - Structure and philosophy

    Larsen, G.C.; Hansen, K.S.

    2001-11-01

    The main objective of IEA R and D Wind Annex XVII - Database on Wind Characteristics - is to provide wind energy planners and designers, as well as the international wind engineering community in general, with easy access to quality controlled measured wind field time series observed in a wide range of environments. The project partners are Sweden, Norway, U.S.A., The Netherlands, Japan and Denmark, with Denmark as the Operating Agent. The reporting of IEA R and D Annex XVII falls into three separate parts. Part one deals with the overall structure and philosophy behind the database, part two accounts in detail for the available data in the established database bank and part three is the Users Manual describing the various ways to access and analyse the data. The present report constitutes the first part of the Annex XVII reporting, and it contains a detailed description of the database structure, the data quality control procedures, the selected indexing of the data and the hardware system. (au)

  11. The Structural Ceramics Database: Technical Foundations

    Munro, R. G.; Hwang, F. Y.; Hubbard, C. R.

    1989-01-01

    The development of a computerized database on advanced structural ceramics can play a critical role in fostering the widespread use of ceramics in industry and in advanced technologies. A computerized database may be the most effective means of accelerating technology development by enabling new materials to be incorporated into designs far more rapidly than would have been possible with traditional information transfer processes. Faster, more efficient access to critical data is the basis for creating this technological advantage. Further, a computerized database provides the means for a more consistent treatment of data, greater quality control and product reliability, and improved continuity of research and development programs. A preliminary system has been completed as phase one of an ongoing program to establish the Structural Ceramics Database system. The system is designed to be used on personal computers. Developed in a modular design, the preliminary system is focused on the thermal properties of monolithic ceramics. The initial modules consist of materials specification, thermal expansion, thermal conductivity, thermal diffusivity, specific heat, thermal shock resistance, and a bibliography of data references. Query and output programs also have been developed for use with these modules. The latter program elements, along with the database modules, will be subjected to several stages of testing and refinement in the second phase of this effort. The goal of the refinement process will be the establishment of this system as a user-friendly prototype. Three primary considerations provide the guidelines to the system’s development: (1) The user’s needs; (2) The nature of materials properties; and (3) The requirements of the programming language. The present report discusses the manner and rationale by which each of these considerations leads to specific features in the design of the system. PMID:28053397

  12. The Structure-Function Linkage Database.

    Akiva, Eyal; Brown, Shoshana; Almonacid, Daniel E; Barber, Alan E; Custer, Ashley F; Hicks, Michael A; Huang, Conrad C; Lauck, Florian; Mashiyama, Susan T; Meng, Elaine C; Mischel, David; Morris, John H; Ojha, Sunil; Schnoes, Alexandra M; Stryke, Doug; Yunes, Jeffrey M; Ferrin, Thomas E; Holliday, Gemma L; Babbitt, Patricia C

    2014-01-01

    The Structure-Function Linkage Database (SFLD, http://sfld.rbvi.ucsf.edu/) is a manually curated classification resource describing structure-function relationships for functionally diverse enzyme superfamilies. Members of such superfamilies are diverse in their overall reactions yet share a common ancestor and some conserved active site features associated with conserved functional attributes such as a partial reaction. Thus, despite their different functions, members of these superfamilies 'look alike', making them easy to misannotate. To address this complexity and enable rational transfer of functional features to unknowns only for those members for which we have sufficient functional information, we subdivide superfamily members into subgroups using sequence information, and lastly into families, sets of enzymes known to catalyze the same reaction using the same mechanistic strategy. Browsing and searching options in the SFLD provide access to all of these levels. The SFLD offers manually curated as well as automatically classified superfamily sets, both accompanied by search and download options for all hierarchical levels. Additional information includes multiple sequence alignments, tab-separated files of functional and other attributes, and sequence similarity networks. The latter provide a new and intuitively powerful way to visualize functional trends mapped to the context of sequence similarity.

  13. Design of Qualitative HRA Database Structure

    Kim, Seunghwan; Kim, Yochan; Choi, Sun Yeong; Park, Jinkyun; Jung, Wondea

    2015-01-01

    The HRA DB collects and stores data in a database form so that they can be managed and maintained from the perspective of human reliability analysis. All information on the human errors made by operators in the power plant should be systematically collected and documented. KAERI is developing a simulator-based HRA data handbook. In this study, the information required to store and manage the data necessary to perform an HRA, and to store the HRA data in the handbook, is identified and summarized. In particular, this study summarizes the collection and classification of qualitative data as the raw data required to derive the human error probability (HEP), and the corresponding DB process. The qualitative HRA DB is a storehouse of all the sub-information needed to derive the HEP for PSA. In this study, the requirements that must be met for the structural design and implementation of the qualitative HRA DB were summarized. A follow-up study on the implementation of the quantitative HRA DB should follow in order to derive the substantial HEP

  14. Database structure and file layout of Nuclear Power Plant Database. Database for design information on Light Water Reactors in Japan

    Yamamoto, Nobuo; Izumi, Fumio.

    1995-12-01

    The Nuclear Power Plant Database (PPD) has been developed at the Japan Atomic Energy Research Institute (JAERI) to provide plant design information on domestic Light Water Reactors (LWRs) to be used for nuclear safety research and so forth. This database can run on the main frame computer in the JAERI Tokai Establishment. The PPD contains the information on the plant design concepts, the numbers, capacities, materials, structures and types of equipment and components, etc., based on the safety analysis reports of the domestic LWRs. This report describes the details of the PPD focusing on the database structure and layout of data files so that the users can utilize it efficiently. (author)

  15. RNA STRAND: The RNA Secondary Structure and Statistical Analysis Database

    Andronescu Mirela

    2008-08-01

    Background: The ability to access, search and analyse secondary structures of a large set of known RNA molecules is very important for deriving improved RNA energy models, for evaluating computational predictions of RNA secondary structures and for a better understanding of RNA folding. Currently there is no database that can easily provide these capabilities for almost all RNA molecules with known secondary structures. Results: In this paper we describe RNA STRAND – the RNA secondary STRucture and statistical ANalysis Database, a curated database containing known secondary structures of any type and organism. Our new database provides a wide collection of known RNA secondary structures drawn from public databases, searchable and downloadable in a common format. Comprehensive statistical information on the secondary structures in our database is provided using the RNA Secondary Structure Analyser, a new tool we have developed to analyse RNA secondary structures. The information thus obtained is valuable for understanding to what extent and with what probability certain structural motifs can appear. We outline several ways in which the data provided in RNA STRAND can facilitate research on RNA structure, including the improvement of RNA energy models and evaluation of secondary structure prediction programs. In order to keep up-to-date with new RNA secondary structure experiments, we offer the necessary tools to add solved RNA secondary structures to our database and invite researchers to contribute to RNA STRAND. Conclusion: RNA STRAND is a carefully assembled database of trusted RNA secondary structures, with easy on-line tools for searching, analyzing and downloading user selected entries, and is publicly available at http://www.rnasoft.ca/strand.
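
    The kind of bookkeeping such an analyser performs can be illustrated by extracting base pairs from a secondary structure written in dot-bracket notation. The hairpin below is a toy example, not an RNA STRAND entry, and the RNA Secondary Structure Analyser itself is not reproduced.

        # Sketch: extract base pairs from an RNA secondary structure in dot-bracket
        # notation; the example structure is a toy hairpin.

        def base_pairs(dot_bracket):
            """Return the list of (i, j) paired positions (0-based) in a dot-bracket string."""
            stack, pairs = [], []
            for i, ch in enumerate(dot_bracket):
                if ch == "(":
                    stack.append(i)
                elif ch == ")":
                    if not stack:
                        raise ValueError("unbalanced structure")
                    pairs.append((stack.pop(), i))
            if stack:
                raise ValueError("unbalanced structure")
            return pairs

        structure = "((((....))))"
        pairs = base_pairs(structure)
        print(len(pairs), "base pairs:", pairs)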

  16. Construction of crystal structure prototype database: methods and applications

    Su, Chuanxun; Lv, Jian; Wang, Hui; Wang, Yanchao; Ma, Yanming; Li, Quan; Zhang, Lijun

    2017-01-01

    Crystal structure prototype data have become a useful source of information for materials discovery in the fields of crystallography, chemistry, physics, and materials science. This work reports the development of a robust and efficient method for assessing the similarity of structures on the basis of their interatomic distances. Using this method, we proposed a simple and unambiguous definition of crystal structure prototype based on hierarchical clustering theory, and constructed the crystal structure prototype database (CSPD) by filtering the known crystallographic structures in a database. With a similar method, a structure prototype analysis package (SPAP) program was developed to remove similar structures in CALYPSO prediction results and extract predicted low energy structures for a separate theoretical structure database. A series of statistics describing the distribution of crystal structure prototypes in the CSPD was compiled to provide important insights for structure prediction and high-throughput calculations. Illustrative examples of the application of the proposed database are given, including the generation of initial structures for structure prediction and determination of the prototype structure in databases. These examples demonstrate the CSPD to be a generally applicable and useful tool for materials discovery. (paper)
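
    A schematic Python sketch of the general idea, representing each structure by a fingerprint of interatomic distances and grouping similar structures with hierarchical clustering, is given below. The coordinate sets, fingerprint length and distance threshold are invented for illustration and do not reproduce the CSPD/SPAP similarity measure.

        # Schematic sketch: sorted-interatomic-distance fingerprints grouped by
        # hierarchical clustering. The three toy "structures" are invented.
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import pdist

        def distance_fingerprint(coords, n=6):
            """Sorted list of the n shortest interatomic distances (zero-padded)."""
            d = np.sort(pdist(np.asarray(coords, dtype=float)))[:n]
            return np.pad(d, (0, max(0, n - len(d))))

        structures = {
            "A": [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]],
            "B": [[0, 0, 0], [1.02, 0, 0], [0, 0.98, 0], [0, 0, 1.01]],  # near-duplicate of A
            "C": [[0, 0, 0], [2, 0, 0], [0, 2, 0], [0, 0, 2]],
        }
        names = list(structures)
        fps = np.array([distance_fingerprint(structures[k]) for k in names])

        Z = linkage(fps, method="average")
        labels = fcluster(Z, t=0.5, criterion="distance")
        print(dict(zip(names, labels)))   # A and B should fall in the same cluster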

  17. Construction of crystal structure prototype database: methods and applications.

    Su, Chuanxun; Lv, Jian; Li, Quan; Wang, Hui; Zhang, Lijun; Wang, Yanchao; Ma, Yanming

    2017-04-26

    Crystal structure prototype data have become a useful source of information for materials discovery in the fields of crystallography, chemistry, physics, and materials science. This work reports the development of a robust and efficient method for assessing the similarity of structures on the basis of their interatomic distances. Using this method, we proposed a simple and unambiguous definition of crystal structure prototype based on hierarchical clustering theory, and constructed the crystal structure prototype database (CSPD) by filtering the known crystallographic structures in a database. With a similar method, a structure prototype analysis package (SPAP) program was developed to remove similar structures in CALYPSO prediction results and extract predicted low energy structures for a separate theoretical structure database. A series of statistics describing the distribution of crystal structure prototypes in the CSPD was compiled to provide important insights for structure prediction and high-throughput calculations. Illustrative examples of the application of the proposed database are given, including the generation of initial structures for structure prediction and determination of the prototype structure in databases. These examples demonstrate the CSPD to be a generally applicable and useful tool for materials discovery.

  18. Validation of the ICSD-2 criteria for CSF hypocretin-1 measurements in the diagnosis of narcolepsy in the Danish population

    Knudsen, Stine; Jennum, Poul J; Alving, Jørgen

    2010-01-01

    STUDY OBJECTIVES: The International Classification of Sleep Disorders (ICSD-2) criteria for low CSF hypocretin-1 levels (CSF hcrt-1) still need validation as a diagnostic tool for narcolepsy in different populations because inter-assay variability and different definitions of hypocretin deficiency...... complicate direct comparisons of study results. DESIGN AND PARTICIPANTS: Interviews, polysomnography, multiple sleep latency test, HLA-typing, and CSF hcrt-1 measurements in Danish patients with narcolepsy with cataplexy (NC) and narcolepsy without cataplexy (NwC), CSF hcrt-1 measurements in other......). MEASUREMENTS AND RESULTS: In Danes, low CSF hcrt-1 was present in 40/46 NC, 3/14 NwC and 0/106 controls (P sleep latency, more sleep...

  19. E-MSD: the European Bioinformatics Institute Macromolecular Structure Database.

    Boutselakis, H; Dimitropoulos, D; Fillon, J; Golovin, A; Henrick, K; Hussain, A; Ionides, J; John, M; Keller, P A; Krissinel, E; McNeil, P; Naim, A; Newman, R; Oldfield, T; Pineda, J; Rachedi, A; Copeland, J; Sitnov, A; Sobhany, S; Suarez-Uruena, A; Swaminathan, J; Tagari, M; Tate, J; Tromm, S; Velankar, S; Vranken, W

    2003-01-01

    The E-MSD macromolecular structure relational database (http://www.ebi.ac.uk/msd) is designed to be a single access point for protein and nucleic acid structures and related information. The database is derived from Protein Data Bank (PDB) entries. Relational database technologies are used in a comprehensive cleaning procedure to ensure data uniformity across the whole archive. The search database contains an extensive set of derived properties, goodness-of-fit indicators, and links to other EBI databases including InterPro, GO, and SWISS-PROT, together with links to SCOP, CATH, PFAM and PROSITE. A generic search interface is available, coupled with a fast secondary structure domain search tool.

  20. A new relational database structure and online interface for the HITRAN database

    Hill, Christian; Gordon, Iouli E.; Rothman, Laurence S.; Tennyson, Jonathan

    2013-01-01

    A new format for the HITRAN database is proposed. By storing the line-transition data in a number of linked tables described by a relational database schema, it is possible to overcome the limitations of the existing format, which have become increasingly apparent over the last few years as new and more varied data are being used by radiative-transfer models. Although the database in the new format can be searched using the well-established Structured Query Language (SQL), a web service, HITRANonline, has been deployed to allow users to make most common queries of the database using a graphical user interface in a web page. The advantages of the relational form of the database to ensuring data integrity and consistency are explored, and the compatibility of the online interface with the emerging standards of the Virtual Atomic and Molecular Data Centre (VAMDC) project is discussed. In particular, the ability to access HITRAN data using a standard query language from other websites, command line tools and from within computer programs is described. -- Highlights: • A new, interactive version of the HITRAN database is presented. • The data is stored in a structured fashion in a relational database. • The new HITRANonline interface offers increased functionality and easier error correction
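
    A minimal sqlite3 sketch of the idea of storing line transitions in linked relational tables and querying them with SQL is shown below. The table names, columns and numerical values are invented for illustration and are not the actual HITRAN schema or the HITRANonline interface.

        # Minimal sketch: line transitions in linked relational tables, queried
        # with SQL. Table/column names and values are illustrative only.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE molecule (
                id   INTEGER PRIMARY KEY,
                name TEXT NOT NULL
            );
            CREATE TABLE transition (
                id          INTEGER PRIMARY KEY,
                molecule_id INTEGER NOT NULL REFERENCES molecule(id),
                nu          REAL NOT NULL,   -- wavenumber, cm^-1 (example values)
                sw          REAL NOT NULL    -- line intensity (example values)
            );
        """)
        con.executemany("INSERT INTO molecule(id, name) VALUES (?, ?)",
                        [(1, "H2O"), (2, "CO2")])
        con.executemany("INSERT INTO transition(molecule_id, nu, sw) VALUES (?, ?, ?)",
                        [(1, 1554.35, 1.2e-22), (1, 3657.05, 8.5e-21), (2, 667.38, 7.9e-20)])

        # All water lines in a wavenumber window, strongest first.
        rows = con.execute("""
            SELECT m.name, t.nu, t.sw
            FROM transition t JOIN molecule m ON m.id = t.molecule_id
            WHERE m.name = 'H2O' AND t.nu BETWEEN 1000 AND 4000
            ORDER BY t.sw DESC
        """).fetchall()
        print(rows)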

  1. Molecule database framework: a framework for creating database applications with chemical structure search capability.

    Kiener, Joos

    2013-12-11

    Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions have the risk of vendor lock-in and may require an expensive license of a proprietary relational database management system. To speed up and simplify the development of applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes support for multi-component compounds (mixtures), import and export of SD-files, and optional security (authorization). For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method level security. Molecule Database Framework supports multi-component chemical compounds (mixtures). Furthermore, the design of entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and import and export of SD-files. By using a simple web application it was shown that Molecule Database Framework
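
    A loose Python analogue of hiding storage and substructure search behind simple method calls is sketched below using RDKit and an in-memory SQLite table; the real framework is Java on PostgreSQL with the Bingo cartridge, and the class and method names here are invented. Unlike a database cartridge, this sketch simply scans every stored structure in Python with no fingerprint prescreening.

        # Loose analogue (Python, RDKit, SQLite) of wrapping chemical storage and
        # substructure search in method calls. Names are invented for illustration.
        import sqlite3
        from rdkit import Chem

        class MoleculeStore:
            def __init__(self):
                self.con = sqlite3.connect(":memory:")
                self.con.execute("CREATE TABLE compound (id INTEGER PRIMARY KEY, smiles TEXT)")

            def register(self, smiles):
                if Chem.MolFromSmiles(smiles) is None:
                    raise ValueError("invalid structure: " + smiles)
                self.con.execute("INSERT INTO compound(smiles) VALUES (?)", (smiles,))

            def substructure_search(self, query_smarts):
                query = Chem.MolFromSmarts(query_smarts)
                hits = []
                for cid, smiles in self.con.execute("SELECT id, smiles FROM compound"):
                    mol = Chem.MolFromSmiles(smiles)
                    if mol is not None and mol.HasSubstructMatch(query):
                        hits.append((cid, smiles))
                return hits

        store = MoleculeStore()
        for s in ["c1ccccc1O", "CCO", "c1ccccc1C(=O)O"]:   # phenol, ethanol, benzoic acid
            store.register(s)
        print(store.substructure_search("c1ccccc1"))        # compounds containing a benzene ring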

  2. StraPep: a structure database of bioactive peptides

    Wang, Jian; Yin, Tailang; Xiao, Xuwen; He, Dan; Xue, Zhidong; Jiang, Xinnong; Wang, Yan

    2018-01-01

    Abstract Bioactive peptides, with a variety of biological activities and wide distribution in nature, have attracted great research interest in biological and medical fields, especially in pharmaceutical industry. The structural information of bioactive peptide is important for the development of peptide-based drugs. Many databases have been developed cataloguing bioactive peptides. However, to our knowledge, database dedicated to collect all the bioactive peptides with known structure is not available yet. Thus, we developed StraPep, a structure database of bioactive peptides. StraPep holds 3791 bioactive peptide structures, which belong to 1312 unique bioactive peptide sequences. About 905 out of 1312 (68%) bioactive peptides in StraPep contain disulfide bonds, which is significantly higher than that (21%) of PDB. Interestingly, 150 out of 616 (24%) bioactive peptides with three or more disulfide bonds form a structural motif known as cystine knot, which confers considerable structural stability on proteins and is an attractive scaffold for drug design. Detailed information of each peptide, including the experimental structure, the location of disulfide bonds, secondary structure, classification, post-translational modification and so on, has been provided. A wide range of user-friendly tools, such as browsing, sequence and structure-based searching and so on, has been incorporated into StraPep. We hope that this database will be helpful for the research community. Database URL: http://isyslab.info/StraPep PMID:29688386

  3. Database structure for plasma modeling programs

    Dufresne, M.; Silvester, P.P.

    1993-01-01

    Continuum plasma models often use a finite element (FE) formulation. Another approach is simulation models based on particle-in-cell (PIC) formulation. The model equations generally include four nonlinear differential equations specifying the plasma parameters. In simulation a large number of equations must be integrated iteratively to determine the plasma evolution from an initial state. The complexity of the resulting programs is a combination of the physics involved and the numerical method used. The data structure requirements of plasma programs are stated by defining suitable abstract data types. These abstractions are then reduced to data structures and a group of associated algorithms. These are implemented in an object oriented language (C++) as object classes. Base classes encapsulate data management into a group of common functions such as input-output management, instance variable updating and selection of objects by Boolean operations on their instance variables. Operations are thereby isolated from specific element types and uniformity of treatment is guaranteed. Creation of the data structures and associated functions for a particular plasma model is reduced merely to defining the finite element matrices for each equation, or the equations of motion for PIC models. Changes in numerical method or equation alterations are readily accommodated through the mechanism of inheritance, without modification of the data management software. The central data type is an n-relation implemented as a tuple of variable internal structure. Any finite element program may be described in terms of five relational tables: nodes, boundary conditions, sources, material/particle descriptions, and elements. Equivalently, plasma simulation programs may be described using four relational tables: cells, boundary conditions, sources, and particle descriptions

  4. Economic and Structural Database for the MEDPRO Project

    Paroussos, Leonidas; Tsani, Stella; Vrontisi, Zoi

    2013-01-01

    This report presents the economic and structural database compiled for the MEDPRO project. The database includes governance, infrastructure, finance, environment, energy, agricultural data and development indicators for the 11 southern and eastern Mediterranean countries (SEMCs) studied in the MEDPRO project. The report further details the data and the methods used for the construction of social accounting, bilateral trade, consumption and investment matrices for each of the SEMCs.

  5. Cross-cultural and comparative epidemiology of insomnia: the Diagnostic and statistical manual (DSM), International classification of diseases (ICD) and International classification of sleep disorders (ICSD).

    Chung, Ka-Fai; Yeung, Wing-Fai; Ho, Fiona Yan-Yee; Yung, Kam-Ping; Yu, Yee-Man; Kwok, Chi-Wa

    2015-04-01

    To compare the prevalence of insomnia according to symptoms, quantitative criteria, and Diagnostic and Statistical Manual of Mental Disorders, 4th and 5th Edition (DSM-IV and DSM-5), International Classification of Diseases, 10th Revision (ICD-10), and International Classification of Sleep Disorders, 2nd Edition (ICSD-2), and to compare the prevalence of insomnia disorder between Hong Kong and the United States by adopting a similar methodology used by the America Insomnia Survey (AIS). Population-based epidemiological survey respondents (n = 2011) completed the Brief Insomnia Questionnaire (BIQ), a validated scale generating DSM-IV, DSM-5, ICD-10, and ICSD-2 insomnia disorder. The weighted prevalence of difficulty falling asleep, difficulty staying asleep, waking up too early, and non-restorative sleep that occurred ≥3 days per week was 14.0%, 28.3%, 32.1%, and 39.9%, respectively. When quantitative criteria were included, the prevalence dropped the most from 39.9% to 8.4% for non-restorative sleep, and the least from 14.0% to 12.9% for difficulty falling asleep. The weighted prevalence of DSM-IV, ICD-10, ICSD-2, and any of the three insomnia disorders was 22.1%, 4.7%, 15.1%, and 22.1%, respectively; for DSM-5 insomnia disorder, it was 10.8%. Compared with 22.1%, 3.9%, and 14.7% for DSM-IV, ICD-10, and ICSD-2 in the AIS, cross-cultural difference in the prevalence of insomnia disorder is less than what is expected. The prevalence is reduced by half from DSM-IV to DSM-5. ICD-10 insomnia disorder has the lowest prevalence, perhaps because excessive concern and preoccupation, one of its diagnostic criteria, is not always present in people with insomnia. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Engineering method to build the composite structure ply database

    Qinghua Shi

    In this paper, a new method to build a composite ply database with engineering design constraints is proposed. This method has two levels: the core stacking sequence design and the whole stacking sequence design. The core stacking sequences are obtained by the full permutation algorithm, considering the ply ratio requirement and a dispersion character that characterizes the spread of ply angles. The whole stacking sequences are the combinations of the core stacking sequences. By excluding the ply sequences which do not meet the engineering requirements, the final ply database is obtained. One example with the constraints that the total layer number is 100 and the ply ratio is 30:60:10 is presented to validate the method. This method provides a new way to set up the ply database based on the engineering requirements without adopting intelligent optimization algorithms. Keywords: Composite ply database, VBA program, Structure design, Stacking sequence
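
    A toy Python sketch of the enumeration step is shown below: distinct stacking sequences are generated for a fixed ply count and ply ratio, then filtered by a simple engineering rule (no more than two identical angles in a row). The 10-ply, 3:6:1 example mirrors the 30:60:10 ratio at a much smaller scale; the rule and scale are illustrative, not the paper's actual constraints or its VBA implementation.

        # Toy sketch of the enumeration step: distinct stacking sequences for a
        # fixed ply ratio, filtered by a simple "no three identical plies in a row"
        # rule. Much smaller than the paper's 100-ply case.
        from itertools import permutations

        def candidate_sequences(counts, max_run=2):
            """counts: dict of ply angle -> number of plies, e.g. {0: 3, 45: 6, 90: 1}."""
            plies = [angle for angle, n in counts.items() for _ in range(n)]
            candidates = []
            for seq in set(permutations(plies)):          # distinct orderings only
                runs_ok = all(
                    any(seq[i + k] != seq[i] for k in range(1, max_run + 1))
                    for i in range(len(seq) - max_run)
                )
                if runs_ok:
                    candidates.append(seq)
            return sorted(candidates)

        # 10 plies with a 3:6:1 (i.e. 30:60:10) ratio of 0/45/90 degree plies.
        sequences = candidate_sequences({0: 3, 45: 6, 90: 1})
        print(len(sequences), "admissible sequences, e.g.", sequences[0])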

  7. Scheme of database structure on decommissioning of the research reactor

    Park, H. S.; Park, S. K.; Kim, H. R.; Lee, D. K.; Jung, K. J.

    2001-01-01

    ISP (Information Strategy Planning), which is the first step of the whole database development, has been studied in order to effectively manage information and data related to the decommissioning activities of the Korea Research Reactor 1 and 2 (KRR-1 and 2). Since Korea has not yet acquired decommissioning database management technology, the record management systems (RMS) of large nuclear facilities in countries with national experience, such as the U.S.A., Japan, Belgium, and Russia, were reviewed. In order to construct the database structure, information on the whole range of decommissioning activities, such as working information, radioactive waste treatment, and radiological surveying and analysis, has been extracted from the whole dismantling process. This information and data will be used as the basic data to analyze the matrix, to find the entity relationship diagram, and will contribute to the establishment of a business system design and the development of a decommissioning database system as well

  8. Discrete Optimization of Internal Part Structure via SLM Unit Structure-Performance Database

    Li Tang

    2018-01-01

    The structural optimization of the internal structure of parts based on three-dimensional (3D) printing has been recognized as being important in the field of mechanical design. The purpose of this paper is to present the creation of a unit structure-performance database based on selective laser melting (SLM), which contains various structural units with different functions and records their structure and performance characteristics so that we can optimize the internal structure of parts directly, according to the database. The method of creating the unit structure-performance database was introduced in this paper and several structural units of the unit structure-performance database were introduced. The bow structure unit was used as an example to show how to create the structure-performance database of a unit. Some samples of the bow structure unit were designed and manufactured by SLM. These samples were tested in the WDW-100 compression testing machine to obtain their performance characteristics. After this, the paper collected all data regarding unit structure parameters, weight, performance characteristics, and other data, and established a complete set of data from the bow structure unit for the unit structure-performance database. Furthermore, an aircraft part was conveniently reconstructed to be more lightweight according to the unit structure-performance database. Its weight was reduced by 36.8% when compared with the original structure, while the strength far exceeded the requirements.

  9. A new relational database structure and online interface for the HITRAN database

    Hill, Christian; Gordon, Iouli E.; Rothman, Laurence S.; Tennyson, Jonathan

    2013-11-01

    A new format for the HITRAN database is proposed. By storing the line-transition data in a number of linked tables described by a relational database schema, it is possible to overcome the limitations of the existing format, which have become increasingly apparent over the last few years as new and more varied data are being used by radiative-transfer models. Although the database in the new format can be searched using the well-established Structured Query Language (SQL), a web service, HITRANonline, has been deployed to allow users to make most common queries of the database using a graphical user interface in a web page. The advantages of the relational form of the database to ensuring data integrity and consistency are explored, and the compatibility of the online interface with the emerging standards of the Virtual Atomic and Molecular Data Centre (VAMDC) project is discussed. In particular, the ability to access HITRAN data using a standard query language from other websites, command line tools and from within computer programs is described.

  10. SKPDB: a structural database of shikimate pathway enzymes

    de Azevedo Walter F

    2010-01-01

    Background: The functional and structural characterisation of enzymes that belong to microbial metabolic pathways is very important for structure-based drug design. The main interest in studying shikimate pathway enzymes involves the fact that they are essential for bacteria but do not occur in humans, making them selective targets for design of drugs that do not directly impact humans. Description: The ShiKimate Pathway DataBase (SKPDB) is a relational database applied to the study of shikimate pathway enzymes in microorganisms and plants. The current database is updated regularly with the addition of new data; there are currently 8902 enzymes of the shikimate pathway from different sources. The database contains extensive information on each enzyme, including detailed descriptions about sequence, references, and structural and functional studies. All files (primary sequence, atomic coordinates and quality scores) are available for downloading. The modeled structures can be viewed using the Jmol program. Conclusions: The SKPDB provides a large number of structural models to be used in docking simulations, virtual screening initiatives and drug design. It is freely accessible at http://lsbzix.rc.unesp.br/skpdb/.

  11. Databases

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  12. Databases

    Nick Ryan

    2004-01-01

    Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  13. An information integration system for structured documents, Web, and databases

    Morishima, Atsuyuki

    1998-01-01

    Rapid advance in computer network technology has changed the style of computer utilization. Distributed computing resources over world-wide computer networks are available from our local computers. They include powerful computers and a variety of information sources. This change is raising more advanced requirements. Integration of distributed information sources is one such requirement. In addition to conventional databases, structured documents have been widely used, and have increasing...

  14. UbSRD: The Ubiquitin Structural Relational Database.

    Harrison, Joseph S; Jacobs, Tim M; Houlihan, Kevin; Van Doorslaer, Koenraad; Kuhlman, Brian

    2016-02-22

    The structurally defined ubiquitin-like homology fold (UBL) can engage in several unique protein-protein interactions and many of these complexes have been characterized with high-resolution techniques. Using Rosetta's structural classification tools, we have created the Ubiquitin Structural Relational Database (UbSRD), an SQL database of features for all 509 UBL-containing structures in the PDB, allowing users to browse these structures by protein-protein interaction and providing a platform for quantitative analysis of structural features. We used UbSRD to define the recognition features of ubiquitin (UBQ) and SUMO observed in the PDB and the orientation of the UBQ tail while interacting with certain types of proteins. While some of the interaction surfaces on UBQ and SUMO overlap, each molecule has distinct features that aid in molecular discrimination. Additionally, we find that the UBQ tail is malleable and can adopt a variety of conformations upon binding. UbSRD is accessible as an online resource at rosettadesign.med.unc.edu/ubsrd. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Database structures and interfaces for W7-X

    Heimann, P.; Bluhm, T.; Hennig, Ch.; Kroiss, H.; Kuehner, G.; Maier, J.; Riemann, H.; Zilker, M.

    2008-01-01

    The W7-X experiment of the IPP, under construction in Greifswald Germany, is designed to operate in a quasi-steady-state scenario. The database structures and interfaces used for discharge description and execution have to reflect this continuous mode of operation. In close collaboration between the control group of W7-X and the data acquisition group a combined design of the data structures used for describing the configuration and the operation of the experiment was developed. To guarantee access to this information from all participating stations a TCP/IP portal and a proxy server were developed. This portal enables especially the VxWorks real-time operating systems of the control stations to access the information in the object-oriented database. The database schema now includes a more functional description of the experiment and gives the physicists a more simplified view of the necessary definitions of operational parameters. The scheduling of the long discharges of W7-X will be done by predefining operational parameters in segments and scenarios, where a scenario is a fixed sequence of segments with a common physical background. To hide the specialized information contained in the basic parameters from the experiment leader or physicist an abstraction layer was introduced that only shows physically interesting information. An executable segment will be generated after verifying the consistency of the high-level parameters by using a transformation function for every basic parameter needed. Since the database contains all configurations and discharge definitions necessary to operate the experiment, it is very important to give the user a tool to manipulate this information in an intuitive way. A special editor (ConfiX) was designed and implemented for this task. At the moment the basic functionality for dealing with all kinds of objects in the database is available. Future releases will extend the functionality to defining and editing configurations, segments

  16. Structure and needs of global loss databases about natural disaster

    Steuer, Markus

    2010-05-01

    Global loss databases are used for trend analyses and statistics in scientific projects, in studies for governmental and nongovernmental organizations, and for the insurance and finance industry as well. At the moment three global data sets are established: EM-DAT (CRED), Sigma (Swiss Re) and NatCatSERVICE (Munich Re). Together with the Asian Disaster Reduction Center (ADRC) and the United Nations Development Programme (UNDP), these database providers started a collaborative initiative in 2007 with the aim of agreeing on and implementing a common "Disaster Category Classification and Peril Terminology for Operational Databases". This common classification has been established through several technical meetings and working groups and represents a first and important step in the development of a standardized international classification of disasters and terminology of perils. Concretely, this means setting up a common hierarchy and terminology for all global and regional databases on natural disasters and establishing a common and agreed definition of disaster groups, main types and sub-types of events. Georeferencing, temporal aspects, methodology and sourcing are other issues that have been identified and will be discussed. The new structure for global loss databases has already been implemented for the Munich Re NatCatSERVICE. In the following oral session we will show the structure of the global databases as defined and, in addition, give more transparency about the data sets behind published statistics and analyses. The special focus will be on the catastrophe classification, from a moderate loss event up to a great natural catastrophe, and on showing the quality of sources and giving inside information about the assessment of overall and insured losses. Keywords: disaster category classification, peril terminology, overall and insured losses, definition

  17. A database structure for radiological optimization analyses of decommissioning operations

    Zeevaert, T.; Van de Walle, B.

    1995-09-01

    The structure of a database for decommissioning experiences is described. Radiological optimization is a major radiation protection principle in practices and interventions, involving radiological protection factors, economic costs, and social factors. An important lack of knowledge with respect to these factors exists in the domain of the decommissioning of nuclear power plants, due to the low number of decommissioning operations already performed. Moreover, decommissioning takes place only once for an installation. Tasks, techniques, and procedures are in most cases rather specific, limiting the use of past experiences in the radiological optimization analyses of new decommissioning operations. Therefore, it is important that relevant data or information be acquired from decommissioning experiences. These data have to be stored in a database in a way that they can be used efficiently in ALARA analyses of future decommissioning activities

  18. Serialization and persistent objects turning data structures into efficient databases

    Soukup, Jiri

    2014-01-01

    Recently, the pressure for fast processing and efficient storage of large data with complex relations has increased beyond the capability of traditional databases. Typical examples include iPhone applications, computer aided design - both electrical and mechanical, biochemistry applications, and incremental compilers. Serialization, which is sometimes used in such situations, is notoriously tedious and error prone. In this book, Jiri Soukup and Petr Macháček show in detail how to write programs which store their internal data automatically and transparently to disk. Together with special data structure libraries which treat relations among objects as first-class entities, and with a UML class-diagram generator, the core application code is much simplified. The benchmark chapter shows a typical example where persistent data is faster by an order of magnitude than with a traditional database, in both traversing and accessing the data. The authors explore and exploit advanced features of object-oriented languages in a...

  19. Structure health monitoring system using internet and database technologies

    Kwon, Il Bum; Kim, Chi Yeop; Choi, Man Yong; Lee, Seung Seok

    2003-01-01

    A structural health monitoring system should be developed based on internet and database technology in order to manage large structures efficiently. The system is operated over the internet, connected to the site of the structures. The monitoring system has several functions: self monitoring, self diagnosis, self control, etc. Self monitoring is the function of sensor fault detection: if some sensors are not working normally, the system can detect the faulty sensors. The self diagnosis function repairs the abnormal condition of the sensors, and self control is the repair function of the monitoring system itself. In particular, the monitoring system can identify the replacement of sensors. For further study, a real application test will be performed to check for remaining inconveniences.

  20. Structural health monitoring system using internet and database technologies

    Kim, Chi Yeop; Choi, Man Yong; Kwon, Il Bum; Lee, Seung Seok [Nondestructive Measurement Lab., KRISS, Daejeon (Korea, Republic of)]

    2003-07-01

    A structural health monitoring system should be developed based on internet and database technology in order to manage large structures efficiently. The system is operated over the internet, connected to the site of the structures. The monitoring system has several functions: self monitoring, self diagnosis, self control, etc. Self monitoring is the function of sensor fault detection: if some sensors are not working normally, the system can detect the faulty sensors. The self diagnosis function repairs the abnormal condition of the sensors, and self control is the repair function of the monitoring system itself. In particular, the monitoring system can identify the replacement of sensors. For further study, a real application test will be performed to check for remaining inconveniences.

  1. Structure health monitoring system using internet and database technologies

    Kwon, Il Bum; Kim, Chi Yeop; Choi, Man Yong; Lee, Seung Seok [Smart Measurement Group, Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of)]

    2003-05-15

    A structural health monitoring system should be developed based on internet and database technology in order to manage large structures efficiently. The system is operated over the internet, connected to the site of the structures. The monitoring system has several functions: self monitoring, self diagnosis, self control, etc. Self monitoring is the function of sensor fault detection: if some sensors are not working normally, the system can detect the faulty sensors. The self diagnosis function repairs the abnormal condition of the sensors, and self control is the repair function of the monitoring system itself. In particular, the monitoring system can identify the replacement of sensors. For further study, a real application test will be performed to check for remaining inconveniences.

  2. Structural health monitoring system using internet and database technologies

    Kim, Chi Yeop; Choi, Man Yong; Kwon, Il Bum; Lee, Seung Seok

    2003-01-01

    A structural health monitoring system should be developed based on internet and database technology in order to manage large structures efficiently. The system is operated over the internet, connected to the site of the structures. The monitoring system has several functions: self monitoring, self diagnosis, self control, etc. Self monitoring is the function of sensor fault detection: if some sensors are not working normally, the system can detect the faulty sensors. The self diagnosis function repairs the abnormal condition of the sensors, and self control is the repair function of the monitoring system itself. In particular, the monitoring system can identify the replacement of sensors. For further study, a real application test will be performed to check for remaining inconveniences.

  3. Structural load inventory database for the Kansas City federal complex

    Hashimoto, P.S.; Johnson, M.W.; Nakaki, D.K.; Lynch, D.T.; Drury, M.A.

    1995-01-01

    A structural load inventory database (LID) has been developed to support configuration management at the DOE Kansas City Plant (KCP). The objective of the LID is to record loads supported by the plant structures and to provide rapid assessments of the impact of future facility modifications on structural adequacy. Development of the LID was initiated for the KCP's Main Manufacturing Building. Field walkdowns were performed to determine all significant loads supported by the structure, including the weight of piping, service equipment, etc. These loads were compiled in the LID. Structural analyses for natural phenomena hazards were performed in accordance with UCRL-15910. Software to calculate demands on the structural members due to gravity loads, total demands including both gravity and seismic loads, and structural member demand-to-capacity ratios were also developed and integrated into the LID. Operation of the LID is menu-driven. The LID user has options to review and print existing loads and corresponding demand-to-capacity ratios, and to update the supported loads and demand-to-capacity ratios for any future facility modifications
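
    The sort of calculation the LID automates can be illustrated with a small Python sketch that sums the gravity loads recorded for each member, adds a seismic demand, and reports demand-to-capacity ratios; the members, loads and capacities below are invented example numbers, not KCP data.

        # Tiny sketch of the bookkeeping a load inventory database automates:
        # sum gravity loads per member, combine with a seismic demand, and report
        # demand-to-capacity ratios. All numbers are invented examples.

        inventory = {
            "B-101": {"capacity": 40.0,                       # member capacity (kN, example)
                      "gravity_loads": [("piping", 6.5), ("duct", 3.0), ("cable tray", 1.5)],
                      "seismic_demand": 12.0},
            "B-102": {"capacity": 25.0,
                      "gravity_loads": [("equipment", 14.0)],
                      "seismic_demand": 9.0},
        }

        for member, rec in inventory.items():
            gravity = sum(load for _, load in rec["gravity_loads"])
            total = gravity + rec["seismic_demand"]
            print(member,
                  "gravity D/C = %.2f" % (gravity / rec["capacity"]),
                  "total D/C = %.2f" % (total / rec["capacity"]))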

  4. Structural load inventory database for the Kansas City Plant

    Hashimoto, P.S.; Johnson, M.W.; Nakaki, D.K.; Wilson, J.J.; Lynch, D.T.; Drury, M.A.

    1993-01-01

    A structural load inventory database (LID) has been developed to support configuration management at the DOE Kansas City Plant (KCP). The objective of the LID is to record loads supported by the plant structures and to provide rapid assessments of the impact of future facility modifications on structural adequacy. Development of the LID was initiated for the KCP's Main Manufacturing Building. Field walkdowns were performed to determine all significant loads supported by the structure, including the weight of piping, service equipment, etc. These loads were compiled in the LID. Structural analyses for natural phenomena hazards were performed in accordance with UCRL-15910. Software to calculate demands on the structural members due to gravity loads, total demands including both gravity and seismic loads, and structural member demand-to-capacity ratios were also developed and integrated into the LID. Operation of the LID is menu-driven. The LID user has options to review and print existing loads and corresponding demand-to-capacity ratios, and to update the supported loads and demand-to-capacity ratios for any future facility modifications

  5. Weighted voting-based consensus clustering for chemical structure databases

    Saeed, Faisal; Ahmed, Ali; Shamsir, Mohd Shahir; Salim, Naomie

    2014-06-01

    The cluster-based compound selection is used in the lead identification process of drug discovery and design. Many clustering methods have been used for chemical databases, but there is no clustering method that can obtain the best results under all circumstances. However, little attention has been focused on the use of combination methods for chemical structure clustering, which is known as consensus clustering. Recently, consensus clustering has been used in many areas including bioinformatics, machine learning and information theory. This process can improve the robustness, stability, consistency and novelty of clustering. For chemical databases, different consensus clustering methods have been used including the co-association matrix-based, graph-based, hypergraph-based and voting-based methods. In this paper, a weighted cumulative voting-based aggregation algorithm (W-CVAA) was developed. The MDL Drug Data Report (MDDR) benchmark chemical dataset was used in the experiments and represented by the AlogP and ECFP_4 descriptors. The results from the clustering methods were evaluated by the ability of the clustering to separate biologically active molecules in each cluster from inactive ones using different criteria, and the effectiveness of the consensus clustering was compared to that of Ward's method, which is the current standard clustering method in chemoinformatics. This study indicated that weighted voting-based consensus clustering can overcome the limitations of the existing voting-based methods and improve the effectiveness of combining multiple clusterings of chemical structures.
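
    As an illustration of the voting idea only (not the authors' W-CVAA implementation), the sketch below aligns each base partition to a reference partition by Hungarian matching of cluster labels and then assigns each compound by a weighted vote; the toy partitions and weights are invented.

      # Minimal sketch of weighted voting-based consensus clustering: each base
      # partition is relabelled to best agree with a reference partition, then
      # objects are assigned by a weighted vote across partitions.
      import numpy as np
      from scipy.optimize import linear_sum_assignment

      def align_labels(reference, labels, k):
          """Relabel `labels` so they agree as much as possible with `reference`."""
          contingency = np.zeros((k, k), dtype=int)
          for r, l in zip(reference, labels):
              contingency[l, r] += 1
          # maximise agreement = minimise negative contingency
          row, col = linear_sum_assignment(-contingency)
          mapping = dict(zip(row, col))
          return np.array([mapping[l] for l in labels])

      def weighted_voting_consensus(partitions, weights, k):
          """partitions: list of 1-D label arrays over the same objects."""
          reference = partitions[0]
          votes = np.zeros((len(reference), k))
          for labels, w in zip(partitions, weights):
              aligned = align_labels(reference, labels, k)
              votes[np.arange(len(reference)), aligned] += w
          return votes.argmax(axis=1)

      # toy usage: three base clusterings of six compounds into k=2 clusters
      parts = [np.array([0, 0, 0, 1, 1, 1]),
               np.array([1, 1, 1, 0, 0, 0]),      # same grouping, swapped labels
               np.array([0, 0, 1, 1, 1, 1])]
      print(weighted_voting_consensus(parts, weights=[1.0, 1.0, 0.5], k=2))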

  6. The Cambridge Structural Database in retrospect and prospect.

    Groom, Colin R; Allen, Frank H

    2014-01-13

    The Cambridge Crystallographic Data Centre (CCDC) was established in 1965 to record numerical, chemical and bibliographic data relating to published organic and metal-organic crystal structures. The Cambridge Structural Database (CSD) now stores data for nearly 700,000 structures and is a comprehensive and fully retrospective historical archive of small-molecule crystallography. Nearly 40,000 new structures are added each year. As X-ray crystallography celebrates its centenary as a subject, and the CCDC approaches its own 50th year, this article traces the origins of the CCDC as a publicly funded organization and its onward development into a self-financing charitable institution. Principally, however, we describe the growth of the CSD and its extensive associated software system, and summarize its impact and value as a basis for research in structural chemistry, materials science and the life sciences, including drug discovery and drug development. Finally, the article considers the CCDC's funding model in relation to open access and open data paradigms. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. 1.15 - Structural Chemogenomics Databases to Navigate Protein–Ligand Interaction Space

    Kanev, G.K.; Kooistra, A.J.; de Esch, I.J.P.; de Graaf, C.

    2017-01-01

    Structural chemogenomics databases allow the integration and exploration of heterogeneous genomic, structural, chemical, and pharmacological data in order to extract useful information that is applicable for the discovery of new protein targets and biologically active molecules. Integrated databases

  8. Applications of the Cambridge Structural Database in chemical education

    Battle, Gary M.; Ferrence, Gregory M.; Allen, Frank H.

    2010-01-01

    The Cambridge Structural Database (CSD) is a vast and ever growing compendium of accurate three-dimensional structures that has massive chemical diversity across organic and metal–organic compounds. For these reasons, the CSD is finding significant uses in chemical education, and these applications are reviewed. As part of the teaching initiative of the Cambridge Crystallographic Data Centre (CCDC), a teaching subset of more than 500 CSD structures has been created that illustrate key chemical concepts, and a number of teaching modules have been devised that make use of this subset in a teaching environment. All of this material is freely available from the CCDC website, and the subset can be freely viewed and interrogated using WebCSD, an internet application for searching and displaying CSD information content. In some cases, however, the complete CSD System is required for specific educational applications, and some examples of these more extensive teaching modules are also discussed. The educational value of visualizing real three-dimensional structures, and of handling real experimental results, is stressed throughout. PMID:20877495

  9. Databases, Repositories, and Other Data Resources in Structural Biology.

    Zheng, Heping; Porebski, Przemyslaw J; Grabowski, Marek; Cooper, David R; Minor, Wladek

    2017-01-01

    Structural biology, like many other areas of modern science, produces an enormous amount of primary, derived, and "meta" data with a high demand on data storage and manipulations. Primary data come from various steps of sample preparation, diffraction experiments, and functional studies. These data are not only used to obtain tangible results, like macromolecular structural models, but also to enrich and guide our analysis and interpretation of various biomedical problems. Herein we define several categories of data resources, (a) Archives, (b) Repositories, (c) Databases, and (d) Advanced Information Systems, that can accommodate primary, derived, or reference data. Data resources may be used either as web portals or internally by structural biology software. To be useful, each resource must be maintained, curated, as well as integrated with other resources. Ideally, the system of interconnected resources should evolve toward comprehensive "hubs", or Advanced Information Systems. Such systems, encompassing the PDB and UniProt, are indispensable not only for structural biology, but for many related fields of science. The categories of data resources described herein are applicable well beyond our usual scientific endeavors.

  10. A manufacturing database of advanced materials used in spacecraft structures

    Bao, Han P.

    1994-12-01

    Cost savings opportunities over the life cycle of a product are highest in the early exploratory phase when different design alternatives are evaluated not only for their performance characteristics but also their methods of fabrication, which really control the ultimate manufacturing costs of the product. In the past, Design-To-Cost methodologies for spacecraft design concentrated on the sizing and weight issues more than anything else at the early so-called 'Vehicle Level' (Ref: DOD/NASA Advanced Composites Design Guide). Given the impact of manufacturing cost, the objective of this study is to identify the principal cost drivers for each materials technology and propose a quantitative approach to incorporating these cost drivers into the family of optimization tools used by the Vehicle Analysis Branch of NASA LaRC to assess various conceptual vehicle designs. The advanced materials being considered include aluminum-lithium alloys, thermoplastic graphite-polyether etherketone composites, graphite-bismaleimide composites, graphite-polyimide composites, and carbon-carbon composites. Two conventional materials are added to the study to serve as baseline materials against which the other materials are compared. These two conventional materials are aircraft aluminum alloys series 2000 and series 7000, and graphite-epoxy composites T-300/934. The following information is available in the database. For each material type, the mechanical, physical, thermal, and environmental properties are first listed. Next the principal manufacturing processes are described. Whenever possible, guidelines for optimum processing conditions for specific applications are provided. Finally, six categories of cost drivers are discussed. They include design features affecting processing, tooling, materials, fabrication, joining/assembly, and quality assurance issues. It should be emphasized that this database is not an exhaustive database. Its primary use is to make the vehicle designer

  11. Investigation on structuring the human body function database; Shintai kino database no kochiku ni kansuru chosa kenkyu

    NONE

    1995-03-01

    Based on the concept of a human life engineering database, a study was carried out on how to construct such a database suited to elderly people in an aging society. It was proposed that a human life engineering database for the elderly be prepared to support the development and design of life technology for the aging society. A practical method for structuring the database was established using 'bathing' and 'going out', selected as case-study activities in the daily life of elderly people. As a result of the study, it was proposed that a human body function database for the elderly be prepared as an R and D base for life technology in the aged society. Based on this proposal, a master plan for structuring the database was drawn up, together with a concrete method for putting it into action. In the first stage of the investigation, a literature survey was made using existing documentary databases, and enterprises were also interviewed. About 500 documents pertaining to the functions of elderly people were extracted, although many points remain unclear. The investigation will resume in the next fiscal year. 4 refs., 38 figs., 30 tabs.

  12. Carotenoids Database: structures, chemical fingerprints and distribution among organisms.

    Yabuzaki, Junko

    2017-01-01

    To promote understanding of how organisms are related via carotenoids, either evolutionarily or symbiotically, or in food chains through natural histories, we built the Carotenoids Database. This provides chemical information on 1117 natural carotenoids with 683 source organisms. For extracting organisms closely related through the biosynthesis of carotenoids, we offer a new similarity search system 'Search similar carotenoids' using our original chemical fingerprint 'Carotenoid DB Chemical Fingerprints'. These Carotenoid DB Chemical Fingerprints describe the chemical substructure and the modification details based upon International Union of Pure and Applied Chemistry (IUPAC) semi-systematic names of the carotenoids. The fingerprints also allow (i) easier prediction of six biological functions of carotenoids: provitamin A, membrane stabilizers, odorous substances, allelochemicals, antiproliferative activity and reverse MDR activity against cancer cells, (ii) easier classification of carotenoid structures, (iii) partial and exact structure searching and (iv) easier extraction of structural isomers and stereoisomers. We believe this to be the first attempt to establish fingerprints using the IUPAC semi-systematic names. For extracting close profiled organisms, we provide a new tool 'Search similar profiled organisms'. Our current statistics show some insights into natural history: carotenoids seem to have been spread largely by bacteria, as they produce C30, C40, C45 and C50 carotenoids, with the widest range of end groups, and they share a small portion of C40 carotenoids with eukaryotes. Archaea share an even smaller portion with eukaryotes. Eukaryotes then have evolved a considerable variety of C40 carotenoids. Considering carotenoids, eukaryotes seem more closely related to bacteria than to archaea aside from 16S rRNA lineage analysis. Database URL: http://carotenoiddb.jp. © The Author(s) 2017. Published by Oxford University Press.
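
    As a generic illustration of fingerprint-based similarity searching (the actual Carotenoid DB Chemical Fingerprint definition is based on IUPAC semi-systematic names and is not reproduced here), a Tanimoto comparison over hypothetical substructure keys could look like this:

      # Generic sketch of similarity search over binary substructure fingerprints;
      # the keys and example entries are hypothetical, not Carotenoid DB data.
      def tanimoto(a, b):
          """Tanimoto coefficient between two sets of 'on' fingerprint keys."""
          if not a and not b:
              return 0.0
          return len(a & b) / len(a | b)

      database = {
          "compound_A": {"beta-ring", "hydroxyl", "C40-backbone"},
          "compound_B": {"beta-ring", "keto", "C40-backbone"},
          "compound_C": {"epsilon-ring", "C30-backbone"},
      }

      query = {"beta-ring", "hydroxyl", "C40-backbone"}
      hits = sorted(((tanimoto(query, fp), name) for name, fp in database.items()),
                    reverse=True)
      for score, name in hits:
          print(f"{name}: {score:.2f}")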

  13. The IAEA Illicit Trafficking Database Programme: Operations and Structure

    2010-01-01

    The IAEA ITDB currently has 90 States participating voluntarily in the database. Information on about 827 incidents, of which some 500 involved radioactive sources, has been reported. States provide information by submitting an Information Notification Form; each incident is assigned an identification number and entered in the database. Information from open sources is also collected daily and reviewed, and if the information warrants it, a new incident record is created in the database.

  14. Protein Structural Change Data - PSCDB | LSDB Archive [Life Science Database Archive metadata]

    PSCDB (Protein Structural Change Data), archived in the LSDB Archive (Life Science Database Archive).

  15. Nuclear Reaction and Structure Databases of the National Nuclear Data Center

    Pritychenko, B.; Arcilla, R.; Herman, M. W.; Oblozinsky, P.; Rochman, D.; Sonzogni, A. A.; Tuli, J. K.; Winchell, D. F.

    2006-01-01

    The National Nuclear Data Center (NNDC) collects, evaluates, and disseminates nuclear physics data for basic research and applied nuclear technologies. In 2004, the NNDC migrated all databases into modern relational database software, installed a new generation of Linux servers, and developed a new Java-based Web service. These developments provide much faster, more flexible and more convenient service to all users in the United States. These nuclear reaction and structure database developments, as well as the related Web services, are briefly described

  16. NIMS structural materials databases and cross search engine - MatNavi

    Yamazaki, M.; Xu, Y.; Murata, M.; Tanaka, H.; Kamihira, K.; Kimura, K. [National Institute for Materials Science, Tokyo (Japan)

    2007-06-15

    The Materials Database Station (MDBS) of the National Institute for Materials Science (NIMS) owns the world's largest Internet materials database for academic and industrial purposes, which is composed of twelve databases: five concerning structural materials, five concerning basic physical properties, one for superconducting materials and one for polymers. All of these databases are open to Internet access at http://mits.nims.go.jp/en. Online tools for predicting properties of polymers and composite materials are also available. The NIMS structural materials databases comprise the structural materials data sheets online version (creep, fatigue, corrosion and space-use materials strength), the microstructure database for crept materials, the pressure vessel materials database and the CCT diagrams for welding. (orig.)

  17. Utilizing Structural Knowledge for Information Retrieval in XML Databases

    Mihajlovic, V.; Hiemstra, Djoerd; Blok, H.E.; Apers, Peter M.G.

    In this paper we address the problem of immediate translation of eXtensible Mark-up Language (XML) information retrieval (IR) queries to relational database expressions and stress the benefits of using an intermediate XML-specific algebra over relational algebra. We show how adding an XML-specific

  18. A subsoil compaction database: its development, structure and content

    Trautner, A.; Akker, van den J.J.H.; Fleige, H.; Arvidsson, J.; Horn, R.

    2003-01-01

    A database which holds results of field and laboratory experiments on the impact of subsoil compaction on physical and mechanical soil parameters and on crop yields and environmental impact is being developed within the EU-sponsored concerted action (CA) project "Experiences with the impact of

  19. Intelligent Access to Sequence and Structure Databases (IASSD) - an interface for accessing information from major web databases.

    Ganguli, Sayak; Gupta, Manoj Kumar; Basu, Protip; Banik, Rahul; Singh, Pankaj Kumar; Vishal, Vineet; Bera, Abhisek Ranjan; Chakraborty, Hirak Jyoti; Das, Sasti Gopal

    2014-01-01

    With the advent of the age of big data and advances in high-throughput technology, accessing data has become one of the most important steps in the entire knowledge discovery process. Most users are not able to decipher the query results obtained when non-specific keywords or combinations of keywords are used. Intelligent Access to Sequence and Structure Databases (IASSD) is a desktop application for the Windows operating system. It is written in Java and utilizes the Web Services Description Language (WSDL) files and JAR files of the E-utilities of various databases such as the National Centre for Biotechnology Information (NCBI) and the Protein Data Bank (PDB). In addition, IASSD allows the user to view protein structures using a Jmol application which supports conditional editing. The JAR file is freely available by e-mail from the corresponding author.
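
    IASSD itself wraps the WSDL interfaces of the E-utilities; as a rough illustration of the kind of call involved, the sketch below uses the public REST endpoints of the NCBI E-utilities instead (an assumption for illustration, not the IASSD code):

      # Sketch of keyword search and record retrieval via NCBI E-utilities (REST);
      # this only illustrates the kind of query IASSD mediates for the user.
      import requests

      EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

      def search_protein(term, retmax=5):
          """Return a list of NCBI protein database IDs matching a keyword query."""
          r = requests.get(f"{EUTILS}/esearch.fcgi",
                           params={"db": "protein", "term": term,
                                   "retmax": retmax, "retmode": "json"})
          r.raise_for_status()
          return r.json()["esearchresult"]["idlist"]

      def fetch_fasta(uid):
          """Fetch one protein record in FASTA format."""
          r = requests.get(f"{EUTILS}/efetch.fcgi",
                           params={"db": "protein", "id": uid,
                                   "rettype": "fasta", "retmode": "text"})
          r.raise_for_status()
          return r.text

      for uid in search_protein("HIV-1 protease"):
          print(fetch_fasta(uid).splitlines()[0])   # print the FASTA headers only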

  20. Report on the database structuring project in fiscal 1996 related to the 'surveys on making databases for energy saving (2)'; 1996 nendo database kochiku jigyo hokokusho. Sho energy database system ka ni kansuru chosa 2

    NONE

    1998-03-01

    With the objective of supporting the promotion of energy conservation in countries such as Japan, China, Indonesia, the Philippines, Thailand, Malaysia, Taiwan and Korea, primary information on energy conservation in each country was collected and a database was structured. This paper summarizes the achievements in fiscal 1996. Based on the results of the database project to date and on the various data collected, this fiscal year discussed structuring the database for distribution and dissemination. In that discussion, the functional requirements of the database, the data items to be recorded, and the processing of the recorded data were organized with reference to the circumstances of the database. Demonstrations of a dissemination version of the database were performed in the Philippines, Indonesia and China, and three hundred CDs were prepared for distribution in each country. Adjustment and confirmation of the operation of the supplied computers were carried out, and operational briefing meetings were held in China and the Philippines. (NEDO)

  1. Structure and function design for nuclear facilities decommissioning information database

    Liu Yongkuo; Song Yi; Wu Xiaotian; Liu Zhen

    2014-01-01

    The decommissioning of nuclear facilities is a high-risk, radioactive project which must consider the effects of radiation and nuclear waste disposal, so an information system for the decommissioning project must be established to ensure its safety. In this study, decommissioning activity data were collected to establish a decommissioning database, and based on this database a decommissioning information database (DID) was developed. The DID supports basic operations such as input, deletion, modification and query of decommissioning information, and, in accordance with the processing characteristics of the various types of data, it can also manage information using different functional modules. On this basis, analysis of the different information data can be carried out. The system helps to enhance the management of the decommissioning process and to optimize project arrangements; it can also reduce the radiation dose to workers, and is therefore necessary for the safe decommissioning of nuclear facilities. (authors)
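
    The basic operations listed (input, deletion, modification and query) correspond to ordinary relational CRUD statements. A minimal sketch with an entirely hypothetical table layout, not the actual DID schema, follows:

      # Minimal CRUD sketch for a decommissioning-information table; the table
      # name and columns are hypothetical, not the DID schema from the paper.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("""CREATE TABLE decom_activity (
                          id INTEGER PRIMARY KEY,
                          facility TEXT,
                          activity TEXT,
                          dose_mSv REAL)""")

      # input
      conn.execute("INSERT INTO decom_activity (facility, activity, dose_mSv) "
                   "VALUES (?, ?, ?)", ("Reactor-1", "pipe cutting", 0.8))
      # modification
      conn.execute("UPDATE decom_activity SET dose_mSv = ? WHERE activity = ?",
                   (0.6, "pipe cutting"))
      # query
      for row in conn.execute("SELECT facility, activity, dose_mSv "
                              "FROM decom_activity WHERE dose_mSv < 1.0"):
          print(row)
      # deletion
      conn.execute("DELETE FROM decom_activity WHERE facility = ?", ("Reactor-1",))
      conn.commit()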

  2. Using the structure-function linkage database to characterize functional domains in enzymes.

    Brown, Shoshana; Babbitt, Patricia

    2014-12-12

    The Structure-Function Linkage Database (SFLD; http://sfld.rbvi.ucsf.edu/) is a Web-accessible database designed to link enzyme sequence, structure, and functional information. This unit describes the protocols by which a user may query the database to predict the function of uncharacterized enzymes and to correct misannotated functional assignments. The information in this unit is especially useful in helping a user discriminate functional capabilities of a sequence that is only distantly related to characterized sequences in publicly available databases. Copyright © 2014 John Wiley & Sons, Inc.

  3. URS DataBase: universe of RNA structures and their motifs.

    Baulin, Eugene; Yacovlev, Victor; Khachko, Denis; Spirin, Sergei; Roytberg, Mikhail

    2016-01-01

    The Universe of RNA Structures DataBase (URSDB) stores information obtained from all RNA-containing PDB entries (2935 entries in October 2015). The content of the database is updated regularly. The database consists of 51 tables containing indexed data on various elements of the RNA structures. The database provides a web interface allowing the user to select a subset of structures with desired features and to obtain various statistical data for a selected subset of structures or for all structures. In particular, one can easily obtain statistics on geometric parameters of base pairs, on structural motifs (stems, loops, etc.) or on different types of pseudoknots. The user can also view and get information on an individual structure or its selected parts, e.g. RNA-protein hydrogen bonds. URSDB employs a new original definition of loops in RNA structures. That definition fits both pseudoknot-free and pseudoknotted secondary structures and coincides with the classical definition in the case of pseudoknot-free structures. To our knowledge, URSDB is the first database supporting searches based on topological classification of pseudoknots and on extended loop classification. Database URL: http://server3.lpm.org.ru/urs/. © The Author(s) 2016. Published by Oxford University Press.

  4. PACSY, a relational database management system for protein structure and chemical shift analysis.

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L

    2012-10-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.
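
    Because the PACSY tables are linked by key identification numbers, cross-table questions reduce to SQL joins submitted through an RDBMS driver. The sketch below is illustrative only; the table names, column names and rows are hypothetical stand-ins rather than the published PACSY schema:

      # Illustrative join across PACSY-like tables keyed by a shared ID; the
      # table/column names and rows are hypothetical, not the real PACSY schema.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
          CREATE TABLE coordinates (key_id INTEGER, atom TEXT, x REAL, y REAL, z REAL);
          CREATE TABLE chemical_shifts (key_id INTEGER, atom TEXT, shift_ppm REAL);
          INSERT INTO coordinates VALUES (1, 'CA', 10.1, 4.2, -3.3);
          INSERT INTO chemical_shifts VALUES (1, 'CA', 58.4);
      """)

      query = """
          SELECT c.key_id, c.atom, c.x, c.y, c.z, s.shift_ppm
          FROM coordinates AS c
          JOIN chemical_shifts AS s
            ON s.key_id = c.key_id AND s.atom = c.atom
          WHERE s.shift_ppm BETWEEN 50 AND 65
      """
      for row in conn.execute(query):
          print(row)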

  5. PACSY, a relational database management system for protein structure and chemical shift analysis

    Lee, Woonghee, E-mail: whlee@nmrfam.wisc.edu [University of Wisconsin-Madison, National Magnetic Resonance Facility at Madison, and Biochemistry Department (United States); Yu, Wookyung [Center for Proteome Biophysics, Pusan National University, Department of Physics (Korea, Republic of); Kim, Suhkmann [Pusan National University, Department of Chemistry and Chemistry Institute for Functional Materials (Korea, Republic of); Chang, Iksoo [Center for Proteome Biophysics, Pusan National University, Department of Physics (Korea, Republic of); Lee, Weontae, E-mail: wlee@spin.yonsei.ac.kr [Yonsei University, Structural Biochemistry and Molecular Biophysics Laboratory, Department of Biochemistry (Korea, Republic of); Markley, John L., E-mail: markley@nmrfam.wisc.edu [University of Wisconsin-Madison, National Magnetic Resonance Facility at Madison, and Biochemistry Department (United States)

    2012-10-15

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.

  6. PACSY, a relational database management system for protein structure and chemical shift analysis

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo

    2012-01-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu. PMID:22903636

  7. PACSY, a relational database management system for protein structure and chemical shift analysis

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L.

    2012-01-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.

  8. HD-RNAS: An automated hierarchical database of RNA structures

    Shubhra Sankar Ray

    2012-04-01

    One of the important goals of most biological investigations is to classify and organize the experimental findings so that they are readily useful for deriving generalized rules. Although there is a huge amount of information on RNA structures in the PDB, there are redundant files, ambiguous synthetic sequences, etc. Moreover, a systematic hierarchical organization, reflecting RNA classification, is missing in the PDB. In this investigation, we have classified all the available RNA crystal structures from the PDB through a programmatic approach. Hence, it is now a simple task to regularly update the classification as and when new structures are released. The classification can further determine (i) a non-redundant set of RNA structures and (ii) if available, a set of structures of identical sequence and function, which can highlight structural polymorphism, ligand-induced conformational alterations, etc. Presently, we have classified the available structures (2095 PDB entries having RNA chains longer than 9 nucleotides, solved by X-ray crystallography or NMR spectroscopy) into nine functional classes. Structures of the same function and same source are mostly seen to be similar, with subtle differences depending on their functional complexation. The web server is available online at http://www.saha.ac.in/biop/www/HD-RNAS.html and is updated regularly.

  9. Computer-aided visualization of database structural relationships

    Cahn, D.F.

    1980-04-01

    Interactive computer graphic displays can be extremely useful in augmenting understandability of data structures. In complexly interrelated domains such as bibliographic thesauri and energy information systems, node and link displays represent one such tool. This paper presents examples of data structure representations found useful in these domains and discusses some of their generalizable components. 2 figures

  10. NESSY, a relational PC database for nuclear structure and decay data

    Boboshin, I.N.; Varlamov, V.V.; Trukhanov, S.K.

    1994-11-01

    The universal relational database NESSY (New ENSDF Search SYstem), based on the international ENSDF system (Evaluated Nuclear Structure Data File), is described. NESSY, which was developed for IBM-compatible PCs, provides high-efficiency processing of ENSDF information for searches and retrievals of nuclear physics data. The principles of the database development and examples of applications are presented. (author)

  11. Geologic structure mapping database Spent Fuel Test - Climax, Nevada Test Site

    Yow, J.L. Jr.

    1984-01-01

    Information on over 2500 discontinuities mapped at the SFT-C is contained in the geologic structure mapping database. Over 1800 of these features include complete descriptions of their orientations. This database is now available for use by other researchers. 6 references, 3 figures, 2 tables

  12. TRANSFORMATION OF DEVELOPMENTAL NEUROTOXICITY DATA INTO STRUCTURE-SEARCHABLE TOXML DATABASE IN SUPPORT OF STRUCTURE-ACTIVITY RELATIONSHIP (SAR) WORKFLOW.

    Early hazard identification of new chemicals is often difficult due to lack of data on the novel material for toxicity endpoints, including neurotoxicity. At present, there are no structure searchable neurotoxicity databases. A working group was formed to construct a database to...

  13. Current Challenges in Development of a Database of Three-Dimensional Chemical Structures

    Maeda, Miki H.

    2015-01-01

    We are developing a database named 3DMET, a three-dimensional structure database of natural metabolites. There are two major impediments to the creation of 3D chemical structures from a set of planar structure drawings: the limited accuracy of computer programs and insufficient human resources for manual curation. We have tested some 2D–3D converters to convert 2D structure files from external databases. These automatic conversion processes yielded an excessive number of improper conversions. To ascertain the quality of the conversions, we compared IUPAC Chemical Identifier and canonical SMILES notations before and after conversion. Structures whose notations correspond to each other were regarded as a correct conversion in our present work. We found that chiral inversion is the most serious factor during the improper conversion. In the current stage of our database construction, published books or articles have been resources for additions to our database. Chemicals are usually drawn as pictures on the paper. To save human resources, an optical structure reader was introduced. The program was quite useful but some particular errors were observed during our operation. We hope our trials for producing correct 3D structures will help other developers of chemical programs and curators of chemical databases. PMID:26075200

  14. Development of conventional fatigue database for structure materials of nuclear power plant

    Yang Bing

    2002-01-01

    A management system for the conventional fatigue database for structural materials of nuclear power plants (NPP) has been developed. The database includes the parameters of design curves, i.e., the stress-life, survival probability-stress-life, strain-life, survival probability-strain-life, stress-strain and survival probability-stress-strain curves, together with corresponding information on materials and testing conditions. Two search modes are provided: by material name or by bounds on material mechanical properties. From the retrieved information, conventional fatigue design analysis and reliability assessment of structures can be conveniently performed

  15. Development of Database Assisted Structure Identification (DASI Methods for Nontargeted Metabolomics

    Lochana C. Menikarachchi

    2016-05-01

    Metabolite structure identification remains a significant challenge in nontargeted metabolomics research. One commonly used strategy relies on searching biochemical databases using exact mass. However, this approach fails when the database does not contain the unknown metabolite (i.e., for unknown-unknowns). For these cases, constrained structure generation with combinatorial structure generators provides a potential option. Here we evaluated structure generation constraints based on the specification of: (1) substructures required (i.e., seed structures); (2) substructures not allowed; and (3) filters to remove incorrect structures. Our approach (database assisted structure identification, DASI) used predictive models in MolFind to find candidate structures with chemical and physical properties similar to the unknown. These candidates were then used for seed structure generation using eight different structure generation algorithms. One algorithm was able to generate correct seed structures for 21/39 test compounds. Eleven of these seed structures were large enough to constrain the combinatorial structure generator to fewer than 100,000 structures. In 35/39 cases, at least one algorithm was able to generate a correct seed structure. The DASI method has several limitations and will require further experimental validation and optimization. At present, it seems most useful for identifying the structure of unknown-unknowns with molecular weights <200 Da.
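
    The exact-mass database search that DASI falls back from when the metabolite is an unknown-unknown is easy to illustrate; in the sketch below the candidate masses, names and tolerance are invented values, not DASI or MolFind internals:

      # Sketch of an exact-mass candidate search; entries and tolerance are
      # hypothetical illustrative values, not DASI or MolFind internals.
      CANDIDATES = {
          "glucose": 180.0634,
          "fructose": 180.0634,
          "citric acid": 192.0270,
          "caffeine": 194.0804,
      }

      def search_by_exact_mass(measured_mass, ppm_tolerance=10.0):
          """Return candidate names whose monoisotopic mass matches within a ppm window."""
          window = measured_mass * ppm_tolerance / 1e6
          return [name for name, mass in CANDIDATES.items()
                  if abs(mass - measured_mass) <= window]

      print(search_by_exact_mass(180.0635))   # both hexose isomers match
      print(search_by_exact_mass(150.0000))   # an "unknown-unknown": no hits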

  16. Characterising non-covalent interactions with the Cambridge Structural Database.

    Lommerse, J P; Taylor, R

    1997-02-01

    This review describes how the CSD can be used to study non-covalent interactions. Several different types of information may be obtained. First, the relative frequencies of various interactions can be studied; for example, we have shown that the terminal oxygen atoms of phosphate groups accept hydrogen bonds far more often than the linkage oxygens. Secondly, information can be obtained about the geometries of nonbonded contacts; for example, hydrogen bonds to P-O groups rarely form along the extension of the P-O bond, whereas short contacts between oxygen and carbon-bound iodine show a strong preference for linear C-I ... O angles. Thirdly, the CSD can be searched for novel interactions which may be exploited in inhibitor design; for example, the I ... O contacts just mentioned, and N-H ... pi hydrogen bonds. Finally, the CSD can suggest synthetic targets for medicinal chemistry; for example, molecules containing delocalised electron deficient groups such as trimethylammonium, pyridinium, thiazolium and dinitrophenyl have a good chance of binding to an active-site tryptophan. Although the CSD contains small-molecule crystal structures, not protein-ligand complexes, there is considerable evidence that the contacts seen in the two types of structures are similar. We have illustrated this a number of times in the present review and additional evidence has been given previously by Klebe. The major advantages of the CSD are its size, diversity and experimental accuracy. For these reasons, it is a useful tool for modellers engaged in rational inhibitor design.

  17. Synthesis and structural and electrical characterization of new materials Bi3R2FeTi3O15

    Gil Novoa, O.D.; Landínez Téllez, D.A.; Roa-Rojas, J.

    2012-01-01

    In this work we report the synthesis of polycrystalline samples of the new compounds Bi5FeTi3O15 and Bi3R2FeTi3O15 with R=Nd, Sm, Gd, Dy, Ho and Yb. The materials were synthesized by the standard solid state reaction recipe from high purity (99.99%) powders. The structural characteristics of the materials were analyzed by X-ray diffraction experiments. Rietveld refinement with the GSAS code was performed, taking the input data from the ICSD 74037 database. Results reveal that the materials crystallized in orthorhombic single-phase structures with space group Fmm2. Measurements of polarization as a function of applied electric field were carried out using a Radiant Technology polarimeter. We determine the occurrence of hysteretic behaviors, which are characteristic of ferroelectric materials. The main values of remnant and coercive applied fields were observed for substitutions with Yb and Nd, which have the main atomic radii.

  18. NETMARK: A Schema-less Extension for Relational Databases for Managing Semi-structured Data Dynamically

    Maluf, David A.; Tran, Peter B.

    2003-01-01

    An object-relational database management system is an integrated, hybrid approach that combines the best practices of the relational model, using SQL queries, with the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework called NETMARK is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to handle the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput, open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.

  19. SHEETSPAIR: A Database of Amino Acid Pairs in Protein Sheet Structures

    Ning Zhang

    2007-10-01

    Within the folded strands of a protein, amino acids (AAs) on every two adjacent strands form pairs of AAs. To explore the interactions between strands in a protein sheet structure, we have established an Internet-accessible relational database named SheetsPairs based on SQL Server 2000. The database has collected AA pairs in proteins with detailed information. Furthermore, it utilizes a non-freetext database structure to store protein sequences and a specific database table with a unique number to store strands, which provides more searching options and rapid and accurate access to data queries. An IIS web server has been set up for data retrieval through a custom web interface, which enables complex data queries. Also searchable are parallel or anti-parallel folded strands and the list of strands in a specified protein.

  20. TIPdb-3D: the three-dimensional structure database of phytochemicals from Taiwan indigenous plants.

    Tung, Chun-Wei; Lin, Ying-Chi; Chang, Hsun-Shuo; Wang, Chia-Chi; Chen, Ih-Sheng; Jheng, Jhao-Liang; Li, Jih-Heng

    2014-01-01

    The rich indigenous and endemic plants in Taiwan serve as a resourceful bank for biologically active phytochemicals. Based on our TIPdb database curating bioactive phytochemicals from Taiwan indigenous plants, this study presents a three-dimensional (3D) chemical structure database named TIPdb-3D to support the discovery of novel pharmacologically active compounds. The Merck Molecular Force Field (MMFF94) was used to generate 3D structures of phytochemicals in TIPdb. The 3D structures could facilitate the analysis of 3D quantitative structure-activity relationship, the exploration of chemical space and the identification of potential pharmacologically active compounds using protein-ligand docking. Database URL: http://cwtung.kmu.edu.tw/tipdb. © The Author(s) 2014. Published by Oxford University Press.
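
    The MMFF94-based 2D-to-3D step can be reproduced in outline with open-source toolkits. The sketch below uses RDKit and an arbitrary example molecule; it illustrates the general approach rather than the pipeline actually used to build TIPdb-3D:

      # Sketch of 2D-to-3D conversion with MMFF94 optimisation using RDKit;
      # this only illustrates the general approach, and the SMILES is an
      # arbitrary example molecule, not a TIPdb entry.
      from rdkit import Chem
      from rdkit.Chem import AllChem

      smiles = "CC(=O)Oc1ccccc1C(=O)O"          # aspirin, as a stand-in molecule
      mol = Chem.AddHs(Chem.MolFromSmiles(smiles))

      AllChem.EmbedMolecule(mol, randomSeed=42)  # generate an initial 3D conformer
      AllChem.MMFFOptimizeMolecule(mol, mmffVariant="MMFF94")  # MMFF94 refinement

      print(Chem.MolToMolBlock(mol))             # 3D coordinates in MDL mol format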

  1. YAdumper: extracting and translating large information volumes from relational databases to structured flat files.

    Fernández, José M; Valencia, Alfonso

    2004-10-12

    Downloading the information stored in relational databases into XML and other flat formats is a common task in bioinformatics. This periodic dumping of information requires considerable CPU time, disk and memory resources. YAdumper has been developed as a purpose-specific tool to deal with the integral structured information download of relational databases. YAdumper is a Java application that organizes database extraction following an XML template based on an external Document Type Declaration. Compared with other non-native alternatives, YAdumper substantially reduces memory requirements and considerably improves writing performance.
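
    A relational-to-XML dump of the kind YAdumper automates can be sketched with the Python standard library; the throwaway in-memory table and element names below are invented and unrelated to YAdumper's Java implementation or its DTD-driven templates:

      # Toy sketch of dumping relational rows to a flat XML file; the table and
      # element names are hypothetical and unrelated to YAdumper's templates.
      import sqlite3
      import xml.etree.ElementTree as ET

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE protein (acc TEXT, name TEXT, length INTEGER)")
      conn.executemany("INSERT INTO protein VALUES (?, ?, ?)",
                       [("P00001", "example protein 1", 120),
                        ("P00002", "example protein 2", 342)])

      root = ET.Element("proteins")
      for acc, name, length in conn.execute("SELECT acc, name, length FROM protein"):
          entry = ET.SubElement(root, "protein", attrib={"acc": acc})
          ET.SubElement(entry, "name").text = name
          ET.SubElement(entry, "length").text = str(length)

      ET.ElementTree(root).write("proteins.xml", encoding="utf-8",
                                 xml_declaration=True)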

  2. A protein relational database and protein family knowledge bases to facilitate structure-based design analyses.

    Mobilio, Dominick; Walker, Gary; Brooijmans, Natasja; Nilakantan, Ramaswamy; Denny, R Aldrin; Dejoannis, Jason; Feyfant, Eric; Kowticwar, Rupesh K; Mankala, Jyoti; Palli, Satish; Punyamantula, Sairam; Tatipally, Maneesh; John, Reji K; Humblet, Christine

    2010-08-01

    The Protein Data Bank is the most comprehensive source of experimental macromolecular structures. It can, however, be difficult at times to locate relevant structures with the Protein Data Bank search interface. This is particularly true when searching for complexes containing specific interactions between protein and ligand atoms. Moreover, searching within a family of proteins can be tedious. For example, one cannot search for some conserved residue as residue numbers vary across structures. We describe herein three databases, Protein Relational Database, Kinase Knowledge Base, and Matrix Metalloproteinase Knowledge Base, containing protein structures from the Protein Data Bank. In Protein Relational Database, atom-atom distances between protein and ligand have been precalculated allowing for millisecond retrieval based on atom identity and distance constraints. Ring centroids, centroid-centroid and centroid-atom distances and angles have also been included permitting queries for pi-stacking interactions and other structural motifs involving rings. Other geometric features can be searched through the inclusion of residue pair and triplet distances. In Kinase Knowledge Base and Matrix Metalloproteinase Knowledge Base, the catalytic domains have been aligned into common residue numbering schemes. Thus, by searching across Protein Relational Database and Kinase Knowledge Base, one can easily retrieve structures wherein, for example, a ligand of interest is making contact with the gatekeeper residue.
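
    The precalculation of protein-ligand atom-atom distances that makes millisecond retrieval possible is conceptually straightforward; the sketch below, with made-up coordinates and atom labels rather than Protein Relational Database content, precomputes a small distance table and then applies an identity-plus-distance constraint to it:

      # Sketch of precomputing protein-ligand atom-atom distances for later
      # constraint queries; coordinates and atom labels are made-up examples.
      import numpy as np

      protein_atoms = {"ASP25:OD1": (1.0, 2.0, 3.0), "ILE50:CD1": (4.0, 1.0, 0.5)}
      ligand_atoms = {"LIG:N1": (1.5, 2.2, 3.4), "LIG:C7": (5.0, 0.8, 0.2)}

      rows = []
      for p_name, p_xyz in protein_atoms.items():
          for l_name, l_xyz in ligand_atoms.items():
              dist = float(np.linalg.norm(np.subtract(p_xyz, l_xyz)))
              rows.append((p_name, l_name, round(dist, 2)))

      # a "query" over the precomputed table: polar contacts shorter than 3.5 A
      for p_name, l_name, dist in rows:
          if dist < 3.5 and p_name.endswith("OD1") and l_name.startswith("LIG:N"):
              print(p_name, l_name, dist)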

  3. BtoxDB: a comprehensive database of protein structural data on toxin-antitoxin systems.

    Barbosa, Luiz Carlos Bertucci; Garrido, Saulo Santesso; Marchetto, Reinaldo

    2015-03-01

    Toxin-antitoxin (TA) systems are diverse and abundant genetic modules in prokaryotic cells that are typically formed by two genes encoding a stable toxin and a labile antitoxin. Because TA systems are able to repress growth or kill cells and are considered to be important actors in cell persistence (multidrug resistance without genetic change), these modules are considered potential targets for alternative drug design. In this scenario, structural information for the proteins in these systems is highly valuable. In this report, we describe the development of a web-based system, named BtoxDB, that stores all protein structural data on TA systems. The BtoxDB database was implemented as a MySQL relational database using PHP scripting language. Web interfaces were developed using HTML, CSS and JavaScript. The data were collected from the PDB, UniProt and Entrez databases. These data were appropriately filtered using specialized literature and our previous knowledge about toxin-antitoxin systems. The database provides three modules ("Search", "Browse" and "Statistics") that enable searches, acquisition of contents and access to statistical data. Direct links to matching external databases are also available. The compilation of all protein structural data on TA systems in one platform is highly useful for researchers interested in this content. BtoxDB is publicly available at http://www.gurupi.uft.edu.br/btoxdb. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. EDCs DataBank: 3D-Structure database of endocrine disrupting chemicals.

    Montes-Grajales, Diana; Olivero-Verbel, Jesus

    2015-01-02

    Endocrine disrupting chemicals (EDCs) are a group of compounds that affect the endocrine system, frequently found in everyday products and epidemiologically associated with several diseases. The purpose of this work was to develop EDCs DataBank, the only database of EDCs with three-dimensional structures. This database was built on MySQL using the EU list of potential endocrine disruptors and TEDX list. It contains the three-dimensional structures available on PubChem, as well as a wide variety of information from different databases and text mining tools, useful for almost any kind of research regarding EDCs. The web platform was developed employing HTML, CSS and PHP languages, with dynamic contents in a graphic environment, facilitating information analysis. Currently EDCs DataBank has 615 molecules, including pesticides, natural and industrial products, cosmetics, drugs and food additives, among other low molecular weight xenobiotics. Therefore, this database can be used to study the toxicological effects of these molecules, or to develop pharmaceuticals targeting hormone receptors, through docking studies, high-throughput virtual screening and ligand-protein interaction analysis. EDCs DataBank is totally user-friendly and the 3D-structures of the molecules can be downloaded in several formats. This database is freely available at http://edcs.unicartagena.edu.co. Copyright © 2014. Published by Elsevier Ireland Ltd.

  5. [Construction of chemical information database based on optical structure recognition technique].

    Lv, C Y; Li, M N; Zhang, L R; Liu, Z M

    2018-04-18

    To create a protocol that can be used to construct a chemical information database from the scientific literature quickly and automatically. Scientific literature, patents and technical reports from different chemical disciplines were collected and stored in PDF format as the fundamental datasets. Chemical structures were transformed from published documents and images into machine-readable data using name-conversion technology and the optical structure recognition tool CLiDE. In the process of molecular structure information extraction, Markush structures were enumerated into well-defined monomer molecules by means of the QueryTools in the molecule editor ChemDraw. The document management software EndNote X8 was applied to acquire bibliographical references including title, author, journal and year of publication. The text mining toolkit ChemDataExtractor was adopted to retrieve information for populating the structured chemical database from figures, tables, and textual paragraphs. After this step, detailed manual revision and annotation were conducted to ensure the accuracy and completeness of the data. In addition to the literature data, the computing simulation platform Pipeline Pilot 7.5 was utilized to calculate physical and chemical properties and to predict molecular attributes. Furthermore, the open database ChEMBL was linked to fetch known bioactivities, such as indications and targets. After information extraction and data expansion, five separate metadata files were generated: the molecular structure data file, molecular information, bibliographical references, predicted attributes and known bioactivities. With the canonical simplified molecular-input line-entry specification (SMILES) as the primary key, the metadata files were associated through common key nodes, including molecular number and PDF number, to construct an integrated chemical information database. A reasonable construction protocol for a chemical information database was created successfully. A total of 174 research
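
    Using the canonical SMILES as the primary key is easy to illustrate: two different drawings of the same structure canonicalize to one key, so the metadata files join on it unambiguously. The sketch below uses RDKit and SQLite with invented records and is not the authors' Pipeline Pilot workflow:

      # Sketch of canonical SMILES as a primary key linking metadata records;
      # molecules and metadata below are arbitrary illustrative examples.
      import sqlite3
      from rdkit import Chem

      def canonical(smiles):
          """Return RDKit's canonical SMILES for any valid input notation."""
          return Chem.MolToSmiles(Chem.MolFromSmiles(smiles))

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE structure (can_smiles TEXT PRIMARY KEY)")
      conn.execute("CREATE TABLE bioactivity (can_smiles TEXT, target TEXT)")

      # two different drawings of the same molecule collapse onto one key
      for drawn in ["c1ccccc1O", "Oc1ccccc1"]:
          conn.execute("INSERT OR IGNORE INTO structure VALUES (?)",
                       (canonical(drawn),))
      conn.execute("INSERT INTO bioactivity VALUES (?, ?)",
                   (canonical("Oc1ccccc1"), "example target"))

      print(conn.execute("""SELECT s.can_smiles, b.target
                            FROM structure s JOIN bioactivity b
                              ON b.can_smiles = s.can_smiles""").fetchall())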

  6. MINIZOO in de Benelux : Structure and use of a database of skin irritating organisms

    Bronswijk, van J.E.M.H.; Reichl, E.R.

    1986-01-01

    The MINIZOO database is structured within the standard software package SIRv2 (= Scientific Information Retrieval version 2). This flexible program is installed on the university mainframe (a CYBER 180). The program dBASE II, employed on a microcomputer (MICROSOL), can be used for part of the data entry and

  7. Structures in the communication between lexicographer and programmer: Database and interface

    Tarp, Sven

    2015-01-01

    This paper deals exclusively with e-lexicography. It intends to answer the question how much a lexicographer in charge of a new e-dictionary project should know about lexicographical structures, and how this knowledge could be communicated to the IT programmer designing the underlying database and the corresponding user interfaces. With this purpose, it first defines the concepts of lexicographical database, e-dictionary and e-lexicographical structure. Then it discusses some of the new ways in which lexicographical structures express themselves in the digital environment. It stresses, above all, their dynamic character and great complexity which make it extremely difficult and time-consuming for the lexicographer to get an overview of all the structures in an e-lexicographical project. Finally, and based upon the experience from the planning and design of five e-dictionaries, it shows how

  8. Structural Design of HRA Database using generic task for Quantitative Analysis of Human Performance

    Kim, Seung Hwan; Kim, Yo Chan; Choi, Sun Yeong; Park, Jin Kyun; Jung Won Dea [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    This paper describes the design of a generic-task-based HRA database for the quantitative analysis of human performance, intended to estimate the number of times tasks are conducted. An estimation method that obtains the total number of task conductions by direct counting is not easy to realize, and its data collection framework is difficult to maintain. To resolve this problem, this paper suggests an indirect method and a database structure using generic tasks that make it possible to estimate the total number of task conductions based on the instructions in the operating procedures of nuclear power plants. In order to reduce human errors, all information on the human errors made by operators in the power plant should be systematically collected and examined. The Korea Atomic Energy Research Institute (KAERI) is carrying out research to develop a data collection framework for a Human Reliability Analysis (HRA) database that could be employed as a technical basis to generate human error probabilities (HEPs) and performance shaping factors (PSFs). As a result of the study, the essential table schema was designed for the generic task database, which stores generic tasks, procedure lists, task tree structures, and other supporting tables. Estimation of the number of task conductions based on the operating procedures for HEP estimation was enabled through the generic task database and framework. To verify the applicability of the framework, a case study of simulated experiments was performed and analyzed using graphical user interfaces developed in this study.

  9. Structural Design of HRA Database using generic task for Quantitative Analysis of Human Performance

    Kim, Seung Hwan; Kim, Yo Chan; Choi, Sun Yeong; Park, Jin Kyun; Jung Won Dea

    2016-01-01

    This paper describes the design of a generic-task-based HRA database for the quantitative analysis of human performance, intended to estimate the number of times tasks are conducted. An estimation method that obtains the total number of task conductions by direct counting is not easy to realize, and its data collection framework is difficult to maintain. To resolve this problem, this paper suggests an indirect method and a database structure using generic tasks that make it possible to estimate the total number of task conductions based on the instructions in the operating procedures of nuclear power plants. In order to reduce human errors, all information on the human errors made by operators in the power plant should be systematically collected and examined. The Korea Atomic Energy Research Institute (KAERI) is carrying out research to develop a data collection framework for a Human Reliability Analysis (HRA) database that could be employed as a technical basis to generate human error probabilities (HEPs) and performance shaping factors (PSFs). As a result of the study, the essential table schema was designed for the generic task database, which stores generic tasks, procedure lists, task tree structures, and other supporting tables. Estimation of the number of task conductions based on the operating procedures for HEP estimation was enabled through the generic task database and framework. To verify the applicability of the framework, a case study of simulated experiments was performed and analyzed using graphical user interfaces developed in this study.

  10. An Investigation of the Fine Spatial Structure of Meteor Streams Using the Relational Database "Meteor"

    Karpov, A. V.; Yumagulov, E. Z.

    2003-05-01

    We have restored and ordered the archive of meteor observations carried out with the meteor radar complex "KGU-M5" since 1986. A relational database has been formed under the control of the Database Management System (DBMS) Oracle 8. We also improved and tested a statistical method for studying the fine spatial structure of meteor streams with allowance for the specific features of application of the DBMS. Statistical analysis of the results of observations made it possible to obtain information about the substance distribution in the Quadrantid, Geminid, and Perseid meteor streams.

  11. Extending the Intermediate Data Structure (IDS for longitudinal historical databases to include geographic data

    Finn Hedefalk

    2014-09-01

    The Intermediate Data Structure (IDS) is a standardised database structure for longitudinal historical databases. Such a common structure facilitates data sharing and comparative research. In this study, we propose an extended version of IDS, named IDS-Geo, that also includes geographic data. The geographic data that will be stored in IDS-Geo are primarily buildings and/or property units, and the purpose of these geographic data is mainly to link individuals to places in space. When we want to assign such detailed spatial locations to individuals (in times before there were any detailed house addresses available), we often have to create tailored geographic datasets. In those cases, there are benefits to storing geographic data in the same structure as the demographic data. Moreover, we propose the export of data from IDS-Geo using an eXtensible Markup Language (XML) Schema. IDS-Geo is implemented in a case study using historical property units, for the period 1804 to 1913, stored in a geographically extended version of the Scanian Economic Demographic Database (SEDD). To fit into the IDS-Geo data structure, we included an object lifeline representation of all of the property units (based on the snapshot time representation of single historical maps and poll-tax registers). The case study verifies that the IDS-Geo model is capable of handling geographic data that can be linked to demographic data.

  12. Ultra-Structure database design methodology for managing systems biology data and analyses

    Hemminger Bradley M

    2009-08-01

    Background: Modern, high-throughput biological experiments generate copious, heterogeneous, interconnected data sets. Research is dynamic, with frequently changing protocols, techniques, instruments, and file formats. Because of these factors, systems designed to manage and integrate modern biological data sets often end up as large, unwieldy databases that become difficult to maintain or evolve. The novel rule-based approach of the Ultra-Structure design methodology presents a potential solution to this problem. By representing both data and processes as formal rules within a database, an Ultra-Structure system constitutes a flexible framework that enables users to explicitly store domain knowledge in both a machine- and human-readable form. End users themselves can change the system's capabilities without programmer intervention, simply by altering database contents; no computer code or schemas need be modified. This provides flexibility in adapting to change, and allows integration of disparate, heterogeneous data sets within a small core set of database tables, facilitating joint analysis and visualization without becoming unwieldy. Here, we examine the application of Ultra-Structure to our ongoing research program for the integration of large proteomic and genomic data sets (proteogenomic mapping). Results: We transitioned our proteogenomic mapping information system from a traditional entity-relationship design to one based on Ultra-Structure. Our system integrates tandem mass spectrum data, genomic annotation sets, and spectrum/peptide mappings, all within a small, general framework implemented within a standard relational database system. General software procedures driven by user-modifiable rules can perform tasks such as logical deduction and location-based computations. The system is not tied specifically to proteogenomic research, but is rather designed to accommodate virtually any kind of biological research. Conclusion: We find

  13. Machine learning approach to detect intruders in database based on hexplet data structure

    Saad M. Darwish

    2016-09-01

    Full Text Available Most of an organization's valuable information resources are stored in databases, so protecting this information against intruders is a serious concern. However, conventional security mechanisms are not designed to detect anomalous actions of database users. An intrusion detection system (IDS), which delivers an extra layer of security that cannot be guaranteed by built-in security tools, is the ideal solution for defending databases against intruders. This paper suggests an anomaly detection approach that summarizes the raw transactional SQL queries into a compact data structure called a hexplet, which can model normal database access behavior (abstracting the user's profile) and recognize impostors, specifically tailored for role-based access control (RBAC) database systems. The hexplet preserves the correlation among SQL statements in the same transaction by exploiting the information in the transaction-log entry, with the aim of improving detection accuracy, especially for insiders who exhibit anomalous behavior. The model utilizes the naive Bayes classifier (NBC), the simplest supervised learning technique, for creating profiles and evaluating the legitimacy of a transaction. Experimental results show the performance of the proposed model in terms of detection rate.
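    A minimal sketch of the classification step, assuming scikit-learn is available, is given below; the transaction features stand in for hexplet fields and are not the paper's exact representation.

```python
# Minimal sketch: encode SQL transactions as categorical features and use naive Bayes
# to score whether a transaction fits a role's normal profile (illustrative features).
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import OrdinalEncoder

# (role, command, table, attribute-count bucket) for transactions known to be legitimate
normal = [["clerk", "SELECT", "orders",    "few"],
          ["clerk", "SELECT", "customers", "few"],
          ["admin", "UPDATE", "customers", "many"],
          ["admin", "SELECT", "orders",    "many"]]
labels = [0, 0, 1, 1]          # 0 = clerk profile, 1 = admin profile

enc = OrdinalEncoder()
X = enc.fit_transform(normal)
clf = CategoricalNB().fit(X, labels)

# A clerk issuing an admin-like bulk UPDATE should score poorly under the clerk class.
suspect = enc.transform([["clerk", "UPDATE", "customers", "many"]])
print(clf.predict_proba(suspect))
```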

  14. Visualizing information across multidimensional post-genomic structured and textual databases.

    Tao, Ying; Friedman, Carol; Lussier, Yves A

    2005-04-15

    Visualizing relationships among biological information to facilitate understanding is crucial to biological research during the post-genomic era. Although different systems have been developed to view gene-phenotype relationships for specific databases, very few have been designed specifically as a general flexible tool for visualizing multidimensional genotypic and phenotypic information together. Our goal is to develop a method for visualizing multidimensional genotypic and phenotypic information and a model that unifies different biological databases in order to present the integrated knowledge using a uniform interface. We developed a novel, flexible and generalizable visualization tool, called PhenoGenesviewer (PGviewer), which in this paper was used to display gene-phenotype relationships from a human-curated database (OMIM) and from an automatic method using a Natural Language Processing tool called BioMedLEE. Data obtained from multiple databases were first integrated into a uniform structure and then organized by PGviewer. PGviewer provides a flexible query interface that allows dynamic selection and ordering of any desired dimension in the databases. Based on users' queries, results can be visualized using hierarchical expandable trees that present views specified by users according to their research interests. We believe that this method, which allows users to dynamically organize and visualize multiple dimensions, is a potentially powerful and promising tool that should substantially facilitate biological research. PhenogenesViewer as well as its support and tutorial are available at http://www.dbmi.columbia.edu/pgviewer/ Lussier@dbmi.columbia.edu.

  15. Biofuel Database

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  16. Integrating protein structures and precomputed genealogies in the Magnum database: Examples with cellular retinoid binding proteins

    Bradley Michael E

    2006-02-01

    Full Text Available Abstract Background When accurate models for the divergent evolution of protein sequences are integrated with complementary biological information, such as folded protein structures, analyses of the combined data often lead to new hypotheses about molecular physiology. This represents an excellent example of how bioinformatics can be used to guide experimental research. However, progress in this direction has been slowed by the lack of a publicly available resource suitable for general use. Results The precomputed Magnum database offers a solution to this problem for ca. 1,800 full-length protein families with at least one crystal structure. The Magnum deliverables include (1) multiple sequence alignments, (2) mapping of alignment sites to crystal structure sites, (3) phylogenetic trees, (4) inferred ancestral sequences at internal tree nodes, and (5) amino acid replacements along tree branches. Comprehensive evaluations revealed that the automated procedures used to construct Magnum produced accurate models of how proteins divergently evolve, or genealogies, and correctly integrated these with the structural data. To demonstrate Magnum's capabilities, we asked for amino acid replacements requiring three nucleotide substitutions, located at internal protein structure sites, and occurring on short phylogenetic tree branches. In the cellular retinoid binding protein family a site that potentially modulates ligand binding affinity was discovered. Recruitment of cellular retinol binding protein to function as a lens crystallin in the diurnal gecko afforded another opportunity to showcase the predictive value of a browsable database containing branch replacement patterns integrated with protein structures. Conclusion We integrated two areas of protein science, evolution and structure, on a large scale and created a precomputed database, known as Magnum, which is the first freely available resource of its kind. Magnum provides evolutionary and structural

  17. A Reference Database for Circular Dichroism Spectroscopy Covering Fold and Secondary Structure Space

    Lees, J.; Miles, A.; Wien, F.; Wallace, B.

    2006-01-01

    Circular Dichroism (CD) spectroscopy is a long-established technique for studying protein secondary structures in solution. Empirical analyses of CD data rely on the availability of reference datasets comprised of far-UV CD spectra of proteins whose crystal structures have been determined. This article reports on the creation of a new reference dataset which effectively covers both secondary structure and fold space, and uses the higher information content available in synchrotron radiation circular dichroism (SRCD) spectra to more accurately predict secondary structure than has been possible with existing reference datasets. It also examines the effects of wavelength range, structural redundancy and different means of categorizing secondary structures on the accuracy of the analyses. In addition, it describes a novel use of hierarchical cluster analyses to identify protein relatedness based on spectral properties alone. The databases are shown to be applicable in both conventional CD and SRCD spectroscopic analyses of proteins. Hence, by combining new bioinformatics and biophysical methods, a database has been produced that should have wide applicability as a tool for structural molecular biology
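    The hierarchical clustering mentioned above can be pictured with a small sketch on synthetic spectra; the wavelength range, the toy band shapes and the SciPy settings are assumptions for illustration, not the reference dataset itself.

```python
# Minimal sketch (synthetic spectra, not the published reference set): hierarchical
# clustering groups proteins by far-UV CD spectral similarity alone.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
wavelengths = np.arange(175, 241)                    # nm, a typical SRCD range
helix_like = np.exp(-((wavelengths - 192) / 8.0) ** 2) - 0.6 * np.exp(-((wavelengths - 222) / 10.0) ** 2)
sheet_like = 0.7 * np.exp(-((wavelengths - 198) / 9.0) ** 2) - 0.4 * np.exp(-((wavelengths - 216) / 12.0) ** 2)
spectra = np.vstack([helix_like + 0.02 * rng.standard_normal(wavelengths.size) for _ in range(3)] +
                    [sheet_like + 0.02 * rng.standard_normal(wavelengths.size) for _ in range(3)])

Z = linkage(pdist(spectra, metric="correlation"), method="average")
print(fcluster(Z, t=2, criterion="maxclust"))        # expect two clusters: helix-like vs sheet-like
```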

  18. LAND-deFeND - An innovative database structure for landslides and floods and their consequences.

    Napolitano, Elisabetta; Marchesini, Ivan; Salvati, Paola; Donnini, Marco; Bianchi, Cinzia; Guzzetti, Fausto

    2018-02-01

    Information on historical landslides and floods - collectively called "geo-hydrological hazards" - is key to understanding the complex dynamics of these events, to estimating the temporal and spatial frequency of damaging events, and to quantifying their impact. A number of databases on geo-hydrological hazards and their consequences have been developed worldwide at different geographical and temporal scales. Of the few available database structures that can handle information on both landslides and floods, some are outdated and others were not designed to store, organize, and manage information on single phenomena or on the type and monetary value of the damages and the remediation actions. Here, we present the LANDslides and Floods National Database (LAND-deFeND), a new database structure able to store, organize, and manage in a single digital structure spatial information collected from various sources with different accuracy. In designing LAND-deFeND, we defined four groups of entities, namely nature-related, human-related, geospatial-related, and information-source-related entities, which collectively describe the geo-hydrological hazards and their consequences in full. In LAND-deFeND, the main entities are the nature-related entities, encompassing: (i) the "phenomenon", a single landslide or local inundation, (ii) the "event", which represents the ensemble of the inundations and/or landslides that occurred in a conventional geographical area in a limited period, and (iii) the "trigger", which is the meteo-climatic or seismic cause (trigger) of the geo-hydrological hazards. LAND-deFeND maintains the relations between the nature-related entities and the human-related entities even where the information is partially missing. The physical model of LAND-deFeND contains 32 tables, including nine input tables, 21 dictionary tables, and two association tables, and ten views, including specific views that make the database structure compliant with the EC INSPIRE and the Floods

  19. The FP4026 Research Database on the fundamental period of RC infilled frame structures.

    Asteris, Panagiotis G

    2016-12-01

    The fundamental period of vibration appears to be one of the most critical parameters for the seismic design of buildings because it strongly affects the destructive impact of the seismic forces. In this article, important research data (entitled FP4026 Research Database (Fundamental Period - 4026 cases of infilled frames)), based on detailed and in-depth analytical research on the fundamental period of reinforced concrete structures, are presented. In particular, the values of the fundamental period which have been analytically determined are presented, taking into account the majority of the involved parameters. This database can be extremely valuable for the development of new code proposals for the estimation of the fundamental period of reinforced concrete structures fully or partially infilled with masonry walls.

  20. The FP4026 Research Database on the fundamental period of RC infilled frame structures

    Panagiotis G. Asteris

    2016-12-01

    Full Text Available The fundamental period of vibration appears to be one of the most critical parameters for the seismic design of buildings because it strongly affects the destructive impact of the seismic forces. In this article, important research data (entitled FP4026 Research Database (Fundamental Period - 4026 cases of infilled frames)), based on detailed and in-depth analytical research on the fundamental period of reinforced concrete structures, are presented. In particular, the values of the fundamental period which have been analytically determined are presented, taking into account the majority of the involved parameters. This database can be extremely valuable for the development of new code proposals for the estimation of the fundamental period of reinforced concrete structures fully or partially infilled with masonry walls.

  1. Protein structure determination by exhaustive search of Protein Data Bank derived databases.

    Stokes-Rees, Ian; Sliz, Piotr

    2010-12-14

    Parallel sequence and structure alignment tools have become ubiquitous and invaluable at all levels in the study of biological systems. We demonstrate the application and utility of this same parallel search paradigm to the process of protein structure determination, benefitting from the large and growing corpus of known structures. Such searches were previously computationally intractable. Through the method of Wide Search Molecular Replacement, developed here, they can be completed in a few hours with the aid of national-scale federated cyberinfrastructure. By dramatically expanding the range of models considered for structure determination, we show that small (less than 12% structural coverage) and low sequence identity (less than 20% identity) template structures can be identified through multidimensional template scoring metrics and used for structure determination. Many new macromolecular complexes can benefit significantly from such a technique due to the lack of known homologous protein folds or sequences. We demonstrate the effectiveness of the method by determining the structure of a full-length p97 homologue from Trichoplusia ni. Example cases with the MHC/T-cell receptor complex and the EmoB protein provide systematic estimates of minimum sequence identity, structure coverage, and structural similarity required for this method to succeed. We describe how this structure-search approach and other novel computationally intensive workflows are made tractable through integration with the US national computational cyberinfrastructure, allowing, for example, rapid processing of the entire Structural Classification of Proteins protein fragment database.

  2. DOE Order 5480.28 natural phenomena hazards mitigation system, structure, component database

    Conrads, T.J.

    1997-01-01

    This document describes the Prioritization Phase Database that was prepared for the Project Hanford Management Contractors to support the implementation of DOE Order 5480.28. Included within this document are three appendices which contain the prioritized list of applicable Project Hanford Management Contractors Systems, Structures, and Components. These appendices include those assets that comply with the requirements of DOE Order 5480.28, assets for which a waiver will be recommended, and assets requiring additional information before compliance can be ascertained

  3. Information structure design for databases a practical guide to data modelling

    Mortimer, Andrew J

    2014-01-01

    Computer Weekly Professional Series: Information Structure Design for Databases: A Practical Guide to Data modeling focuses on practical data modeling covering business and information systems. The publication first offers information on data and information, business analysis, and entity relationship model basics. Discussions cover degree of relationship symbols, relationship rules, membership markers, types of information systems, data driven systems, cost and value of information, importance of data modeling, and quality of information. The book then takes a look at entity relationship mode

  4. Structure and representation of data elements on factual database - SIST activity in Japan

    Nakamoto, H.; Onodera, N.

    1990-05-01

    A factual database has a variety of forms and types of data structure, producing various kinds of records composed of a great number of data items which differ from file to file. Second, a factual database requires greater speciality in content analysis during preparation, and users wish to process downloaded data further for analysis, diagnosis, simulation, projection, design, linguistic processing and so on. A meaningful quantitative datum can be divided into consistent sub-elements. In addition to this fine structure of data elements, the representation of data elements is also very important for integrating factual data onto public files. In this paper we discuss problems and ideas concerning the structure and representation of data elements contained in numerical information, on a practical basis. The guideline discussed here is being drafted under Government sponsorship and is being implemented to build a database of space experiments. The guideline covers the expression, unification, notification and handling of data for numerical information in machine-readable form, such as numerical values, numerical formulae, graphics, semi-quantitative values, significant figures, ranged data, accuracy and precision, conversion of units, error information and so on. (author)

  5. Biological knowledge bases using Wikis: combining the flexibility of Wikis with the structure of databases.

    Brohée, Sylvain; Barriot, Roland; Moreau, Yves

    2010-09-01

    In recent years, the number of knowledge bases developed using Wiki technology has exploded. Unfortunately, next to their numerous advantages, classical Wikis present a critical limitation: the invaluable knowledge they gather is represented as free text, which hinders its computational exploitation. This is in sharp contrast with current practice for biological databases, where the data are made available in a structured way. Here, we present WikiOpener, an extension for the classical MediaWiki engine that augments Wiki pages by allowing on-the-fly querying and formatting of resources external to the Wiki. Those resources may provide data extracted from databases or DAS tracks, or even results returned by local or remote bioinformatics analysis tools. This also implies that structured data can be edited via dedicated forms. Hence, this generic resource combines the structure of biological databases with the flexibility of collaborative Wikis. The source code and its documentation are freely available on the MediaWiki website: http://www.mediawiki.org/wiki/Extension:WikiOpener.

  6. CMD: A Database to Store the Bonding States of Cysteine Motifs with Secondary Structures

    Hamed Bostan

    2012-01-01

    Full Text Available Computational approaches to predicting the disulphide bonding state and its connectivity pattern are based on various descriptors. One descriptor is the amino acid sequence motif flanking each cysteine residue. Despite the existence of disulphide bonding information in many databases and applications, there is no complete reference and motif query available at the moment. The Cysteine Motif Database (CMD) is the first online resource that stores all cysteine residues, their flanking motifs with their secondary structure, and propensity value assignments derived from laboratory data. We extracted more than 3 million cysteine motifs from PDB and UniProt data, annotated with secondary structure assignment, propensity value assignment, and frequency of occurrence and coefficiency of their bonding status. Removal of redundancies generated 15875 unique flanking motifs that are always bonded and 41577 unique patterns that are always nonbonded. Queries are based on the protein ID, FASTA sequence, sequence motif, and secondary structure, individually or in batch format, using the provided APIs that allow remote users to query our database via third-party software and/or high-throughput screening/querying. The CMD offers extensive information about bonded and free cysteine residues and their motifs that allows in-depth characterization of the sequence motif composition.
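    A toy version of the motif-harvesting step might look like the sketch below; the flank width of five residues is an assumption for illustration, not necessarily the window CMD uses.

```python
# Minimal sketch of motif harvesting: collect the sequence window flanking every
# cysteine in a protein sequence, padding the ends so all motifs have equal length.
def cysteine_motifs(sequence, flank=5):
    """Return (position, motif) for each Cys, padding the termini with '-'."""
    padded = "-" * flank + sequence.upper() + "-" * flank
    return [(i + 1, padded[i:i + 2 * flank + 1])
            for i, aa in enumerate(sequence.upper()) if aa == "C"]

seq = "MKTCWLLAVCGGSDCK"
for pos, motif in cysteine_motifs(seq):
    print(pos, motif)
```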

  7. Structure and representation of data elements on factual database - SIST activity in Japan

    Nakamoto, H [Integrated Researches for Information Science, Tokyo (Japan); Onodera, N [Japan Information Center of Science and Technology, Tokyo (Japan)

    1990-05-01

    A factual database has a variety of forms and types of data structure, producing various kinds of records composed of a great number of data items which differ from file to file. Second, a factual database requires greater speciality in content analysis during preparation, and users wish to process downloaded data further for analysis, diagnosis, simulation, projection, design, linguistic processing and so on. A meaningful quantitative datum can be divided into consistent sub-elements. In addition to this fine structure of data elements, the representation of data elements is also very important for integrating factual data onto public files. In this paper we discuss problems and ideas concerning the structure and representation of data elements contained in numerical information, on a practical basis. The guideline discussed here is being drafted under Government sponsorship and is being implemented to build a database of space experiments. The guideline covers the expression, unification, notification and handling of data for numerical information in machine-readable form, such as numerical values, numerical formulae, graphics, semi-quantitative values, significant figures, ranged data, accuracy and precision, conversion of units, error information and so on. (author).

  8. SeqHound: biological sequence and structure database as a platform for bioinformatics research

    Dumontier Michel

    2002-10-01

    Full Text Available Abstract Background SeqHound has been developed as an integrated biological sequence, taxonomy, annotation and 3-D structure database system. It provides a high-performance server platform for bioinformatics research in a locally-hosted environment. Results SeqHound is based on the National Center for Biotechnology Information data model and programming tools. It offers daily updated contents of all Entrez sequence databases in addition to 3-D structural data and information about sequence redundancies, sequence neighbours, taxonomy, complete genomes, functional annotation including Gene Ontology terms and literature links to PubMed. SeqHound is accessible via a web server through a Perl, C or C++ remote API or an optimized local API. It provides functionality necessary to retrieve specialized subsets of sequences, structures and structural domains. Sequences may be retrieved in FASTA, GenBank, ASN.1 and XML formats. Structures are available in ASN.1, XML and PDB formats. Emphasis has been placed on complete genomes, taxonomy, domain and functional annotation as well as 3-D structural functionality in the API, while fielded text indexing functionality remains under development. SeqHound also offers a streamlined WWW interface for simple web-user queries. Conclusions The system has proven useful in several published bioinformatics projects such as the BIND database and offers a cost-effective infrastructure for research. SeqHound will continue to develop and be provided as a service of the Blueprint Initiative at the Samuel Lunenfeld Research Institute. The source code and examples are available under the terms of the GNU public license at the Sourceforge site http://sourceforge.net/projects/slritools/ in the SLRI Toolkit.

  9. CamMedNP: building the Cameroonian 3D structural natural products database for virtual screening.

    Ntie-Kang, Fidele; Mbah, James A; Mbaze, Luc Meva'a; Lifongo, Lydia L; Scharfe, Michael; Hanna, Joelle Ngo; Cho-Ngwa, Fidelis; Onguéné, Pascal Amoa; Owono Owono, Luc C; Megnassan, Eugene; Sippl, Wolfgang; Efange, Simon M N

    2013-04-16

    Computer-aided drug design (CADD) often involves virtual screening (VS) of large compound datasets, and the availability of such datasets is vital for drug discovery protocols. We present CamMedNP - a new database beginning with more than 2,500 compounds of natural origin, along with some of their derivatives which were obtained through hemisynthesis. These are pure compounds which have been previously isolated and characterized using modern spectroscopic methods and published by several research teams spread across Cameroon. In the present study, 224 distinct medicinal plant species belonging to 55 plant families from the Cameroonian flora have been considered. About 80 % of these have been previously published and/or referenced in internationally recognized journals. For each compound, the optimized 3D structure, drug-like properties, plant source, collection site and currently known biological activities are given, as well as literature references. We have evaluated the "drug-likeness" of this database using Lipinski's "Rule of Five". A diversity analysis has been carried out in comparison with the ChemBridge diverse database. CamMedNP could be highly useful for database screening and natural product lead generation programs.
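    The "Rule of Five" screen mentioned above can be sketched with RDKit as follows; the aspirin example is arbitrary and is not a CamMedNP entry.

```python
# Minimal sketch of a Lipinski "Rule of Five" check using RDKit.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def rule_of_five_violations(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return sum([Descriptors.MolWt(mol) > 500,      # molecular weight
                Descriptors.MolLogP(mol) > 5,      # calculated logP
                Lipinski.NumHDonors(mol) > 5,      # H-bond donors
                Lipinski.NumHAcceptors(mol) > 10]) # H-bond acceptors

print(rule_of_five_violations("CC(=O)Oc1ccccc1C(=O)O"))   # aspirin: 0 violations
```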

  10. Managing expectations: assessment of chemistry databases generated by automated extraction of chemical structures from patents.

    Senger, Stefan; Bartek, Luca; Papadatos, George; Gaulton, Anna

    2015-12-01

    First public disclosure of new chemical entities often takes place in patents, which makes them an important source of information. However, with an ever increasing number of patent applications, manual processing and curation on such a large scale becomes even more challenging. An alternative approach better suited for this large corpus of documents is the automated extraction of chemical structures. A number of patent chemistry databases generated by using the latter approach are now available but little is known that can help to manage expectations when using them. This study aims to address this by comparing two such freely available sources, SureChEMBL and IBM SIIP (IBM Strategic Intellectual Property Insight Platform), with manually curated commercial databases. When looking at the percentage of chemical structures successfully extracted from a set of patents, using SciFinder as our reference, 59 and 51 % were also found in our comparison in SureChEMBL and IBM SIIP, respectively. When performing this comparison with compounds as starting point, i.e. establishing if for a list of compounds the databases provide the links between chemical structures and patents they appear in, we obtained similar results. SureChEMBL and IBM SIIP found 62 and 59 %, respectively, of the compound-patent pairs obtained from Reaxys. In our comparison of automatically generated vs. manually curated patent chemistry databases, the former successfully provided approximately 60 % of links between chemical structure and patents. It needs to be stressed that only a very limited number of patents and compound-patent pairs were used for our comparison. Nevertheless, our results will hopefully help to manage expectations of users of patent chemistry databases of this type and provide a useful framework for more studies like ours as well as guide future developments of the workflows used for the automated extraction of chemical structures from patents. The challenges we have encountered

  11. Palingol: a declarative programming language to describe nucleic acids' secondary structures and to scan sequence database.

    Billoud, B; Kontic, M; Viari, A

    1996-01-01

    At the DNA/RNA level, biological signals are defined by a combination of spatial structures and sequence motifs. Until now, few attempts have been made to write general-purpose search programs that take into account both sequence and structure criteria. Indeed, the most successful structure-scanning programs are usually dedicated to particular structures and are written in general-purpose programming languages through a complex and time-consuming process in which the biological problem of defining the structure and the computer-engineering problem of looking for it are intimately intertwined. In this paper, we describe a general representation of structures, suitable for database scanning, together with a programming language, Palingol, designed to manipulate it. Palingol has specific data types corresponding to structural elements - basically helices - that can be arranged in any way to form a complex structure. As a consequence of the declarative approach used in Palingol, the user should only focus on 'what to search for' while the language engine takes care of 'how to look for it'. Therefore, it becomes simpler to write a scanning program, and the structural constraints that define the required structure are more clearly identified. PMID:8628670
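    In the same spirit, though not in Palingol's own syntax, a structure defined by a few constraints (stem length, loop length, base complementarity) can be scanned for directly, as in this minimal sketch.

```python
# Minimal sketch of combined sequence/structure scanning: describe a stem-loop by a
# few constraints and report every window of the sequence that satisfies them.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def find_hairpins(rna, stem=4, loop_min=3, loop_max=8):
    hits = []
    for loop in range(loop_min, loop_max + 1):
        win = 2 * stem + loop
        for i in range(len(rna) - win + 1):
            left, right = rna[i:i + stem], rna[i + stem + loop:i + win]
            if all(COMPLEMENT[a] == b for a, b in zip(left, reversed(right))):
                hits.append((i, rna[i:i + win]))
    return hits

print(find_hairpins("GGGCAAAUGCCC"))   # stem GGGC pairs with GCCC around the AAAU loop
```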

  12. OECD Structural Analysis Databases: Sectoral Principles in the Study of Markets for Goods and Services

    Marina D. Simonova

    2015-01-01

    Full Text Available This study focuses on the characteristics of the OECD structural business statistics database for the analysis of markets for goods and services and of macroeconomic trends. The system of structural statistics indicators is presented in OECD publications and is available online to a wide range of users. The data collected by the OECD offices are based on submissions from the national statistical offices of member countries, Russia and the BRICS. Data on the development of economic sectors are calculated according to the methodologies of individual countries and to regional and international standards: annual national accounts, annual industry and business surveys, the methodology of short-term indicators, and statistics of international trade in goods. Data are aggregated on the basis of complex indicators from enterprise questionnaire statements and business surveys. The structural statistics information system, which is openly accessible and continuously updated, has certain features. It is composed of several subsystems: Structural Statistics on Industry and Services, EU entrepreneurship statistics, Indicators of Industry and Services, and International Trade in Commodities Statistics. The grouping of industries is based on the International Standard Industrial Classification of all economic activities (ISIC). Foreign trade flows are classified in accordance with the Harmonized System of description and coding of goods. The structural statistics databases group industries into four classes according to technology intensity.

  13. Structural pattern recognition methods based on string comparison for fusion databases

    Dormido-Canto, S.; Farias, G.; Dormido, R.; Vega, J.; Sanchez, J.; Duro, N.; Vargas, H.; Ratta, G.; Pereira, A.; Portas, A.

    2008-01-01

    Databases for fusion experiments are designed to store several million waveforms. Temporal evolution signals show the same patterns under the same plasma conditions and, therefore, pattern recognition techniques allow the identification of similar plasma behaviours. This article is focused on the comparison of structural pattern recognition methods. A pattern can be composed of simpler sub-patterns, where the most elementary sub-patterns are known as primitives. Selection of primitives is an essential issue in structural pattern recognition methods, because they determine what types of structural components can be constructed. However, it should be noted that there is not a general solution to extract structural features (primitives) from data. So, four different ways to compute the primitives of plasma waveforms are compared: (1) constant length primitives, (2) adaptive length primitives, (3) concavity method and (4) concavity method for noisy signals. Each method defines a code alphabet and, in this way, the pattern recognition problem is carried out via string comparisons. Results of the four methods with the TJ-II stellarator databases will be discussed
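    A hedged sketch of the constant-length-primitive variant is shown below: each waveform segment is encoded as a symbol and two encodings are compared by edit distance; the three-letter alphabet and segment length are assumptions for illustration, not the TJ-II coding.

```python
# Minimal sketch of primitive extraction and string comparison for waveforms.
import numpy as np

def to_primitive_string(signal, seg_len=10, flat_tol=1e-3):
    """Encode each fixed-length segment as U (rising), D (falling) or F (flat)."""
    symbols = []
    for i in range(0, len(signal) - seg_len + 1, seg_len):
        slope = np.polyfit(np.arange(seg_len), signal[i:i + seg_len], 1)[0]
        symbols.append("U" if slope > flat_tol else "D" if slope < -flat_tol else "F")
    return "".join(symbols)

def edit_distance(a, b):
    """Classic Levenshtein distance between two primitive strings."""
    d = np.arange(len(b) + 1, dtype=int)
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (ca != cb))
    return int(d[-1])

t = np.linspace(0, 1, 200)
print(edit_distance(to_primitive_string(np.sin(2 * np.pi * t)),
                    to_primitive_string(np.sin(2 * np.pi * t + 0.2))))
```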

  14. Structural pattern recognition methods based on string comparison for fusion databases

    Dormido-Canto, S. [Dpto. Informatica y Automatica - UNED 28040, Madrid (Spain)], E-mail: sebas@dia.uned.es; Farias, G.; Dormido, R. [Dpto. Informatica y Automatica - UNED 28040, Madrid (Spain); Vega, J. [Asociacion EURATOM/CIEMAT para Fusion, 28040, Madrid (Spain); Sanchez, J.; Duro, N.; Vargas, H. [Dpto. Informatica y Automatica - UNED 28040, Madrid (Spain); Ratta, G.; Pereira, A.; Portas, A. [Asociacion EURATOM/CIEMAT para Fusion, 28040, Madrid (Spain)

    2008-04-15

    Databases for fusion experiments are designed to store several million waveforms. Temporal evolution signals show the same patterns under the same plasma conditions and, therefore, pattern recognition techniques allow the identification of similar plasma behaviours. This article is focused on the comparison of structural pattern recognition methods. A pattern can be composed of simpler sub-patterns, where the most elementary sub-patterns are known as primitives. Selection of primitives is an essential issue in structural pattern recognition methods, because they determine what types of structural components can be constructed. However, it should be noted that there is not a general solution to extract structural features (primitives) from data. So, four different ways to compute the primitives of plasma waveforms are compared: (1) constant length primitives, (2) adaptive length primitives, (3) concavity method and (4) concavity method for noisy signals. Each method defines a code alphabet and, in this way, the pattern recognition problem is carried out via string comparisons. Results of the four methods with the TJ-II stellarator databases will be discussed.

  15. Strabo: An App and Database for Structural Geology and Tectonics Data

    Newman, J.; Williams, R. T.; Tikoff, B.; Walker, J. D.; Good, J.; Michels, Z. D.; Ash, J.

    2016-12-01

    Strabo is a data system designed to facilitate digital storage and sharing of structural geology and tectonics data. The data system allows researchers to store and share field and laboratory data as well as construct new multi-disciplinary data sets. Strabo is built on graph database technology, as opposed to a relational database, which provides the flexibility to define relationships between objects of any type. This framework allows observations to be linked in a complex and hierarchical manner that is not possible in traditional database topologies. Thus, the advantage of the Strabo data structure is the ability of graph databases to link objects in both numerous and complex ways, in a manner that more accurately reflects the realities of collecting and organizing geological data sets. The data system is accessible via a mobile interface (iOS and Android devices) that allows these data to be stored, visualized, and shared during primary collection in the field or the laboratory. The Strabo Data System is underlain by the concept of a "Spot," which we define as any observation that characterizes a specific area. This can be anything from a strike and dip measurement of bedding to cross-cutting relationships between faults in complex dissected terrains. Each of these Spots can then contain other Spots and/or measurements (e.g., lithology, slickenlines, displacement magnitude). Hence, the Spot concept is applicable to all relationships and observation sets. Strabo is therefore capable of quantifying and digitally storing large spatial variations and complex geometries of naturally deformed rocks within hierarchically related maps and images. These approaches provide an observational fidelity comparable to a traditional field book, but with the added benefits of digital data storage, processing, and ease of sharing. This approach allows Strabo to integrate seamlessly into the workflow of most geologists. Future efforts will focus on extending Strabo to
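    The Spot-and-relationship idea can be sketched with a general-purpose graph library as below; the node attributes and relation names are illustrative and simpler than Strabo's actual schema.

```python
# Minimal sketch of Spots as graph nodes and relationships as typed edges.
import networkx as nx

g = nx.DiGraph()
g.add_node("outcrop_7",  kind="spot", extent="10 m pavement")
g.add_node("bedding_12", kind="spot", strike=215, dip=38)
g.add_node("fault_3",    kind="spot", slickenline_rake=64, displacement_m=1.2)

# Edges carry the relationship, so one Spot can contain, cross-cut, or measure another.
g.add_edge("outcrop_7", "bedding_12", relation="contains")
g.add_edge("outcrop_7", "fault_3",    relation="contains")
g.add_edge("fault_3",   "bedding_12", relation="cross-cuts")

for child, data in g["outcrop_7"].items():
    print(f"outcrop_7 --{data['relation']}--> {child}")
```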

  16. Agricultural Conservation Planning Framework: 3. Land Use and Field Boundary Database Development and Structure.

    Tomer, Mark D; James, David E; Sandoval-Green, Claudette M J

    2017-05-01

    Conservation planning information is important for identifying options for watershed water quality improvement and can be developed for use at field, farm, and watershed scales. Translation across scales is a key issue impeding progress at watershed scales because watershed improvement goals must be connected with implementation of farm- and field-level conservation practices to demonstrate success. This is particularly true when examining alternatives for "trap and treat" practices implemented at agricultural-field edges to control (or influence) water flows through fields, landscapes, and riparian corridors within agricultural watersheds. We propose that database structures used in developing conservation planning information can achieve translation across conservation-planning scales, and we developed the Agricultural Conservation Planning Framework (ACPF) to enable practical planning applications. The ACPF comprises a planning concept, a database to facilitate field-level and watershed-scale analyses, and an ArcGIS toolbox with Python scripts to identify specific options for placement of conservation practices. This paper appends two prior publications and describes the structure of the ACPF database, which contains land use, crop history, and soils information and is available for download for 6091 HUC12 watersheds located across Iowa, Illinois, Minnesota, and parts of Kansas, Missouri, Nebraska, and Wisconsin and comprises information on 2.74 × 10 agricultural fields (available through /). Sample results examining land use trends across Iowa and Illinois are presented here to demonstrate potential uses of the database. While designed for use with the ACPF toolbox, users are welcome to use the ACPF watershed data in a variety of planning and modeling approaches. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  17. Using the Cambridge structure database of organic and organometallic compounds in structure biology

    Hašek, Jindřich

    2010-01-01

    Roč. 17, 1a (2010), b24-b26 ISSN 1211-5894. [Discussions in Structural Molecular Biology /8./. Nové Hrady, 18.03.2010-20.03.2010] R&D Projects: GA AV ČR IAA500500701; GA ČR GA305/07/1073 Institutional research plan: CEZ:AV0Z40500505 Keywords : organic chemistry * Cambridge Structure Data base * molecular structure Subject RIV: CD - Macromolecular Chemistry http://xray.cz/ms/bul2010-1a/friday2.pdf

  18. Searching the protein structure database for ligand-binding site similarities using CPASS v.2

    Caprez Adam

    2011-01-01

    Full Text Available Abstract Background A recent analysis of protein sequences deposited in the NCBI RefSeq database indicates that ~8.5 million protein sequences are encoded in prokaryotic and eukaryotic genomes, where ~30% are explicitly annotated as "hypothetical" or "uncharacterized" protein. Our Comparison of Protein Active-Site Structures (CPASS) v.2 database and software compares the sequence and structural characteristics of experimentally determined ligand binding sites to infer a functional relationship in the absence of global sequence or structure similarity. CPASS is an important component of our Functional Annotation Screening Technology by NMR (FAST-NMR) protocol and has been successfully applied to aid the annotation of a number of proteins of unknown function. Findings We report a major upgrade to our CPASS software and database that significantly improves its broad utility. CPASS v.2 is designed with a layered architecture to increase flexibility and portability that also enables job distribution over the Open Science Grid (OSG) to increase speed. Similarly, the CPASS interface was enhanced to provide more user flexibility in submitting a CPASS query. CPASS v.2 now allows for both automatic and manual definition of ligand-binding sites and permits pair-wise, one versus all, one versus list, or list versus list comparisons. Solvent accessible surface area, ligand root-mean square difference, and Cβ distances have been incorporated into the CPASS similarity function to improve the quality of the results. The CPASS database has also been updated. Conclusions CPASS v.2 is more than an order of magnitude faster than the original implementation, and allows for multiple simultaneous job submissions. Similarly, the CPASS database of ligand-defined binding sites has increased in size by ~ 38%, dramatically increasing the likelihood of a positive search result. The modification to the CPASS similarity function is effective in reducing CPASS similarity scores

  19. Fiftieth Anniversary of the Cambridge Structural Database and Thirty Years of Its Use in Croatia

    Kojić-Prodić B.

    2015-07-01

    Full Text Available This article is dedicated to the memory of Dr. F. H. Allen and the 50th anniversary of the Cambridge Crystallographic Data Centre (CCDC), the world-renowned centre for the deposition and curation of crystallographic data, including the atomic coordinates that define the three-dimensional structures of organic molecules and metal complexes containing organic ligands. The mission, set out on the web site (http://www.ccdc.cam.ac.uk), is clearly stated: "The Cambridge Crystallographic Data Centre (CCDC) is dedicated to the advancement of chemistry and crystallography for the public benefit through providing high quality information, software and services." The Cambridge Structural Database (CSD), one of the first established electronic databases, is nowadays one of the most significant crystallographic databases in the world. In the International Year of Crystallography 2014, the CSD announced in December over 750,000 deposited structures. Using this extensive and rapidly growing database requires the support of sophisticated and efficient software for checking, searching, analysing and visualising structural data. The seminal role of the CSD in research related to crystallography, chemistry, materials science, solid-state physics and chemistry, (bio)technology, life sciences and pharmacology is widely known. Important concerns of the CCDC are the accuracy of deposited data and the development of software for checking the data. Therefore, the Crystallographic Information File (CIF) was introduced as the standard text file format for representing crystallographic information. Among the most important software for users is ConQuest, which enables searching of all the CSD information fields, and its web implementation, the WebCSD software. Mercury is available for visualisation of crystal structures and crystal morphology, including intra- and intermolecular interactions with graph-set notations of hydrogen bonds, and for analysis of geometrical parameters. The CCDC gives even

  20. A Preliminary Study on the Multiple Mapping Structure of Classification Systems for Heterogeneous Databases

    Seok-Hyoung Lee

    2012-06-01

    Full Text Available While science and technology information service portals and heterogeneous databases produced in Korea and other countries are being integrated, methods of connecting the unique classification systems applied to each database have been studied. Results of technologists' research, such as journal articles, patent specifications, and research reports, are organically related to each other. In this case, if the most basic and meaningful classification systems are not connected, it is difficult to achieve interoperability of the information and thus not easy to implement meaningful science and technology information services through information convergence. This study aims to address the aforementioned issue by analyzing mapping systems between classification systems in order to design a structure to connect the variety of classification systems used in the academic information database of the Korea Institute of Science and Technology Information, which provides a science and technology information portal service. This study also aims to design a mapping system for the classification systems to be applied to actual science and technology information services and information management systems.

  1. New Path Based Index Structure for Processing CAS Queries over XML Database

    Krishna Asawa

    2017-01-01

    Full Text Available Querying nested data has become one of the most challenging issues for retrieving desired information from the Web. Today, diverse applications generate a tremendous amount of data in different formats. The data and information exchanged on the Web are commonly expressed in nested representations such as XML, JSON, etc. Unlike traditional database systems, these formats do not have a rigid schema. In general, nested data are managed by storing the data and their structures separately, which significantly reduces retrieval performance. Efficiently processing queries that locate the exact positions of elements has therefore become a major challenge. Different indexing structures have been proposed in the literature to improve the performance of query processing on nested structures. Most past research on nested structures concentrates on the structure alone. This paper proposes a new index structure that combines siblings of the terminal nodes into one path and efficiently processes twig queries with fewer lookups and joins. The proposed approach is compared with some of the existing approaches, and the results show that queries are processed with better performance than the existing ones.
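    One way to picture a path-based index, greatly simplified relative to the proposed structure (which also merges sibling terminal nodes into one path), is the sketch below: root-to-node paths map to node lists, so a content-and-structure query becomes a lookup plus a value filter.

```python
# Minimal sketch of a path index over an XML document.
import xml.etree.ElementTree as ET
from collections import defaultdict

doc = ET.fromstring(
    "<bib><book><title>XML Indexing</title><year>2017</year></book>"
    "<book><title>Graph Data</title><year>2015</year></book></bib>")

path_index = defaultdict(list)

def build(node, path):
    path = f"{path}/{node.tag}"
    path_index[path].append(node)       # index every node under its root-to-node path
    for child in node:
        build(child, path)

build(doc, "")

# CAS-style query: structural part /bib/book/year, content predicate value > 2016
hits = [n.text for n in path_index["/bib/book/year"] if int(n.text) > 2016]
print(hits)   # ['2017']
```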

  2. Fine-structure resolved rotational transitions and database for CN+H2 collisions

    Burton, Hannah; Mysliwiec, Ryan; Forrey, Robert C.; Yang, B. H.; Stancil, P. C.; Balakrishnan, N.

    2018-06-01

    Cross sections and rate coefficients for CN+H2 collisions are calculated using the coupled states (CS) approximation. The calculations are benchmarked against more accurate close-coupling (CC) calculations for transitions between low-lying rotational states. Comparisons are made between the two formulations for collision energies greater than 10 cm-1. The CS approximation is used to construct a database which includes highly excited rotational states that are beyond the practical limitations of the CC method. The database includes fine-structure resolved rotational quenching transitions for v = 0 and j ≤ 40, where v and j are the vibrational and rotational quantum numbers of the initial state of the CN molecule. Rate coefficients are computed for both para-H2 and ortho-H2 colliders. The results are shown to be in good agreement with previous calculations, however, the rates are substantially different from mass-scaled CN+He rates that are often used in astrophysical models.

  3. A framework for analysing relationships between chemical composition and crystal structure in metal oxides

    Thomas, N.W.

    1991-01-01

    A computer program has been written to characterize the coordination polyhedra of metal cations in terms of their volumes and polyhedral elements, i.e. corners, edges and faces. The sharing of these corners, edges and faces between polyhedra is also quantitatively monitored. In order to develop the methodology, attention is focused on ternary oxides containing the Al3+ ion, whose structures were retrieved from the Inorganic Crystal Structure Database (ICSD). This also permits an objective assessment of the applicability of Pauling's rules. The influence of ionic valence on the structures of these compounds is examined, by calculating electrostatic bond strengths. Although Pauling's second rule is not supported in detail, the calculation of oxygen-ion valence reveals a basic structural requirement, that the average calculated oxygen-ion valence in any ionic oxide structure is equal to 2. The analysis is further developed to define a general method for the prediction of novel chemical compositions likely to adopt a given desired structure. The polyhedral volumes of this structure are calculated, and use is made of standard ionic radii for cations in sixfold coordination. The electroneutrality principle is invoked to take valence considerations into account. This method can be used to guide the development of new compositions of ceramic materials with certain desirable physical properties. (orig.)
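    The electrostatic bond-strength bookkeeping behind the oxygen-valence check can be illustrated with a short worked example; the corundum coordination numbers used here are textbook values, not output of the program described above.

```python
# Worked example of Pauling electrostatic bond strengths and the oxygen valence sum.
def bond_strength(cation_charge, coordination_number):
    """Pauling electrostatic bond strength s = z / CN."""
    return cation_charge / coordination_number

# In corundum-type Al2O3 each Al3+ is 6-coordinate and each O2- bonds to 4 Al.
s = bond_strength(3, 6)                 # 0.5 valence units per Al-O bond
oxygen_valence = 4 * s                  # sum of bond strengths reaching one oxygen
print(s, oxygen_valence)                # 0.5 2.0 -> matches the requirement of 2
```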

  4. Non-Price Competition and the Structure of the Online Information Industry: Q-Analysis of Medical Databases and Hosts.

    Davies, Roy

    1987-01-01

    Discussion of the online information industry emphasizes the effects of non-price competition on its structure and the firms involved. Q-analysis is applied to data on medical databases and hosts, changes over a three-year period are identified, and an optimum structure for the industry based on economic theory is considered. (Author/LRW)

  5. A database paradigm for the management of DICOM-RT structure sets using a geographic information system

    Shao, Weber; Kupelian, Patrick A; Wang, Jason; Low, Daniel A; Ruan, Dan

    2014-01-01

    We devise a paradigm for representing the DICOM-RT structure sets in a database management system, in such a way that secondary calculations of geometric information can be performed quickly from the existing contour definitions. The implementation of this paradigm is achieved using the PostgreSQL database system and the PostGIS extension, a geographic information system commonly used for encoding geographical map data. The proposed paradigm eliminates the overhead of retrieving large data records from the database, as well as the need to implement various numerical and data parsing routines, when additional information related to the geometry of the anatomy is desired.
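    A minimal sketch of the paradigm, assuming a local PostgreSQL instance with PostGIS enabled and an illustrative table layout (not the authors' schema), might look like this:

```python
# Minimal sketch: store a DICOM-RT contour as a PostGIS polygon so geometric
# quantities (area, centroid, ...) come from SQL instead of re-parsing the structure set.
import psycopg2

conn = psycopg2.connect("dbname=rt_structures")   # assumed local database
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS contours (
            roi_name  text,
            slice_z   double precision,
            outline   geometry(Polygon)            -- PostGIS geometry column
        )""")
    cur.execute(
        "INSERT INTO contours VALUES (%s, %s, ST_GeomFromText(%s))",
        ("PTV", 12.5, "POLYGON((0 0, 40 0, 40 30, 0 30, 0 0))"))
    # Secondary geometric calculations are delegated to the database:
    cur.execute("SELECT ST_Area(outline), ST_AsText(ST_Centroid(outline)) "
                "FROM contours WHERE roi_name = %s", ("PTV",))
    print(cur.fetchone())
```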

  6. A database paradigm for the management of DICOM-RT structure sets using a geographic information system

    Shao, Weber; Kupelian, Patrick A.; Wang, Jason; Low, Daniel A.; Ruan, Dan

    2014-03-01

    We devise a paradigm for representing the DICOM-RT structure sets in a database management system, in such a way that secondary calculations of geometric information can be performed quickly from the existing contour definitions. The implementation of this paradigm is achieved using the PostgreSQL database system and the PostGIS extension, a geographic information system commonly used for encoding geographical map data. The proposed paradigm eliminates the overhead of retrieving large data records from the database, as well as the need to implement various numerical and data parsing routines, when additional information related to the geometry of the anatomy is desired.

  7. PubChemQC Project: A Large-Scale First-Principles Electronic Structure Database for Data-Driven Chemistry.

    Nakata, Maho; Shimazaki, Tomomi

    2017-06-26

    Large-scale molecular databases play an essential role in the investigation of various subjects such as the development of organic materials, in silico drug design, and data-driven studies with machine learning. We have developed a large-scale quantum chemistry database based on first-principles methods. Our database currently contains the ground-state electronic structures of 3 million molecules based on density functional theory (DFT) at the B3LYP/6-31G* level, and we successively calculated 10 low-lying excited states of over 2 million molecules via time-dependent DFT with the B3LYP functional and the 6-31+G* basis set. To select the molecules calculated in our project, we referred to the PubChem Project, which was used as the source of the molecular structures in short strings using the InChI and SMILES representations. Accordingly, we have named our quantum chemistry database project "PubChemQC" (http://pubchemqc.riken.jp/) and placed it in the public domain. In this paper, we show the fundamental features of the PubChemQC database and discuss the techniques used to construct the data set for large-scale quantum chemistry calculations. We also present a machine learning approach to predict the electronic structure of molecules as an example to demonstrate the suitability of the large-scale quantum chemistry database.

  8. Structuring osteosarcoma knowledge: an osteosarcoma-gene association database based on literature mining and manual annotation.

    Poos, Kathrin; Smida, Jan; Nathrath, Michaela; Maugg, Doris; Baumhoer, Daniel; Neumann, Anna; Korsching, Eberhard

    2014-01-01

    Osteosarcoma (OS) is the most common primary bone cancer, exhibiting high genomic instability. This genomic instability affects multiple genes and microRNAs to a varying extent depending on patient and tumor subtype. Massive research is ongoing to identify genes, including their gene products and microRNAs, that correlate with disease progression and might be used as biomarkers for OS. However, the genomic complexity hampers the identification of reliable biomarkers. Up to now, clinico-pathological factors are the key determinants to guide prognosis and therapeutic treatments. Each day, new studies about OS are published and complicate the acquisition of information to support biomarker discovery and therapeutic improvements. Thus, it is necessary to provide a structured and annotated view of the current OS knowledge that is quickly and easily accessible to researchers of the field. Therefore, we developed a publicly available database and Web interface that serves as a resource for OS-associated genes and microRNAs. Genes and microRNAs were collected using an automated dictionary-based gene recognition procedure followed by manual review and annotation by experts of the field. In total, 911 genes and 81 microRNAs related to 1331 PubMed abstracts were collected (last update: 29 October 2013). Users can evaluate genes and microRNAs according to their potential prognostic and therapeutic impact, the experimental procedures, the sample types, the biological contexts and microRNA target gene interactions. Additionally, a pathway enrichment analysis of the collected genes highlights different aspects of OS progression. OS requires pathways commonly deregulated in cancer but also features OS-specific alterations like deregulated osteoclast differentiation. To our knowledge, this is the first effort of an OS database containing manually reviewed and annotated up-to-date OS knowledge. It might be a useful resource especially for the bone tumor research community, as specific
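    The automated dictionary-based recognition step can be sketched roughly as below; the gene dictionary and the example sentence are made up, and the real pipeline additionally relies on manual expert review.

```python
# Minimal sketch of dictionary-based gene mention recognition in abstract text.
import re

gene_dictionary = {"TP53": ["TP53", "p53"], "RB1": ["RB1"], "MYC": ["MYC", "c-Myc"]}

def find_gene_mentions(text):
    hits = set()
    for symbol, aliases in gene_dictionary.items():
        if any(re.search(rf"\b{re.escape(a)}\b", text, re.IGNORECASE) for a in aliases):
            hits.add(symbol)    # normalize every alias hit to the official symbol
    return hits

abstract = "Loss of p53 and RB1 function is frequent in osteosarcoma cell lines."
print(find_gene_mentions(abstract))   # {'TP53', 'RB1'}
```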

  9. Modeling livestock population structure: a geospatial database for Ontario swine farms.

    Khan, Salah Uddin; O'Sullivan, Terri L; Poljak, Zvonimir; Alsop, Janet; Greer, Amy L

    2018-01-30

    Infectious diseases in farmed animals have economic, social, and health consequences. Foreign animal diseases (FAD) of swine are of significant concern. Mathematical and simulation models are often used to simulate FAD outbreaks and best practices for control. However, simulation outcomes are sensitive to the population structure used. Within Canada, access to individual swine farm population data with which to parameterize models is a challenge because of privacy concerns. Our objective was to develop a methodology to model the farmed swine population in Ontario, Canada that could represent the existing population structure and improve the efficacy of simulation models. We developed a swine population model based on factors such as facilities supporting farm infrastructure, land availability, zoning and local regulations, and natural geographic barriers that could affect swine farming in Ontario. Assigned farm locations were set to match the swine farm density described in the 2011 Canadian Census of Agriculture. Farms were then randomly assigned to farm types proportional to the existing swine herd types. We compared the swine population models with a known database of swine farm locations in Ontario and found that the modeled population was representative of farm locations with a high accuracy (AUC: 0.91, standard deviation: 0.02), suggesting that our algorithm generated a reasonable approximation of farm locations in Ontario. In the absence of a readily accessible dataset providing details of the relative locations of swine farms in Ontario, development of a model livestock population that captures key characteristics of the true population structure while protecting privacy concerns is an important methodological advancement. This methodology will be useful for individuals interested in modeling the spread of pathogens between farms across a landscape and using these models to evaluate disease control strategies.

  10. Database Description - PSCDB | LSDB Archive [Life Science Database Archive metadata]

    Full Text Available Database Description. General information of database - Database name: PSCDB. Alternative n...rial Science and Technology (AIST), Takayuki Amemiya (E-mail). Database classification: Structure Databases - Protein structure. Database...554-D558. External links: original website information. Database maintenance site: Graduate School of Informat...available. URL of Web services: -. Need for user registration: not available.

  11. DSSTOX STRUCTURE-SEARCHABLE PUBLIC TOXICITY DATABASE NETWORK: CURRENT PROGRESS AND NEW INITIATIVES TO IMPROVE CHEMO-BIOINFORMATICS CAPABILITIES

    The EPA DSSTox website (http://www/epa.gov/nheerl/dsstox) publishes standardized, structure-annotated toxicity databases, covering a broad range of toxicity disciplines. Each DSSTox database features documentation written in collaboration with the source authors and toxicity expe...

  12. Collecting Taxes Database

    US Agency for International Development — The Collecting Taxes Database contains performance and structural indicators about national tax systems. The database contains quantitative revenue performance...

  13. Characteristic conformation of Mosher's amide elucidated using the cambridge structural database.

    Ichikawa, Akio; Ono, Hiroshi; Mikata, Yuji

    2015-07-16

    Conformations of the crystalline 3,3,3-trifluoro-2-methoxy-2-phenylpropanamide derivatives (MTPA amides) deposited in the Cambridge Structural Database (CSD) were examined statistically as (R)-acid enantiomers. The majority of dihedral angles (48/58, ca. 83%) between the amide carbonyl groups and the trifluoromethyl groups ranged from -30° to 0°, with an average angle θ1 of -13°. The other conformational properties were also clarified: (1) one of the fluorine atoms was antiperiplanar (ap) to the amide carbonyl group, forming a staggered conformation; (2) the MTPA amides prepared from primary amines showed a Z form in the amide moieties; (3) in the case of MTPA amides prepared from primary amines possessing secondary alkyl groups (i.e., Mosher-type MTPA amides), the dihedral angles between the methine groups and the carbonyl groups were syn and indicative of moderate conformational flexibility; (4) the phenyl plane was inclined from the O-C(chiral) bond of the methoxy moiety with an average dihedral angle θ2 of +21°; (5) the methyl group of the methoxy moiety was ap to the ipso-carbon atom of the phenyl group.

  14. Database Description - ConfC | LSDB Archive [Life Science Database Archive metadata

    Full Text Available abase Description General information of database Database name ConfC Alternative name Database...amotsu Noguchi Tel: 042-495-8736 E-mail: Database classification Structure Database...s - Protein structure Structure Databases - Small molecules Structure Databases - Nucleic acid structure Database... services - Need for user registration - About This Database Database Description Download License Update History of This Database... Site Policy | Contact Us Database Description - ConfC | LSDB Archive ...

  15. MPID-T2: a database for sequence-structure-function analyses of pMHC and TR/pMHC structures.

    Khan, Javed Mohammed; Cheruku, Harish Reddy; Tong, Joo Chuan; Ranganathan, Shoba

    2011-04-15

    Sequence-structure-function information is critical in understanding the mechanism of pMHC and TR/pMHC binding and recognition. A database for sequence-structure-function information on pMHC and TR/pMHC interactions, MHC-Peptide Interaction Database-TR version 2 (MPID-T2), is now available, augmented with the latest PDB and IMGT/3Dstructure-DB data, advanced features and new parameters for the analysis of pMHC and TR/pMHC structures. Availability: http://biolinfo.org/mpid-t2. Contact: shoba.ranganathan@mq.edu.au. Supplementary data are available at Bioinformatics online.

  16. OECD-FIRE PR02. OECD-FIRE database record structure

    Kolar, L.

    2005-12-01

    The coding guidelines describe in detail the scope, format, and content of the records required to enter a real fire event at a nuclear reactor unit into the international OECD-FIRE database. The database was set up with the OECD-FIRE-PR02 code

  17. PROCARB: A Database of Known and Modelled Carbohydrate-Binding Protein Structures with Sequence-Based Prediction Tools

    Adeel Malik

    2010-01-01

    Full Text Available Understanding of the three-dimensional structures of proteins that interact with carbohydrates covalently (glycoproteins) as well as noncovalently (protein-carbohydrate complexes) is essential to many biological processes and plays a significant role in normal and disease-associated functions. It is important to have a central repository of knowledge available about these protein-carbohydrate complexes as well as preprocessed data of predicted structures. This can be significantly enhanced by de novo tools that can predict carbohydrate-binding sites for proteins lacking an experimentally determined binding site structure. PROCARB is an open-access database comprising three independently working components, namely, (i) Core PROCARB module, consisting of three-dimensional structures of protein-carbohydrate complexes taken from the Protein Data Bank (PDB), (ii) Homology Models module, consisting of manually developed three-dimensional models of N-linked and O-linked glycoproteins of unknown three-dimensional structure, and (iii) CBS-Pred prediction module, consisting of web servers to predict carbohydrate-binding sites using a single sequence or server-generated PSSM. Several precomputed structural and functional properties of the complexes are also included in the database for quick analysis. In particular, information about function, secondary structure, solvent accessibility, hydrogen bonds, literature references, and so forth is included. In addition, each protein in the database is mapped to UniProt, Pfam, PDB, and so forth.

  18. Database development and management

    Chao, Lee

    2006-01-01

    Introduction to Database Systems; Functions of a Database; Database Management System; Database Components; Database Development Process; Conceptual Design and Data Modeling; Introduction to Database Design Process; Understanding Business Process; Entity-Relationship Data Model; Representing Business Process with Entity-Relationship Model; Table Structure and Normalization; Introduction to Tables; Table Normalization; Transforming Data Models to Relational Databases; DBMS Selection; Transforming Data Models to Relational Databases; Enforcing Constraints; Creating Database for Business Process; Physical Design and Database

  19. Refactoring databases evolutionary database design

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  20. A Preliminary Study on the Multiple Mapping Structure of Classification Systems for Heterogeneous Databases

    Seok-Hyoung Lee; Hwan-Min Kim; Ho-Seop Choe

    2012-01-01

    While science and technology information service portals and heterogeneous databases produced in Korea and other countries are being integrated, methods of connecting the unique classification systems applied to each database have been studied. Results of technologists' research, such as journal articles, patent specifications, and research reports, are organically related to each other. In this case, if the most basic and meaningful classification systems are not connected, it is difficult to ach...

  1. MetalS(3), a database-mining tool for the identification of structurally similar metal sites.

    Valasatava, Yana; Rosato, Antonio; Cavallaro, Gabriele; Andreini, Claudia

    2014-08-01

    We have developed a database search tool to identify metal sites having structural similarity to a query metal site structure within the MetalPDB database of minimal functional sites (MFSs) contained in metal-binding biological macromolecules. MFSs describe the local environment around the metal(s) independently of the larger context of the macromolecular structure. Such a local environment has a determinant role in tuning the chemical reactivity of the metal, ultimately contributing to the functional properties of the whole system. The database search tool, which we called MetalS(3) (Metal Sites Similarity Search), can be accessed through a Web interface at http://metalweb.cerm.unifi.it/tools/metals3/ . MetalS(3) uses a suitably adapted version of an algorithm that we previously developed to systematically compare the structure of the query metal site with each MFS in MetalPDB. For each MFS, the best superposition is kept. All these superpositions are then ranked according to the MetalS(3) scoring function and are presented to the user in tabular form. The user can interact with the output Web page to visualize the structural alignment or the sequence alignment derived from it. Options to filter the results are available. Test calculations show that the MetalS(3) output correlates well with expectations from protein homology considerations. Furthermore, we describe some usage scenarios that highlight the usefulness of MetalS(3) to obtain mechanistic and functional hints regardless of homology.
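
    As a rough illustration of the kind of comparison such a tool performs, the sketch below superposes two small metal-site coordinate sets and ranks them by RMSD. It is a generic Kabsch-superposition example with placeholder coordinates, not the actual MetalS(3) algorithm or its scoring function.

```python
# Illustrative sketch (not the MetalS(3) algorithm): superpose two small
# metal-site coordinate sets with the Kabsch algorithm and use the resulting
# RMSD as a simple similarity score for ranking candidate sites.
import numpy as np

def kabsch_rmsd(P, Q):
    """Optimal-rotation RMSD between two (N, 3) coordinate arrays."""
    P = P - P.mean(axis=0)          # centre both sets at the origin
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                     # covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid improper rotations
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T              # optimal rotation of P onto Q
    P_rot = P @ R.T
    return np.sqrt(np.mean(np.sum((P_rot - Q) ** 2, axis=1)))

# Placeholder coordinates for a query site and two database sites (same atom order).
query = np.array([[0.0, 0.0, 0.0], [2.1, 0.0, 0.0], [0.0, 2.0, 0.3], [0.1, 0.2, 2.2]])
site_a = query + np.random.default_rng(0).normal(0, 0.1, query.shape)   # similar site
site_b = np.random.default_rng(1).uniform(-3, 3, query.shape)           # unrelated site

scores = sorted([("site_a", kabsch_rmsd(query, site_a)),
                 ("site_b", kabsch_rmsd(query, site_b))], key=lambda t: t[1])
print(scores)   # lower RMSD = more similar local metal environment
```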

  2. Geroprotectors.org: a new, structured and curated database of current therapeutic interventions in aging and age-related disease

    Moskalev, Alexey; Chernyagina, Elizaveta; de Magalhães, João Pedro; Barardo, Diogo; Thoppil, Harikrishnan; Shaposhnikov, Mikhail; Budovsky, Arie; Fraifeld, Vadim E.; Garazha, Andrew; Tsvetkov, Vasily; Bronovitsky, Evgeny; Bogomolov, Vladislav; Scerbacov, Alexei; Kuryan, Oleg; Gurinovich, Roman; Jellen, Leslie C.; Kennedy, Brian; Mamoshina, Polina; Dobrovolskaya, Evgeniya; Aliper, Alex; Kaminsky, Dmitry; Zhavoronkov, Alex

    2015-01-01

    As the level of interest in aging research increases, there is a growing number of geroprotectors, or therapeutic interventions that aim to extend the healthy lifespan and repair or reduce aging-related damage in model organisms and, eventually, in humans. There is a clear need for a manually-curated database of geroprotectors to compile and index their effects on aging and age-related diseases and link these effects to relevant studies and multiple biochemical and drug databases. Here, we introduce the first such resource, Geroprotectors (http://geroprotectors.org). Geroprotectors is a public, rapidly explorable database that catalogs over 250 experiments involving over 200 known or candidate geroprotectors that extend lifespan in model organisms. Each compound has a comprehensive profile complete with biochemistry, mechanisms, and lifespan effects in various model organisms, along with information ranging from chemical structure, side effects, and toxicity to FDA drug status. These are presented in a visually intuitive, efficient framework fit for casual browsing or in-depth research alike. Data are linked to the source studies or databases, providing quick and convenient access to original data. The Geroprotectors database facilitates cross-study, cross-organism, and cross-discipline analysis and saves countless hours of inefficient literature and web searching. Geroprotectors is a one-stop, knowledge-sharing, time-saving resource for researchers seeking healthy aging solutions. PMID:26342919

  3. Comprehensive analysis of the N-glycan biosynthetic pathway using bioinformatics to generate UniCorn: A theoretical N-glycan structure database.

    Akune, Yukie; Lin, Chi-Hung; Abrahams, Jodie L; Zhang, Jingyu; Packer, Nicolle H; Aoki-Kinoshita, Kiyoko F; Campbell, Matthew P

    2016-08-05

    Glycan structures attached to proteins are comprised of diverse monosaccharide sequences and linkages that are produced from precursor nucleotide-sugars by a series of glycosyltransferases. Databases of these structures are an essential resource for the interpretation of analytical data and the development of bioinformatics tools. However, with no template to predict what structures are possible the human glycan structure databases are incomplete and rely heavily on the curation of published, experimentally determined, glycan structure data. In this work, a library of 45 human glycosyltransferases was used to generate a theoretical database of N-glycan structures comprised of 15 or less monosaccharide residues. Enzyme specificities were sourced from major online databases including Kyoto Encyclopedia of Genes and Genomes (KEGG) Glycan, Consortium for Functional Glycomics (CFG), Carbohydrate-Active enZymes (CAZy), GlycoGene DataBase (GGDB) and BRENDA. Based on the known activities, more than 1.1 million theoretical structures and 4.7 million synthetic reactions were generated and stored in our database called UniCorn. Furthermore, we analyzed the differences between the predicted glycan structures in UniCorn and those contained in UniCarbKB (www.unicarbkb.org), a database which stores experimentally described glycan structures reported in the literature, and demonstrate that UniCorn can be used to aid in the assignment of ambiguous structures whilst also serving as a discovery database. Copyright © 2016 Elsevier Ltd. All rights reserved.
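
    The enumeration idea behind such a theoretical database can be illustrated with a toy rule-based generator: starting from a precursor, apply simplified "glycosyltransferase" rules breadth-first and keep every product below a size limit. The rules, notation and size cap below are invented placeholders, not the 45 curated enzyme specificities used by UniCorn.

```python
# Toy sketch of a rule-based theoretical structure database: repeatedly apply
# simplified extension rules to a precursor and collect every product up to a
# size limit. Rules and string notation are purely illustrative.
from collections import deque

# Each rule: (required substring, residue appended) -- placeholder "enzymes".
RULES = [
    ("Man3", "GlcNAc"),     # a branch may be initiated on the core
    ("GlcNAc", "Gal"),      # galactosylation of an antenna
    ("Gal", "Neu5Ac"),      # sialylation caps an antenna
]
MAX_RESIDUES = 6

def residue_count(structure):
    return structure.count("-") + 1

def enumerate_structures(precursor="Man3"):
    seen, queue = {precursor}, deque([precursor])
    while queue:
        current = queue.popleft()
        for required, added in RULES:
            if required in current and residue_count(current) < MAX_RESIDUES:
                product = current + "-" + added
                if product not in seen:
                    seen.add(product)
                    queue.append(product)
    return seen

print(sorted(enumerate_structures()))
```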

  4. MEGADOCK-Web: an integrated database of high-throughput structure-based protein-protein interaction predictions.

    Hayashi, Takanori; Matsuzaki, Yuri; Yanagisawa, Keisuke; Ohue, Masahito; Akiyama, Yutaka

    2018-05-08

    Protein-protein interactions (PPIs) play several roles in living cells, and computational PPI prediction is a major focus of many researchers. The three-dimensional (3D) structure and binding surface are important for the design of PPI inhibitors. Therefore, rigid body protein-protein docking calculations for two protein structures are expected to allow elucidation of PPIs different from known complexes in terms of 3D structures because known PPI information is not explicitly required. We have developed rapid PPI prediction software based on protein-protein docking, called MEGADOCK. In order to fully utilize the benefits of computational PPI predictions, it is necessary to construct a comprehensive database to gather prediction results and their predicted 3D complex structures and to make them easily accessible. Although several databases exist that provide predicted PPIs, the previous databases do not contain a sufficient number of entries for the purpose of discovering novel PPIs. In this study, we constructed an integrated database of MEGADOCK PPI predictions, named MEGADOCK-Web. MEGADOCK-Web provides more than 10 times the number of PPI predictions than previous databases and enables users to conduct PPI predictions that cannot be found in conventional PPI prediction databases. In MEGADOCK-Web, there are 7528 protein chains and 28,331,628 predicted PPIs from all possible combinations of those proteins. Each protein structure is annotated with PDB ID, chain ID, UniProt AC, related KEGG pathway IDs, and known PPI pairs. Additionally, MEGADOCK-Web provides four powerful functions: 1) searching precalculated PPI predictions, 2) providing annotations for each predicted protein pair with an experimentally known PPI, 3) visualizing candidates that may interact with the query protein on biochemical pathways, and 4) visualizing predicted complex structures through a 3D molecular viewer. MEGADOCK-Web provides a huge amount of comprehensive PPI predictions based on

  5. An integrated photogrammetric and spatial database management system for producing fully structured data using aerial and remote sensing images.

    Ahmadi, Farshid Farnood; Ebadi, Hamid

    2009-01-01

    3D spatial data acquired from aerial and remote sensing images by photogrammetric techniques is one of the most accurate and economic data sources for GIS, map production, and spatial data updating. However, there are still many problems concerning the storage, structuring and appropriate management of spatial data obtained using these techniques. Given the capabilities of spatial database management systems (SDBMSs), direct integration of photogrammetric and spatial database management systems can save time and cost in producing and updating digital maps. This integration is accomplished by replacing digital maps with a single spatial database. Applying spatial databases overcomes the problem of managing spatial and attribute data in a coupled approach. This management approach is one of the main problems in GISs for using map products of photogrammetric workstations. Also, by means of these integrated systems, providing structured spatial data based on OGC (Open GIS Consortium) standards and topological relations between different feature classes is possible at the time of the feature digitizing process. In this paper, the integration of photogrammetric systems and SDBMSs is evaluated. Then, different levels of integration are described. Finally, the design, implementation and testing of a software package called Integrated Photogrammetric and Oracle Spatial Systems (IPOSS) is presented.

  6. An Integrated Photogrammetric and Spatial Database Management System for Producing Fully Structured Data Using Aerial and Remote Sensing Images

    Farshid Farnood Ahmadi

    2009-03-01

    Full Text Available 3D spatial data acquired from aerial and remote sensing images by photogrammetric techniques is one of the most accurate and economic data sources for GIS, map production, and spatial data updating. However, there are still many problems concerning the storage, structuring and appropriate management of spatial data obtained using these techniques. Given the capabilities of spatial database management systems (SDBMSs), direct integration of photogrammetric and spatial database management systems can save time and cost in producing and updating digital maps. This integration is accomplished by replacing digital maps with a single spatial database. Applying spatial databases overcomes the problem of managing spatial and attribute data in a coupled approach. This management approach is one of the main problems in GISs for using map products of photogrammetric workstations. Also, by means of these integrated systems, providing structured spatial data based on OGC (Open GIS Consortium) standards and topological relations between different feature classes is possible at the time of the feature digitizing process. In this paper, the integration of photogrammetric systems and SDBMSs is evaluated. Then, different levels of integration are described. Finally, the design, implementation and testing of a software package called Integrated Photogrammetric and Oracle Spatial Systems (IPOSS) is presented.

  7. DMPD: Structural and functional analyses of bacterial lipopolysaccharides. [Dynamic Macrophage Pathway CSML Database

    Full Text Available. PubMed ID: 12106784. Title: Structural and functional analyses of bacterial lipopolysaccharides. Authors: Carof... (data available in .html and .csml formats)

  8. RNA secondary structure image - fRNAdb | LSDB Archive [Life Science Database Archive metadata

    Full Text Available. fRNAdb: RNA secondary structure image. Data name: RNA secondary structure image. DOI: 10.18908/lsdba.nbdc00452-005. Description of data contents: RNA secondary structure images; png.zip: RNA secondary structure images (PNG); pdf.zip: RNA secondary structure images (PDF); thumbnail.zip: thumbnails of RNA secondary structure images. Data file name: RNA_secondary_structure_image. File URL: ftp://ftp.biosciencedbc.jp/archive/frnadb/LATEST/RNA_secondary_structure_image. File size: 9.6 GB

  9. Database Description - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Full Text Available. General information: database name ASTRA. Database classification: Nucleotide Sequence Databases - Gene structure. Taxonomy: Taxonomy ID 3702; Oryza sativa (Taxonomy ID: 4530). Database description: the database represents classified p... Database maintenance site: National Institute of Ad... Need for user registration: not available.

  10. The GEM Global Active Faults Database: The growth and synthesis of a worldwide database of active structures for PSHA, research, and education

    Styron, R. H.; Garcia, J.; Pagani, M.

    2017-12-01

    A global catalog of active faults is a resource of value to a wide swath of the geoscience, earthquake engineering, and hazards risk communities. Though construction of such a dataset has been attempted now and again through the past few decades, success has been elusive. The Global Earthquake Model (GEM) Foundation has been working on this problem as a fundamental step in its goal of making a global seismic hazard model. Progress on the assembly of the database is rapid, with the concatenation of many national-, orogen-, and continental-scale datasets produced by different research groups throughout the years. However, substantial data gaps exist throughout much of the deforming world, requiring new mapping based on existing publications as well as consideration of seismicity, geodesy and remote sensing data. Thus far, new fault datasets have been created for the Caribbean and Central America, North Africa, and northeastern Asia, with Madagascar, Canada and a few other regions in the queue. The second major task, as formidable as the initial data concatenation, is the 'harmonization' of the data. This entails the removal or recombination of duplicated structures, reconciliation of contrasting interpretations in areas of overlap, and the synthesis of many different types of attributes or metadata into a consistent whole. In a project of this scale, the methods used in the database construction are as critical to project success as the data themselves. After some experimentation, we have settled on an iterative methodology that involves rapid accumulation of data followed by successive episodes of data revision, and a computer-scripted data assembly using GIS file formats that is flexible, reproducible, and as able as possible to cope with updates to the constituent datasets. We find that this approach of initially maximizing coverage and then increasing resolution is the most robust to regional data problems and the most amenable to continued updates and

  11. BIGNASim: a NoSQL database structure and analysis portal for nucleic acids simulation data

    Hospital, Adam; Andrio, Pau; Cugnasco, Cesare; Codo, Laia; Becerra, Yolanda; Dans, Pablo D.; Battistini, Federica; Torres, Jordi; Goñi, Ramón; Orozco, Modesto; Gelpí, Josep Ll.

    2016-01-01

    Molecular dynamics simulation (MD) is, just behind genomics, the bioinformatics tool that generates the largest amounts of data, and that is using the largest amount of CPU time in supercomputing centres. MD trajectories are obtained after months of calculations, analysed in situ, and in practice forgotten. Several projects to generate stable trajectory databases have been developed for proteins, but no equivalence exists in the nucleic acids world. We present here a novel database system to store MD trajectories and analyses of nucleic acids. The initial data set available consists mainly of the benchmark of the new molecular dynamics force-field, parmBSC1. It contains 156 simulations, with over 120 μs of total simulation time. A deposition protocol is available to accept the submission of new trajectory data. The database is based on the combination of two NoSQL engines, Cassandra for storing trajectories and MongoDB to store analysis results and simulation metadata. The analyses available include backbone geometries, helical analysis, NMR observables and a variety of mechanical analyses. Individual trajectories and combined meta-trajectories can be downloaded from the portal. The system is accessible through http://mmb.irbbarcelona.org/BIGNASim/. Supplementary Material is also available on-line at http://mmb.irbbarcelona.org/BIGNASim/SuppMaterial/. PMID:26612862

  12. BIGNASim: a NoSQL database structure and analysis portal for nucleic acids simulation data.

    Hospital, Adam; Andrio, Pau; Cugnasco, Cesare; Codo, Laia; Becerra, Yolanda; Dans, Pablo D; Battistini, Federica; Torres, Jordi; Goñi, Ramón; Orozco, Modesto; Gelpí, Josep Ll

    2016-01-04

    Molecular dynamics simulation (MD) is, just behind genomics, the bioinformatics tool that generates the largest amounts of data, and that is using the largest amount of CPU time in supercomputing centres. MD trajectories are obtained after months of calculations, analysed in situ, and in practice forgotten. Several projects to generate stable trajectory databases have been developed for proteins, but no equivalence exists in the nucleic acids world. We present here a novel database system to store MD trajectories and analyses of nucleic acids. The initial data set available consists mainly of the benchmark of the new molecular dynamics force-field, parmBSC1. It contains 156 simulations, with over 120 μs of total simulation time. A deposition protocol is available to accept the submission of new trajectory data. The database is based on the combination of two NoSQL engines, Cassandra for storing trajectories and MongoDB to store analysis results and simulation metadata. The analyses available include backbone geometries, helical analysis, NMR observables and a variety of mechanical analyses. Individual trajectories and combined meta-trajectories can be downloaded from the portal. The system is accessible through http://mmb.irbbarcelona.org/BIGNASim/. Supplementary Material is also available on-line at http://mmb.irbbarcelona.org/BIGNASim/SuppMaterial/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
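
    The two-engine layout described above (metadata and analysis results in a document store, bulky trajectory frames in a wide-column store) can be sketched as follows. Collection, keyspace, table and field names are hypothetical placeholders, not the real BIGNASim schema.

```python
# Hedged sketch of a MongoDB + Cassandra split for simulation data:
# metadata/analyses in MongoDB, trajectory frames in Cassandra.
# All names below are assumptions for illustration only.
from pymongo import MongoClient
from cassandra.cluster import Cluster

# --- metadata / analyses (document store) ---
mongo = MongoClient("localhost", 27017)
meta = mongo["nasim"]["simulations"]
sim = meta.find_one({"forcefield": "parmBSC1", "sequence": "GCGC"})

# --- trajectory frames (wide-column store) ---
session = Cluster(["localhost"]).connect("nasim")
rows = session.execute(
    "SELECT frame, coordinates FROM trajectory WHERE sim_id = %s LIMIT 10",
    (sim["sim_id"],),
)
for row in rows:
    print(row.frame, len(row.coordinates))
```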

  13. UET: a database of evolutionarily-predicted functional determinants of protein sequences that cluster as functional sites in protein structures.

    Lua, Rhonald C; Wilson, Stephen J; Konecki, Daniel M; Wilkins, Angela D; Venner, Eric; Morgan, Daniel H; Lichtarge, Olivier

    2016-01-04

    The structure and function of proteins underlie most aspects of biology and their mutational perturbations often cause disease. To identify the molecular determinants of function as well as targets for drugs, it is central to characterize the important residues and how they cluster to form functional sites. The Evolutionary Trace (ET) achieves this by ranking the functional and structural importance of the protein sequence positions. ET uses evolutionary distances to estimate functional distances and correlates genotype variations with those in the fitness phenotype. Thus, ET ranks are worse for sequence positions that vary among evolutionarily closer homologs but better for positions that vary mostly among distant homologs. This approach identifies functional determinants, predicts function, guides the mutational redesign of functional and allosteric specificity, and interprets the action of coding sequence variations in proteins, people and populations. Now, the UET database offers pre-computed ET analyses for the protein structure databank, and on-the-fly analysis of any protein sequence. A web interface retrieves ET rankings of sequence positions and maps results to a structure to identify functionally important regions. This UET database integrates several ways of viewing the results on the protein sequence or structure and can be found at http://mammoth.bcm.tmc.edu/uet/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
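
    A crude, simplified illustration of the Evolutionary Trace idea (not the actual UET algorithm) is sketched below: a column that is invariant within close homolog groups but varies between distant groups scores as more likely functionally important than a column that already varies among close homologs. Sequences and groupings are toy placeholders.

```python
# Toy proxy for the Evolutionary Trace ranking idea: find the level of the
# homolog grouping at which an alignment column first becomes variable.
# Alignment columns for four homologs; groups[0] = closest partition,
# later entries merge increasingly distant homologs.
alignment = ["ACDKG",
             "ACDKG",
             "ACEKG",
             "GCEKA"]
groups = [
    [[0, 1], [2], [3]],     # level 1: only very close sequences grouped
    [[0, 1, 2], [3]],       # level 2: a more distant homolog joins
    [[0, 1, 2, 3]],         # level 3: all homologs together
]

def importance_level(column):
    """Level at which the column first varies; larger means variation appears
    only among more distant homologs (a crude proxy for functional importance)."""
    for level, partition in enumerate(groups, start=1):
        for members in partition:
            residues = {alignment[i][column] for i in members}
            if len(residues) > 1:
                return level
    return len(groups) + 1      # invariant everywhere: highest importance

for col in range(len(alignment[0])):
    print(col, importance_level(col))
```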

  14. Beyond the ENDF format: A modern nuclear database structure. SG38 meeting, NEA Headquarters, 20-22 May 2015

    Brown, David; Jouanne, Cedric; Malvagi, Fausto; Coste-Delclaux, Mireille; Hawari, Ayman I.; Mattoon, Caleb; Zerkin, Viktor; ); Cornock, Mark; Conlin, Jeremy; Lloydge, Zhigang; Oleynik, Dmitry S.; Cabellos, Oscar; Mills, Robert W.; Kim, Do Heon; Leal, Luiz Carlos; White, Morgan c.; Dunn, Michael; Kahler, Albert C. skip; Mcnabb, Dennis P.; Roubtsov, Danila; Cho, Young-Sik; Beck, Bret; Haeck, Wim

    2015-05-01

    WPEC subgroup 38 (SG38) was formed to develop a new structure for storing nuclear reaction data, that is meant to eventually replace ENDF-6 as the standard way to store and share evaluations. The work of SG38 covers the following tasks: Designing flexible, general-purpose data containers; Determining a logical and easy-to-understand top-level hierarchy for storing evaluated nuclear reaction data; Creating a particle database for storing particles, masses and level schemes; Specifying the infrastructure (plotting, processing, etc.) that must accompany the new structure; Developing an Application Programming Interface or API to allow other codes to access data stored in the new structure; Specifying what tests need to be implemented for quality assurance of the new structure and associated infrastructure; Ensuring documentation and governance of the structure and associated infrastructure. This document is the proceedings of the SG38 meeting which took place at the NEA Headquarters, on 20-22 May 2015. It comprises all the available presentations (slides) given by the participants: - 1) Recent updates to the top-level hierarchy, review draft requirements document (D. Brown); - 2) Properties of Particles (PoPs) database (C.M. Mattoon); - 3) Proposed General Purpose Data Containers (B. Beck); - 4) Feedback on GND Specifications (C. Jouanne); - 5) Using XML in the IAEA-NDS: status, feedback and proposals (V. Zerkin); - 6) Observations Related to Thermal Neutron Scattering Law Data (A.I. Hawari); - 7) Meeting Notes (D.P. Mcnabb)

  15. Sting_RDB: a relational database of structural parameters for protein analysis with support for data warehousing and data mining.

    Oliveira, S R M; Almeida, G V; Souza, K R R; Rodrigues, D N; Kuser-Falcão, P R; Yamagishi, M E B; Santos, E H; Vieira, F D; Jardine, J G; Neshich, G

    2007-10-05

    An effective strategy for managing protein databases is to provide mechanisms to transform raw data into consistent, accurate and reliable information. Such mechanisms will greatly reduce operational inefficiencies and improve one's ability to better handle scientific objectives and interpret the research results. To achieve this challenging goal for the STING project, we introduce Sting_RDB, a relational database of structural parameters for protein analysis with support for data warehousing and data mining. In this article, we highlight the main features of Sting_RDB and show how a user can explore it for efficient and biologically relevant queries. Considering its importance for molecular biologists, effort has been made to advance Sting_RDB toward data quality assessment. To the best of our knowledge, Sting_RDB is one of the most comprehensive data repositories for protein analysis, now also capable of providing its users with a data quality indicator. This paper differs from our previous study in many aspects. First, we introduce Sting_RDB, a relational database with mechanisms for efficient and relevant queries using SQL. Sting_RDB evolved from the earlier, text (flat file)-based database, in which data consistency and integrity were not guaranteed. Second, we provide support for data warehousing and mining. Third, the data quality indicator was introduced. Finally and probably most importantly, complex queries that could not be posed on a text-based database are now easily implemented. Further details are accessible at the Sting_RDB demo web page: http://www.cbi.cnptia.embrapa.br/StingRDB.

  16. Locating relevant patient information in electronic health record data using representations of clinical concepts and database structures.

    Pan, Xuequn; Cimino, James J

    2014-01-01

    Clinicians and clinical researchers often seek information in electronic health records (EHRs) that are relevant to some concept of interest, such as a disease or finding. The heterogeneous nature of EHRs can complicate retrieval, risking incomplete results. We frame this problem as the presence of two gaps: 1) a gap between clinical concepts and their representations in EHR data and 2) a gap between data representations and their locations within EHR data structures. We bridge these gaps with a knowledge structure that comprises relationships among clinical concepts (including concepts of interest and concepts that may be instantiated in EHR data) and relationships between clinical concepts and the database structures. We make use of available knowledge resources to develop a reproducible, scalable process for creating a knowledge base that can support automated query expansion from a clinical concept to all relevant EHR data.
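
    As a toy illustration of the bridge the authors describe, the sketch below walks a small knowledge structure from a clinical concept of interest to related concepts, and from those to the database locations where they are instantiated. Concept names, relationships, and table/column locations are invented placeholders, not the authors' knowledge base.

```python
# Toy sketch of concept-to-location query expansion over a small knowledge
# structure. All concepts, codes and schema locations are placeholders.
from collections import deque

# concept -> related concepts (e.g. a disease and the findings/labs/treatments that indicate it)
concept_graph = {
    "diabetes mellitus": ["hemoglobin A1c", "fasting glucose", "insulin therapy"],
    "hemoglobin A1c": [],
    "fasting glucose": [],
    "insulin therapy": [],
}
# concept -> where it is actually instantiated in the EHR schema
locations = {
    "hemoglobin A1c": [("lab_results", "loinc_code = '4548-4'")],
    "fasting glucose": [("lab_results", "loinc_code = '1558-6'")],
    "insulin therapy": [("medication_orders", "drug_class = 'insulin'")],
}

def expand(query_concept):
    hits, seen, queue = [], {query_concept}, deque([query_concept])
    while queue:
        concept = queue.popleft()
        hits.extend(locations.get(concept, []))
        for related in concept_graph.get(concept, []):
            if related not in seen:
                seen.add(related)
                queue.append(related)
    return hits

print(expand("diabetes mellitus"))   # candidate tables/filters for retrieval
```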

  17. Logical database design principles

    Garmany, John; Clark, Terry

    2005-01-01

    INTRODUCTION TO LOGICAL DATABASE DESIGN: Understanding a Database; Database Architectures; Relational Databases; Creating the Database; System Development Life Cycle (SDLC); Systems Planning: Assessment and Feasibility; System Analysis: Requirements; System Analysis: Requirements Checklist; Models Tracking and Schedules; Design Modeling; Functional Decomposition Diagram; Data Flow Diagrams; Data Dictionary; Logical Structures and Decision Trees; System Design: Logical. SYSTEM DESIGN AND IMPLEMENTATION: The ER Approach; Entities and Entity Types; Attribute Domains; Attributes; Set-Valued Attributes; Weak Entities; Constraint

  18. Beyond the ENDF format: A modern nuclear database structure. SG38 meeting, OECD Conference Centre, 9-11 May 2016

    Herman, Michal Wladyslaw; Cabellos De Francisco, Oscar; Beck, Bret; Trkov, Andrej; Conlin, Jeremy Lloyd; Mcnabb, Dennis P.; Malvagi, Fausto; Grudzevich, Oleg T.; Mattoon, Caleb; Wiarda, Dorothea; Brown, David; Chadwick, Mark; Roubtsov, Danila; Iwamoto, Osamu; Yokoyama, Kenji; White, Morgan C.; Coste-Delclaux, Mireille; Fiorito, Luca; Haeck, Wim; Dunn, Michael; Jouanne, Cedric

    2016-05-01

    WPEC subgroup 38 (SG38) was formed to develop a new structure for storing nuclear reaction data, that is meant to eventually replace ENDF-6 as the standard way to store and share evaluations. The work of SG38 covers the following tasks: Designing flexible, general-purpose data containers; Determining a logical and easy-to-understand top-level hierarchy for storing evaluated nuclear reaction data; Creating a particle database for storing particles, masses and level schemes; Specifying the infrastructure (plotting, processing, etc.) that must accompany the new structure; Developing an Application Programming Interface or API to allow other codes to access data stored in the new structure; Specifying what tests need to be implemented for quality assurance of the new structure and associated infrastructure; Ensuring documentation and governance of the structure and associated infrastructure. This document is the proceedings of the SG38 meeting which took place at the OECD Conference Centre, on 9-11 May 2016. It comprises all the available presentations (slides) given by the participants: - 1) Beyond the ENDF format: Working toward the first specifications (Dennis P. McNabb); - 2) Summary of LLNL/LANL/ORNL/BNL discussions on SG38 (Caleb M. Mattoon); - 3) Status of Top Level Hierarchy Requirements Document (D.A. Brown); - 4) GND: Storing multiple representations of a quantity using forms, components and styles (Bret Beck); - 5) Particle Database update (Caleb M. Mattoon); - 6) General Purpose Data Containers (Jeremy Lloyd Conlin); - 7) Functional data containers (Bret Beck); - 8) Top-level hierarchy specifications (D.A. Brown)

  19. Database Description - RPSD | LSDB Archive [Life Science Database Archive metadata

    Full Text Available. General information: database name RPSD. Alternative name: Rice Protein Structure Database. DOI: 10.18908/lsdba.nbdc00749-000. Creator: Toshimasa Yamazaki, National Institute of Agrobiological Sciences, Ibaraki 305-8602, Japan. Database classification: Structure Databases - Protein structure. Organism taxonomy name: Or... Database maintenance site: National Institu...

  20. Database Description - SAHG | LSDB Archive [Life Science Database Archive metadata

    Full Text Available. General information: database name SAHG. Contact address: Chie Motono, Tel: +81-3-3599-8067. Database classification: Structure Databases; protein properties. Organism: Homo sapiens (Taxonomy ID: 9606). Database maintenance site: The Molecular Profiling Research Center for D... Need for user registration: not available.

  1. Towards a reduced activation structural materials database for fusion DEMO reactors

    Moeslang, A.; Diegele, E.; Laesser, R.; Klimiankou, M.; Lindau, R.; Materna-Morris, E.; Rieth, M.; Lucon, E.; Petersen, C.; Schneider, H.-C.; Pippan, R.; Rensman, J.W.; Schaaf, B. van der; Tavassoli, F.

    2005-01-01

    The development of first wall, blanket and divertor materials capable of withstanding high neutron and heat fluxes for many years is a critical path to fusion power. Therefore, the timely availability of a sound materials database has become an indispensable element of international fusion road maps. In order to provide materials design data for the short-term needs of ITER Test Blanket Modules and for a DEMOnstration fusion reactor, a wealth of R and D results is being compiled on the European reduced-activation ferritic-martensitic steel EUROFER and on oxide dispersion strengthened variants, mainly in the temperature window 250-650 deg. C. The characterisation includes irradiations up to 15 dpa in the mixed-spectrum reactor HFR and up to 75 dpa in the fast breeder reactor BOR60. Industrial EUROFER batches of 3.5 and 7.5 tons have been produced with a variety of semi-finished, quality-assured product forms. To increase the thermal efficiency of blankets, high-temperature-resistant SiC_f/SiC channel inserts for liquid metal coolant tubes are also being developed. Regarding radiation damage resistance, a broad-based reactor irradiation programme comprises several steps from ≤5 dpa (ITER TBMs) up to 75 dpa (DEMO). For the European divertor designers, a materials database is presently being set up for pure W and W alloys, and related reactor irradiations are foreseen at temperatures from 650-1000 deg. C. (author)

  2. Structure analysis of designated hospitals for cancer control in Japan from JASTRO census survey database 2005

    Ikeda, Hiroshi; Kagami, Yoshikazu; Nishio, Masamichi; Kataoka, Masaaki; Matsumoto, Yasuo; Hatano, Kazuo; Ogino, Takashi

    2008-01-01

    The structures of 288 hospitals designated for cancer control and approved by the Ministry of Health, Labour and Welfare in February 2006 were analyzed from the radiotherapy perspective according to the Japanese Society for Therapeutic Radiology and Oncology (JASTRO) 2005 census survey data. The data were compiled from 266 hospitals. Overall, 78,086 new patients were treated at these designated hospitals, which accounts for just half the total number of patients treated in Japan. The structure of radiotherapy (RT) is essential for cancer management, and our study showed that the designated hospitals are insufficient with respect to RT requirements. No RT equipment is installed in 14 hospitals. Of the 266 hospitals, 109 treated fewer than 200 new patients in 2005, and 25 treated fewer than 100. The data analysis revealed that academic hospitals, JACC hospitals and others are reasonable in terms of radiotherapy structures and capacity. Moreover, both academic and JACC hospitals play roles similar to those of designated prefectural hospitals in cancer management by radiotherapy. (author)

  3. Artificial Intelligence in Prediction of Secondary Protein Structure Using CB513 Database

    Avdagic, Zikrija; Purisevic, Elvir; Omanovic, Samir; Coralic, Zlatan

    2009-01-01

    In this paper we describe CB513, a non-redundant dataset suitable for the development of algorithms for the prediction of secondary protein structure. A program was written in Borland Delphi to transform data from the dataset into a form suitable for training a neural network for secondary protein structure prediction, implemented in the MATLAB Neural Network Toolbox. Learning (training and testing) of the neural network is investigated with different window sizes, different numbers of neurons in the hidden layer and different numbers of training epochs, using the CB513 dataset. PMID:21347158
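
    The window-based encoding such a pipeline typically relies on can be sketched as follows: each residue becomes one training example built from a fixed-size window of one-hot encoded amino acids, labelled with the secondary structure state of the central residue. This is a generic, hedged sketch, not the authors' Delphi/MATLAB implementation; window size and state alphabet are illustrative.

```python
# Sliding-window one-hot encoding commonly used for secondary structure
# prediction: one input vector per residue, labelled with the state (H/E/C)
# of the central residue of the window.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
STATES = {"H": 0, "E": 1, "C": 2}        # helix, strand, coil

def encode_windows(sequence, structure, window=13):
    half = window // 2
    padded = "X" * half + sequence + "X" * half     # 'X' pads the termini
    X, y = [], []
    for i, state in enumerate(structure):
        vec = np.zeros((window, len(AMINO_ACIDS) + 1))
        for j, aa in enumerate(padded[i:i + window]):
            vec[j, AMINO_ACIDS.find(aa)] = 1.0      # unknown/'X' maps to the last column
        X.append(vec.ravel())
        y.append(STATES[state])
    return np.array(X), np.array(y)

X, y = encode_windows("MKTAYIAKQR", "CCHHHHHHCC")
print(X.shape, y.shape)     # (10, 13*21) inputs, 10 labels
```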

  4. Adaptive Constrained Optimal Control Design for Data-Based Nonlinear Discrete-Time Systems With Critic-Only Structure.

    Luo, Biao; Liu, Derong; Wu, Huai-Ning

    2018-06-01

    Reinforcement learning has proved to be a powerful tool for solving optimal control problems over the past few years. However, the data-based constrained optimal control problem of nonaffine nonlinear discrete-time systems has rarely been studied. To solve this problem, an adaptive optimal control approach is developed by using value iteration-based Q-learning (VIQL) with a critic-only structure. Most existing constrained control methods require the use of a certain performance index and are only suitable for linear or affine nonlinear systems, which is restrictive in practice. To overcome this problem, a system transformation is first introduced with a general performance index. Then, the constrained optimal control problem is converted to an unconstrained optimal control problem. By introducing the action-state value function, i.e., the Q-function, the VIQL algorithm is proposed to learn the optimal Q-function of the data-based unconstrained optimal control problem. The convergence results of the VIQL algorithm are established with an easy-to-realize initial condition. To implement the VIQL algorithm, the critic-only structure is developed, where only one neural network is required to approximate the Q-function. The converged Q-function obtained from the critic-only VIQL method is employed to design the adaptive constrained optimal controller based on the gradient descent scheme. Finally, the effectiveness of the developed adaptive control method is tested on three examples with computer simulation.
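
    The core iteration behind value iteration-based Q-learning can be illustrated with a minimal, tabular sketch on a toy deterministic system: sweep over a fixed data set of transitions and apply Q(s,a) <- cost + gamma * min_a' Q(s',a'). This only illustrates the iteration with placeholder dynamics and a plain Q-table; the paper uses a neural-network critic and a constrained, nonaffine setting.

```python
# Minimal data-based value-iteration Q-learning sketch with a single
# ("critic-only") approximator, here a plain Q-table on a toy MDP.
import numpy as np

n_states, n_actions, gamma = 5, 2, 0.9

# Offline data set of transitions (s, a, cost, s'): action 1 moves right,
# action 0 moves left, stage cost grows with distance from state 2.
data = [(s, a, abs(s - 2) * 0.1, min(max(s + (1 if a == 1 else -1), 0), n_states - 1))
        for s in range(n_states) for a in range(n_actions)]

Q = np.zeros((n_states, n_actions))
for _ in range(200):                              # value-iteration sweeps over the data
    Q_new = Q.copy()
    for s, a, cost, s_next in data:
        Q_new[s, a] = cost + gamma * Q[s_next].min()   # cost-minimising update
    Q = Q_new

policy = Q.argmin(axis=1)                         # greedy (cost-minimising) controller
print(policy)                                     # should steer every state toward state 2
```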

  5. Search for 5'-leader regulatory RNA structures based on gene annotation aided by the RiboGap database.

    Naghdi, Mohammad Reza; Smail, Katia; Wang, Joy X; Wade, Fallou; Breaker, Ronald R; Perreault, Jonathan

    2017-03-15

    The discovery of noncoding RNAs (ncRNAs) and their importance for gene regulation led us to develop bioinformatics tools to pursue the discovery of novel ncRNAs. Finding ncRNAs de novo is challenging, first due to the difficulty of retrieving large numbers of sequences for given gene activities, and second due to exponential demands on calculation needed for comparative genomics on a large scale. Recently, several tools for the prediction of conserved RNA secondary structure were developed, but many of them are not designed to uncover new ncRNAs, or are too slow for conducting analyses on a large scale. Here we present various approaches using the database RiboGap as a primary tool for finding known ncRNAs and for uncovering simple sequence motifs with regulatory roles. This database also can be used to easily extract intergenic sequences of eubacteria and archaea to find conserved RNA structures upstream of given genes. We also show how to extend analysis further to choose the best candidate ncRNAs for experimental validation. Copyright © 2017 Elsevier Inc. All rights reserved.
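
    As a hedged illustration of the kind of retrieval such a resource supports, the sketch below extracts the 5'-upstream intergenic region of a gene of interest from a genome sequence plus a sorted gene coordinate table, the raw material for subsequent conservation or structure searches. The coordinate table, gene names and genome string are toy placeholders, not the RiboGap schema.

```python
# Toy extraction of the 5'-upstream intergenic region for a forward-strand gene:
# the sequence between the end of the previous gene and the gene start.
genome = "ATGCGT" * 200                      # placeholder genomic sequence

# (gene name, start, end, strand) with 1-based, sorted, forward-strand coordinates
genes = [("thiC", 101, 400, "+"), ("cobA", 521, 900, "+"), ("mgtE", 1001, 1180, "+")]

def upstream_intergenic(gene_name, max_len=300):
    for i, (name, start, end, strand) in enumerate(genes):
        if name == gene_name and strand == "+":
            prev_end = genes[i - 1][2] if i > 0 else 0
            left = max(prev_end, start - 1 - max_len)
            return genome[left:start - 1]    # region between previous gene and gene start
    raise KeyError(gene_name)

print(len(upstream_intergenic("cobA")))      # candidate 5'-leader / riboswitch region
```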

  6. Structure and contents of a new geomorphological GIS database linked to a geomorphological map — With an example from Liden, central Sweden

    Gustavsson, M.; Seijmonsbergen, A.C.; Kolstrup, E.

    2008-01-01

    This paper presents the structure and contents of a standardised geomorphological GIS database that stores comprehensive scientific geomorphological data and constitutes the basis for processing and extracting spatial thematic data. The geodatabase contains spatial information on

  7. STAR (structural test and analysis database for reliable design) Version 7.1. User's manual

    Hosogai, Hiromi; Kawasaki, Nobuchika; Kasahara, Naoto

    1998-12-01

    In addition to the usual database management functions, STAR provides two supporting functions for developing strength evaluation methods: an automatic damage calculation function using external programs, and an analysis system for the accuracy of predictions. This report describes the structure of the STAR code and provides user information for its execution. (K. Itami)

  8. Secbase: database module to retrieve secondary structure elements with ligand binding motifs.

    Koch, Oliver; Cole, Jason; Block, Peter; Klebe, Gerhard

    2009-10-01

    Secbase is presented as a novel extension module of Relibase. It integrates the information about secondary structure elements into the retrieval facilities of Relibase. The data are accessible via the extended Relibase user interface, and integrated retrieval queries can be addressed using an extended version of Reliscript. The primary information about alpha-helices and beta-sheets is used as provided by the PDB. Furthermore, a uniform classification of all turn families, based on recent clustering methods, and a new helix assignment that is based on this turn classification has been included. Algorithms to analyze the geometric features of helices and beta-strands were also implemented. To demonstrate the performance of the Secbase implementation, some application examples are given. They provide new insights into the involvement of secondary structure elements in ligand binding. A survey of water molecules detected next to the N-terminus of helices is analyzed to show their involvement in ligand binding. Additionally, the parallel oriented NH groups at the alpha-helix N-termini provide special binding motifs to bind particular ligand functional groups with two adjacent oxygen atoms, e.g., as found in negatively charged carboxylate or phosphate groups, respectively. The present study also shows that the specific structure of the first turn of alpha-helices provides a suitable explanation for stabilizing charged structures. The magnitude of the overall helix macrodipole seems to have no or only a minor influence on binding. Furthermore, an overview of the involvement of secondary structure elements with the recognition of some important endogenous ligands such as cofactors shows some distinct preference for particular binding motifs and amino acids.

  9. Digital bedrock mapping at the Geological Survey of Norway: BGS SIGMA tool and in-house database structure

    Gasser, Deta; Viola, Giulio; Bingen, Bernard

    2016-04-01

    Since 2010, the Geological Survey of Norway has been implementing and continuously developing a digital workflow for geological bedrock mapping in Norway, from fieldwork to final product. Our workflow is based on the ESRI ArcGIS platform, and we use rugged Windows computers in the field. Three different hardware solutions have been tested over the past 5 years (2010-2015). (1) Panasonic Toughbook CE-19 (2.3 kg), (2) Panasonic Toughbook CF H2 Field (1.6 kg) and (3) Motion MC F5t tablet (1.5 kg). For collection of point observations in the field we mainly use the SIGMA Mobile application in ESRI ArcGIS developed by the British Geological Survey, which allows the mappers to store georeferenced comments, structural measurements, sample information, photographs, sketches, log information etc. in a Microsoft Access database. The application is freely downloadable from the BGS websites. For line- and polygon work we use our in-house database, which is currently under revision. Our line database consists of three feature classes: (1) bedrock boundaries, (2) bedrock lineaments, and (3) bedrock lines, with each feature class having up to 24 different attribute fields. Our polygon database consists of one feature class with 38 attribute fields enabling to store various information concerning lithology, stratigraphic order, age, metamorphic grade and tectonic subdivision. The polygon and line databases are coupled via topology in ESRI ArcGIS, which allows us to edit them simultaneously. This approach has been applied in two large-scale 1:50 000 bedrock mapping projects, one in the Kongsberg domain of the Sveconorwegian orogen, and the other in the greater Trondheim area (Orkanger) in the Caledonian belt. The mapping projects combined collection of high-resolution geophysical data, digital acquisition of field data, and collection of geochronological, geochemical and petrological data. During the Kongsberg project, some 25000 field observation points were collected by eight

  10. Keyword Search in Databases

    Yu, Jeffrey Xu; Chang, Lijun

    2009-01-01

    It has become highly desirable to provide users with flexible ways to query/search information over databases as simple as keyword search like Google search. This book surveys the recent developments on keyword search over databases, and focuses on finding structural information among objects in a database using a set of keywords. Such structural information to be returned can be either trees or subgraphs representing how the objects, that contain the required keywords, are interconnected in a relational database or in an XML database. The structural keyword search is completely different from
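
    The structural keyword search idea, finding tuples that are interconnected through foreign-key edges and together contain all query keywords, can be sketched crudely as a breadth-first search over a tuple graph. The example below is a toy illustration with placeholder tuples and edges, far simpler than the algorithms surveyed in the book.

```python
# Crude structural keyword search sketch: grow a connected set of tuples from
# each candidate root until every query keyword is covered.
from collections import deque

# node -> text content, plus undirected foreign-key edges (placeholder data)
tuples = {1: "smith database systems", 2: "order 2021", 3: "keyword search paper",
          4: "smith", 5: "search engines"}
edges = {1: [2, 3], 2: [1, 4], 3: [1, 5], 4: [2], 5: [3]}

def connecting_subgraph(keywords):
    keywords = [k.lower() for k in keywords]
    for root in tuples:                          # try each node as a starting point
        covered = {k for k in keywords if k in tuples[root]}
        seen, queue, order = {root}, deque([root]), [root]
        while queue and len(covered) < len(keywords):
            node = queue.popleft()
            for nxt in edges.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    order.append(nxt)
                    queue.append(nxt)
                    covered |= {k for k in keywords if k in tuples[nxt]}
        if len(covered) == len(keywords):
            return order                         # connected tuples covering all keywords
    return None

print(connecting_subgraph(["smith", "search"]))
```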

  11. KALIMER database development

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    KALIMER database is an advanced database for integrated management of liquid metal reactor design technology development using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database stores research results from all phases of liquid metal reactor design technology development under mid-term and long-term nuclear R and D. IOC is a linkage control system between sub-projects used to share and integrate research results for KALIMER. The 3D CAD database gives a schematic overview of the KALIMER design structure. The reserved documents database manages documents and reports produced since project accomplishment.

  12. KALIMER database development

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    KALIMER database is an advanced database for integrated management of liquid metal reactor design technology development using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database stores research results from all phases of liquid metal reactor design technology development under mid-term and long-term nuclear R and D. IOC is a linkage control system between sub-projects used to share and integrate research results for KALIMER. The 3D CAD database gives a schematic overview of the KALIMER design structure. The reserved documents database manages documents and reports produced since project accomplishment.

  13. Synthesis and structural and electrical characterization of new materials Bi3R2FeTi3O15

    Gil Novoa, O.D.; Landinez Tellez, D.A. [Grupo de Fisica de Nuevos Materiales, Departamento de Fisica, Universidad Nacional de Colombia, AA 5997, Bogota DC (Colombia); Roa-Rojas, J., E-mail: jroar@unal.edu.co [Grupo de Fisica de Nuevos Materiales, Departamento de Fisica, Universidad Nacional de Colombia, AA 5997, Bogota DC (Colombia)

    2012-08-15

    In this work we report the synthesis of polycrystalline samples of the new compounds Bi5FeTi3O15 and Bi3R2FeTi3O15 with R=Nd, Sm, Gd, Dy, Ho and Yb. The materials were synthesized by the standard solid-state reaction recipe from high-purity (99.99%) powders. The structural characteristics of the materials were analyzed by X-ray diffraction experiments. Rietveld refinement with the GSAS code was performed, taking the input data from the ICSD 74037 database entry. The results reveal that the materials crystallized in orthorhombic single-phase structures with space group Fmm2. Measurements of polarization as a function of applied electric field were carried out using a Radiant Technology polarimeter. We determine the occurrence of hysteretic behavior, which is characteristic of ferroelectric materials. The main values of remanent and coercive fields were observed for substitutions with Yb and Nd, which have the main atomic radii.
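
    For readers who want a quick look at a reference pattern before attempting a full refinement, the sketch below (assuming the pymatgen package and a CIF file exported from the relevant ICSD entry are available) simulates a Cu K-alpha powder pattern from the reference structure. The file name is a placeholder, and this is only a starting-model check, not a substitute for the GSAS Rietveld refinement described above.

```python
# Hedged sketch: simulate a powder XRD pattern from an ICSD-derived CIF file
# using pymatgen, as a quick comparison against measured data.
from pymatgen.core import Structure
from pymatgen.analysis.diffraction.xrd import XRDCalculator

structure = Structure.from_file("icsd_74037_reference.cif")   # placeholder path
calc = XRDCalculator(wavelength="CuKa")
pattern = calc.get_pattern(structure, two_theta_range=(10, 80))

for two_theta, intensity, hkl in zip(pattern.x[:5], pattern.y[:5], pattern.hkls[:5]):
    print(f"2theta = {two_theta:6.2f}  I = {intensity:6.1f}  hkl = {hkl}")
```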

  14. Structure alerts for carcinogenicity, and the Salmonella assay system: a novel insight through the chemical relational databases technology.

    Benigni, Romualdo; Bossa, Cecilia

    2008-01-01

    In past decades, chemical carcinogenicity has been the object of mechanistic studies that have been translated into valuable experimental (e.g., the Salmonella assay system) and theoretical (e.g., compilations of structure alerts for chemical carcinogenicity) models. These findings remain the basis of the science and regulation of mutagens and carcinogens. Recent advances in the organization and treatment of large databases consisting of both biological and chemical information now allow a much easier and more refined view of the data. This paper reviews recent analyses of the predictive performance of various lists of structure alerts, including a new compilation of alerts that combines previous work in a form optimized for computer implementation. The revised compilation is part of the Toxtree 1.50 software (freely available from the European Chemicals Bureau website). The use of structural alerts for the chemical-biological profiling of a large database of Salmonella mutagenicity results is also reported. Besides being a repository of the science on the chemical-biological interactions underlying chemical carcinogenicity, the SAs have a crucial role in practical applications for risk assessment, for: (a) description of sets of chemicals; (b) preliminary hazard characterization; (c) formation of categories for, e.g., regulatory purposes; (d) generation of subsets of congeneric chemicals to be analyzed subsequently with QSAR methods; (e) priority setting. An important aspect of SAs as predictive toxicity tools is that they derive directly from mechanistic knowledge. The crucial role of mechanistic knowledge in the process of applying (Q)SAR considerations to risk assessment should be strongly emphasized. Mechanistic knowledge provides a ground for interaction and dialogue between model developers, toxicologists and regulators, and permits the integration of (Q)SAR results into a wider regulatory framework, where different types of

  15. The geothermal energy potential in Denmark - updating the database and new structural and thermal models

    Nielsen, Lars Henrik; Sparre Andersen, Morten; Balling, Niels; Boldreel, Lars Ole; Fuchs, Sven; Leth Hjuler, Morten; Kristensen, Lars; Mathiesen, Anders; Olivarius, Mette; Weibel, Rikke

    2017-04-01

    Knowledge of the structural, hydraulic and thermal conditions of the subsurface is fundamental for the planning and use of hydrothermal energy. In the framework of a project under the Danish research programme 'Sustainable Energy and Environment', funded by the 'Danish Agency for Science, Technology and Innovation', fundamental geological and geophysical information of importance for the utilization of geothermal energy in Denmark was compiled, analyzed and re-interpreted. A 3D geological model was constructed and used as the structural basis for the development of a national subsurface temperature model. In that framework, all available reflection seismic data were interpreted, quality controlled and integrated to improve the regional structural understanding. The analysis and interpretation of the available relevant data (i.e. old and new seismic profiles, core and well-log data, literature data) and a new time-depth conversion allowed a consistent correlation of seismic surfaces across the whole of Denmark and across tectonic features. On this basis, new topologically consistent depth and thickness maps for 16 geological units from the top pre-Zechstein to the surface were drawn. A new 3D structural geological model was developed with special emphasis on potential geothermal reservoirs. The interpretation of petrophysical data (core data and well logs) allows the hydraulic and thermal properties of potential geothermal reservoirs to be evaluated and a parameterized numerical 3D conductive subsurface temperature model to be developed. Reservoir properties and quality were estimated by integrating petrography and diagenesis studies with porosity-permeability data. Detailed interpretation of the reservoir quality of the geological formations was made by estimating net reservoir sandstone thickness based on well-log analysis, determination of mineralogy including sediment provenance analysis, and burial history data. New local surface heat-flow values (range: 64-84 mW/m2) were determined for the Danish

  15. Database Description - TMFunction | LSDB Archive [Life Science Database Archive metadata]

    Full Text Available sidue (or mutant) in a protein. The experimental data are collected from the literature both by searching th...the sequence database, UniProt, structural database, PDB, and literature database

  17. Method to assess the temporal persistence of potential biometric features: Application to oculomotor, gait, face and brain structure databases

    Nixon, Mark S.; Komogortsev, Oleg V.

    2017-01-01

    We introduce the intraclass correlation coefficient (ICC) to the biometric community as an index of the temporal persistence, or stability, of a single biometric feature. It requires, as input, a feature measured on an interval or ratio scale that is reasonably normally distributed, and it can only be calculated if each subject is tested on 2 or more occasions. For a biometric system, with multiple features available for selection, the ICC can be used to measure the relative stability of each feature. We show, for 14 distinct data sets (1 synthetic, 8 eye-movement-related, 2 gait-related, 2 face-recognition-related, and 1 brain-structure-related), that selecting the most stable features, based on the ICC, generally resulted in the best biometric performance. Analyses based on using only the most stable features produced superior Rank-1-Identification Rate (Rank-1-IR) performance in 12 of 14 databases (p = 0.0065, one-tailed), when compared to other sets of features, including the set of all features. For Equal Error Rate (EER), using a subset of only high-ICC features also produced superior performance in 12 of 14 databases (p = 0.0065, one-tailed). In general, then, for our databases, prescreening potential biometric features and choosing only highly reliable features yields better performance than choosing lower-ICC features or all features combined. We also determined that, as the ICC of a group of features increases, the median of the genuine similarity score distribution increases and the spread of this distribution decreases. There were no statistically significant similar relationships for the impostor distributions. We believe that the ICC will find many uses in biometric research. In the case of eye-movement-driven biometrics, the use of reliable features, as measured by the ICC, allowed us to achieve an authentication performance of EER = 2.01%, which was not possible before. PMID:28575030
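
    As a quick illustration of the index described above, the following sketch computes a two-way, consistency-type ICC (often written ICC(3,1)) for a single feature measured over repeated sessions. The function name and the toy numbers are illustrative only and are not taken from the paper, which should be consulted for the exact ICC variant used.

```python
import numpy as np

def icc_3_1(scores):
    """ICC(3,1): two-way mixed effects, consistency, single measurement.

    scores: (n_subjects, n_sessions) array holding one biometric feature
    measured on each subject over repeated sessions.
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand_mean = scores.mean()
    subject_means = scores.mean(axis=1)
    session_means = scores.mean(axis=0)

    # Sums of squares for the two-way subjects-by-sessions layout.
    ss_subjects = k * np.sum((subject_means - grand_mean) ** 2)
    ss_sessions = n * np.sum((session_means - grand_mean) ** 2)
    ss_total = np.sum((scores - grand_mean) ** 2)
    ss_error = ss_total - ss_subjects - ss_sessions

    ms_subjects = ss_subjects / (n - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_subjects - ms_error) / (ms_subjects + (k - 1) * ms_error)

# Toy example: 5 subjects measured in 2 sessions; a higher ICC marks a more stable feature.
feature = np.array([[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.1, 4.0], [5.0, 5.1]])
print(round(icc_3_1(feature), 3))
```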

  18. Beyond the ENDF format: A modern nuclear database structure. SG38 meeting, NEA Headquarters, 29-30 November 2012

    McNabb, D.; Zerkin, V.; Mattoon, C.; Koning, A.; Brown, D.; Leal, L.; Sublet, J.C.; Coste-Delclaux, M.; Capote, R.; Forrest, R.; Kodeli, I.; Trkov, A.; Beck, B.; Haeck, W.; Fukahori, T.; Mills, R.W.; White, M.C.; Cullen, D.E.

    2012-11-01

    WPEC subgroup 38 (SG38) was formed to develop a new structure for storing nuclear reaction data, that is meant to eventually replace ENDF-6 as the standard way to store and share evaluations. The work of SG38 covers the following tasks: Designing flexible, general-purpose data containers; Determining a logical and easy-to-understand top-level hierarchy for storing evaluated nuclear reaction data; Creating a particle database for storing particles, masses and level schemes; Specifying the infrastructure (plotting, processing, etc.) that must accompany the new structure; Developing an Application Programming Interface or API to allow other codes to access data stored in the new structure; Specifying what tests need to be implemented for quality assurance of the new structure and associated infrastructure; Ensuring documentation and governance of the structure and associated infrastructure. This document is the proceedings of the SG38 meeting, held at the NEA Headquarters on 29-30 November 2012. It comprises all the available presentations (slides) given by the participants as well as 3 reports: A - Welcome and Introduction: - Purpose and goals for SG38 (D. McNabb); - Lessons from ENDF, EXFOR and other formats (V. Zerkin); - Lessons from first LLNL attempt at defining a new nuclear data structure (C. Mattoon); - Example of 239Pu data; B - Purpose of the new data structure: - GND: Purpose of the new data structure (A. Koning); - Purpose of the new data structure: Dave's Perspective (D. Brown); C - Nuclear Data System Overview: - ENDF File uses in AMPX (L. Leal); D - Benefits and requirements for data evaluation and processing: - Benefits and requirements (J.C. Sublet); - CEA/DEN contribution (M. Coste-Delclaux); - Proposals from the IAEA-NDS (V. Zerkin, R. Capote, R. Forrest); - User View on the ENDF Formats and Data Processing (I. Kodeli); - On the ENDF Formats and Data Processing - report (A. Trkov); E - Format perspective, organization and requirements for basic

  19. Beyond the ENDF format: A modern nuclear database structure. SG38 meeting, NEA Headquarters, 21-22 May 2013

    McNabb, D.; Ishikawa, M.; White, M.; Zerkin, V.; Mattoon, C.; Koning, A.; Brown, D.; Badikov, S.; Beck, B.; Haeck, W.; Dunn, M.; Sublet, J.-C.; Sinitsa, V.; Shmakov, V.; Dupont, E.

    2013-05-01

    WPEC subgroup 38 (SG38) was formed to develop a new structure for storing nuclear reaction data, that is meant to eventually replace ENDF-6 as the standard way to store and share evaluations. The work of SG38 covers the following tasks: Designing flexible, general-purpose data containers; Determining a logical and easy-to-understand top-level hierarchy for storing evaluated nuclear reaction data; Creating a particle database for storing particles, masses and level schemes; Specifying the infrastructure (plotting, processing, etc.) that must accompany the new structure; Developing an Application Programming Interface or API to allow other codes to access data stored in the new structure; Specifying what tests need to be implemented for quality assurance of the new structure and associated infrastructure; Ensuring documentation and governance of the structure and associated infrastructure. This document is the proceedings of the SG38 meeting, held at the NEA Headquarters on 21-22 May 2013. It comprises all the available presentations (slides) given by the participants: A - Introduction: - Developing a plan to meet our requirements (D. McNabb); - Possible option for next meeting date and place (M. Ishikawa); B - Task 1 - Designing low-level data containers for the new structure: - Low-Level Data Containers (M. White); - Low-level data structures in EXFOR and associated software - Development of EXFOR-XML (V. Zerkin); - Data containers in GND (C. Mattoon); Download an example of basic data containers for 1, 2 and 3-D data as well as the table and matrix C - Task 2 - Designing high-level hierarchy for nuclear reaction data: - Nuclear reaction data in a new structure (A. Koning); - Bibliographic and documentation issues (D. Brown); - Designing a high-level hierarchy for nuclear reaction data (D. Brown); - The ENDF-6 Format for the Evaluated Covariances of Discrete Radiation Spectrum Data (S. Badikov); D - Task 3 - Designing an API for reading/writing new data

  20. The braingraph.org database of high resolution structural connectomes and the brain graph tools.

    Kerepesi, Csaba; Szalkai, Balázs; Varga, Bálint; Grolmusz, Vince

    2017-10-01

    Based on the data of the NIH-funded Human Connectome Project, we have computed structural connectomes of 426 human subjects in five different resolutions of 83, 129, 234, 463 and 1015 nodes and several edge weights. The graphs are given in anatomically annotated GraphML format that facilitates further processing and visualization. For 96 subjects, the anatomically classified sub-graphs can also be accessed, formed from the vertices corresponding to distinct lobes or even smaller regions of interest of the brain. For example, one can easily download and study the connectomes of the 96 subjects restricted to the frontal lobes or just to the left precuneus. Partially directed connectomes of 423 subjects are also available for download. We also present a GitHub-deposited set of tools, called the Brain Graph Tools, for several connectome processing tasks, on the site http://braingraph.org.
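
    Since the connectomes are distributed as anatomically annotated GraphML, they can be inspected with standard graph libraries. The sketch below assumes a downloaded file name and an anatomical label attribute (dn_name); both are placeholders, so adjust them to match the actual files obtained from braingraph.org.

```python
import networkx as nx

# Hypothetical file name: archives on braingraph.org are packaged per subject and
# resolution, so point this at whichever GraphML file you actually downloaded.
graph = nx.read_graphml("subject_101_scale83.graphml")
print(graph.number_of_nodes(), "nodes,", graph.number_of_edges(), "edges")

# Anatomical annotations are stored as node attributes in the GraphML files;
# the attribute name 'dn_name' below is an assumption used for illustration.
frontal_nodes = [n for n, data in graph.nodes(data=True)
                 if "frontal" in str(data.get("dn_name", "")).lower()]
print(len(frontal_nodes), "nodes whose anatomical label mentions 'frontal'")

# A restricted sub-graph, analogous to the lobe-level sub-graphs described above.
frontal_subgraph = graph.subgraph(frontal_nodes)
print(frontal_subgraph.number_of_edges(), "edges within the frontal sub-graph")
```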

  1. Facilitating adverse drug event detection in pharmacovigilance databases using molecular structure similarity: application to rhabdomyolysis

    Vilar, Santiago; Harpaz, Rave; Chase, Herbert S; Costanzi, Stefano; Rabadan, Raul

    2011-01-01

    Background Adverse drug events (ADE) cause considerable harm to patients, and consequently their detection is critical for patient safety. The US Food and Drug Administration maintains an adverse event reporting system (AERS) to facilitate the detection of ADE in drugs. Various data mining approaches have been developed that use AERS to detect signals identifying associations between drugs and ADE. The signals must then be monitored further by domain experts, which is a time-consuming task. Objective To develop a new methodology that combines existing data mining algorithms with chemical information by analysis of molecular fingerprints to enhance initial ADE signals generated from AERS, and to provide a decision support mechanism to facilitate the identification of novel adverse events. Results The method achieved a significant improvement in precision in identifying known ADE, and a more than twofold signal enhancement when applied to the ADE rhabdomyolysis. The simplicity of the method assists in highlighting the etiology of the ADE by identifying structurally similar drugs. A set of drugs with strong evidence from both AERS and molecular fingerprint-based modeling is constructed for further analysis. Conclusion The results demonstrate that the proposed methodology could be used as a pharmacovigilance decision support tool to facilitate ADE detection. PMID:21946238
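
    The chemical component of the approach rests on molecular fingerprint similarity. The sketch below shows one common way such similarity is computed (RDKit Morgan fingerprints with the Tanimoto coefficient); it is a generic illustration with toy molecules, not the specific fingerprints or thresholds used in the study.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def tanimoto(smiles_a, smiles_b, radius=2, n_bits=2048):
    """Tanimoto similarity between two molecules from Morgan (circular) fingerprints."""
    mol_a = Chem.MolFromSmiles(smiles_a)
    mol_b = Chem.MolFromSmiles(smiles_b)
    fp_a = AllChem.GetMorganFingerprintAsBitVect(mol_a, radius, nBits=n_bits)
    fp_b = AllChem.GetMorganFingerprintAsBitVect(mol_b, radius, nBits=n_bits)
    return DataStructs.TanimotoSimilarity(fp_a, fp_b)

# Toy molecules (ethanol vs ethylamine); in a pharmacovigilance setting the query
# drug would instead be compared against drugs already linked to the adverse event.
print(round(tanimoto("CCO", "CCN"), 3))
```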

  2. Accessing and using chemical databases

    Nikolov, Nikolai Georgiev; Pavlov, Todor; Niemelä, Jay Russell

    2013-01-01

    Computer-based representation of chemicals makes it possible to organize data in chemical databases: collections of chemical structures and associated properties. Databases are widely used wherever efficient processing of chemical information is needed, including search, storage, retrieval, and dissemination. The structure and functionality of chemical databases are considered. The typical kinds of information found in a chemical database are considered: identification, structural, and associated data. The functionality of chemical databases is presented, with examples of search and access types. More details are included about the OASIS database and platform and the Danish (Q)SAR Database online. Various types of chemical database resources are discussed, together with a list of examples.
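
    The record types mentioned above (identification, structural and associated data) can be sketched with a minimal relational layout. The schema below is purely illustrative and is not the schema of the OASIS platform or of the Danish (Q)SAR Database.

```python
import sqlite3

# A minimal sketch of the record types discussed above: identification data
# (name, CAS number), structural data (SMILES), and associated property data.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE chemicals (
                   cas_number TEXT PRIMARY KEY,
                   name       TEXT,
                   smiles     TEXT)""")
con.execute("""CREATE TABLE properties (
                   cas_number TEXT REFERENCES chemicals(cas_number),
                   property   TEXT,
                   value      REAL)""")

con.execute("INSERT INTO chemicals VALUES ('71-43-2', 'benzene', 'c1ccccc1')")
con.execute("INSERT INTO properties VALUES ('71-43-2', 'logKow', 2.13)")

# A simple identification search joined with the associated property data.
for row in con.execute("""SELECT c.name, p.property, p.value
                          FROM chemicals c JOIN properties p USING (cas_number)
                          WHERE c.name = 'benzene'"""):
    print(row)
```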

  3. Structure and contents of a new geomorphological GIS database linked to a geomorphological map — With an example from Liden, central Sweden

    Gustavsson, Marcus; Seijmonsbergen, Arie C.; Kolstrup, Else

    2008-03-01

    This paper presents the structure and contents of a standardised geomorphological GIS database that stores comprehensive scientific geomorphological data and constitutes the basis for processing and extracting spatial thematic data. The geodatabase contains spatial information on morphography/morphometry, hydrography, lithology, genesis, processes and age. A unique characteristic of the GIS geodatabase is that it is constructed in parallel with a new comprehensive geomorphological mapping system designed with GIS applications in mind. This close coupling enables easy digitalisation of the information from the geomorphological map into the GIS database for use in both scientific and practical applications. The selected platform, in which the geomorphological vector, raster and tabular data are stored, is the ESRI Personal geodatabase. Additional data such as an image of the original geomorphological map, DEMs or aerial orthographic images are also included in the database. The structure of the geomorphological database presented in this paper is exemplified for a study site around Liden, central Sweden.

  4. Report on the achievements in the Sunshine Project in fiscal 1986. Surveys on systems to structure a coal liquefaction database; 1986 nendo sekitan ekika database kochiku no tame no system chosa seika hokokusho

    NONE

    1987-03-01

    Surveys were carried out on the current status of information management systems for development projects being performed or planned in developing coal liquefaction technologies. The concept for structuring a coal liquefaction database (CLDB) to manage comprehensively and utilize effectively the information in these systems is clarified. Section 3 investigates and analyzes the current status of data processing at the experimental plants. The data for each experimental plant are processed individually; it is therefore preferable that the CLDB provide separate locations to receive the data from each plant. Section 4 organizes the flows of operation and information in coal liquefaction research and presents an overall configuration diagram for the system. Section 5 discusses problems in structuring this system. A large number of problems remain to be discussed, not only technological ones, but also the organizational roles of NEDO and the commissioned business entities, and the needs of users. The last section summarizes the steps and schedule for developing the system. The development steps should preferably be implemented stepwise, following the progress of the experimental plants, in the order of a fundamental database, an analysis database and an engineering database. (NEDO)

  5. Beyond the ENDF format: A modern nuclear database structure. SG38 meeting, JAEA Tokai site, 9-11 December 2013

    Fukahori, T.; McNabb, D.; Mattoon, C.; Kugo, T.; Yokoyama, K.; Suyama, K.; Nishihara, K.; Konno, C.; Sato, T.; Brown, D.; White, M.; Beck, B.; Sinitsa, V.; Dunn, M.

    2013-12-01

    WPEC subgroup 38 (SG38) was formed to develop a new structure for storing nuclear reaction data, that is meant to eventually replace ENDF-6 as the standard way to store and share evaluations. The work of SG38 covers the following tasks: Designing flexible, general-purpose data containers; Determining a logical and easy-to-understand top-level hierarchy for storing evaluated nuclear reaction data; Creating a particle database for storing particles, masses and level schemes; Specifying the infrastructure (plotting, processing, etc.) that must accompany the new structure; Developing an Application Programming Interface or API to allow other codes to access data stored in the new structure; Specifying what tests need to be implemented for quality assurance of the new structure and associated infrastructure; Ensuring documentation and governance of the structure and associated infrastructure. This document is the proceedings of the third subgroup meeting which took place at the Tokai site of the Japan Atomic Energy Agency (JAEA) in Japan, on 9-11 December 2013. It comprises all the available presentations (slides) given by the participants as well as one draft paper: A - Introduction: - Welcome (T. Fukahori); - Reviewing our implementation plan (D. McNabb); - Overview of the SG38 wiki (C. Mattoon); B - Feedback from nuclear data users: - Application of nuclear data to light water reactor core analysis (T. Kugo); - Application of nuclear data to fast reactor analysis and design (K. Yokoyama); - Request for the format of the evaluated nuclear data file for the criticality safety evaluation (K. Suyama); - Sensitivity and uncertainty analysis for a minor-actinide transmuter with JENDL-4.0 (K. Nishihara); - Application of nuclear data libraries in fusion neutronics and some comments (C. Konno); - Particle and heavy ion transport code system PHITS (T. Sato); C - Review of the project sub-tasks: - Top level organization of nuclear data (D. Brown); Documentation: Requirements

  6. BioMagResBank databases DOCR and FRED containing converted and filtered sets of experimental NMR restraints and coordinates from over 500 protein PDB structures

    Doreleijers, Jurgen F. [University of Wisconsin-Madison, BioMagResBank, Department of Biochemistry (United States); Nederveen, Aart J. [Utrecht University, Bijvoet Center for Biomolecular Research (Netherlands); Vranken, Wim [European Bioinformatics Institute, Macromolecular Structure Database group (United Kingdom); Lin Jundong [University of Wisconsin-Madison, BioMagResBank, Department of Biochemistry (United States); Bonvin, Alexandre M.J.J.; Kaptein, Robert [Utrecht University, Bijvoet Center for Biomolecular Research (Netherlands); Markley, John L.; Ulrich, Eldon L. [University of Wisconsin-Madison, BioMagResBank, Department of Biochemistry (United States)], E-mail: elu@bmrb.wisc.edu

    2005-05-15

    We present two new databases of NMR-derived distance and dihedral angle restraints: the Database Of Converted Restraints (DOCR) and the Filtered Restraints Database (FRED). These databases currently correspond to 545 proteins with NMR structures deposited in the Protein Databank (PDB). The criteria for inclusion were that these should be unique, monomeric proteins with author-provided experimental NMR data and coordinates available from the PDB capable of being parsed and prepared in a consistent manner. The Wattos program was used to parse the files, and the CcpNmr FormatConverter program was used to prepare them semi-automatically. New modules, including a new implementation of Aqua in the BioMagResBank (BMRB) software Wattos were used to analyze the sets of distance restraints (DRs) for inconsistencies, redundancies, NOE completeness, classification and violations with respect to the original coordinates. Restraints that could not be associated with a known nomenclature were flagged. The coordinates of hydrogen atoms were recalculated from the positions of heavy atoms to allow for a full restraint analysis. The DOCR database contains restraint and coordinate data that is made consistent with each other and with IUPAC conventions. The FRED database is based on the DOCR data but is filtered for use by test calculation protocols and longitudinal analyses and validations. These two databases are available from websites of the BMRB and the Macromolecular Structure Database (MSD) in various formats: NMR-STAR, CCPN XML, and in formats suitable for direct use in the software packages CNS and CYANA.

  7. BioMagResBank databases DOCR and FRED containing converted and filtered sets of experimental NMR restraints and coordinates from over 500 protein PDB structures

    Doreleijers, Jurgen F.; Nederveen, Aart J.; Vranken, Wim; Lin Jundong; Bonvin, Alexandre M.J.J.; Kaptein, Robert; Markley, John L.; Ulrich, Eldon L.

    2005-01-01

    We present two new databases of NMR-derived distance and dihedral angle restraints: the Database Of Converted Restraints (DOCR) and the Filtered Restraints Database (FRED). These databases currently correspond to 545 proteins with NMR structures deposited in the Protein Databank (PDB). The criteria for inclusion were that these should be unique, monomeric proteins with author-provided experimental NMR data and coordinates available from the PDB capable of being parsed and prepared in a consistent manner. The Wattos program was used to parse the files, and the CcpNmr FormatConverter program was used to prepare them semi-automatically. New modules, including a new implementation of Aqua in the BioMagResBank (BMRB) software Wattos were used to analyze the sets of distance restraints (DRs) for inconsistencies, redundancies, NOE completeness, classification and violations with respect to the original coordinates. Restraints that could not be associated with a known nomenclature were flagged. The coordinates of hydrogen atoms were recalculated from the positions of heavy atoms to allow for a full restraint analysis. The DOCR database contains restraint and coordinate data that is made consistent with each other and with IUPAC conventions. The FRED database is based on the DOCR data but is filtered for use by test calculation protocols and longitudinal analyses and validations. These two databases are available from websites of the BMRB and the Macromolecular Structure Database (MSD) in various formats: NMR-STAR, CCPN XML, and in formats suitable for direct use in the software packages CNS and CYANA

  8. Structure-based virtual screening and characterization of a novel IL-6 antagonistic compound from synthetic compound database

    Wang J

    2016-12-01

    Full Text Available Jing Wang,1,* Chunxia Qiao,1,* He Xiao,1 Zhou Lin,1 Yan Li,1 Jiyan Zhang,1 Beifen Shen,1 Tinghuan Fu,2 Jiannan Feng1 1Department of Molecular Immunology, Beijing Institute of Basic Medical Sciences, 2First Affiliated Hospital of PLA General Hospital, Beijing, People’s Republic of China *These authors contributed equally to this work Abstract: According to the three-dimensional (3D) complex structure of (hIL-6·hIL-6R·gp130)2 and the binding orientation of hIL-6, three compounds with high affinity to hIL-6R and bioactivity to block hIL-6 in vitro were screened theoretically from the chemical databases, including the 3D-Available Chemicals Directory (ACD) and the MDL Drug Data Report (MDDR), by means of a computer-guided virtual screening method. Using distance geometry, molecular modeling and molecular dynamics trajectory analysis methods, the binding mode and binding energy of the three compounds were evaluated theoretically. Enzyme-linked immunosorbent assay analysis demonstrated that all three compounds could block IL-6 binding to IL-6R specifically. However, only compound 1 could effectively antagonize the function of hIL-6 and inhibit the proliferation of XG-7 cells in a dose-dependent manner, whereas it showed no cytotoxicity to SP2/0 or L929 cells. These data demonstrated that compound 1 could be a promising hIL-6 antagonist candidate. Keywords: virtual screening, structural optimization, human interleukin-6, small molecular antagonist, XG-7 cells, apoptosis

  9. Relational databases

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  10. Community Database

    National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases such...

  11. Database Administrator

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  12. The International Coal Statistics Data Base program maintenance guide

    1991-06-01

    The International Coal Statistics Data Base (ICSD) is a microcomputer-based system which contains information related to international coal trade. This includes coal production, consumption, imports and exports information. The ICSD is a secondary data base, meaning that information contained therein is derived entirely from other primary sources. It uses dBase III+ and Lotus 1-2-3 to locate, report and display data. The system is used for analysis in preparing the Annual Prospects for World Coal Trade (DOE/EIA-0363) publication. The ICSD system is menu driven and also permits the user who is familiar with dBase and Lotus operations to leave the menu structure to perform independent queries. Documentation for the ICSD consists of three manuals -- the User's Guide, the Operations Manual, and the Program Maintenance Manual. This Program Maintenance Manual provides the information necessary to maintain and update the ICSD system. Two major types of program maintenance documentation are presented in this manual. The first is the source code for the dBase III+ routines and related non-dBase programs used in operating the ICSD. The second is listings of the major component database field structures. A third important consideration for dBase programming, the structure of index files, is presented in the listing of source code for the index maintenance program. 1 fig

  13. Requirements for a new nuclear data structure. Part 2: Implementation Plan Prepared by WPEC Subgroup No.38 (subgroup title: 'A modern nuclear database structure beyond the ENDF format')

    2014-01-01

    This document summarizes the implementation plan developed during the second meeting of the WPEC Subgroup 38 (SG38), which was organized to develop a new evaluated nuclear data structure and then oversee the transition from the current standard (ENDF-6) to the new structure. Part 1 of this document, adopted by WPEC in May 2013, lays out the vision and goals for the new structure. In this second step, SG38 develops a community plan to address these needs and requirements. The plan laid out here represents a consensus on how to execute the project, what work will be done, and to some degree how it will be done and the people involved. During the development of the vision and goals for the new format it was recognized that the application of modern programming and database practices will have significant benefits for nuclear reaction databases, both for those of us engaged in producing this data and those who utilize the data for applications. It was also appreciated that additional benefits would be realized if the low-level data containers used for storing reaction data were general enough to be shared with other nuclear data products, such as EXFOR, RIPL, and ENSDF, so that codes interfacing with these different databases could share the same set of routines for reading and writing data structures. However, it was also acknowledged that adoption of these new tools and capabilities will be difficult without some supporting infrastructure in place to use the new data structure, specifically open source codes to manipulate, search, plot, and process the data, as well as tools to translate data to other formats in current use and to check the data for quality. In order to address this broad set of goals, the SG38 project decided to organize the work around seven different products: 1. Low-level data structures; 2. Top-level reaction hierarchy; 3. Particle properties hierarchy; 4. Visualization, manipulation, and processing tools; 5. API for reading and writing data

  14. Dictionary as Database.

    Painter, Derrick

    1996-01-01

    Discussion of dictionaries as databases focuses on the digitizing of The Oxford English dictionary (OED) and the use of Standard Generalized Mark-Up Language (SGML). Topics include the creation of a consortium to digitize the OED, document structure, relational databases, text forms, sequence, and discourse. (LRW)

  15. RNA FRABASE 2.0: an advanced web-accessible database with the capacity to search the three-dimensional fragments within RNA structures

    Wasik Szymon

    2010-05-01

    Full Text Available Abstract Background Recent discoveries concerning novel functions of RNA, such as RNA interference, have contributed towards the growing importance of the field. In this respect, a deeper knowledge of complex three-dimensional RNA structures is essential to understand their new biological functions. A number of bioinformatic tools have been proposed to explore two major structural databases (PDB, NDB) in order to analyze various aspects of RNA tertiary structures. One of these tools is RNA FRABASE 1.0, the first web-accessible database with an engine for automatic search of 3D fragments within PDB-derived RNA structures. This search is based upon the user-defined RNA secondary structure pattern. In this paper, we present and discuss RNA FRABASE 2.0. This second version of the system represents a major extension of this tool in terms of providing new data and a wide spectrum of novel functionalities. An intuitively operated web server platform enables very fast user-tailored search of three-dimensional RNA fragments, their multi-parameter conformational analysis and visualization. Description RNA FRABASE 2.0 has stored information on 1565 PDB-deposited RNA structures, including all NMR models. The RNA FRABASE 2.0 search engine algorithms operate on the database of the RNA sequences and the new library of RNA secondary structures, coded in the dot-bracket format extended to hold multi-stranded structures and to cover residues whose coordinates are missing in the PDB files. The library of RNA secondary structures (and their graphics) is made available. A high level of efficiency of the 3D search has been achieved by introducing novel tools to formulate advanced searching patterns and to screen highly populated tertiary structure elements. RNA FRABASE 2.0 also stores data and conformational parameters in order to provide "on the spot" structural filters to explore the three-dimensional RNA structures. An instant visualization of the 3D RNA
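
    The dot-bracket coding referred to above can be turned into explicit base-pair indices with a simple stack-based parser. The sketch below handles only the basic single-strand alphabet and is not the extended RNA FRABASE notation.

```python
def dot_bracket_pairs(structure):
    """Return 0-based base-pair index tuples from a plain dot-bracket string.

    Only the basic '(', ')' and '.' alphabet is handled; the extended RNA FRABASE
    coding (multi-stranded structures, missing residues) would need more symbols.
    """
    stack, pairs = [], []
    for i, char in enumerate(structure):
        if char == "(":
            stack.append(i)
        elif char == ")":
            pairs.append((stack.pop(), i))
    return pairs

# A short hairpin: positions 0-3 pair with positions 8-11 (illustrative only).
print(dot_bracket_pairs("((((....))))"))
```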

  16. Hazard Analysis Database Report

    Grams, W H

    2000-01-01

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: Data from t...

  17. Assessment of the structural and functional impact of in-frame mutations of the DMD gene, using the tools included in the eDystrophin online database

    Nicolas Aurélie

    2012-07-01

    Full Text Available Abstract Background Dystrophin is a large essential protein of skeletal and heart muscle. It is a filamentous scaffolding protein with numerous binding domains. Mutations in the DMD gene, which encodes dystrophin, mostly result in the deletion of one or several exons and cause Duchenne (DMD and Becker (BMD muscular dystrophies. The most common DMD mutations are frameshift mutations resulting in an absence of dystrophin from tissues. In-frame DMD mutations are less frequent and result in a protein with partial wild-type dystrophin function. The aim of this study was to highlight structural and functional modifications of dystrophin caused by in-frame mutations. Methods and results We developed a dedicated database for dystrophin, the eDystrophin database. It contains 209 different non frame-shifting mutations found in 945 patients from a French cohort and previous studies. Bioinformatics tools provide models of the three-dimensional structure of the protein at deletion sites, making it possible to determine whether the mutated protein retains the typical filamentous structure of dystrophin. An analysis of the structure of mutated dystrophin molecules showed that hybrid repeats were reconstituted at the deletion site in some cases. These hybrid repeats harbored the typical triple coiled-coil structure of native repeats, which may be correlated with better function in muscle cells. Conclusion This new database focuses on the dystrophin protein and its modification due to in-frame deletions in BMD patients. The observation of hybrid repeat reconstitution in some cases provides insight into phenotype-genotype correlations in dystrophin diseases and possible strategies for gene therapy. The eDystrophin database is freely available: http://edystrophin.genouest.org/.

  18. Categorical database generalization in GIS

    Liu, Y.

    2002-01-01

    Key words: Categorical database, categorical database generalization, Formal data structure, constraints, transformation unit, classification hierarchy, aggregation hierarchy, semantic similarity, data model,

  19. Segmentation of anatomical structures in chest radiographs using supervised methods: a comparative study on a public database

    van Ginneken, Bram; Stegmann, Mikkel Bille; Loog, Marco

    2006-01-01

    classification method that employs a multi-scale filter bank of Gaussian derivatives and a k-nearest-neighbors classifier. The methods have been tested on a publicly available database of 247 chest radiographs, in which all objects have been manually segmented by two human observers. A parameter optimization...

  20. Discovery of a Red-Emitting Li3RbGe8O18:Mn4+ Phosphor in the Alkali-Germanate System: Structural Determination and Electronic Calculations.

    Singh, Satendra Pal; Kim, Minseuk; Park, Woon Bae; Lee, Jin-Woong; Sohn, Kee-Sun

    2016-10-17

    A solid-state combinatorial chemistry approach, which used the A-Ge-O (A = Li, K, Rb) system doped with a small amount of Mn4+ as an activator, was adopted in a search for novel red-emitting phosphors. The A site may have been composed of either a single alkali metal ion or a combination of them. This approach led to the discovery of a novel phosphor in the above system with the chemical formula Li3RbGe8O18:Mn4+. The crystal structure of this novel phosphor was solved via direct methods, and subsequent Rietveld refinement revealed a trigonal structure in the P3̅1m space group. The discovered phosphor is believed to be novel in the sense that neither the crystal structure nor the chemical formula matches any of the prototype structures available in the crystallographic information databases (ICDD or ICSD). The measured photoluminescence intensity that peaked at a wavelength of 667 nm was found to be much higher than the best intensity obtained among all the existing A2Ge4O9 (A = Li, K, Rb) compounds in the alkali-germanate system. An ab initio calculation based on density functional theory (DFT) was conducted to verify the crystal structure model and compare the calculated value of the optical band gap with the experimental results. The optical band gap values obtained from the diffuse reflectance measurement (5.26 eV) and the DFT calculation (4.64 eV) were in very good agreement. The emission wavelength of this phosphor, which lies in the deep red region of the electromagnetic spectrum, may be very useful for increasing the color gamut of LED-based display devices such as ultrahigh-definition television (UHDTV) as per the ITU-R BT.2020-2 recommendations and also for down-converter phosphors that are used in solar-cell applications.

  1. Federal databases

    Welch, M.J.; Welles, B.W.

    1988-01-01

    Accident statistics on all modes of transportation are available as risk assessment analytical tools through several federal agencies. This paper reports on the examination of the accident databases by personal contact with the federal staff responsible for administration of the database programs. This activity, sponsored by the Department of Energy through Sandia National Laboratories, is an overview of the national accident data on highway, rail, air, and marine shipping. For each mode, the definition or reporting requirements of an accident are determined and the method of entering the accident data into the database is established. Availability of the database to others, ease of access, costs, and who to contact were prime questions to each of the database program managers. Additionally, how the agency uses the accident data was of major interest

  2. Database Replication

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even if clients are geographically distributed, provided data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and

  3. Database thinking development in Context of School Education

    Panský, Mikoláš

    2011-01-01

    The term database thinking is understood as a group of competencies that enables working with a database system. Database thinking development is a targeted educational intervention whose expected outcome is the student's ability to work with a database system. The thesis focuses on the purposes, content and methods of database thinking development. The experimental part proposes quantitative metrics for database thinking development. KEYWORDS: Education, Database, Database thinking, Structured Query...

  4. RDD Databases

    National Oceanic and Atmospheric Administration, Department of Commerce — This database was established to oversee documents issued in support of fishery research activities including experimental fishing permits (EFP), letters of...

  5. Snowstorm Database

    National Oceanic and Atmospheric Administration, Department of Commerce — The Snowstorm Database is a collection of over 500 snowstorms dating back to 1900 and updated operationally. Only storms having large areas of heavy snowfall (10-20...

  6. Dealer Database

    National Oceanic and Atmospheric Administration, Department of Commerce — The dealer reporting databases contain the primary data reported by federally permitted seafood dealers in the northeast. Electronic reporting was implemented May 1,...

  7. National database

    Kristensen, Helen Grundtvig; Stjernø, Henrik

    1995-01-01

    Article about a national database for nursing research established at Dansk Institut for Sundheds- og Sygeplejeforskning (the Danish Institute for Health and Nursing Research). The aim of the database is to gather knowledge about research and development activities within nursing.

  8. DMPD: Structure, function and regulation of the Toll/IL-1 receptor adaptor proteins. [Dynamic Macrophage Pathway CSML Database

    Full Text Available PubmedID: 17667936. Title: Structure, function and regulation of the Toll/IL-1 receptor adaptor proteins.

  9. LoopX: A Graphical User Interface-Based Database for Comprehensive Analysis and Comparative Evaluation of Loops from Protein Structures.

    Kadumuri, Rajashekar Varma; Vadrevu, Ramakrishna

    2017-10-01

    Due to their crucial role in function, folding, and stability, protein loops are being targeted for grafting/designing to create novel or alter existing functionality and improve stability and foldability. With a view to facilitating a thorough analysis and effective search options for extracting and comparing loops for sequence and structural compatibility, we developed LoopX, a comprehensively compiled library of sequence and conformational features of ∼700,000 loops from protein structures. The database, equipped with a graphical user interface, is empowered with diverse query tools and search algorithms, with various rendering options to visualize the sequence- and structural-level information along with hydrogen bonding patterns and backbone φ, ψ dihedral angles of both the target and candidate loops. Two new features, (i) conservation of the polar/nonpolar environment and (ii) conservation of sequence and conformation of specific residues within the loops, have also been incorporated in the search and retrieval of compatible loops for a chosen target loop. Thus, the LoopX server not only serves as a database and visualization tool for sequence and structural analysis of protein loops but also aids in extracting and comparing candidate loops for a given target loop based on user-defined search options.
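
    One of the search criteria described, conformational compatibility between target and candidate loops, can be approximated by comparing backbone φ/ψ angles residue by residue. The sketch below is a simplified stand-in for that idea, with an assumed tolerance and toy dihedral values; it is not the LoopX search algorithm itself.

```python
def dihedral_compatible(target, candidate, tolerance_deg=30.0):
    """Crude phi/psi compatibility check between two loops of equal length.

    target, candidate: lists of (phi, psi) backbone dihedrals in degrees.
    Returns True when every candidate dihedral lies within tolerance_deg of the
    corresponding target dihedral, accounting for angle periodicity.
    """
    if len(target) != len(candidate):
        return False
    for (phi_t, psi_t), (phi_c, psi_c) in zip(target, candidate):
        for ref, obs in ((phi_t, phi_c), (psi_t, psi_c)):
            diff = abs((ref - obs + 180.0) % 360.0 - 180.0)  # wrap into [0, 180]
            if diff > tolerance_deg:
                return False
    return True

# Toy three-residue loops; the dihedral values are illustrative only.
target_loop = [(-60.0, -45.0), (-75.0, 150.0), (60.0, 40.0)]
candidate_loop = [(-65.0, -40.0), (-80.0, 160.0), (55.0, 30.0)]
print(dihedral_compatible(target_loop, candidate_loop))
```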

  10. An XCT image database system

    Komori, Masaru; Minato, Kotaro; Koide, Harutoshi; Hirakawa, Akina; Nakano, Yoshihisa; Itoh, Harumi; Torizuka, Kanji; Yamasaki, Tetsuo; Kuwahara, Michiyoshi.

    1984-01-01

    In this paper, an expansion of the X-ray CT (XCT) examination history database into an XCT image database is discussed. The XCT examination history database has been constructed and used for daily examination and investigation in our hospital. This database consists of alpha-numeric information (locations, diagnosis and so on) for more than 15,000 cases, and for some of them we add tree-structured image data, which provides flexibility for various types of image data. This database system is written in the MUMPS database manipulation language. (author)

  11. JICST Factual Database(2)

    Araki, Keisuke

    A computer program that builds atom-bond connection tables from nomenclature has been developed. Chemical substances are input together with their nomenclature and various trivial names or experimental code numbers. The chemical structures in the database are stored stereospecifically and can be searched and displayed according to stereochemistry. Source data come from Japanese laws and regulations, the US RTECS and other sources. The database plays a central role within the integrated fact database service of JICST and makes interrelational retrieval possible.

  12. Practical use of chemical shift databases for protein solid-state NMR: 2D chemical shift maps and amino-acid assignment with secondary-structure information

    Fritzsching, K. J.; Yang, Y.; Schmidt-Rohr, K.; Hong Mei

    2013-01-01

    We introduce a Python-based program that utilizes the large database of 13C and 15N chemical shifts in the Biological Magnetic Resonance Bank to rapidly predict the amino acid type and secondary structure from correlated chemical shifts. The program, called PACSYlite Unified Query (PLUQ), is designed to help assign peaks obtained from 2D 13C–13C, 15N–13C, or 3D 15N–13C–13C magic-angle-spinning correlation spectra. We show secondary-structure specific 2D 13C–13C correlation maps of all twenty amino acids, constructed from a chemical shift database of 262,209 residues. The maps reveal interesting conformation-dependent chemical shift distributions and facilitate searching of correlation peaks during amino-acid type assignment. Based on these correlations, PLUQ outputs the most likely amino acid types and the associated secondary structures from inputs of experimental chemical shifts. We test the assignment accuracy using four high-quality protein structures. Based on only the Cα and Cβ chemical shifts, the highest-ranked PLUQ assignments were 40–60 % correct in both the amino-acid type and the secondary structure. For three input chemical shifts (CO–Cα–Cβ or N–Cα–Cβ), the first-ranked assignments were correct for 60 % of the residues, while within the top three predictions, the correct assignments were found for 80 % of the residues. PLUQ and the chemical shift maps are expected to be useful at the first stage of sequential assignment, for combination with automated sequential assignment programs, and for highly disordered proteins for which secondary structure analysis is the main goal of structure determination.

  13. Practical use of chemical shift databases for protein solid-state NMR: 2D chemical shift maps and amino-acid assignment with secondary-structure information

    Fritzsching, K. J.; Yang, Y.; Schmidt-Rohr, K.; Hong Mei, E-mail: mhong@iastate.edu [Iowa State University, Department of Chemistry (United States)

    2013-06-15

    We introduce a Python-based program that utilizes the large database of 13C and 15N chemical shifts in the Biological Magnetic Resonance Bank to rapidly predict the amino acid type and secondary structure from correlated chemical shifts. The program, called PACSYlite Unified Query (PLUQ), is designed to help assign peaks obtained from 2D 13C–13C, 15N–13C, or 3D 15N–13C–13C magic-angle-spinning correlation spectra. We show secondary-structure specific 2D 13C–13C correlation maps of all twenty amino acids, constructed from a chemical shift database of 262,209 residues. The maps reveal interesting conformation-dependent chemical shift distributions and facilitate searching of correlation peaks during amino-acid type assignment. Based on these correlations, PLUQ outputs the most likely amino acid types and the associated secondary structures from inputs of experimental chemical shifts. We test the assignment accuracy using four high-quality protein structures. Based on only the Cα and Cβ chemical shifts, the highest-ranked PLUQ assignments were 40–60 % correct in both the amino-acid type and the secondary structure. For three input chemical shifts (CO–Cα–Cβ or N–Cα–Cβ), the first-ranked assignments were correct for 60 % of the residues, while within the top three predictions, the correct assignments were found for 80 % of the residues. PLUQ and the chemical shift maps are expected to be useful at the first stage of sequential assignment, for combination with automated sequential assignment programs, and for highly disordered proteins for which secondary structure analysis is the main goal of structure determination.
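
    The assignment idea described in the two records above can be illustrated with a much smaller lookup: ranking amino-acid types by the distance between observed Cα/Cβ shifts and typical database values. The mean shifts below are rough illustrative numbers, not the BMRB-derived statistics used by PLUQ.

```python
# Approximate mean Calpha/Cbeta chemical shifts (ppm); illustrative values only,
# not the BMRB-derived statistics used by PLUQ.
MEAN_SHIFTS = {
    "Ala": (52.6, 19.0),
    "Gly": (45.3, None),   # Gly has no Cbeta
    "Ser": (58.6, 63.9),
    "Thr": (62.0, 69.6),
    "Val": (62.4, 32.7),
    "Leu": (55.0, 42.3),
}

def rank_residue_types(ca_obs, cb_obs):
    """Rank amino-acid types by Euclidean distance to observed Ca/Cb shifts."""
    scored = []
    for aa, (ca_ref, cb_ref) in MEAN_SHIFTS.items():
        if cb_ref is None:
            dist = abs(ca_obs - ca_ref) if cb_obs is None else float("inf")
        else:
            dist = ((ca_obs - ca_ref) ** 2 + (cb_obs - cb_ref) ** 2) ** 0.5
        scored.append((dist, aa))
    return sorted(scored)

# Observed cross-peak at Ca = 62.1 ppm, Cb = 69.2 ppm: Thr should rank first.
for dist, aa in rank_residue_types(62.1, 69.2)[:3]:
    print(f"{aa}: {dist:.1f} ppm")
```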

  14. IAEA Coordinated Research Project on the Establishment of a Material Properties Database for Irradiated Core Structural Components for Continued Safe Operation and Lifetime Extension of Ageing Research Reactors

    Borio Di Tigliole, A.; Schaaf, Van Der; Barnea, Y.; Bradley, E.; Morris, C.; Rao, D. V. H. [Research Reactor Section, Vienna (Austria); Shokr, A. [Research Reactor Safety Section, Vienna (Austria); Zeman, A. [International Atomic Energy Agency, Vienna (Austria)

    2013-07-01

    Today more than 50% of operating Research Reactors (RRs) are over 45 years old. Thus, ageing management is one of the most important issues to face in order to ensure availability (including life extension), reliability and safe operation of these facilities for the future. Management of the ageing process requires, amongst others, the predictions for the behavior of structural materials of primary components subjected to irradiation such as reactor vessel and core support structures, many of which are extremely difficult or impossible to replace. In fact, age-related material degradation mechanisms resulted in high profile, unplanned and lengthy shutdowns and unique regulatory processes of relicensing the facilities in recent years. These could likely have been prevented by utilizing available data for the implementation of appropriate maintenance and surveillance programmes. This IAEA Coordinated Research Project (CRP) will provide an international forum to establish a material properties Database for irradiated core structural materials and components. It is expected that this Database will be used by research reactor operators and regulators to help predict ageing related degradation. This would be useful to minimize unpredicted outages due to ageing processes of primary components and to mitigate lengthy and costly shutdowns. The Database will be a compilation of data from RRs operators' inputs, comprehensive literature reviews and experimental data from RRs. Moreover, the CRP will specify further activities needed to be addressed in order to bridge the gaps in the new created Database, for potential follow-on activities. As per today, 13 Member States (MS) confirmed their agreement to contribute to the development of the Database, covering a wide number of materials and properties. The present publication incorporates two parts: the first part includes details on the pre-CRP Questionnaire, including the conclusions drawn from the answers received from

  15. IAEA Coordinated Research Project on the Establishment of a Material Properties Database for Irradiated Core Structural Components for Continued Safe Operation and Lifetime Extension of Ageing Research Reactors

    Borio Di Tigliole, A.; Schaaf, Van Der; Barnea, Y.; Bradley, E.; Morris, C.; Rao, D. V. H.; Shokr, A.; Zeman, A.

    2013-01-01

    Today more than 50% of operating Research Reactors (RRs) are over 45 years old. Thus, ageing management is one of the most important issues to face in order to ensure availability (including life extension), reliability and safe operation of these facilities for the future. Management of the ageing process requires, amongst others, the predictions for the behavior of structural materials of primary components subjected to irradiation such as reactor vessel and core support structures, many of which are extremely difficult or impossible to replace. In fact, age-related material degradation mechanisms resulted in high profile, unplanned and lengthy shutdowns and unique regulatory processes of relicensing the facilities in recent years. These could likely have been prevented by utilizing available data for the implementation of appropriate maintenance and surveillance programmes. This IAEA Coordinated Research Project (CRP) will provide an international forum to establish a material properties Database for irradiated core structural materials and components. It is expected that this Database will be used by research reactor operators and regulators to help predict ageing related degradation. This would be useful to minimize unpredicted outages due to ageing processes of primary components and to mitigate lengthy and costly shutdowns. The Database will be a compilation of data from RRs operators' inputs, comprehensive literature reviews and experimental data from RRs. Moreover, the CRP will specify further activities needed to be addressed in order to bridge the gaps in the new created Database, for potential follow-on activities. As per today, 13 Member States (MS) confirmed their agreement to contribute to the development of the Database, covering a wide number of materials and properties. The present publication incorporates two parts: the first part includes details on the pre-CRP Questionnaire, including the conclusions drawn from the answers received from the MS

  16. Experiment Databases

    Vanschoren, Joaquin; Blockeel, Hendrik

    Next to running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queriable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.

  17. Improving machine operation management efficiency via improving the vehicle park structure and using the production operation information database

    Koptev, V. Yu

    2017-02-01

    The work presents the results of studying basic interconnected criteria for individual equipment units of the transport network machine fleet, depending on production and mining factors, with the aim of improving transport system management. Justifying the selection of a control system requires new methodologies and models, augmented with stability and transport flow criteria and accounting for the dynamics of mining work development on mining sites. A necessary condition is accounting for the technical and operating parameters related to vehicle operation. Modern open-pit mining dispatching systems must include such an information database. An algorithm for forming a machine fleet is presented, based on solving a multi-variant task of defining reasonable operating features of a machine working as part of a complex. The proposals in the work may apply to mining machines (drilling equipment, excavators) and construction equipment (bulldozers, cranes, pile-drivers), city transport and other types of production activities that use a machine fleet.

  18. 13C-NMR chemical shift databases as a quick tool to evaluate structural models of humic substances

    Nyrop Albers, Christian; Hansen, Poul Erik

    2010-01-01

    Models for humic and fulvic acids are discussed based on 13C liquid-state NMR spectra combined with results from elemental analysis and titration studies. The analysis of NMR spectra is based on a full reconstruction of the NMR spectrum done with the help of 13C-NMR databases by adding up chemical... side missing structural elements in the models can be suggested. A number of proposed structures for humic and fulvic acids are discussed based on the above analysis.
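
    The spectrum-reconstruction step mentioned above can be imitated by summing line shapes placed at database-predicted shifts, weighted by the abundance of each structural fragment in the model. The sketch below uses Gaussian lines and illustrative shift values; it is not the authors' reconstruction procedure.

```python
import numpy as np

def synthetic_spectrum(shifts, weights, ppm_axis, linewidth=2.0):
    """Sum Gaussian lines at predicted 13C shifts to build a model spectrum.

    shifts, weights: predicted chemical shifts (ppm) of structural fragments
    and their relative abundances in the model structure.
    """
    spectrum = np.zeros_like(ppm_axis)
    for shift, weight in zip(shifts, weights):
        spectrum += weight * np.exp(-0.5 * ((ppm_axis - shift) / linewidth) ** 2)
    return spectrum

ppm = np.linspace(0, 220, 2201)
# Illustrative fragment shifts: aliphatic, methoxyl, aromatic and carboxyl carbons.
model = synthetic_spectrum([30.0, 56.0, 130.0, 175.0], [3.0, 1.0, 2.0, 1.5], ppm)
print(f"Model spectrum maximum near {ppm[model.argmax()]:.0f} ppm")
```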

  19. Hazard Analysis Database Report

    GAULT, G.W.

    1999-10-13

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for the Tank Waste Remediation System (TWRS) Final Safety Analysis Report (FSAR). The FSAR is part of the approved TWRS Authorization Basis (AB). This document describes, identifies, and defines the contents and structure of the TWRS FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The TWRS Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The database supports the preparation of Chapters 3, 4, and 5 of the TWRS FSAR and the USQ process and consists of two major, interrelated data sets: (1) Hazard Evaluation Database--Data from the results of the hazard evaluations; and (2) Hazard Topography Database--Data from the system familiarization and hazard identification.

  20. Human Performance Event Database

    Trager, E. A.

    1998-01-01

    The purpose of this paper is to describe several aspects of a Human Performance Event Database (HPED) that is being developed by the Nuclear Regulatory Commission. These include the background, the database structure and basis for the structure, the process for coding and entering event records, the results of preliminary analyses of information in the database, and plans for the future. In 1992, the Office for Analysis and Evaluation of Operational Data (AEOD) within the NRC decided to develop a database for information on human performance during operating events. The database was needed to help classify and categorize the information to help feedback operating experience information to licensees and others. An NRC interoffice working group prepared a list of human performance information that should be reported for events and the list was based on the Human Performance Investigation Process (HPIP) that had been developed by the NRC as an aid in investigating events. The structure of the HPED was based on that list. The HPED currently includes data on events described in augmented inspection team (AIT) and incident investigation team (IIT) reports from 1990 through 1996, AEOD human performance studies from 1990 through 1993, recent NRR special team inspections, and licensee event reports (LERs) that were prepared for the events. (author)

  1. Relationship Between Structural Alerts in NSAIDs and Idiosyncratic Hepatotoxicity : An Analysis of Spontaneous Report Data from the WHO Database

    Jessurun, Naomi; van Puijenbroek, Eugene

    2015-01-01

    BACKGROUND: Idiosyncratic drug reactions such as hepatotoxicity and blood dyscrasias represent one of the major causes of drug withdrawal from the market. According to the reactive metabolite (RM) concept, this may be due to the metabolic activation of structural alerts (SAs), functionalities in the

  2. Structural and sequence variants in patients with Silver-Russell syndrome or similar features-Curation of a disease database

    Tümer, Zeynep; López-Hernández, Julia Angélica; Netchine, Irène

    2018-01-01

    data of these patients. The clinical features are scored according to the Netchine-Harbison clinical scoring system (NH-CSS), which has recently been accepted as standard by consensus. The structural and sequence variations are reviewed and where necessary redescribed according to recent...

  3. Identification of Five Structurally Unrelated Quorum-Sensing Inhibitors of Pseudomonas aeruginosa from a Natural-Derivative Database

    Tan, Sean Yang-Yi; Chua, Song-Lin; Chen, Yicai

    2013-01-01

    QSI candidates. Three-dimensional structures of 3,040 natural compounds and their derivatives were obtained, after which molecular docking was performed using the QS receptor LasR as a target. Based on docking scores and molecular masses, 22 compounds were purchased to determine their efficacies...

  4. Analysis of Wave Reflection from Structures with Berms Through an Extensive Database and 2DV Numerical Modelling

    Zanuttigh, Barbara; van der Meer, Jentsje W.; Andersen, Thomas Lykke

    2009-01-01

    This paper analyses wave reflection from permeable structures with a berm, including reshaping cases. Data are obtained from recent wave flume experiments and from 2DV numerical simulations performed with the COBRAS-UC code. The objectives of this research were to identify the proper representation...

  5. Basement structure of the United Arab Emirates derived from an analysis of regional gravity and aeromagnetic database

    Ali, M. Y.; Fairhead, J. D.; Green, C. M.; Noufal, A.

    2017-08-01

    Gravity and aeromagnetic data covering the whole territory of the United Arab Emirates (UAE) have been used to evaluate both shallow and deep geological structures, in particular the depth to basement since it is not imaged by seismic data anywhere within the UAE. Thus, the aim has been to map the basement so that its structure can help to assess its control on the distribution of hydrocarbons within the UAE. Power spectrum analysis reveals gravity and magnetic signatures to have some similarities, in having two main density/susceptibility interfaces widely separated in depth such that regional-residual anomaly separation could effectively be undertaken. The upper density/susceptibility interface occurs at a depth of about 1.0 km while the deeper interface varies in depth throughout the UAE. For gravity, this deeper interface is assumed to be due to the combined effect of lateral changes in density structures within the sediments and in depth of basement while for magnetics it is assumed the sediments have negligible susceptibility and the anomalies unrelated to the volcanic/magmatic bodies result from only changes in depth to basement. The power spectrum analysis over the suspect volcanic/magmatic bodies indicates they occur at 5 km depth. The finite tilt-depth and finite local wavenumber methods were used to estimate depth to source and only depths that agree to within 10% of each other were used to generate the depth to basement map. This depth to basement map, to the west of the UAE-Oman Mountains, varies in depth from 5 km to in excess of 15 km depth and is able to structurally account for the location of the shear structures, seen in the residual magnetic data, and the location of the volcanic/magmatic centres relative to a set of elongate N-S to NE-SW trending basement highs. The majority of oilfields in the UAE are located within these basement highs. Therefore, the hydrocarbon distribution in the UAE basin appears to be controlled by the location of the
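
    For readers unfamiliar with the tilt-depth idea used above, one standard formulation (quoted here as background, not taken from the article itself) defines the tilt angle of the total magnetic field T and relates it to source depth for a vertical contact:

```latex
\theta = \arctan\!\left(\frac{\partial T/\partial z}
        {\sqrt{\left(\partial T/\partial x\right)^{2}+\left(\partial T/\partial y\right)^{2}}}\right),
\qquad
\tan\theta = \frac{h}{z_{c}} \quad \text{(vertical contact)},
```

    so the depth to magnetic basement z_c can be read off as the horizontal distance h between the theta = 0 and theta = +/-45 degree contours.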

  6. Inelastic seismic behavior of post-installed anchors for nuclear safety related structures: Generation of experimental database

    Mahadik, Vinay, E-mail: vinay.mahadik@iwb.uni-stuttgart.de; Sharma, Akanshu; Hofmann, Jan

    2016-02-15

    Highlights: • Experiments for evaluating seismic behavior of anchors were performed. • Two undercut anchor products in use in nuclear facilities were considered. • Monotonic tension, shear and cycling tension tests at different crack widths. • Crack cycling tests at constant, in-phase and out-of phase tension loads. • Characteristics for the two anchors as a function of crack width were identified. - Abstract: Post installed (PI) anchors are often employed for connections between concrete structure and components or systems in nuclear power plants (NPP) and related facilities. Standardized practices for nuclear related structures demand stringent criteria, which an anchor has to satisfy in order to qualify for use in NPP related structures. In NPP and related facilities, the structure–component interaction in the event of an earthquake depends on the inelastic behavior of the concrete structure, the component system and also the anchorage system that connects them. For analysis, anchorages are usually assumed to be rigid. Under seismic actions, however, it is known that anchors may undergo significant plastic displacement and strength degradation. Analysis of structure–component interaction under seismic loads calls for numerical models simulating inelastic behavior of anchorage systems. A testing program covering different seismic loading scenarios in a reasonably conservative manner is required to establish a basis for generating numerical models of anchorage systems. Currently there is a general lack of modeling techniques to consider the inelastic behavior of anchorages in structure–component interaction under seismic loads. In this work, in view of establishing a basis for development of numerical models simulating the inelastic behavior of anchors, seismic tests on two different undercut anchors qualified for their use in NPP related structures were carried out. The test program was primarily based on the DIBt-KKW-Leitfaden (2010) guidelines

  7. Structural analysis of eight novel and 112 previously reported missense mutations in the interactive FXI mutation database reveals new insight on FXI deficiency.

    Saunders, Rebecca E; Shiltagh, Nuha; Gomez, Keith; Mellars, Gillian; Cooper, Carolyn; Perry, David J; Tuddenham, Edward G; Perkins, Stephen J

    2009-08-01

    Factor XI (FXI) functions in blood coagulation. FXI is composed of four apple (Ap) domains and a serine protease (SP) domain. Deficiency of FXI leads to an injury-related bleeding disorder, which is remarkable for the lack of correlation between bleeding symptoms and FXI coagulant activity (FXI:C). The number of mutations previously reported in our interactive web database (http://www.FactorXI.org) is now significantly increased to 183 through our new patient studies and from literature surveys. Eight novel missense mutations give a total of 120 throughout the FXI gene (F11). The most abundant defects in FXI are revealed to be those from low-protein plasma levels (Type I: CRM-) that originate from protein misfolding, rather than from functional defects (Type II: CRM+). A total of 70 Ap missense mutations were analysed using a consensus Ap domain structure generated from the FXI dimer crystal structure. This showed that all parts of the Ap domain were affected. The 47 SP missense mutations were also distributed throughout the SP domain structure. The periphery of the Ap beta-sheet structure is sensitive to structural perturbation caused by residue changes throughout the Ap domain, yet this beta-sheet is crucial for FXI dimer formation. Residues located at the Ap4:Ap4 interface in the dimer are much less directly involved. We conclude that the abundance of Type I defects in FXI results from the sensitivity of the Ap domain folding to residue changes within this, and discuss how structural knowledge of the mutations improves our understanding of FXI deficiencies.

  8. The CATH database

    Knudsen Michael

    2010-02-01

    Full Text Available Abstract The CATH database provides hierarchical classification of protein domains based on their folding patterns. Domains are obtained from protein structures deposited in the Protein Data Bank and both domain identification and subsequent classification use manual as well as automated procedures. The accompanying website http://www.cathdb.info provides an easy-to-use entry to the classification, allowing for both browsing and downloading of data. Here, we give a brief review of the database, its corresponding website and some related tools.

  9. Principal Component Analysis Coupled with Artificial Neural Networks—A Combined Technique Classifying Small Molecular Structures Using a Concatenated Spectral Database

    Mihail Lucian Birsa

    2011-10-01

    Full Text Available In this paper we present several expert systems that predict the class identity of the modeled compounds, based on a preprocessed spectral database. The expert systems were built using Artificial Neural Networks (ANN and are designed to predict if an unknown compound has the toxicological activity of amphetamines (stimulant and hallucinogen, or whether it is a nonamphetamine. In attempts to circumvent the laws controlling drugs of abuse, new chemical structures are very frequently introduced on the black market. They are obtained by slightly modifying the controlled molecular structures by adding or changing substituents at various positions on the banned molecules. As a result, no substance similar to those forming a prohibited class may be used nowadays, even if it has not been specifically listed. Therefore, reliable, fast and accessible systems capable of modeling and then identifying similarities at molecular level, are highly needed for epidemiological, clinical, and forensic purposes. In order to obtain the expert systems, we have preprocessed a concatenated spectral database, representing the GC-FTIR (gas chromatography-Fourier transform infrared spectrometry and GC-MS (gas chromatography-mass spectrometry spectra of 103 forensic compounds. The database was used as input for a Principal Component Analysis (PCA. The scores of the forensic compounds on the main principal components (PCs were then used as inputs for the ANN systems. We have built eight PC-ANN systems (principal component analysis coupled with artificial neural network with a different number of input variables: 15 PCs, 16 PCs, 17 PCs, 18 PCs, 19 PCs, 20 PCs, 21 PCs and 22 PCs. The best expert system was found to be the ANN network built with 18 PCs, which accounts for an explained variance of 77%. This expert system has the best sensitivity (a rate of classification C = 100% and a rate of true positives TP = 100%, as well as a good selectivity (a rate of true negatives TN
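
    As a rough illustration of the PC-ANN pipeline described above (PCA scores fed into a small feed-forward network), the sketch below uses scikit-learn with randomly generated stand-in data; the spectra, labels, hidden-layer size and iteration count are assumptions for illustration, not the values used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical stand-in for the concatenated GC-FTIR / GC-MS spectra:
# 103 compounds x 1000 concatenated spectral channels, with binary labels
# (1 = amphetamine-type, 0 = non-amphetamine).
X = rng.normal(size=(103, 1000))
y = rng.integers(0, 2, size=103)

# PCA scores (here 18 components, the best-performing choice reported above)
# are fed into a small feed-forward neural network classifier.
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=18),
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```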

  10. Stackfile Database

    deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher

    2013-01-01

    This software provides storage retrieval and analysis functionality for managing satellite altimetry data. It improves the efficiency and analysis capabilities of existing database software with improved flexibility and documentation. It offers flexibility in the type of data that can be stored. There is efficient retrieval either across the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as Topex and Jason. It was, however, designed to work with a wide variety of satellite measurement data [e.g., Gravity Recovery And Climate Experiment -- GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.

  11. Database Application Schema Forensics

    Hector Quintus Beyers

    2014-12-01

    Full Text Available The application schema layer of a Database Management System (DBMS can be modified to deliver results that may warrant a forensic investigation. Table structures can be corrupted by changing the metadata of a database or operators of the database can be altered to deliver incorrect results when used in queries. This paper will discuss categories of possibilities that exist to alter the application schema with some practical examples. Two forensic environments are introduced where a forensic investigation can take place in. Arguments are provided why these environments are important. Methods are presented how these environments can be achieved for the application schema layer of a DBMS. A process is proposed on how forensic evidence should be extracted from the application schema layer of a DBMS. The application schema forensic evidence identification process can be applied to a wide range of forensic settings.

  12. Database Vs Data Warehouse

    2007-01-01

    Full Text Available Data warehouse technology includes a set of concepts and methods that offer users useful information for decision making. The necessity to build a data warehouse arises from the need to improve the quality of information in the organization. Data coming from different sources, in a variety of forms - both structured and unstructured - are filtered according to business rules and integrated into a single large data collection. Using informatics solutions, managers have understood that the data stored in operational systems, including databases, are an informational gold mine that must be exploited. Data warehouses have been developed to answer the increasing demand for complex analysis, which could not be properly satisfied with operational databases. The present paper emphasizes some of the criteria that application developers can use in order to choose between a database solution and a data warehouse solution.

  13. Extending Database Integration Technology

    Buneman, Peter

    1999-01-01

    Formal approaches to the semantics of databases and database languages can have immediate and practical consequences in extending database integration technologies to include a vastly greater range...

  14. BioMagResBank database with sets of experimental NMR constraints corresponding to the structures of over 1400 biomolecules deposited in the Protein Data Bank

    Doreleijers, Jurgen F.; Mading, Steve; Maziuk, Dimitri; Sojourner, Kassandra; Yin Lei; Zhu Jun; Markley, John L.; Ulrich, Eldon L.

    2003-01-01

    Experimental constraints associated with NMR structures are available from the Protein Data Bank (PDB) in the form of 'Magnetic Resonance' (MR) files. These files contain multiple types of data concatenated without boundary markers and are difficult to use for further research. Reported here are the results of a project initiated to annotate, archive, and disseminate these data to the research community from a searchable resource in a uniform format. The MR files from a set of 1410 NMR structures were analyzed and their original constituent data blocks annotated as to data type using a semi-automated protocol. A new software program called Wattos was then used to parse and archive the data in a relational database. From the total number of MR file blocks annotated as constraints, it proved possible to parse 84% (3337/3975). The constraint lists that were parsed correspond to three data types (2511 distance, 788 dihedral angle, and 38 residual dipolar couplings lists) from the three most popular software packages used in NMR structure determination: XPLOR/CNS (2520 lists), DISCOVER (412 lists), and DYANA/DIANA (405 lists). These constraints were then mapped to a developmental version of the BioMagResBank (BMRB) data model. A total of 31 data types originating from 16 programs have been classified, with the NOE distance constraint being the most commonly observed. The results serve as a model for the development of standards for NMR constraint deposition in computer-readable form. The constraints are updated regularly and are available from the BMRB web site (http://www.bmrb.wisc.edu)

  15. Moving Observer Support for Databases

    Bukauskas, Linas

    Interactive visual data explorations impose rigid requirements on database and visualization systems. Systems that visualize huge amounts of data tend to request large amounts of memory resources and heavily use the CPU to process and visualize data. Current systems employ a loosely coupled...... architecture to exchange data between database and visualization. Thus, the interaction of the visualizer and the database is kept to the minimum, which most often leads to superfluous data being passed from database to visualizer. This Ph.D. thesis presents a novel tight coupling of database and visualizer....... The thesis discusses the VR-tree, an extension of the R-tree that enables observer relative data extraction. To support incremental observer position relative data extraction the thesis proposes the Volatile Access Structure (VAST). VAST is a main memory structure that caches nodes of the VR-tree. VAST...

  16. Improving decoy databases for protein folding algorithms

    Lindsey, Aaron

    2014-01-01

    Copyright © 2014 ACM. Predicting protein structures and simulating protein folding are two of the most important problems in computational biology today. Simulation methods rely on a scoring function to distinguish the native structure (the most energetically stable) from non-native structures. Decoy databases are collections of non-native structures used to test and verify these functions. We present a method to evaluate and improve the quality of decoy databases by adding novel structures and removing redundant structures. We test our approach on 17 different decoy databases of varying size and type and show significant improvement across a variety of metrics. We also test our improved databases on a popular modern scoring function and show that they contain a greater number of native-like structures than the original databases, thereby producing a more rigorous database for testing scoring functions.

  17. An examination of intrinsic errors in electronic structure methods using the Environmental Molecular Sciences Laboratory computational results database and the Gaussian-2 set

    Feller, D.; Peterson, K.A.

    1998-01-01

    The Gaussian-2 (G2) collection of atoms and molecules has been studied with Hartree–Fock and correlated levels of theory, ranging from second-order perturbation theory to coupled cluster theory with noniterative inclusion of triple excitations. By exploiting the systematic convergence properties of the correlation consistent family of basis sets, complete basis set limits were estimated for a large number of the G2 energetic properties. Deviations with respect to experimentally derived energy differences corresponding to rigid molecules were obtained for 15 basis set/method combinations, as well as the estimated complete basis set limit. The latter values are necessary for establishing the intrinsic error for each method. In order to perform this analysis, the information generated in the present study was combined with the results of many previous benchmark studies in an electronic database, where it is available for use by other software tools. Such tools can assist users of electronic structure codes in making appropriate basis set and method choices that will increase the likelihood of achieving their accuracy goals without wasteful expenditures of computer resources. copyright 1998 American Institute of Physics
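
    The complete-basis-set (CBS) limits mentioned above are typically obtained by extrapolating over the cardinal number n of the correlation-consistent basis sets (n = 2, 3, 4 for cc-pVDZ/TZ/QZ). One commonly used exponential form, given here as background rather than as the authors' exact protocol, is:

```latex
E(n) \;=\; E_{\mathrm{CBS}} \;+\; B\,e^{-Cn},
```

    with E_CBS, B and C fitted to three successive basis-set energies; the fitted E_CBS then serves as the reference against which each method's intrinsic error is measured.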

  18. An automated system designed for large scale NMR data deposition and annotation: application to over 600 assigned chemical shift data entries to the BioMagResBank from the Riken Structural Genomics/Proteomics Initiative internal database

    Kobayashi, Naohiro; Harano, Yoko; Tochio, Naoya; Nakatani, Eiichi; Kigawa, Takanori; Yokoyama, Shigeyuki; Mading, Steve; Ulrich, Eldon L.; Markley, John L.; Akutsu, Hideo; Fujiwara, Toshimichi

    2012-01-01

    Biomolecular NMR chemical shift data are key information for the functional analysis of biomolecules and the development of new techniques for NMR studies utilizing chemical shift statistical information. Structural genomics projects are major contributors to the accumulation of protein chemical shift information. The management of the large quantities of NMR data generated by each project in a local database and the transfer of the data to the public databases are still formidable tasks because of the complicated nature of NMR data. Here we report an automated and efficient system developed for the deposition and annotation of a large number of data sets including 1H, 13C and 15N resonance assignments used for the structure determination of proteins. We have demonstrated the feasibility of our system by applying it to over 600 entries from the internal database generated by the RIKEN Structural Genomics/Proteomics Initiative (RSGI) to the public database, BioMagResBank (BMRB). We have assessed the quality of the deposited chemical shifts by comparing them with those predicted from the PDB coordinate entry for the corresponding protein. The same comparison for other matched BMRB/PDB entries deposited from 2001–2011 has been carried out and the results suggest that the RSGI entries greatly improved the quality of the BMRB database. Since the entries include chemical shifts acquired under strikingly similar experimental conditions, these NMR data can be expected to be a promising resource to improve current technologies as well as to develop new NMR methods for protein studies.
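
    A minimal sketch of the kind of deposited-versus-predicted shift comparison mentioned above is given below; the shift values are invented and the outlier rule is an arbitrary illustration, not the BMRB/RSGI procedure.

```python
import numpy as np

# Hypothetical deposited vs. predicted carbonyl 13C shifts (ppm) for one entry;
# in practice these would come from a BMRB entry and a structure-based predictor.
deposited = np.array([175.2, 176.8, 174.9, 177.3, 175.6])
predicted = np.array([175.0, 177.1, 175.4, 176.8, 175.9])

diff = deposited - predicted
rmsd = np.sqrt(np.mean(diff ** 2))
outliers = np.where(np.abs(diff) > 3 * diff.std())[0]  # illustrative cutoff

print(f"RMSD = {rmsd:.2f} ppm, possible outliers at indices {outliers.tolist()}")
```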

  19. Combining electronic structure and many-body theory with large databases: A method for predicting the nature of 4f states in Ce compounds

    Herper, H. C.; Ahmed, T.; Wills, J. M.; Di Marco, I.; Björkman, T.; Iuşan, D.; Balatsky, A. V.; Eriksson, O.

    2017-08-01

    Recent progress in materials informatics has opened up the possibility of a new approach to accessing properties of materials in which one assays the aggregate properties of a large set of materials within the same class in addition to a detailed investigation of each compound in that class. Here we present a large scale investigation of electronic properties and correlated magnetism in Ce-based compounds accompanied by a systematic study of the electronic structure and 4f-hybridization function of a large body of Ce compounds. We systematically study the electronic structure and 4f-hybridization function of a large body of Ce compounds with the goal of elucidating the nature of the 4f states and their interrelation with the measured Kondo energy in these compounds. The hybridization function has been analyzed for more than 350 data sets (being part of the IMS database) of cubic Ce compounds using electronic structure theory that relies on a full-potential approach. We demonstrate that the strength of the hybridization function, evaluated in this way, allows us to draw precise conclusions about the degree of localization of the 4f states in these compounds. The theoretical results are entirely consistent with all experimental information, relevant to the degree of 4f localization for all investigated materials. Furthermore, a more detailed analysis of the electronic structure and the hybridization function allows us to make precise statements about Kondo correlations in these systems. The calculated hybridization functions, together with the corresponding density of states, reproduce the expected exponential behavior of the observed Kondo temperatures and prove a consistent trend in real materials. This trend allows us to predict which systems may be correctly identified as Kondo systems. A strong anticorrelation between the size of the hybridization function and the volume of the systems has been observed. The information entropy for this set of systems is
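
    As background for the exponential relation between hybridization and Kondo temperature invoked above, a textbook single-impurity Anderson-model estimate (stated here as an assumption, not a formula quoted from the article) is:

```latex
k_{\mathrm{B}}T_{\mathrm{K}} \;\sim\; D\,\exp\!\left(-\frac{\pi\,|\epsilon_{f}|}{2\Delta}\right),
\qquad \Delta = \pi\,\rho(E_{\mathrm{F}})\,|V|^{2},
```

    where D is an effective bandwidth, epsilon_f the 4f level position and Delta the hybridization strength; this makes explicit why the Kondo temperature is exponentially sensitive to the calculated hybridization function.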

  20. The magnet components database system

    Baggett, M.J.; Leedy, R.; Saltmarsh, C.; Tompkins, J.C.

    1990-01-01

    The philosophy, structure, and usage of MagCom, the SSC magnet components database, are described. The database has been implemented in Sybase (a powerful relational database management system) on a UNIX-based workstation at the Superconducting Super Collider Laboratory (SSCL); magnet project collaborators can access the database via network connections. The database was designed to contain the specifications and measured values of important properties for major materials, plus configuration information (specifying which individual items were used in each cable, coil, and magnet) and the test results on completed magnets. These data will facilitate the tracking and control of the production process as well as the correlation of magnet performance with the properties of its constituents. 3 refs., 10 figs

  1. The magnet components database system

    Baggett, M.J.; Leedy, R.; Saltmarsh, C.; Tompkins, J.C.

    1990-01-01

    The philosophy, structure, and usage of MagCom, the SSC magnet components database, are described. The database has been implemented in Sybase (a powerful relational database management system) on a UNIX-based workstation at the Superconducting Super Collider Laboratory (SSCL); magnet project collaborators can access the database via network connections. The database was designed to contain the specifications and measured values of important properties for major materials, plus configuration information (specifying which individual items were used in each cable, coil, and magnet) and the test results on completed magnets. The data will facilitate the tracking and control of the production process as well as the correlation of magnet performance with the properties of its constituents. 3 refs., 9 figs

  2. Improving decoy databases for protein folding algorithms

    Lindsey, Aaron; Yeh, Hsin-Yi (Cindy); Wu, Chih-Peng; Thomas, Shawna; Amato, Nancy M.

    2014-01-01

    energetically stable) from non-native structures. Decoy databases are collections of non-native structures used to test and verify these functions. We present a method to evaluate and improve the quality of decoy databases by adding novel structures and removing

  3. Quantitative Study and Structure Visualization of Scientific Publications in the Field of Information Management in Web of Science Database during 1988-2009

    Afshin Hamdipour

    2012-12-01

    Full Text Available The present study analyzed the scientific publications indexed in the Web of Science database as information management records and visualized the structure of science in this field during 1988-2009. The research method was scientometrics. During the study period, 1120 records in the field of information management were published. These records were extracted as plain text files, stored on a PC, and then analyzed with the ISI.exe and HistCite software packages. The authors' collaboration coefficient (CC) grew from zero in 1988 to 0.33 in 2009; the average collaboration coefficient between authors was 0.22, which confirms the low level of author collaboration in this area. The records were published in 63 languages, with English accounting for the highest proportion at 93.8%. City University London and the University of Sheffield in England had the most publications in the information management field. Based on the number of published records, T.D. Wilson, with 13 records and 13 citations, ranked first. The average number of global citations to 112 documents was 8.78. Despite the participation of different countries in the production of documents, more than 28.9% of records were produced in the United States, and 10 countries published more than 72.4 percent of the records. City University London and the University of Sheffield had the highest frequency in this area. Fifteen journals published 564 records (50.4% of the total production). Finally, using the HistCite software, clustered maps of authors and articles were drawn and four influential subject areas were identified.
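
    The collaboration coefficient quoted above is commonly computed from the distribution of the number of authors per paper; a frequently used definition (stated here as background, since the article may use a slightly different variant) is:

```latex
CC \;=\; 1 \;-\; \frac{1}{N}\sum_{j=1}^{A}\frac{f_{j}}{j},
```

    where f_j is the number of papers with j authors, A the maximum number of authors observed and N the total number of papers; CC = 0 for purely single-authored output and approaches 1 as collaboration increases.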

  4. Mathematics for Databases

    ir. Sander van Laar

    2007-01-01

    A formal description of a database consists of the description of the relations (tables) of the database together with the constraints that must hold on the database. Furthermore the contents of a database can be retrieved using queries. These constraints and queries for databases can very well be

  5. Databases and their application

    Grimm, E.C.; Bradshaw, R.H.W; Brewer, S.; Flantua, S.; Giesecke, T.; Lézine, A.M.; Takahara, H.; Williams, J.W.,Jr; Elias, S.A.; Mock, C.J.

    2013-01-01

    During the past 20 years, several pollen database cooperatives have been established. These databases are now constituent databases of the Neotoma Paleoecology Database, a public domain, multiproxy, relational database designed for Quaternary-Pliocene fossil data and modern surface samples. The

  6. DOT Online Database

    On-line database of DOT Advisory Circulars, searchable by full text and by database record, including data collection and distribution policies; the document database website is provided by MicroSearch.

  7. GOBASE: an organelle genome database

    O?Brien, Emmet A.; Zhang, Yue; Wang, Eric; Marie, Veronique; Badejoko, Wole; Lang, B. Franz; Burger, Gertraud

    2008-01-01

    The organelle genome database GOBASE, now in its 21st release (June 2008), contains all published mitochondrion-encoded sequences (~913 000) and chloroplast-encoded sequences (~250 000) from a wide range of eukaryotic taxa. For all sequences, information on related genes, exons, introns, gene products and taxonomy is available, as well as selected genome maps and RNA secondary structures. Recent major enhancements to database functionality include: (i) addition of an interface for RNA editing...

  8. Database theory and SQL practice using Access

    Kim, Gyeong Min; Lee, Myeong Jin

    2001-01-01

    This book introduces database theory and SQL practice using Access. It comprises seven chapters, covering: an understanding of databases, with basic concepts and DBMSs; relational databases, with examples; building database tables and entering data using Access 2000; Structured Query Language, with an introduction, management and the construction of complex queries in SQL; advanced SQL commands, with the concepts of joins and virtual tables; the design of a database for an online bookstore in six steps; and the building of an application, covering its functions, structure and components, an understanding of its principles and operation, and a review of the program source for the application menu.

  9. NNDC database migration project

    Burrows, Thomas W; Dunford, Charles L [U.S. Department of Energy, Brookhaven Science Associates (United States)

    2004-03-01

    NNDC Database Migration was necessary to replace obsolete hardware and software, to be compatible with the industry standard in relational databases (mature software, large base of supporting software for administration and dissemination and replication and synchronization tools) and to improve the user access in terms of interface and speed. The Relational Database Management System (RDBMS) consists of a Sybase Adaptive Server Enterprise (ASE), which is relatively easy to move between different RDB systems (e.g., MySQL, MS SQL-Server, or MS Access), the Structured Query Language (SQL) and administrative tools written in Java. Linux or UNIX platforms can be used. The existing ENSDF datasets are often VERY large and will need to be reworked and both the CRP (adopted) and CRP (Budapest) datasets give elemental cross sections (not relative I{gamma}) in the RI field (so it is not immediately obvious which of the old values has been changed). But primary and secondary intensities are now available on the same scale. The intensity normalization has been done for us. We will gain access to a large volume of data from Budapest and some of those gamma-ray intensity and energy data will be superior to what we already have.

  10. Dietary Supplement Ingredient Database

    ... the US Department of Agriculture Dietary Supplement Ingredient Database (DSID) ... values can be saved to build a small database or added to an existing database for national, ...

  11. Energy Consumption Database

    The California Energy Commission has created this on-line database for informal reporting ... classifications. The database also provides easy downloading of energy consumption data into Microsoft Excel (XLSX ...

  12. USAID Anticorruption Projects Database

    US Agency for International Development — The Anticorruption Projects Database (Database) includes information about USAID projects with anticorruption interventions implemented worldwide between 2007 and...

  13. NoSQL databases

    Mrozek, Jakub

    2012-01-01

    This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...

  14. Database Systems - Present and Future

    2009-01-01

    Full Text Available Database systems nowadays play an increasingly important role in the knowledge-based society, in which computers have penetrated all fields of activity and the Internet continues to develop worldwide. In the current informatics context, developing database applications is the work of specialists, but using databases, accessing a database from various applications, and related concepts have become accessible to all categories of IT users. This paper aims to summarize the curricular area covering the fundamental database-system issues that are necessary for training specialists in economic informatics higher education. Database systems integrate and interact with several informatics technologies and are therefore more difficult to understand and use; students should therefore already know a minimum set of mandatory concepts and their practical implementation: computer systems, programming techniques, programming languages, and data structures. The article also presents current trends in the evolution of database systems in the context of economic informatics.

  15. JT-60 database system, 1

    Kurihara, Kenichi; Kimura, Toyoaki; Itoh, Yasuhiro.

    1987-07-01

    An adequate software environment is naturally a prerequisite for analysing the discharge result data effectively. JT-60 discharge result data, collected by the supervisor, are transferred to the general-purpose computer through the new linkage channel and converted into a ''database''. The datafiles in the database were designed to be surrounded by various interfaces. This structure preserves datafile reliability and does not require users to know the datafile structure. In addition, a support system for graphic processing was developed so that users may easily obtain figures with some calculations. This paper reports on the basic concept and system design. (author)

  16. PrimateLit Database

    PrimateLit: a bibliographic database for primatology (Primate Info Net, related databases), supported by the National Center for Research Resources (NCRR), National Institutes of Health. The PrimateLit database is no longer being ... The database is a collaborative project of the Wisconsin Primate ...

  17. Performance Enhancements for Advanced Database Management Systems

    Helmer, Sven

    2000-01-01

    New applications have emerged, demanding database management systems with enhanced functionality. However, high performance is a necessary precondition for the acceptance of such systems by end users. In this context we developed, implemented, and tested algorithms and index structures for improving the performance of advanced database management systems. We focused on index structures and join algorithms for set-valued attributes.

  18. An Interoperable Cartographic Database

    Slobodanka Ključanin; Zdravko Galić

    2007-01-01

    The concept of producing a prototype of interoperable cartographic database is explored in this paper, including the possibilities of integration of different geospatial data into the database management system and their visualization on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relation database, spatial analysis, definition and visualization of the database content in the form of a map on t...

  19. Software listing: CHEMTOX database

    Moskowitz, P.D.

    1993-01-01

    Initially launched in 1983, the CHEMTOX Database was among the first microcomputer databases containing hazardous chemical information. The database is used in many industries and government agencies in more than 17 countries. Updated quarterly, the CHEMTOX Database provides detailed environmental and safety information on 7500-plus hazardous substances covered by dozens of regulatory and advisory sources. This brief listing describes the method of accessing data and provides ordering information for those wishing to obtain the CHEMTOX Database

  20. amamutdb.no: A relational database for MAN2B1 allelic variants that compiles genotypes, clinical phenotypes, and biochemical and structural data of mutant MAN2B1 in α-mannosidosis.

    Riise Stensland, Hilde Monica Frostad; Frantzen, Gabrio; Kuokkanen, Elina; Buvang, Elisabeth Kjeldsen; Klenow, Helle Bagterp; Heikinheimo, Pirkko; Malm, Dag; Nilssen, Øivind

    2015-06-01

    α-Mannosidosis is an autosomal recessive lysosomal storage disorder caused by mutations in the MAN2B1 gene, encoding lysosomal α-mannosidase. The disorder is characterized by a range of clinical phenotypes of which the major manifestations are mental impairment, hearing impairment, skeletal changes, and immunodeficiency. Here, we report an α-mannosidosis mutation database, amamutdb.no, which has been constructed as a publicly accessible online resource for recording and analyzing MAN2B1 variants (http://amamutdb.no). Our aim has been to offer structured and relational information on MAN2B1 mutations and genotypes along with associated clinical phenotypes. Classifying missense mutations, as pathogenic or benign, is a challenge. Therefore, they have been given special attention as we have compiled all available data that relate to their biochemical, functional, and structural properties. The α-mannosidosis mutation database is comprehensive and relational in the sense that information can be retrieved and compiled across datasets; hence, it will facilitate diagnostics and increase our understanding of the clinical and molecular aspects of α-mannosidosis. We believe that the amamutdb.no structure and architecture will be applicable for the development of databases for any monogenic disorder. © 2015 WILEY PERIODICALS, INC.

  1. Design and implementation of typical target image database system

    Qin Kai; Zhao Yingjun

    2010-01-01

    It is necessary to provide essential background data and thematic data in a timely way for image processing and its applications; an application is, in effect, a procedure for integrating and analyzing different kinds of data. In this paper, the authors describe the design and structure of an image database system that classifies, stores, manages and analyzes databases of different types, such as image, vector, spatial and spatial-target-characteristics databases. (authors)

  2. Management system of instrument database

    Zhang Xin

    1997-01-01

    The author introduces a management system for an instrument database. The system has been developed with FoxPro on a network. It is characterized by a clear structure, easy operation, flexible and convenient querying, and data safety and reliability

  3. Clinical Databases for Chest Physicians.

    Courtwright, Andrew M; Gabriel, Peter E

    2018-04-01

    A clinical database is a repository of patient medical and sociodemographic information focused on one or more specific health condition or exposure. Although clinical databases may be used for research purposes, their primary goal is to collect and track patient data for quality improvement, quality assurance, and/or actual clinical management. This article aims to provide an introduction and practical advice on the development of small-scale clinical databases for chest physicians and practice groups. Through example projects, we discuss the pros and cons of available technical platforms, including Microsoft Excel and Access, relational database management systems such as Oracle and PostgreSQL, and Research Electronic Data Capture. We consider approaches to deciding the base unit of data collection, creating consensus around variable definitions, and structuring routine clinical care to complement database aims. We conclude with an overview of regulatory and security considerations for clinical databases. Copyright © 2018 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.

  4. JT-60 database system, 2

    Itoh, Yasuhiro; Kurihara, Kenichi; Kimura, Toyoaki.

    1987-07-01

    The JT-60 central control system ''ZENKEI'' collects the control and instrumentation data relevant to discharges and the device status data used for plant monitoring. The former, the engineering data, amount to about 3 Mbytes per discharge shot. The ''ZENKEI'' control system, which consists of seven minicomputers for on-line real-time control, has little capacity for handling such a large amount of data for physical and engineering analysis. To solve this problem, it was planned to establish the experimental database on the front-end processor (FEP) of the general-purpose large computer in the JAERI Computer Center. A database management system (DBMS) has therefore been developed to create the database during the interval between shots. The engineering data are shipped from ''ZENKEI'' to the FEP through a dedicated communication line after each shot. A hierarchical data model has been adopted for this database, which consists of data files with a tree structure keyed on system, discharge type and shot number. The JT-60 DBMS provides packages of data-handling subroutines for interfacing the database with users' application programs. Subroutine packages supporting graphic processing and an access control function for database security are also provided in this DBMS. (author)
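
    The tree-structured access by system, discharge type and shot number described above can be sketched with a small in-memory index; the class, key names and payload below are illustrative only and are not taken from the actual JT-60 DBMS.

```python
from collections import defaultdict

# Minimal sketch of a hierarchical datafile index keyed by
# (system, discharge type, shot number); names are hypothetical.
class ShotDatabase:
    def __init__(self):
        # system -> discharge_type -> shot_number -> payload
        self._tree = defaultdict(lambda: defaultdict(dict))

    def store(self, system, discharge_type, shot, data):
        self._tree[system][discharge_type][shot] = data

    def fetch(self, system, discharge_type, shot):
        return self._tree[system][discharge_type][shot]

db = ShotDatabase()
db.store("poloidal_field", "limiter", 12345, {"Ip_MA": 2.7})
print(db.fetch("poloidal_field", "limiter", 12345))
```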

  5. Ageing Management Program Database

    Basic, I.; Vrbanic, I.; Zabric, I.; Savli, S.

    2008-01-01

    The aspects of plant ageing management (AM) have gained increasing attention over the last ten years. Numerous technical studies have been performed to study the impact of ageing mechanisms on the safe and reliable operation of nuclear power plants. National research activities have been initiated or are in progress to provide the technical basis for decision making processes. The long-term operation of nuclear power plants is influenced by economic considerations, the socio-economic environment including public acceptance, developments in research and the regulatory framework, the availability of technical infrastructure to maintain and service the systems, structures and components as well as qualified personnel. Besides national activities there are a number of international activities in particular under the umbrella of the IAEA, the OECD and the EU. The paper discusses the process, procedure and database developed for Slovenian Nuclear Safety Administration (SNSA) surveillance of the ageing process at the Krsko Nuclear Power Plant. (author)

  6. Database on wind characteristics. Contents of database bank

    Larsen, Gunner Chr.; Hansen, K.S.

    2001-01-01

    for the available data in the established database bank and part three is the Users Manual describing the various ways to access and analyse the data. The present report constitutes the second part of the Annex XVII reporting. Basically, the database bank contains three categories of data, i.e. i) high sampled wind...... field time series; ii) high sampled wind turbine structural response time series; andiii) wind resource data. The main emphasis, however, is on category i). The available data, within each of the three categories, are described in details. The description embraces site characteristics, terrain type...

  7. Directory of IAEA databases

    1991-11-01

    The first edition of the Directory of IAEA Databases is intended to describe the computerized information sources available to IAEA staff members. It contains a listing of all databases produced at the IAEA, together with information on their availability

  8. Native Health Research Database

    ... Indian Health Board) Welcome to the Native Health Database ... The NHD has made ...

  9. Cell Centred Database (CCDB)

    U.S. Department of Health & Human Services — The Cell Centered Database (CCDB) is a web accessible database for high resolution 2D, 3D and 4D data from light and electron microscopy, including correlated imaging.

  10. E3 Staff Database

    US Agency for International Development — E3 Staff database is maintained by E3 PDMS (Professional Development & Management Services) office. The database is Mysql. It is manually updated by E3 staff as...

  11. NIRS database of the original research database

    Morita, Kyoko

    1991-01-01

    Recently, library staff arranged and compiled the original research papers written by researchers during the 33 years since the National Institute of Radiological Sciences (NIRS) was established. This paper describes how the internal database of original research papers has been created. It is a small example of a hand-made database, which has been accumulated by staff with some knowledge of computers and programming. (author)

  12. Scopus database: a review.

    Burnham, Judy F

    2006-03-08

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, they complement each other. If a library can only afford one, the choice must be based on institutional needs.

  13. Aviation Safety Issues Database

    Morello, Samuel A.; Ricks, Wendell R.

    2009-01-01

    The aviation safety issues database was instrumental in the refinement and substantiation of the National Aviation Safety Strategic Plan (NASSP). The issues database is a comprehensive set of issues from an extremely broad base of aviation functions, personnel, and vehicle categories, both nationally and internationally. Several aviation safety stakeholders such as the Commercial Aviation Safety Team (CAST) have already used the database. This broader interest was the genesis of making the database publicly accessible and of writing this report.

  14. Databases for INDUS-1 and INDUS-2

    Merh, Bhavna N.; Fatnani, Pravin

    2003-01-01

    The databases for Indus are relational databases designed to store various categories of data related to the accelerator. The data archiving and retrieving system in Indus is based on a client/server model. A general-purpose commercial database is used to store parameters and equipment data for the whole machine. The database manages configuration, on-line and historical databases. On-line and off-line applications distributed in several systems can store and retrieve the data from the database over the network. This paper describes the structure of databases for Indus-1 and Indus-2 and their integration within the software architecture. The data analysis, design, resulting data-schema and implementation issues are discussed. (author)
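
    As a minimal illustration of archiving machine parameters in a relational store, the sketch below uses Python's built-in sqlite3 in place of the commercial database actually used for Indus; the table and column names are assumptions, not the real Indus schema.

```python
import sqlite3

# Illustrative relational schema for a parameter archive.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE parameter_history (
        system      TEXT,
        parameter   TEXT,
        value       REAL,
        recorded_at TEXT
    )
""")
conn.execute(
    "INSERT INTO parameter_history VALUES (?, ?, ?, ?)",
    ("RF", "cavity_voltage_kV", 650.0, "2003-01-15T10:00:00"),
)

# Historical retrieval for one subsystem.
rows = conn.execute(
    "SELECT parameter, value FROM parameter_history WHERE system = ?", ("RF",)
).fetchall()
print(rows)
```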

  15. Automated Oracle database testing

    CERN. Geneva

    2014-01-01

    Ensuring database stability and steady performance in the modern world of agile computing is a major challenge. Various changes happening at any level of the computing infrastructure: OS parameters & packages, kernel versions, database parameters & patches, or even schema changes, all can potentially harm production services. This presentation shows how an automatic and regular testing of Oracle databases can be achieved in such agile environment.

  16. Inleiding database-systemen

    Pels, H.J.; Lans, van der R.F.; Pels, H.J.; Meersman, R.A.

    1993-01-01

    This article introduces the main concepts relevant to databases and gives an overview of the objectives, functions and components of database systems. Although the function of a database is intuitively fairly clear, it is nevertheless, from a technological point of view, a complex

  17. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Full Text Available Database name: RMOS. Contact: Shoshi Kikuchi. Database classification: Plant databases - Rice Microarray Data and other Gene Expression Databases. Organism: Oryza sativa (Taxonomy ID: 4530). Referenced databases: Rice Expression Database (RED), Rice full-length cDNA Database (KOME), Rice Genome Integrated Map Database (INE), Rice Mutant Panel Database (Tos17), Rice Genome Annotation Database.

  18. Prediction methods and databases within chemoinformatics

    Jónsdóttir, Svava Osk; Jørgensen, Flemming Steen; Brunak, Søren

    2005-01-01

    MOTIVATION: To gather information about available databases and chemoinformatics methods for prediction of properties relevant to the drug discovery and optimization process. RESULTS: We present an overview of the most important databases with 2-dimensional and 3-dimensional structural information...... about drugs and drug candidates, and of databases with relevant properties. Access to experimental data and numerical methods for selecting and utilizing these data is crucial for developing accurate predictive in silico models. Many interesting predictive methods for classifying the suitability...

  19. Evolution of a Structure-Searchable Database into a Prototype for a High-Fidelity SmartPhone App for 62 Common Pesticides Used in Delaware.

    D'Souza, Malcolm J; Barile, Benjamin; Givens, Aaron F

    2015-05-01

    Synthetic pesticides are widely used in the modern world for human benefit. They are usually classified according to their intended pest target. In Delaware (DE), approximately 42 percent of the arable land is used for agriculture. In order to manage insectivorous and herbaceous pests (such as insects, weeds, nematodes, and rodents), pesticides are used profusely to biologically control the normal pest's life stage. In this undergraduate project, we first created a usable relational database containing 62 agricultural pesticides that are common in Delaware. Chemically pertinent quantitative and qualitative information was first stored in Bio-Rad's KnowItAll® Informatics System. Next, we extracted the data out of the KnowItAll® system and created additional sections on a Microsoft® Excel spreadsheet detailing pesticide use(s) and safety and handling information. Finally, in an effort to promote good agricultural practices, to increase efficiency in business decisions, and to make pesticide data globally accessible, we developed a mobile application for smartphones that displayed the pesticide database using Appery.io™; a cloud-based HyperText Markup Language (HTML5), jQuery Mobile and Hybrid Mobile app builder.

  20. An Interoperable Cartographic Database

    Slobodanka Ključanin

    2007-05-01

    Full Text Available The concept of producing a prototype of interoperable cartographic database is explored in this paper, including the possibilities of integration of different geospatial data into the database management system and their visualization on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relation database, spatial analysis, definition and visualization of the database content in the form of a map on the Internet. 

  1. Nuclear power economic database

    Ding Xiaoming; Li Lin; Zhao Shiping

    1996-01-01

    The nuclear power economic database (NPEDB), based on ORACLE V6.0, consists of three parts, i.e., an economic database of nuclear power stations, an economic database of the nuclear fuel cycle, and an economic database of nuclear power planning and the nuclear environment. The economic database of nuclear power stations includes data on general economics, technology, capital cost and benefit, etc. The economic database of the nuclear fuel cycle includes data on technology and nuclear fuel prices. The economic database of nuclear power planning and the nuclear environment includes data on energy history, forecasts, energy balance, electric power and energy facilities

  2. A coordination language for databases

    Li, Ximeng; Wu, Xi; Lluch Lafuente, Alberto

    2017-01-01

    We present a coordination language for the modeling of distributed database applications. The language, baptized Klaim-DB, borrows the concepts of localities and nets of the coordination language Klaim but re-incarnates the tuple spaces of Klaim as databases. It provides high-level abstractions...... and primitives for the access and manipulation of structured data, with integrity and atomicity considerations. We present the formal semantics of Klaim-DB and develop a type system that avoids potential runtime errors such as certain evaluation errors and mismatches of data format in tables, which are monitored...... in the semantics. The use of the language is illustrated in a scenario where the sales from different branches of a chain of department stores are aggregated from their local databases. Raising the abstraction level and encapsulating integrity checks in the language primitives have benefited the modeling task...

  3. International Ventilation Cooling Application Database

    Holzer, Peter; Psomas, Theofanis Ch.; OSullivan, Paul

    2016-01-01

    The currently running International Energy Agency, Energy and Conservation in Buildings, Annex 62 Ventilative Cooling (VC) project, is coordinating research towards extended use of VC. Within this Annex 62 the joint research activity of International VC Application Database has been carried out...... and locations, using VC as a mean of indoor comfort improvement. The building-spreadsheet highlights distributions of technologies and strategies, such as the following. (Numbers in % refer to the sample of the database’s 91 buildings.) It may be concluded that Ventilative Cooling is applied in temporary......, systematically investigating the distribution of technologies and strategies within VC. The database is structured as both a ticking-list-like building-spreadsheet and a collection of building-datasheets. The content of both closely follows Annex 62 State-Of-The- Art-Report. The database has been filled, based...

  4. Secure Distributed Databases Using Cryptography

    Ion IVAN

    2006-01-01

    Full Text Available Computational encryption is used intensively by database management systems to ensure the privacy and integrity of information physically stored in files. The information is also sent over the network and replicated on different distributed systems. It is proved that a satisfying level of security is achieved if the rows and columns of tables are encrypted independently of the table or the computer that holds the data. It is also very important that SQL (Structured Query Language) query requests and responses be encrypted over the network connection between the client and the database server. All these techniques and methods must be implemented by database administrators, designers and developers within a consistent security policy.
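
    A minimal sketch of the column-level encryption idea discussed above is shown below; it uses Python's sqlite3 together with the Fernet cipher from the third-party cryptography package, and the table, columns and values are invented for illustration.

```python
import sqlite3
from cryptography.fernet import Fernet  # assumes the 'cryptography' package is installed

# Column-level encryption applied before values reach the database,
# so the stored rows never contain plaintext.
key = Fernet.generate_key()
cipher = Fernet(key)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (name BLOB, diagnosis BLOB)")

def enc(value: str) -> bytes:
    return cipher.encrypt(value.encode())

def dec(token: bytes) -> str:
    return cipher.decrypt(token).decode()

conn.execute("INSERT INTO patients VALUES (?, ?)", (enc("Alice"), enc("J45.0")))

# Decryption happens only on the client side, after retrieval.
name_token, diag_token = conn.execute("SELECT * FROM patients").fetchone()
print(dec(name_token), dec(diag_token))
```

    Because each column is encrypted independently of the table and host, a compromised file or network capture exposes only ciphertext; key management remains the administrator's responsibility.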

  5. Electronic database of arterial aneurysms

    Fabiano Luiz Erzinger

    2014-12-01

    Full Text Available Background:The creation of an electronic database facilitates the storage of information, as well as streamlines the exchange of data, making easier the exchange of knowledge for future research.Objective:To construct an electronic database containing comprehensive and up-to-date clinical and surgical data on the most common arterial aneurysms, to help advance scientific research.Methods:The most important specialist textbooks and articles found in journals and on internet databases were reviewed in order to define the basic structure of the protocol. Data were computerized using the SINPE© system for integrated electronic protocols and tested in a pilot study.Results:The data entered onto the system was first used to create a Master protocol, organized into a structure of top-level directories covering a large proportion of the content on vascular diseases as follows: patient history; physical examination; supplementary tests and examinations; diagnosis; treatment; and clinical course. By selecting items from the Master protocol, Specific protocols were then created for the 22 arterial sites most often involved by aneurysms. The program provides a method for collection of data on patients including clinical characteristics (patient history and physical examination, supplementary tests and examinations, treatments received and follow-up care after treatment. Any information of interest on these patients that is contained in the protocol can then be used to query the database and select data for studies.Conclusions:It proved possible to construct a database of clinical and surgical data on the arterial aneurysms of greatest interest and, by adapting the data to specific software, the database was integrated into the SINPE© system, thereby providing a standardized method for collection of data on these patients and tools for retrieving this information in an organized manner for use in scientific studies.

  6. YMDB: the Yeast Metabolome Database

    Jewison, Timothy; Knox, Craig; Neveu, Vanessa; Djoumbou, Yannick; Guo, An Chi; Lee, Jacqueline; Liu, Philip; Mandal, Rupasri; Krishnamurthy, Ram; Sinelnikov, Igor; Wilson, Michael; Wishart, David S.

    2012-01-01

    The Yeast Metabolome Database (YMDB, http://www.ymdb.ca) is a richly annotated ‘metabolomic’ database containing detailed information about the metabolome of Saccharomyces cerevisiae. Modeled closely after the Human Metabolome Database, the YMDB contains >2000 metabolites with links to 995 different genes/proteins, including enzymes and transporters. The information in YMDB has been gathered from hundreds of books, journal articles and electronic databases. In addition to its comprehensive literature-derived data, the YMDB also contains an extensive collection of experimental intracellular and extracellular metabolite concentration data compiled from detailed Mass Spectrometry (MS) and Nuclear Magnetic Resonance (NMR) metabolomic analyses performed in our lab. This is further supplemented with thousands of NMR and MS spectra collected on pure, reference yeast metabolites. Each metabolite entry in the YMDB contains an average of 80 separate data fields including comprehensive compound description, names and synonyms, structural information, physico-chemical data, reference NMR and MS spectra, intracellular/extracellular concentrations, growth conditions and substrates, pathway information, enzyme data, gene/protein sequence data, as well as numerous hyperlinks to images, references and other public databases. Extensive searching, relational querying and data browsing tools are also provided that support text, chemical structure, spectral, molecular weight and gene/protein sequence queries. Because of S. cerevisiae's importance as a model organism for biologists and as a biofactory for industry, we believe this kind of database could have considerable appeal not only to metabolomics researchers, but also to yeast biologists, systems biologists, the industrial fermentation industry, as well as the beer, wine and spirit industry. PMID:22064855

  7. Database Description - RPD | LSDB Archive [Life Science Database Archive metadata

    Full Text Available ase Description General information of database Database name RPD Alternative name Rice Proteome Database...titute of Crop Science, National Agriculture and Food Research Organization Setsuko Komatsu E-mail: Database... classification Proteomics Resources Plant databases - Rice Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database... description Rice Proteome Database contains information on protei...and entered in the Rice Proteome Database. The database is searchable by keyword,

  8. Database Description - JSNP | LSDB Archive [Life Science Database Archive metadata

    Full Text Available base Description General information of database Database name JSNP Alternative nam...n Science and Technology Agency Creator Affiliation: Contact address E-mail : Database...sapiens Taxonomy ID: 9606 Database description A database of about 197,000 polymorphisms in Japanese populat...1):605-610 External Links: Original website information Database maintenance site Institute of Medical Scien...er registration Not available About This Database Database Description Download License Update History of This Database

  9. Database Description - RED | LSDB Archive [Life Science Database Archive metadata

    Full Text Available ase Description General information of database Database name RED Alternative name Rice Expression Database...enome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice Database classifi...cation Microarray, Gene Expression Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database descripti... Article title: Rice Expression Database: the gateway to rice functional genomics...nt Science (2002) Dec 7 (12):563-564 External Links: Original website information Database maintenance site

  10. Database Description - PLACE | LSDB Archive [Life Science Database Archive metadata

    Full Text Available abase Description General information of database Database name PLACE Alternative name A Database...Kannondai, Tsukuba, Ibaraki 305-8602, Japan National Institute of Agrobiological Sciences E-mail : Databas...e classification Plant databases Organism Taxonomy Name: Tracheophyta Taxonomy ID: 58023 Database...99, Vol.27, No.1 :297-300 External Links: Original website information Database maintenance site National In...- Need for user registration Not available About This Database Database Descripti

  11. Tsunami early warning in the Mediterranean: role, structure and tricks of pre-computed tsunami simulation databases and matching/forecasting algorithms

    Armigliato, Alberto; Pagnoni, Gianluca; Tinti, Stefano

    2014-05-01

    The general idea that pre-computed simulated scenario databases can play a key role in conceiving tsunami early warning systems is commonly accepted by now. But it was only in the last decade that it started to be applied to the Mediterranean region, taking special impetus from initiatives like the GDACS and from recently concluded EU-funded projects such as TRIDEC and NearToWarn. With reference to these two projects and with the possibility of further developing this research line in the frame of the FP7 ASTARTE project, we discuss some results we obtained regarding two major topics, namely the strategies applicable to the tsunami scenario database building and the design and performance assessment of a timely and "reliable" elementary-scenario combination algorithm to be run in real-time. As for the first theme, we take advantage of the experience gained in the test areas of Western Iberia, Rhodes (Greece) and Cyprus to illustrate the criteria with which a "Matching Scenario Database" (MSDB) can be built. These involve 1) the choice of the main tectonic tsunamigenic sources (or areas), 2) their tessellation with matrices of elementary faults whose dimensions heavily depend on the particular studied area and must be a compromise between the need to represent the tsunamigenic area in sufficient detail and the need to limit the number of scenarios to be simulated, 3) the computation of the scenarios themselves, 4) the choice of the relevant simulation outputs and the standardisation of their formats. Regarding the matching/forecast algorithm, we want it to select and combine the MSDB elements based on the initial earthquake magnitude and location estimate, and to produce a forecast of (at least) the tsunami arrival time, amplitude and period at the closest tide-level sensors and in all needed forecast points. We discuss the performance of the algorithm in terms of the time needed to produce the forecast after the earthquake is detected. In particular, we analyse the
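
    A toy version of the matching/combination step described above is sketched below: pre-computed unit scenarios for elementary faults near the estimated epicentre are looked up and combined with a crude magnitude scaling. The fault grid, the scaling rule and the stored quantity are invented for illustration and are not the MSDB format or algorithm used in the projects.

        # Toy matching step: pick pre-computed elementary scenarios near the epicentre
        # and combine them linearly.  All numbers and the scaling rule are illustrative.
        PRECOMPUTED = {
            # (fault_lat, fault_lon): unit max amplitude (m) at one forecast point
            (35.0, 25.0): 0.12,
            (35.0, 25.5): 0.20,
            (35.5, 25.0): 0.08,
        }

        def forecast_amplitude(eq_lat, eq_lon, magnitude, radius_deg=0.6):
            """Combine unit scenarios whose fault lies close to the epicentre."""
            weight = 10.0 ** (magnitude - 7.0)   # crude magnitude scaling (assumption)
            nearby = [amp for (lat, lon), amp in PRECOMPUTED.items()
                      if abs(lat - eq_lat) <= radius_deg and abs(lon - eq_lon) <= radius_deg]
            return weight * sum(nearby)

        print(round(forecast_amplitude(35.1, 25.2, 7.3), 3))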

  12. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Full Text Available List Contact us Arabidopsis Phenome Database Database Description General information of database Database n... BioResource Center Hiroshi Masuya Database classification Plant databases - Arabidopsis thaliana Organism T...axonomy Name: Arabidopsis thaliana Taxonomy ID: 3702 Database description The Arabidopsis thaliana phenome i...heir effective application. We developed the new Arabidopsis Phenome Database integrating two novel database...seful materials for their experimental research. The other, the “Database of Curated Plant Phenome” focusing

  13. Observational database for studies of nearby universe

    Kaisina, E. I.; Makarov, D. I.; Karachentsev, I. D.; Kaisin, S. S.

    2012-01-01

    We present the description of a database of galaxies of the Local Volume (LVG), located within 10 Mpc around the Milky Way. It contains more than 800 objects. Based on an analysis of functional capabilities, we used the PostgreSQL DBMS as a management system for our LVG database. Applying semantic modelling methods, we developed a physical ER-model of the database. We describe the developed architecture of the database table structure, and the implemented web-access, available at http://www.sao.ru/lv/lvgdb.
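
    The kind of table and query such a Local Volume database exposes can be sketched as below; SQLite is used instead of PostgreSQL to keep the example self-contained, and the column names are assumptions rather than the actual LVG schema.

        import sqlite3

        # Illustrative galaxy table (columns are assumptions, not the LVG schema).
        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE galaxies (
            name TEXT, ra_deg REAL, dec_deg REAL, distance_mpc REAL, v_helio_kms REAL)""")
        db.executemany("INSERT INTO galaxies VALUES (?, ?, ?, ?, ?)", [
            ("NGC 6822", 296.24, -14.80, 0.50,  -57.0),
            ("M 81",     148.89,  69.07, 3.63,  -34.0),
            ("NGC 253",   11.89, -25.29, 3.70,  243.0),
        ])

        # All catalogued galaxies within 10 Mpc, nearest first.
        for row in db.execute("SELECT name, distance_mpc FROM galaxies "
                              "WHERE distance_mpc <= 10 ORDER BY distance_mpc"):
            print(row)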

  14. HCSD: the human cancer secretome database

    Feizi, Amir; Banaei-Esfahani, Amir; Nielsen, Jens

    2015-01-01

    The human cancer secretome database (HCSD) is a comprehensive database for human cancer secretome data. The cancer secretome describes proteins secreted by cancer cells, and structuring information about the cancer secretome will enable further analysis of how this is related to tumor biology...... database is limiting the ability to query the increasing community knowledge. We therefore developed the Human Cancer Secretome Database (HCSD) to fill this gap. HCSD contains >80 000 measurements for about 7000 nonredundant human proteins collected from up to 35 high-throughput studies on 17 cancer...

  15. Database Optimizing Services

    Adrian GHENCEA

    2010-12-01

    Full Text Available Almost every organization has a database at its centre. The database provides support for conducting different activities, whether it is production, sales and marketing or internal operations. Every day, a database is accessed for help in strategic decisions. Meeting such needs therefore requires high-quality security and availability. Those needs can be realised using a DBMS (Database Management System), which is, in fact, software for a database. Technically speaking, it is software which uses a standard method of cataloguing, recovering, and running different data queries. A DBMS manages the input data, organizes it, and provides ways for its users or other programs to modify or extract the data. Managing the database is an operation that requires periodic updates, optimization and monitoring.

  16. National Database of Geriatrics

    Kannegaard, Pia Nimann; Vinding, Kirsten L; Hare-Bruun, Helle

    2016-01-01

    AIM OF DATABASE: The aim of the National Database of Geriatrics is to monitor the quality of interdisciplinary diagnostics and treatment of patients admitted to a geriatric hospital unit. STUDY POPULATION: The database population consists of patients who were admitted to a geriatric hospital unit....... Geriatric patients cannot be defined by specific diagnoses. A geriatric patient is typically a frail multimorbid elderly patient with decreasing functional ability and social challenges. The database includes 14-15,000 admissions per year, and the database completeness has been stable at 90% during the past......, percentage of discharges with a rehabilitation plan, and the part of cases where an interdisciplinary conference has taken place. Data are recorded by doctors, nurses, and therapists in a database and linked to the Danish National Patient Register. DESCRIPTIVE DATA: Descriptive patient-related data include...

  17. Tradeoffs in distributed databases

    Juntunen, R. (Risto)

    2016-01-01

    Abstract In a distributed database, data is spread throughout the network into separate nodes with different DBMS systems (Date, 2000). According to the CAP theorem, three database properties (consistency, availability and partition tolerance) cannot be achieved simultaneously in distributed database systems. Two of these properties can be achieved, but not all three at the same time (Brewer, 2000). Since this theorem there has b...

  18. Specialist Bibliographic Databases

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A.; Trukhachev, Vladimir I.; Kostyukova, Elena I.; Gerasimov, Alexey N.; Kitas, George D.

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and d...

  19. Supply Chain Initiatives Database

    None

    2012-11-01

    The Supply Chain Initiatives Database (SCID) presents innovative approaches to engaging industrial suppliers in efforts to save energy, increase productivity and improve environmental performance. This comprehensive and freely-accessible database was developed by the Institute for Industrial Productivity (IIP). IIP acknowledges Ecofys for their valuable contributions. The database contains case studies searchable according to the types of activities buyers are undertaking to motivate suppliers, target sector, organization leading the initiative, and program or partnership linkages.

  20. Toward An Unstructured Mesh Database

    Rezaei Mahdiraji, Alireza; Baumann, Peter Peter

    2014-05-01

    Unstructured meshes are used in several application domains such as earth sciences (e.g., seismology), medicine, oceanography, climate modeling, and GIS as approximate representations of physical objects. Meshes subdivide a domain into smaller geometric elements (called cells) which are glued together by incidence relationships. The subdivision of a domain allows computational manipulation of complicated physical structures. For instance, seismologists model earthquakes using elastic wave propagation solvers on hexahedral meshes. The hexahedral mesh contains several hundred million grid points and millions of hexahedral cells. Each vertex node in the hexahedra stores a multitude of data fields. To run simulations on such meshes, one needs to iterate over all the cells, iterate over the cells incident to a given cell, retrieve coordinates of cells, assign data values to cells, etc. Although meshes are used in many application domains, to the best of our knowledge there is no database vendor that supports unstructured mesh features. Currently, the main tools for querying and manipulating unstructured meshes are mesh libraries, e.g., CGAL and GRAL. Mesh libraries are dedicated libraries which include mesh algorithms and can be run on mesh representations. The libraries do not scale with dataset size, do not have a declarative query language, and need deep C++ knowledge for query implementations. Furthermore, due to high coupling between the implementations and the input file structure, the implementations are less reusable and costly to maintain. A dedicated mesh database offers the following advantages: 1) declarative querying, 2) ease of maintenance, 3) hiding the mesh storage structure from applications, and 4) transparent query optimization. To design a mesh database, the first challenge is to define a suitable generic data model for unstructured meshes. We proposed the ImG-Complexes data model as a generic topological mesh data model which extends the incidence graph model to multi
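
    A minimal sketch of the kind of incidence structure such a data model has to capture is given below; the class and method names are illustrative only and are not part of the ImG-Complexes model.

        from collections import defaultdict

        # Tiny incidence structure: cells plus "shares a vertex with" relationships.
        class Mesh:
            def __init__(self):
                self.coords = {}                       # vertex id -> (x, y)
                self.cell_vertices = {}                # cell id -> tuple of vertex ids
                self.vertex_cells = defaultdict(set)   # vertex id -> set of cell ids

            def add_vertex(self, vid, xy):
                self.coords[vid] = xy

            def add_cell(self, cid, vids):
                self.cell_vertices[cid] = tuple(vids)
                for v in vids:
                    self.vertex_cells[v].add(cid)

            def incident_cells(self, cid):
                """Cells sharing at least one vertex with the given cell."""
                shared = set().union(*(self.vertex_cells[v] for v in self.cell_vertices[cid]))
                return shared - {cid}

        m = Mesh()
        for vid, xy in enumerate([(0, 0), (1, 0), (1, 1), (0, 1), (2, 0)]):
            m.add_vertex(vid, xy)
        m.add_cell("q0", [0, 1, 2, 3])   # quad
        m.add_cell("t0", [1, 4, 2])      # triangle sharing an edge with the quad
        print(m.incident_cells("q0"))    # {'t0'}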

  1. A59 waste repackaging database (AWARD)

    Keel, A.

    1993-06-01

    This document describes the data structures to be implemented to provide the A59 Waste Repackaging Database (AWARD); a Computer System for the in-cave Bertha waste sorting and LLW repackaging operations in A59. (Author)

  2. The RMS program system and database

    Fisher, S.M.; Peach, K.J.

    1982-08-01

    This report describes the program system developed for the data reduction and analysis of data obtained with the Rutherford Multiparticle Spectrometer (RMS), with particular emphasis on the utility of a well structured central data-base. (author)

  3. XML databases and the semantic web

    Thuraisingham, Bhavani

    2002-01-01

    Efficient access to data, sharing data, extracting information from data, and making use of the information have become urgent needs for today's corporations. With so much data on the Web, managing it with conventional tools is becoming almost impossible. New tools and techniques are necessary to provide interoperability as well as warehousing between multiple data sources and systems, and to extract information from the databases. XML Databases and the Semantic Web focuses on critical and new Web technologies needed for organizations to carry out transactions on the Web, to understand how to use the Web effectively, and to exchange complex documents on the Web.This reference for database administrators, database designers, and Web designers working in tandem with database technologists covers three emerging technologies of significant impact for electronic business: Extensible Markup Language (XML), semi-structured databases, and the semantic Web. The first two parts of the book explore these emerging techn...

  4. BIOSPIDA: A Relational Database Translator for NCBI.

    Hagen, Matthew S; Lee, Eva K

    2010-11-13

    As the volume and availability of biological databases continue their widespread growth, it has become increasingly difficult for research scientists to identify all relevant information for biological entities of interest. Details of nucleotide sequences, gene expression, molecular interactions, and three-dimensional structures are maintained across many different databases. To retrieve all necessary information requires an integrated system that can query multiple databases with minimized overhead. This paper introduces a universal parser and relational schema translator that can be utilized for all NCBI databases in Abstract Syntax Notation (ASN.1). The data models for OMIM, Entrez-Gene, Pubmed, MMDB and GenBank have been successfully converted into relational databases, and all are easily linkable, helping to answer complex biological questions. These tools facilitate research scientists to locally integrate databases from NCBI without significant workload or development time.
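
    The sketch below illustrates the kind of cross-database linking that becomes possible once records are held relationally; the tables are a made-up simplification and not the schema that BIOSPIDA actually generates.

        import sqlite3

        # Made-up, simplified tables standing in for relationalized NCBI records.
        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE gene    (gene_id INTEGER PRIMARY KEY, symbol TEXT);
        CREATE TABLE article (pmid    INTEGER PRIMARY KEY, title  TEXT);
        CREATE TABLE gene_article (gene_id INTEGER, pmid INTEGER);
        """)
        db.execute("INSERT INTO gene VALUES (672, 'BRCA1')")
        db.execute("INSERT INTO article VALUES (123456, 'An example article')")
        db.execute("INSERT INTO gene_article VALUES (672, 123456)")

        # One join answers a question that would otherwise need two separate lookups.
        query = """SELECT g.symbol, a.title
                   FROM gene g JOIN gene_article ga ON g.gene_id = ga.gene_id
                               JOIN article a       ON ga.pmid   = a.pmid"""
        print(db.execute(query).fetchall())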

  5. Intermodal Passenger Connectivity Database -

    Department of Transportation — The Intermodal Passenger Connectivity Database (IPCD) is a nationwide data table of passenger transportation terminals, with data on the availability of connections...

  6. Transporter Classification Database (TCDB)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  7. Residency Allocation Database

    Department of Veterans Affairs — The Residency Allocation Database is used to determine allocation of funds for residency programs offered by Veterans Affairs Medical Centers (VAMCs). Information...

  8. Smart Location Database - Service

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census block...

  9. Database principles programming performance

    O'Neil, Patrick

    2014-01-01

    Database: Principles Programming Performance provides an introduction to the fundamental principles of database systems. This book focuses on database programming and the relationships between principles, programming, and performance.Organized into 10 chapters, this book begins with an overview of database design principles and presents a comprehensive introduction to the concepts used by a DBA. This text then provides grounding in many abstract concepts of the relational model. Other chapters introduce SQL, describing its capabilities and covering the statements and functions of the programmi

  10. Veterans Administration Databases

    The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.

  11. IVR EFP Database

    National Oceanic and Atmospheric Administration, Department of Commerce — This database contains trip-level reports submitted by vessels participating in Exempted Fishery projects with IVR reporting requirements.

  12. Towards Sensor Database Systems

    Bonnet, Philippe; Gehrke, Johannes; Seshadri, Praveen

    2001-01-01

    . These systems lack flexibility because data is extracted in a predefined way; also, they do not scale to a large number of devices because large volumes of raw data are transferred regardless of the queries that are submitted. In our new concept of sensor database system, queries dictate which data is extracted...... from the sensors. In this paper, we define the concept of sensor databases mixing stored data represented as relations and sensor data represented as time series. Each long-running query formulated over a sensor database defines a persistent view, which is maintained during a given time interval. We...... also describe the design and implementation of the COUGAR sensor database system....
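
    The idea of mixing stored relations with live sensor readings can be sketched as below; the tables, the stand-in reading generator and the filtering are invented for illustration and are not COUGAR's query language or implementation.

        import random
        import sqlite3

        # Stored relation: static metadata about the sensors.
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE sensors (sensor_id INTEGER, location TEXT)")
        db.executemany("INSERT INTO sensors VALUES (?, ?)", [(1, "hall"), (2, "roof")])

        def readings(sensor_id, n=3):
            """Stand-in for a sensor time series: each call yields fresh measurements."""
            for _ in range(n):
                yield sensor_id, round(20 + random.random() * 5, 2)

        # A "long-running query": join live readings against the stored relation and
        # keep only the roof sensor, mimicking a persistent view maintained over time.
        locations = dict(db.execute("SELECT sensor_id, location FROM sensors"))
        for sensor_id in locations:
            for sid, temperature in readings(sensor_id):
                if locations[sid] == "roof":
                    print(sid, temperature)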

  13. Database Publication Practices

    Bernstein, P.A.; DeWitt, D.; Heuer, A.

    2005-01-01

    There has been a growing interest in improving the publication processes for database research papers. This panel reports on recent changes in those processes and presents an initial cut at historical data for the VLDB Journal and ACM Transactions on Database Systems.

  14. Smart Location Database - Download

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census block...

  15. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Full Text Available ase Description General information of database Database name RMG Alternative name ...raki 305-8602, Japan National Institute of Agrobiological Sciences E-mail : Database... classification Nucleotide Sequence Databases Organism Taxonomy Name: Oryza sativa Japonica Group Taxonomy ID: 39947 Database...rnal: Mol Genet Genomics (2002) 268: 434–445 External Links: Original website information Database...available URL of Web services - Need for user registration Not available About This Database Database Descri

  16. Database Description - KOME | LSDB Archive [Life Science Database Archive metadata

    Full Text Available base Description General information of database Database name KOME Alternative nam... Sciences Plant Genome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice ...Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description Information about approximately ...Hayashizaki Y, Kikuchi S. Journal: PLoS One. 2007 Nov 28; 2(11):e1235. External Links: Original website information Database...OS) Rice mutant panel database (Tos17) A Database of Plant Cis-acting Regulatory

  17. Update History of This Database - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Full Text Available List Contact us Arabidopsis Phenome Database Update History of This Database Date Update contents 2017/02/27 Arabidopsis Phenome Data...base English archive site is opened. - Arabidopsis Phenome Database (http://jphenom...e.info/?page_id=95) is opened. About This Database Database Description Download License Update History of This Database... Site Policy | Contact Us Update History of This Database - Arabidopsis Phenome Database | LSDB Archive ...

  18. Update History of This Database - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Full Text Available List Contact us SKIP Stemcell Database Update History of This Database Date Update contents 2017/03/13 SKIP Stemcell Database... English archive site is opened. 2013/03/29 SKIP Stemcell Database ( https://www.skip.med.k...eio.ac.jp/SKIPSearch/top?lang=en ) is opened. About This Database Database Description Download License Update History of This Databa...se Site Policy | Contact Us Update History of This Database - SKIP Stemcell Database | LSDB Archive ...

  19. Update History of This Database - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Full Text Available List Contact us Yeast Interacting Proteins Database Update History of This Database Date Update contents 201...0/03/29 Yeast Interacting Proteins Database English archive site is opened. 2000/12/4 Yeast Interacting Proteins Database...( http://itolab.cb.k.u-tokyo.ac.jp/Y2H/ ) is released. About This Database Database Description... Download License Update History of This Database Site Policy | Contact Us Update History of This Database... - Yeast Interacting Proteins Database | LSDB Archive ...

  20. Secure Distributed Databases Using Cryptography

    Ion IVAN; Cristian TOMA

    2006-01-01

    The computational encryption is used intensively by different databases management systems for ensuring privacy and integrity of information that are physically stored in files. Also, the information is sent over network and is replicated on different distributed systems. It is proved that a satisfying level of security is achieved if the rows and columns of tables are encrypted independently of table or computer that sustains the data. Also, it is very important that the SQL - Structured Que...

  1. Download - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Full Text Available List Contact us Trypanosomes Database Download First of all, please read the license of this database. Data ...1.4 KB) Simple search and download Download via FTP FTP server is sometimes jammed. If it is, access [here]. About This Database Data...base Description Download License Update History of This Database Site Policy | Contact Us Download - Trypanosomes Database | LSDB Archive ...

  2. Database design and database administration for a kindergarten

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with the creation of a database design for a standard kindergarten, the installation of the designed database into the database system Oracle Database 10g Express Edition, and the demonstration of administration tasks in this database system. The database was verified by means of an access application developed for this purpose.

  3. SSC lattice database and graphical interface

    Trahern, C.G.; Zhou, J.

    1991-11-01

    When completed, the Superconducting Super Collider will be the world's largest accelerator complex. In order to build this system on schedule, the use of database technologies will be essential. In this paper we discuss one of the database efforts underway at the SSC, the lattice database. The SSC lattice database provides a centralized source for the design of each major component of the accelerator complex. This includes the two collider rings, the High Energy Booster, Medium Energy Booster, Low Energy Booster, and the LINAC as well as transfer and test beam lines. These designs have been created using a menagerie of programs such as SYNCH, DIMAD, MAD, TRANSPORT, MAGIC, TRACE3D and TEAPOT. However, once a design has been completed, it is entered into a uniform database schema in the database system. In this paper we discuss the reasons for creating the lattice database and its implementation via the commercial database system SYBASE. Each lattice in the lattice database is composed of a set of tables whose data structure can describe any of the SSC accelerator lattices. In order to allow the user community access to the databases, a programmatic interface known as dbsf (for database to several formats) has been written. Dbsf creates ASCII input files appropriate to the above-mentioned accelerator design programs. In addition, it has a binary dataset output using the Self Describing Standard data discipline provided with the Integrated Scientific Tool Kit software tools. Finally we discuss the graphical interfaces to the lattice database. The primary interface, known as OZ, is a simulation environment as well as a database browser.
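
    To make the idea of a uniform lattice schema concrete, here is a small sketch of an element table and an export step to an ASCII lattice file; the columns and the output format are invented for illustration and are not the actual SYBASE schema or the formats produced by dbsf.

        import sqlite3

        # Invented, simplified lattice element table (not the actual SSC schema).
        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE elements (
            position INTEGER, name TEXT, kind TEXT, length_m REAL, strength REAL)""")
        db.executemany("INSERT INTO elements VALUES (?, ?, ?, ?, ?)", [
            (1, "QF1", "QUADRUPOLE", 1.5, 0.012),
            (2, "D1",  "DRIFT",      4.0, 0.0),
            (3, "B1",  "SBEND",      6.0, 0.003),
        ])

        # dbsf-like step: dump the stored lattice as an ASCII file a design code could read.
        with open("lattice.txt", "w") as out:
            for pos, name, kind, length, strength in db.execute(
                    "SELECT * FROM elements ORDER BY position"):
                out.write(f"{name}: {kind}, L={length}, K={strength}\n")
        print(open("lattice.txt").read())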

  4. Directory of IAEA databases

    1992-12-01

    This second edition of the Directory of IAEA Databases has been prepared within the Division of Scientific and Technical Information (NESI). Its main objective is to describe the computerized information sources available to staff members. This directory contains all databases produced at the IAEA, including databases stored on the mainframe, LANs and PCs. All IAEA Division Directors have been requested to register the existence of their databases with NESI. For the second edition, database owners were requested to review the existing entries for their databases and answer four additional questions. The four additional questions concerned the type of database (e.g. Bibliographic, Text, Statistical etc.), the category of database (e.g. Administrative, Nuclear Data etc.), the available documentation and the type of media used for distribution. In the individual entries on the following pages the answers to the first two questions (type and category) are always listed, but the answers to the second two questions (documentation and media) are only listed when information has been made available.

  5. Balkan Vegetation Database

    Vassilev, Kiril; Pedashenko, Hristo; Alexandrova, Alexandra; Tashev, Alexandar; Ganeva, Anna; Gavrilova, Anna; Gradevska, Asya; Assenov, Assen; Vitkova, Antonina; Grigorov, Borislav; Gussev, Chavdar; Filipova, Eva; Aneva, Ina; Knollová, Ilona; Nikolov, Ivaylo; Georgiev, Georgi; Gogushev, Georgi; Tinchev, Georgi; Pachedjieva, Kalina; Koev, Koycho; Lyubenova, Mariyana; Dimitrov, Marius; Apostolova-Stoyanova, Nadezhda; Velev, Nikolay; Zhelev, Petar; Glogov, Plamen; Natcheva, Rayna; Tzonev, Rossen; Boch, Steffen; Hennekens, Stephan M.; Georgiev, Stoyan; Stoyanov, Stoyan; Karakiev, Todor; Kalníková, Veronika; Shivarov, Veselin; Russakova, Veska; Vulchev, Vladimir

    2016-01-01

    The Balkan Vegetation Database (BVD; GIVD ID: EU-00-019; http://www.givd.info/ID/EU-00- 019) is a regional database that consists of phytosociological relevés from different vegetation types from six countries on the Balkan Peninsula (Albania, Bosnia and Herzegovina, Bulgaria, Kosovo, Montenegro

  6. World Database of Happiness

    R. Veenhoven (Ruut)

    1995-01-01

    textabstractABSTRACT The World Database of Happiness is an ongoing register of research on subjective appreciation of life. Its purpose is to make the wealth of scattered findings accessible, and to create a basis for further meta-analytic studies. The database involves four sections:
    1.

  7. Fire test database

    Lee, J.A.

    1989-01-01

    This paper describes a project recently completed for EPRI by Impell. The purpose of the project was to develop a reference database of fire tests performed on non-typical fire rated assemblies. The database is designed for use by utility fire protection engineers to locate test reports for power plant fire rated assemblies. As utilities prepare to respond to Information Notice 88-04, the database will identify utilities, vendors or manufacturers who have specific fire test data. The database contains fire test report summaries for 729 tested configurations. For each summary, a contact is identified from whom a copy of the complete fire test report can be obtained. Five types of configurations are included: doors, dampers, seals, wraps and walls. The database is computerized, with one version for IBM PCs and one for the Mac. Each database is accessed through user-friendly software which allows adding, deleting, browsing, etc. through the database. There are five major database files, one for each of the five types of tested configurations. The contents of each provide significant information regarding the test method and the physical attributes of the tested configuration. 3 figs

  8. Children's Culture Database (CCD)

    Wanting, Birgit

    a Dialogue inspired database with documentation, network (individual and institutional profiles) and current news, paper presented at the research seminar: Electronic access to fiction, Copenhagen, November 11-13, 1996.

  9. Atomic Spectra Database (ASD)

    SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access)   This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.

  10. Consumer Product Category Database

    The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use information is compiled from multiple sources while product information is gathered from publicly available Material Safety Data Sheets (MSDS). EPA researchers are evaluating the possibility of expanding the database with additional product and use information.

  11. Database in Artificial Intelligence.

    Wilkinson, Julia

    1986-01-01

    Describes a specialist bibliographic database of literature in the field of artificial intelligence created by the Turing Institute (Glasgow, Scotland) using the BRS/Search information retrieval software. The subscription method for end-users--i.e., annual fee entitles user to unlimited access to database, document provision, and printed awareness…

  12. NoSQL database scaling

    Žardin, Norbert

    2017-01-01

    NoSQL database scaling is a decision where system resources or financial expenses are traded for database performance or other benefits. By scaling a database, database performance and resource usage might increase or decrease, and such changes might have a negative impact on an application that uses the database. In this work it is analyzed how database scaling affects database resource usage and performance. As a result, calculations are acquired, using which database scaling types and differe...

  13. The LHCb configuration database

    Abadie, L; Van Herwijnen, Eric; Jacobsson, R; Jost, B; Neufeld, N

    2005-01-01

    The aim of the LHCb configuration database is to store information about all the controllable devices of the detector. The experiment's control system (that uses PVSS ) will configure, start up and monitor the detector from the information in the configuration database. The database will contain devices with their properties, connectivity and hierarchy. The ability to store and rapidly retrieve huge amounts of data, and the navigability between devices are important requirements. We have collected use cases to ensure the completeness of the design. Using the entity relationship modelling technique we describe the use cases as classes with attributes and links. We designed the schema for the tables using relational diagrams. This methodology has been applied to the TFC (switches) and DAQ system. Other parts of the detector will follow later. The database has been implemented using Oracle to benefit from central CERN database support. The project also foresees the creation of tools to populate, maintain, and co...
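
    A minimal sketch of tables that capture devices with properties, hierarchy and connectivity is given below; the layout is an illustrative guess and not the schema actually designed for the LHCb configuration database.

        import sqlite3

        # Illustrative guess at device tables with hierarchy and connectivity.
        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE devices (
            device_id INTEGER PRIMARY KEY,
            name      TEXT,
            parent_id INTEGER REFERENCES devices(device_id)   -- hierarchy
        );
        CREATE TABLE connections (                            -- connectivity
            from_device INTEGER REFERENCES devices(device_id),
            to_device   INTEGER REFERENCES devices(device_id)
        );
        """)
        db.executemany("INSERT INTO devices VALUES (?, ?, ?)",
                       [(1, "tfc_switch_0", None), (2, "readout_board_7", 1)])
        db.execute("INSERT INTO connections VALUES (1, 2)")

        # Navigate one hop from a device, as a configuration browser might.
        print(db.execute("""SELECT d.name FROM connections c
                            JOIN devices d ON d.device_id = c.to_device
                            WHERE c.from_device = 1""").fetchall())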

  14. Mycobacteriophage genome database.

    Joseph, Jerrine; Rajendran, Vasanthi; Hassan, Sameer; Kumar, Vanaja

    2011-01-01

    Mycobacteriophage genome database (MGDB) is an exclusive repository of the 64 completely sequenced mycobacteriophages with annotated information. It is a comprehensive compilation of the various gene parameters captured from several databases pooled together to empower mycobacteriophage researchers. The MGDB (Version No.1.0) comprises of 6086 genes from 64 mycobacteriophages classified into 72 families based on ACLAME database. Manual curation was aided by information available from public databases which was enriched further by analysis. Its web interface allows browsing as well as querying the classification. The main objective is to collect and organize the complexity inherent to mycobacteriophage protein classification in a rational way. The other objective is to browse the existing and new genomes and describe their functional annotation. The database is available for free at http://mpgdb.ibioinformatics.org/mpgdb.php.

  15. Database automation of accelerator operation

    Casstevens, B.J.; Ludemann, C.A.

    1982-01-01

    The Oak Ridge Isochronous Cyclotron (ORIC) is a variable energy, multiparticle accelerator that produces beams of energetic heavy ions which are used as probes to study the structure of the atomic nucleus. To accelerate and transmit a particular ion at a specified energy to an experimenter's apparatus, the electrical currents in up to 82 magnetic field producing coils must be established to accuracies of from 0.1 to 0.001 percent. Mechanical elements must also be positioned by means of motors or pneumatic drives. A mathematical model of this complex system provides a good approximation of operating parameters required to produce an ion beam. However, manual tuning of the system must be performed to optimize the beam quality. The database system was implemented as an on-line query and retrieval system running at a priority lower than the cyclotron real-time software. It was designed for matching beams recorded in the database with beams specified for experiments. The database is relational and permits searching on ranges of any subset of the eleven beam categorizing attributes. A beam file selected from the database is transmitted to the cyclotron general control software which handles the automatic slewing of power supply currents and motor positions to the file values, thereby replicating the desired parameters
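
    The range search over beam attributes can be sketched as below; the attribute names and values are invented for illustration, and only a few of the eleven categorizing attributes are shown.

        import sqlite3

        # Invented beam table with a few of the categorizing attributes.
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE beams (ion TEXT, energy_mev REAL, charge_state INTEGER)")
        db.executemany("INSERT INTO beams VALUES (?, ?, ?)", [
            ("O-16", 100.0, 5), ("O-16", 140.0, 6), ("Ni-58", 350.0, 11)])

        # Range search over a subset of attributes, as when matching a requested beam
        # against previously recorded tunes.
        rows = db.execute("""SELECT * FROM beams
                             WHERE ion = ? AND energy_mev BETWEEN ? AND ?""",
                          ("O-16", 90.0, 120.0)).fetchall()
        print(rows)  # [('O-16', 100.0, 5)]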

  16. Database Description - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Full Text Available List Contact us SKIP Stemcell Database Database Description General information of database Database name SKIP Stemcell Database...rsity Journal Search: Contact address http://www.skip.med.keio.ac.jp/en/contact/ Database classification Human Genes and Diseases Dat...abase classification Stemcell Article Organism Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database...ks: Original website information Database maintenance site Center for Medical Genetics, School of medicine, ...lable Web services Not available URL of Web services - Need for user registration Not available About This Database Database

  17. DOE Order 5480.28 Hanford facilities database

    Hayenga, J.L., Westinghouse Hanford

    1996-09-01

    This document describes the development of a database of DOE and/or leased Hanford Site Facilities. The completed database will consist of structure/facility parameters essential to the prioritization of these structures for natural phenomena hazard vulnerability in compliance with DOE Order 5480.28, `Natural Phenomena Hazards Mitigation`. The prioritization process will be based upon the structure/facility vulnerability to natural phenomena hazards. The ACCESS based database, `Hanford Facilities Site Database`, is generated from current Hanford Site information and databases.

  18. Comparison of approximations in density functional theory calculations: Energetics and structure of binary oxides

    Hinuma, Yoyo; Hayashi, Hiroyuki; Kumagai, Yu; Tanaka, Isao; Oba, Fumiyasu

    2017-09-01

    High-throughput first-principles calculations based on density functional theory (DFT) are a powerful tool in data-oriented materials research. The choice of approximation to the exchange-correlation functional is crucial as it strongly affects the accuracy of DFT calculations. This study compares performance of seven approximations, six of which are based on Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA) with and without Hubbard U and van der Waals corrections (PBE, PBE+U, PBED3, PBED3+U, PBEsol, and PBEsol+U), and the strongly constrained and appropriately normed (SCAN) meta-GGA on the energetics and crystal structure of elementary substances and binary oxides. For the latter, only those with closed-shell electronic structures are considered, examples of which include Cu2O, Ag2O, MgO, ZnO, CdO, SnO, PbO, Al2O3, Ga2O3, In2O3, La2O3, Bi2O3, SiO2, SnO2, PbO2, TiO2, ZrO2, HfO2, V2O5, Nb2O5, Ta2O5, MoO3, and WO3. Prototype crystal structures are selected from the Inorganic Crystal Structure Database (ICSD) and cation substitution is used to make a set of existing and hypothetical oxides. Two indices are proposed to quantify the extent of lattice and internal coordinate relaxation during a calculation. The former is based on the second invariant and determinant of the transformation matrix of basis vectors from before relaxation to after relaxation, and the latter is derived from shifts of internal coordinates of atoms in the unit cell. PBED3, PBEsol, and SCAN reproduce experimental lattice parameters of elementary substances and oxides well with few outliers. Notably, PBEsol and SCAN predict the lattice parameters of low dimensional structures comparably well with PBED3, even though these two functionals do not explicitly treat van der Waals interactions. SCAN gives formation enthalpies and Gibbs free energies closest to experimental data, with mean errors (MEs) of 0.01 and -0.04 eV, respectively, and root
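
    As a rough illustration of the first index mentioned above (a lattice-relaxation measure built from the transformation matrix between the basis vectors before and after relaxation), here is a small NumPy sketch; the precise combination of determinant and second invariant used in the paper is not reproduced, so this is only a plausible reading rather than the published definition.

        import numpy as np

        def transformation_matrix(basis_before, basis_after):
            """P such that each relaxed lattice vector is P @ (unrelaxed vector); rows are vectors."""
            return basis_after.T @ np.linalg.inv(basis_before.T)

        def lattice_change_indicators(basis_before, basis_after):
            """Determinant (volume change) and second invariant of the transformation."""
            p = transformation_matrix(basis_before, basis_after)
            det = np.linalg.det(p)
            second_invariant = 0.5 * (np.trace(p) ** 2 - np.trace(p @ p))
            return det, second_invariant

        a0 = 4.0 * np.eye(3)                      # cubic cell before relaxation
        a1 = np.diag([4.1, 4.1, 3.9])             # slightly distorted cell after relaxation
        print(lattice_change_indicators(a0, a1))  # values near (1.0, 3.0) for small changes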

  19. Database for propagation models

    Kantak, Anil V.

    1991-07-01

    A propagation researcher or a systems engineer who intends to use the results of a propagation experiment is generally faced with various database tasks such as the selection of the computer software, the hardware, and the writing of the programs to pass the data through the models of interest. This task is repeated every time a new experiment is conducted or the same experiment is carried out at a different location, generating different data. Thus the users of this data have to spend a considerable portion of their time learning how to implement the computer hardware and the software towards the desired end. This situation may be facilitated considerably if an easily accessible propagation database is created that has all the accepted (standardized) propagation phenomena models approved by the propagation research community. Also, the handling of data will become easier for the user. Such a database construction can only stimulate the growth of the propagation research if it is available to all the researchers, so that the results of the experiment conducted by one researcher can be examined independently by another, without different hardware and software being used. The database may be made flexible so that the researchers need not be confined only to the contents of the database. Another way in which the database may help the researchers is by the fact that they will not have to document the software and hardware tools used in their research since the propagation research community will know the database already. The following sections show a possible database construction, as well as properties of the database for the propagation research.

  20. Database citation in full text biomedical articles.

    Kafkas, Şenay; Kim, Jee-Hyub; McEntyre, Johanna R

    2013-01-01

    Molecular biology and literature databases represent essential infrastructure for life science research. Effective integration of these data resources requires that there are structured cross-references at the level of individual articles and biological records. Here, we describe the current patterns of how database entries are cited in research articles, based on analysis of the full text Open Access articles available from Europe PMC. Focusing on citation of entries in the European Nucleotide Archive (ENA), UniProt and Protein Data Bank, Europe (PDBe), we demonstrate that text mining doubles the number of structured annotations of database record citations supplied in journal articles by publishers. Many thousands of new literature-database relationships are found by text mining, since these relationships are also not present in the set of articles cited by database records. We recommend that structured annotation of database records in articles is extended to other databases, such as ArrayExpress and Pfam, entries from which are also cited widely in the literature. The very high precision and high-throughput of this text-mining pipeline makes this activity possible both accurately and at low cost, which will allow the development of new integrated data services.
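
    As a small illustration of the pattern matching such a text-mining pipeline relies on, the sketch below pulls candidate PDB and UniProt accessions out of free text with regular expressions; the patterns are simplified approximations of the official formats, and the pipeline described in the article is considerably more involved.

        import re

        # Simplified accession patterns (approximations, not the official grammars).
        PDB_ID  = re.compile(r"\b[1-9][A-Za-z0-9]{3}\b")
        UNIPROT = re.compile(r"\b(?:[OPQ][0-9][A-Z0-9]{3}[0-9]|[A-NR-Z][0-9][A-Z][A-Z0-9]{2}[0-9])\b")

        text = ("The structure was deposited as PDB entry 1TUP, and the protein "
                "corresponds to UniProt accession P04637.")

        print("PDB candidates:    ", PDB_ID.findall(text))
        print("UniProt candidates:", UNIPROT.findall(text))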

  1. Product Licenses Database Application

    Tonkovikj, Petar

    2016-01-01

    The goal of this project is to organize and centralize the data about software tools available to CERN employees, as well as provide a system that would simplify the license management process by providing information about the available licenses and their expiry dates. The project development process consists of two steps: modeling the products (software tools), product licenses, legal agreements and other data related to these entities in a relational database, and developing the front-end user interface so that the user can interact with the database. The result is an ASP.NET MVC web application with interactive views for displaying and managing the data in the underlying database.

  2. LandIT Database

    Iftikhar, Nadeem; Pedersen, Torben Bach

    2010-01-01

    and reporting purposes. This paper presents the LandIT database, which is the result of the LandIT project, an industrial collaboration project that developed technologies for communication and data integration between farming devices and systems. The LandIT database in principle is based...... on the ISOBUS standard; however, the standard is extended with additional requirements, such as gradual data aggregation and flexible exchange of farming data. This paper describes the conceptual and logical schemas of the proposed database based on a real-life farming case study....

  3. Structure, design and statistical programme library of the meteorological database of Potsdam-Institut fuer Klimaforschung; Struktur, Aufbau und statistische Programmbibliothek der meteorologischen Datenbank am Potsdam-Institut fuer Klimaforschung

    Oesterle, H.; Glauer, J. [Potsdam-Institut fuer Klimafolgenforschung (PIK), Potsdam (Germany); Denhard, M. [Frankfurt Univ. (Germany). Inst. fuer Meteorologie und Geophysik

    1999-01-01

    The relational database management system (ORACLE), using different client interfaces (browser, SQL, precompiler), is the most important basis for data organization and storage. The creation of a database system at PIK includes: acquisition of meteorological data series for the projects executed at PIK; development of a database structure on the basis of daily values; control, analysis and classification of data into given storage formats; and development and application of statistical software. There are currently 20 different types of data sets with daily, monthly and annual data which are functionally interconnected. All data sets are continuously updated and extended. (orig.)

  4. Livestock Anaerobic Digester Database

    The Anaerobic Digester Database provides basic information about anaerobic digesters on livestock farms in the United States, organized in Excel spreadsheets. It includes projects that are under construction, operating, or shut down.

  5. Toxicity Reference Database

    U.S. Environmental Protection Agency — The Toxicity Reference Database (ToxRefDB) contains approximately 30 years and $2 billion worth of animal studies. ToxRefDB allows scientists and the interested...

  6. Dissolution Methods Database

    U.S. Department of Health & Human Services — For a drug product that does not have a dissolution test method in the United States Pharmacopeia (USP), the FDA Dissolution Methods Database provides information on...

  7. OTI Activity Database

    US Agency for International Development — OTI's worldwide activity database is a simple and effective information system that serves as a program management, tracking, and reporting tool. In each country,...

  8. ARTI Refrigerant Database

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1994-05-27

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  9. Marine Jurisdictions Database

    Goldsmith, Roger

    1998-01-01

    The purpose of this project was to take the data gathered for the Maritime Claims chart and create a Maritime Jurisdictions digital database suitable for use with oceanographic mission planning objectives...

  10. Medicaid CHIP ESPC Database

    U.S. Department of Health & Human Services — The Environmental Scanning and Program Characteristic (ESPC) Database is in a Microsoft (MS) Access format and contains Medicaid and CHIP data, for the 50 states and...

  11. Records Management Database

    US Agency for International Development — The Records Management Database is tool created in Microsoft Access specifically for USAID use. It contains metadata in order to access and retrieve the information...

  12. Reach Address Database (RAD)

    U.S. Environmental Protection Agency — The Reach Address Database (RAD) stores the reach address of each Water Program feature that has been linked to the underlying surface water features (streams,...

  13. Household Products Database: Pesticides


  14. Mouse Phenome Database (MPD)

    U.S. Department of Health & Human Services — The Mouse Phenome Database (MPD) has characterizations of hundreds of strains of laboratory mice to facilitate translational discoveries and to assist in selection...

  15. Consumer Product Category Database

    U.S. Environmental Protection Agency — The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use...

  16. Drycleaner Database - Region 7

    U.S. Environmental Protection Agency — THIS DATA ASSET NO LONGER ACTIVE: This is metadata documentation for the Region 7 Drycleaner Database (R7DryClnDB) which tracks all Region7 drycleaners who notify...

  17. National Assessment Database

    U.S. Environmental Protection Agency — The National Assessment Database stores and tracks state water quality assessment decisions, Total Maximum Daily Loads (TMDLs) and other watershed plans designed to...

  18. IVR RSA Database

    National Oceanic and Atmospheric Administration, Department of Commerce — This database contains trip-level reports submitted by vessels participating in Research Set-Aside projects with IVR reporting requirements.

  19. Rat Genome Database (RGD)

    U.S. Department of Health & Human Services — The Rat Genome Database (RGD) is a collaborative effort between leading research institutions involved in rat genetic and genomic research to collect, consolidate,...

  20. The CAPEC Database

    Nielsen, Thomas Lund; Abildskov, Jens; Harper, Peter Mathias

    2001-01-01

    The Computer-Aided Process Engineering Center (CAPEC) database of measured data was established with the aim to promote greater data exchange in the chemical engineering community. The target properties are pure component properties, mixture properties, and special drug solubility data. The database divides pure component properties into primary, secondary, and functional properties. Mixture properties are categorized in terms of the number of components in the mixture and the number of phases present. The compounds in the database have been classified on the basis of the functional groups in the compound. This classification makes the CAPEC database a very useful tool, for example, in the development of new property models, since properties of chemically similar compounds are easily obtained. A program with efficient search and retrieval functions of properties has been developed.

  1. Danish Urogynaecological Database

    Hansen, Ulla Darling; Gradel, Kim Oren; Larsen, Michael Due

    2016-01-01

    The Danish Urogynaecological Database is established in order to ensure high quality of treatment for patients undergoing urogynecological surgery. The database contains details of all women in Denmark undergoing incontinence surgery or pelvic organ prolapse surgery amounting to ~5,200 procedures ..., complications if relevant, implants used if relevant, 3-6-month postoperative recording of symptoms, if any. A set of clinical quality indicators is being maintained by the steering committee for the database and is published in an annual report which also contains extensive descriptive statistics. The database has a completeness of over 90% of all urogynecological surgeries performed in Denmark. Some of the main variables have been validated using medical records as gold standard. The positive predictive value was above 90%. The data are used as a quality monitoring tool by the hospitals and in a number...

  2. The Danish Urogynaecological Database

    Guldberg, Rikke; Brostrøm, Søren; Hansen, Jesper Kjær

    2013-01-01

    INTRODUCTION AND HYPOTHESIS: The Danish Urogynaecological Database (DugaBase) is a nationwide clinical database established in 2006 to monitor, ensure and improve the quality of urogynaecological surgery. We aimed to describe its establishment and completeness and to validate selected variables. This is the first study based on data from the DugaBase. METHODS: The database completeness was calculated as a comparison between urogynaecological procedures reported to the Danish National Patient Registry and to the DugaBase. Validity was assessed for selected variables from a random sample of 200 women ... in the DugaBase from 1 January 2009 to 31 October 2010, using medical records as a reference. RESULTS: A total of 16,509 urogynaecological procedures were registered in the DugaBase by 31 December 2010. The database completeness has increased by calendar time, from 38.2 % in 2007 to 93.2 % in 2010 for public...

  3. Danish Pancreatic Cancer Database

    Fristrup, Claus; Detlefsen, Sönke; Palnæs Hansen, Carsten

    2016-01-01

AIM OF DATABASE: The Danish Pancreatic Cancer Database aims to prospectively register the epidemiology, diagnostic workup, diagnosis, treatment, and outcome of patients with pancreatic cancer in Denmark at an institutional and national level. STUDY POPULATION: Since May 1, 2011, all patients with microscopically verified ductal adenocarcinoma of the pancreas have been registered in the database. As of June 30, 2014, the total number of patients registered was 2,217. All data are cross-referenced with the Danish Pathology Registry and the Danish Patient Registry to ensure the completeness of registrations... Death is monitored using data from the Danish Civil Registry. This registry monitors the survival status of the Danish population, and the registration is virtually complete. All data in the database are audited by all participating institutions, with respect to baseline characteristics, key indicators...

  4. Food Habits Database (FHDBS)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NEFSC Food Habits Database has two major sources of data. The first, and most extensive, is the standard NEFSC Bottom Trawl Surveys Program. During these...

  5. Functionally Graded Materials Database

    Kisara, Katsuto; Konno, Tomomi; Niino, Masayuki

    2008-02-01

The Functionally Graded Materials Database (hereinafter referred to as the FGMs Database) was opened to the public via the Internet in October 2002 and has since been managed by the Japan Aerospace Exploration Agency (JAXA). As of October 2006, the database included 1,703 research information entries, together with data on 2,429 researchers, 509 institutions, and so on. Reading materials such as "Applicability of FGMs Technology to Space Plane" and "FGMs Application to Space Solar Power System (SSPS)" were prepared in FY 2004 and 2005, respectively. The English version of "FGMs Application to Space Solar Power System (SSPS)" is now under preparation. This paper explains the FGMs Database, describing the research information data, the sitemap and how to use it. User access results and users' interests, derived from the access analysis, are also discussed.

  6. Tethys Acoustic Metadata Database

    National Oceanic and Atmospheric Administration, Department of Commerce — The Tethys database houses the metadata associated with the acoustic data collection efforts by the Passive Acoustic Group. These metadata include dates, locations...

  7. NLCD 2011 database

    U.S. Environmental Protection Agency — National Land Cover Database 2011 (NLCD 2011) is the most recent national land cover product created by the Multi-Resolution Land Characteristics (MRLC) Consortium....

  8. Medicare Coverage Database

    U.S. Department of Health & Human Services — The Medicare Coverage Database (MCD) contains all National Coverage Determinations (NCDs) and Local Coverage Determinations (LCDs), local articles, and proposed NCD...

  9. Household Products Database

    U.S. Department of Health & Human Services — This database links over 4,000 consumer brands to health effects from Material Safety Data Sheets (MSDS) provided by the manufacturers and allows scientists and...

  10. Global Volcano Locations Database

    National Oceanic and Atmospheric Administration, Department of Commerce — NGDC maintains a database of over 1,500 volcano locations obtained from the Smithsonian Institution Global Volcanism Program, Volcanoes of the World publication. The...

  11. 1988 Spitak Earthquake Database

    National Oceanic and Atmospheric Administration, Department of Commerce — The 1988 Spitak Earthquake database is an extensive collection of geophysical and geological data, maps, charts, images and descriptive text pertaining to the...

  12. Uranium Location Database

    U.S. Environmental Protection Agency — A GIS compiled locational database in Microsoft Access of ~15,000 mines with uranium occurrence or production, primarily in the western United States. The metadata...

  13. INIST: databases reorientation

    Bidet, J.C.

    1995-01-01

INIST is a CNRS (Centre National de la Recherche Scientifique) laboratory devoted to the processing of scientific and technical information and to the management of this information compiled in a database. A reorientation of the database content was proposed in 1994 to increase the transfer of research towards enterprises and services, to develop more automated access to the information, and to create a quality assurance plan. The catalogue of publications comprises 5,800 periodical titles (1,300 for fundamental research and 4,500 for applied research). A multi-thematic science and technology database will be created in 1995 for the retrieval of applied and technical information. "Grey literature" (reports, theses, proceedings...) and human and social sciences data will be added to the base by using information selected from the existing GRISELI and Francis databases. Strong modifications are also planned in the thematic coverage of Earth sciences and will considerably reduce the geological information content. (J.S.). 1 tab

  14. Fine Arts Database (FAD)

    General Services Administration — The Fine Arts Database records information on federally owned art in the control of the GSA; this includes the location, current condition and information on artists.

  15. Kansas Cartographic Database (KCD)

    Kansas Data Access and Support Center — The Kansas Cartographic Database (KCD) is an exact digital representation of selected features from the USGS 7.5 minute topographic map series. Features that are...

  16. Database Replication Prototype

    Vandewall, R.

    2000-01-01

This report describes the design of a Replication Framework that facilitates the implementation and comparison of database replication techniques. Furthermore, it discusses the implementation of a Database Replication Prototype and compares the performance measurements of two replication techniques based on the Atomic Broadcast communication primitive: pessimistic active replication and optimistic active replication. The main contributions of this report can be split into four parts....

  17. Database on Wind Characteristics

    Højstrup, J.; Ejsing Jørgensen, Hans; Lundtang Petersen, Erik

    1999-01-01

This report describes the work and results of the project Database on Wind Characteristics, which was sponsored partly by the European Commission within the framework of the JOULE III programme under contract JOR3-CT95-0061.

  18. ORACLE DATABASE SECURITY

    Cristina-Maria Titrade

    2011-01-01

This paper presents some security issues, namely database system-level security, data-level security, user-level security, user management, resource management and password management. Security is a constant concern in database design and development. Usually the question is not whether security should exist, but rather how extensive it should be. A typical DBMS has several levels of security, in addition to those offered by the operating system or network. Typically, a DBMS has user a...

  19. Database computing in HEP

    Day, C.T.; Loken, S.; MacFarlane, J.F.; May, E.; Lifka, D.; Lusk, E.; Price, L.E.; Baden, A.

    1992-01-01

The major SSC experiments are expected to produce up to 1 Petabyte of data per year each. Once the primary reconstruction is completed by farms of inexpensive processors, I/O becomes a major factor in further analysis of the data. We believe that the application of database techniques can significantly reduce the I/O performed in these analyses. We present examples of such I/O reductions in prototypes based on relational and object-oriented databases of CDF data samples

  20. A trending database for human performance events

    Harrison, D.

    1993-01-01

An effective Operations Experience program includes a standardized methodology for the investigation of unplanned events and a tool capable of retaining investigation data for the purpose of trending analysis. A database used in conjunction with a formalized investigation procedure for the purpose of trending unplanned event data is described. The database follows the structure of INPO's Human Performance Enhancement System (HPES) for investigations. The database's on-line screens duplicate the HPES evaluation forms; all information pertaining to investigations is collected, retained and entered into the database using these forms. The database will be used for trending analysis to determine if any significant patterns exist, for tracking progress over time both within AECL and against industry standards, and for evaluating the success of corrective actions. Trending information will be used to help prevent similar occurrences
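A minimal sketch of the idea, assuming a simplified record layout rather than the actual HPES form fields: events are entered once (duplicates are screened out) and trending is a count over cause categories.

```python
# Illustrative sketch only (field names are hypothetical, not INPO's HPES schema):
# store unplanned-event investigations, screen out duplicates, and trend by cause code.
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class EventRecord:
    event_id: str
    date: str            # ISO date of the occurrence
    station: str
    cause_code: str      # e.g. "procedure", "communication", "work practices"
    description: str

class TrendingDatabase:
    def __init__(self):
        self._records = {}

    def add(self, record: EventRecord) -> bool:
        """Reject duplicates on entry, mimicking on-line duplicate screening."""
        if record.event_id in self._records:
            return False
        self._records[record.event_id] = record
        return True

    def trend_by_cause(self) -> Counter:
        """Count events per cause code to reveal recurring patterns."""
        return Counter(r.cause_code for r in self._records.values())

db = TrendingDatabase()
db.add(EventRecord("E-001", "1992-03-14", "Unit 1", "procedure", "Step skipped"))
db.add(EventRecord("E-002", "1992-05-02", "Unit 2", "procedure", "Wrong revision used"))
db.add(EventRecord("E-001", "1992-03-14", "Unit 1", "procedure", "Step skipped"))  # duplicate, rejected
print(db.trend_by_cause())
```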

  1. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

Update history of the Trypanosomes Database: 2014/05/07 - the contact information was corrected, and the features and manner of utilization of the database were corrected; 2014/02/04 - the Trypanosomes Database English archive site was opened; 2011/04/04 - the Trypanosomes Database ( http://www.tanpaku.org/tdb/ ) was opened.

  2. Specialist Bibliographic Databases.

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls.

  3. Specialist Bibliographic Databases

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  4. Aging management database

    Vidican, Dan

    2003-01-01

As operation time is accumulated, the overall safety and performance of an NPP tend to decrease. The reasons for potential non-availability of Structures, Systems and Components (SSC) in operation are various, but in different ways they all represent the end result of ageing phenomena. In order to understand ageing phenomena and to be able to take adequate countermeasures, it is necessary to accumulate a large amount of information, both from worldwide experience and from one's own plant. These data have to be organized in a systematic form that is easy to retrieve and use. General requirements and structure of an ageing database: activities related to ageing evaluation have to allow: - identification and evaluation of degradation phenomena, potential malfunction and failure modes of the plant's typical components; - trend analyses (on selected critical components), prediction of future performance and of the remaining service life. To perform these activities, it is necessary to have information on the behaviour of similar components in different NPPs (in different environments and different operating conditions) and also the results from different pilot studies. Knowledge of worldwide experience is worthwhile. Also, it is necessary to know very well the operating and environmental conditions in one's own NPP and to analyze in detail the failure mode and root cause for components removed from the plant due to extended degradation. Based on the above aspects, a proposal for the structure of an ageing database is presented. It has three main sections: - Section A: general knowledge about ageing phenomena. It contains all the information collected from worldwide experience. It could have a general part with raw information and a synthetic one, structured by typical components (if possible by different manufacturers). The synthetic part has to consider different ageing aspects and different monitoring and evaluation methods (e. g. component, function, environment condition, specific

  5. Requirements and specifications for a particle database

    2015-01-01

    One of the tasks of WPEC Subgroup 38 (SG38) is to design a database structure for storing the particle information needed for nuclear reaction databases and transport codes. Since the same particle may appear many times in a reaction database (produced by many different reactions on different targets), one of the long-term goals for SG38 is to move towards a central database of particle information to reduce redundancy and ensure consistency among evaluations. The database structure must be general enough to describe all relevant particles and their properties, including mass, charge, spin and parity, half-life, decay properties, and so on. Furthermore, it must be broad enough to handle not only excited nuclear states but also excited atomic states that can de-excite through atomic relaxation. Databases built with this hierarchy will serve as central repositories for particle information that can be linked to from codes and other databases. It is hoped that the final product is general enough for use in other projects besides SG38. While this is called a 'particle database', the definition of a particle (as described in Section 2) is very broad. The database must describe nucleons, nuclei, excited nuclear states (and possibly atomic states) in addition to fundamental particles like photons, electrons, muons, etc. Under this definition the list of possible particles becomes quite large. To help organize them the database will need a way of grouping related particles (e.g., all the isotopes of an element, or all the excited levels of an isotope) together into particle 'groups'. The database will also need a way to classify particles that belong to the same 'family' (such as 'leptons', 'baryons', etc.). Each family of particles may have special requirements as to what properties are required. One important function of the particle database will be to provide an easy way for codes and external databases to look up any particle stored inside. In order to make access as
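The following sketch illustrates one possible shape for such a hierarchy; the field names, family labels and property values are illustrative assumptions, not the SG38 format.

```python
# Hedged sketch of one possible hierarchy (not the SG38 format itself): particles
# carry properties, belong to a family, and can be grouped (e.g. all excited
# levels of an isotope). Values below are illustrative only.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Particle:
    pid: str                     # unique identifier, e.g. "Fe56", "Fe56_e1", "photon"
    family: str                  # "baryon", "lepton", "boson", "nucleus", ...
    mass_amu: float
    charge: int
    spin: Optional[float] = None
    parity: Optional[int] = None
    half_life_s: Optional[float] = None     # None means stable
    decays: list = field(default_factory=list)

class ParticleDatabase:
    def __init__(self):
        self._particles = {}
        self._groups = {}        # group name -> list of pids

    def add(self, particle, groups=()):
        if particle.pid in self._particles:
            raise ValueError(f"{particle.pid} already defined")   # enforce a single definition
        self._particles[particle.pid] = particle
        for g in groups:
            self._groups.setdefault(g, []).append(particle.pid)

    def get(self, pid):
        return self._particles[pid]           # central lookup for codes and databases

    def group(self, name):
        return [self._particles[p] for p in self._groups.get(name, [])]

db = ParticleDatabase()
db.add(Particle("photon", "boson", 0.0, 0, spin=1.0))
db.add(Particle("Fe56", "nucleus", 55.9349, 26, spin=0.0, parity=1),
       groups=["Fe isotopes", "Fe56 levels"])
db.add(Particle("Fe56_e1", "nucleus", 55.9349, 26, spin=2.0, parity=1, half_life_s=6.6e-12),
       groups=["Fe56 levels"])
print([p.pid for p in db.group("Fe56 levels")])
```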

  6. Why Save Your Course as a Relational Database?

    Hamilton, Gregory C.; Katz, David L.; Davis, James E.

    2000-01-01

    Describes a system that stores course materials for computer-based training programs in a relational database called Of Course! Outlines the basic structure of the databases; explains distinctions between Of Course! and other authoring languages; and describes how data is retrieved from the database and presented to the student. (Author/LRW)

  7. International nuclear safety center database on material properties

    Fink, J.K.

    1996-01-01

The International Nuclear Safety Center database on the following material properties is described: fuel, cladding, absorbers, moderators, structural materials, coolants, concretes, liquid mixtures, and uranium dioxide

  8. A Bayesian Network Schema for Lessening Database Inference

    Chang, LiWu; Moskowitz, Ira S

    2001-01-01

    .... The authors introduce a formal schema for database inference analysis, based upon a Bayesian network structure, which identifies critical parameters involved in the inference problem and represents...

  9. Computational 2D Materials Database

    Rasmussen, Filip Anselm; Thygesen, Kristian Sommer

    2015-01-01

We present a comprehensive first-principles study of the electronic structure of 51 semiconducting monolayer transition-metal dichalcogenides and -oxides in the 2H and 1T hexagonal phases. The quasiparticle (QP) band structures with spin-orbit coupling are calculated in the G(0)W(0) approximation, and comparison is made with different density functional theory descriptions. Pitfalls related to the convergence of GW calculations for two-dimensional (2D) materials are discussed together with possible solutions. The monolayer band edge positions relative to vacuum are used to estimate the band alignment ... and used as input to a 2D hydrogenic model to estimate exciton binding energies. Throughout the paper we focus on trends and correlations in the electronic structure rather than detailed analysis of specific materials. All the computed data is available in an open database.

  10. A Database Interface for Complex Objects

    Holsheimer, Marcel; de By, Rolf A.; de By, R.A.; Ait-Kaci, Hassan

    We describe a formal design for a logical query language using psi-terms as data structures to interact effectively and efficiently with a relational database. The structure of psi-terms provides an adequate representation for so-called complex objects. They generalize conventional terms used in

  11. Worldwide databases in marine geology: A review

    Kunte, P.D.

... such as image capturing, multimedia and geographic information systems (GIS) should be utilized. Information managers need to collaborate with subject experts in order to maintain the high quality of the databases. 1. Introduction: With the advent of computers ... coordination between the information providers and management centres. Within the databases there is no uniformity in the structure, storage and operating systems. Every producer...

  12. Database Description - DGBY | LSDB Archive [Life Science Database Archive metadata

General information of the database: database name DGBY; database classification: Microarray Data and other Gene Expression Databases; organism: Saccharomyces cerevisiae (Taxonomy ID: 4932). Database description: ... (so-called phenomics). We uploaded these data on this website, which is designated DGBY (Database for Gene expres...). Reference: ...ma J, Ando A, Takagi H. Journal: Yeast. 2008 Mar;25(3):179-90.

  13. The CUTLASS database facilities

    Jervis, P.; Rutter, P.

    1988-09-01

    The enhancement of the CUTLASS database management system to provide improved facilities for data handling is seen as a prerequisite to its effective use for future power station data processing and control applications. This particularly applies to the larger projects such as AGR data processing system refurbishments, and the data processing systems required for the new Coal Fired Reference Design stations. In anticipation of the need for improved data handling facilities in CUTLASS, the CEGB established a User Sub-Group in the early 1980's to define the database facilities required by users. Following the endorsement of the resulting specification and a detailed design study, the database facilities have been implemented as an integral part of the CUTLASS system. This paper provides an introduction to the range of CUTLASS Database facilities, and emphasises the role of Database as the central facility around which future Kit 1 and (particularly) Kit 6 CUTLASS based data processing and control systems will be designed and implemented. (author)

  14. ADANS database specification

    NONE

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.
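A small sketch of the kind of physical design the DS describes (tables, indexes and column defaults), using SQLite and hypothetical table names for movement requirements and schedule legs; it is not the actual ADANS schema.

```python
# Illustrative only: the ADANS DS covers tables, indexes, rules and defaults; this
# sketch mimics that kind of physical design with hypothetical tables for movement
# requirements and schedules (not the real ADANS data model).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE movement_requirement (
    req_id        INTEGER PRIMARY KEY,
    origin        TEXT NOT NULL,
    destination   TEXT NOT NULL,
    cargo_tons    REAL NOT NULL DEFAULT 0.0,      -- column default
    priority      INTEGER NOT NULL DEFAULT 3
);
CREATE TABLE schedule (
    leg_id        INTEGER PRIMARY KEY,
    req_id        INTEGER NOT NULL REFERENCES movement_requirement(req_id),
    tail_number   TEXT NOT NULL,
    depart_utc    TEXT NOT NULL,
    arrive_utc    TEXT NOT NULL
);
CREATE INDEX idx_schedule_req ON schedule(req_id);  -- index to speed joins and reports
""")

conn.execute("INSERT INTO movement_requirement (req_id, origin, destination, cargo_tons) "
             "VALUES (1, 'KDOV', 'ETAR', 42.0)")
conn.execute("INSERT INTO schedule VALUES (10, 1, '50123', '1997-01-16T08:00', '1997-01-16T15:30')")

# A simple formatted report, standing in for a standard reporting capability.
for row in conn.execute("""
        SELECT r.req_id, r.origin, r.destination, s.tail_number, s.depart_utc
        FROM movement_requirement r JOIN schedule s ON s.req_id = r.req_id"""):
    print(row)
```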

  15. Immune epitope database analysis resource

    Kim, Yohan; Ponomarenko, Julia; Zhu, Zhanyang

    2012-01-01

    The immune epitope database analysis resource (IEDB-AR: http://tools.iedb.org) is a collection of tools for prediction and analysis of molecular targets of T- and B-cell immune responses (i.e. epitopes). Since its last publication in the NAR webserver issue in 2008, a new generation of peptide......, and the homology mapping tool was updated to enable mapping of discontinuous epitopes onto 3D structures. Furthermore, to serve a wider range of users, the number of ways in which IEDB-AR can be accessed has been expanded. Specifically, the predictive tools can be programmatically accessed using a web interface...

  16. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

General information of the database: database name Trypanosomes Database; creator: ...stitute of Genetics, Research Organization of Information and Systems, Yata 1111, Mishima, Shizuoka 411-8540, Japan. Organisms: Trypanosoma (Taxonomy ID: 5690) and Homo sapiens (Taxonomy ID: 9606). External links: PDB (Protein Data Bank), KEGG PATHWAY Database, DrugPort. Entry list available; query search available; web servic...

  17. Relational databases for SSC design and control

    Barr, E.; Peggs, S.; Saltmarsh, C.

    1989-01-01

Most people agree that a database is A Good Thing, but there is much confusion in the jargon used, and in what jobs a database management system and its peripheral software can and cannot do. During the life cycle of an enormous project like the SSC, from conceptual and theoretical design, through research and development, to construction, commissioning and operation, an enormous amount of data will be generated. Some of these data, originating in the early parts of the project, will be needed during commissioning or operation, many years in the future. Two of these pressing data management needs, from the magnet research and industrialization programs and from the lattice design, have prompted work on understanding and adapting commercial database practices for scientific projects. Modern relational database management systems (rDBMSs) cope naturally with a large proportion of the requirements of data structures, like the SSC database structure built for the superconducting cable supplies, uses, and properties. This application is similar to the commercial applications for which these database systems were developed. The SSC application has further requirements not immediately satisfied by the commercial systems. These derive from the diversity of the data structures to be managed, the changing emphases and uses during the project lifetime, and the large amount of scientific data processing to be expected. 4 refs., 5 figs

  18. Computerized nuclear material database management system for power reactors

    Cheng Binghao; Zhu Rongbao; Liu Daming; Cao Bin; Liu Ling; Tan Yajun; Jiang Jincai

    1994-01-01

The software packages for nuclear material database management for power reactors are described. The database structure, data flow and the model for management of the database are analysed. Also mentioned are the main functions and characteristics of the software packages, which have been successfully installed and used at both the Daya Bay Nuclear Power Plant and the Qinshan Nuclear Power Plant for the purpose of handling the nuclear material database automatically

  19. Software Engineering Laboratory (SEL) database organization and user's guide

    So, Maria; Heller, Gerard; Steinberg, Sandra; Spiegel, Douglas

    1989-01-01

    The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base tables is described. In addition, techniques for accessing the database, through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL), are discussed.
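A hedged sketch of the base-table/view distinction mentioned above: a hypothetical effort table is exposed through a reporting view and queried with plain SQL (SQLite here standing in for ORACLE SQL access).

```python
# Hedged sketch: a hypothetical base table of effort data is exposed through a
# per-project view and queried with plain SQL. The table, view and project names
# are invented, not the real SEL schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE effort (               -- hypothetical base table
    project   TEXT,
    activity  TEXT,
    hours     REAL
);
CREATE VIEW project_totals AS       -- view mapping base data to a reporting shape
    SELECT project, SUM(hours) AS total_hours
    FROM effort GROUP BY project;
""")
conn.executemany("INSERT INTO effort VALUES (?, ?, ?)", [
    ("PROJ_A", "design", 120.0),
    ("PROJ_A", "coding", 310.5),
    ("PROJ_B", "testing", 95.0),
])
for project, total in conn.execute("SELECT * FROM project_totals ORDER BY total_hours DESC"):
    print(project, total)
```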

  20. Open Geoscience Database

    Bashev, A.

    2012-04-01

Currently there is an enormous amount of various geoscience databases. Unfortunately, the only users of the majority of these databases are their elaborators. There are several reasons for that: incompatibility, specificity of tasks and objects, and so on. However, the main obstacles to wide usage of geoscience databases are complexity for elaborators and complication for users. The complexity of architecture leads to high costs that block public access. The complication prevents users from understanding when and how to use the database. Only databases associated with GoogleMaps don't have these drawbacks, but they could hardly be named "geoscience". Nevertheless, an open and simple geoscience database is necessary at least for educational purposes (see our abstract for ESSI20/EOS12). We developed a database and a web interface to work with it, and now it is accessible at maps.sch192.ru. In this database a result is a value of a parameter (no matter which) at a station with a certain position, associated with metadata: the date when the result was obtained; the type of the station (lake, soil etc); the contributor that sent the result. Each contributor has their own profile, which allows the reliability of the data to be estimated. The results can be represented on a GoogleMaps space image as a point at a certain position, coloured according to the value of the parameter. There are default colour scales and each registered user can create their own scale. The results can also be extracted as a *.csv file. For both types of representation one can select the data by date, object type, parameter type, area and contributor. The data are uploaded in *.csv format: Name of the station; Latitude (dd.dddddd); Longitude (ddd.dddddd); Station type; Parameter type; Parameter value; Date (yyyy-mm-dd). The contributor is recognised while entering the data. This is the minimal set of features required to connect a value of a parameter with a position and see the results. All the complicated data
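Given the upload format quoted above, a small sketch of how such records could be parsed and coloured by parameter value (the colour scale and sample rows are made up):

```python
# Sketch based on the upload format quoted above (station; latitude; longitude;
# station type; parameter type; parameter value; date). The colour scale and the
# sample rows are illustrative, not the site's actual implementation.
import csv, io

SAMPLE = """\
Name of the station;Latitude;Longitude;Station type;Parameter type;Parameter value;Date
Lake-01;55.751244;37.618423;lake;pH;7.4;2012-04-01
Soil-07;55.802345;37.512345;soil;pH;5.9;2012-04-02
"""

def load_results(text):
    reader = csv.DictReader(io.StringIO(text), delimiter=";")
    rows = []
    for r in reader:
        r["Parameter value"] = float(r["Parameter value"])
        r["Latitude"] = float(r["Latitude"])
        r["Longitude"] = float(r["Longitude"])
        rows.append(r)
    return rows

def colour(value, low=6.0, high=8.0):
    """Map a value onto a simple three-step colour scale (the default scale is made up)."""
    if value < low:
        return "red"
    if value > high:
        return "blue"
    return "green"

for row in load_results(SAMPLE):
    if row["Parameter type"] == "pH":            # select by parameter type
        print(row["Name of the station"], row["Parameter value"], colour(row["Parameter value"]))
```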

  1. IAEA nuclear databases for applications

    Schwerer, Otto

    2003-01-01

    The Nuclear Data Section (NDS) of the International Atomic Energy Agency (IAEA) provides nuclear data services to scientists on a worldwide scale with particular emphasis on developing countries. More than 100 data libraries are made available cost-free by Internet, CD-ROM and other media. These databases are used for practically all areas of nuclear applications as well as basic research. An overview is given of the most important nuclear reaction and nuclear structure databases, such as EXFOR, CINDA, ENDF, NSR, ENSDF, NUDAT, and of selected special purpose libraries such as FENDL, RIPL, RNAL, the IAEA Photonuclear Data Library, and the IAEA charged-particle cross section database for medical radioisotope production. The NDS also coordinates two international nuclear data centre networks and is involved in data development activities (to create new or improve existing data libraries when the available data are inadequate) and in technology transfer to developing countries, e.g. through the installation and support of the mirror web site of the IAEA Nuclear Data Services at IPEN (operational since March 2000) and by organizing nuclear-data related workshops. By encouraging their participation in IAEA Co-ordinated Research Projects and also by compiling their experimental results in databases such as EXFOR, the NDS helps to make developing countries' contributions to nuclear science visible and conveniently available. The web address of the IAEA Nuclear Data Services is http://www.nds.iaea.org and the NDS mirror service at IPEN (Brasil) can be accessed at http://www.nds.ipen.br/ (author)

  2. The LHCb configuration database

    Abadie, Lana; Gaspar, Clara; Jacobsson, Richard; Jost, Beat; Neufeld, Niko

    2005-01-01

The Experiment Control System (ECS) will handle the monitoring, configuration and operation of all the LHCb experimental equipment. All parameters required to configure electronics equipment under the control of the ECS will reside in a configuration database. The database will contain two kinds of information: 1. Configuration properties of devices, such as hardware addresses, geographical location, and operational parameters associated with particular running modes (dynamic properties). 2. Connectivity between devices: this consists of describing the output and input connections of a device (static properties). The representation of these data using tables must be complete so that it can provide all the required information to the ECS and must cater for all the subsystems. The design should also guarantee a fast response time, even if a query results in a large volume of data being loaded from the database into the ECS. To fulfil these constraints, we apply the following methodology: Determine from the d...

  3. Tibetan Magmatism Database

    Chapman, James B.; Kapp, Paul

    2017-11-01

A database containing previously published geochronologic, geochemical, and isotopic data on Mesozoic to Quaternary igneous rocks in the Himalayan-Tibetan orogenic system is presented. The database is intended to serve as a repository for new and existing igneous rock data and is publicly accessible through a web-based platform that includes an interactive map and a data table interface with search, filtering, and download options. To illustrate the utility of the database, the age, location, and εHf(t) composition of magmatism from the central Gangdese batholith in the southern Lhasa terrane are compared. The data identify three high-flux events, which peak at 93, 50, and 15 Ma. They are characterized by inboard arc migration and a temporal and spatial shift to more evolved isotopic compositions.
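A brief sketch of the kind of filtering the data table interface supports, using hypothetical column names and sample values rather than the published database schema:

```python
# Illustrative sketch (column names and values are hypothetical, not the published
# schema): filter compiled igneous-rock records and look at how many dated samples
# fall near one of the reported flux peaks (~50 Ma).
import pandas as pd

records = pd.DataFrame({
    "sample":   ["GD-01", "GD-02", "GD-03", "GD-04"],
    "age_ma":   [93.2, 51.0, 49.4, 15.3],
    "lat":      [29.6, 29.7, 29.5, 29.8],
    "lon":      [91.0, 91.2, 90.9, 91.1],
    "eps_hf_t": [8.5, 6.1, 5.7, -2.3],
})

window = records[(records["age_ma"] >= 45) & (records["age_ma"] <= 55)]
print(len(window), "samples in the 45-55 Ma window")
print("mean value of the eps_hf_t column:", window["eps_hf_t"].mean())
```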

  4. The Danish Sarcoma Database

    Jørgensen, Peter Holmberg; Lausten, Gunnar Schwarz; Pedersen, Alma B

    2016-01-01

AIM: The aim of the database is to gather information about sarcomas treated in Denmark in order to continuously monitor and improve the quality of sarcoma treatment in a local, a national, and an international perspective. STUDY POPULATION: Patients in Denmark diagnosed with a sarcoma, both skeletal and extraskeletal, have been registered since 2009. MAIN VARIABLES: The database contains information about appearance of symptoms; date of receiving referral to a sarcoma center; date of first visit; whether surgery has been performed elsewhere before referral, diagnosis, and treatment; tumor ... of Diseases - tenth edition codes and TNM Classification of Malignant Tumours, and date of death (after yearly coupling to the Danish Civil Registration System). Data quality and completeness are currently secured. CONCLUSION: The Danish Sarcoma Database is population based and includes sarcomas occurring...

  5. Danish Gynecological Cancer Database

    Sørensen, Sarah Mejer; Bjørn, Signe Frahm; Jochumsen, Kirsten Marie

    2016-01-01

    AIM OF DATABASE: The Danish Gynecological Cancer Database (DGCD) is a nationwide clinical cancer database and its aim is to monitor the treatment quality of Danish gynecological cancer patients, and to generate data for scientific purposes. DGCD also records detailed data on the diagnostic measures...... data forms as follows: clinical data, surgery, pathology, pre- and postoperative care, complications, follow-up visits, and final quality check. DGCD is linked with additional data from the Danish "Pathology Registry", the "National Patient Registry", and the "Cause of Death Registry" using the unique...... Danish personal identification number (CPR number). DESCRIPTIVE DATA: Data from DGCD and registers are available online in the Statistical Analysis Software portal. The DGCD forms cover almost all possible clinical variables used to describe gynecological cancer courses. The only limitation...

  6. RODOS database adapter

    Xie Gang

    1995-11-01

Integrated data management is an essential aspect of many automated information systems such as RODOS, a real-time on-line decision support system for nuclear emergency management. In particular, the application software must provide access management to different commercial database systems. This report presents the tools necessary for adapting embedded SQL applications to both HP-ALLBASE/SQL and CA-Ingres/SQL databases. The design of the database adapter and the concept of the RODOS embedded SQL syntax are discussed by considering some of the most important features of SQL functions and by identifying significant differences between SQL implementations. Finally, the software developed is fully described, together with the administrator's and installation guides. (orig.) [de

  7. The Danish Depression Database

    Videbech, Poul Bror Hemming; Deleuran, Anette

    2016-01-01

    AIM OF DATABASE: The purpose of the Danish Depression Database (DDD) is to monitor and facilitate the improvement of the quality of the treatment of depression in Denmark. Furthermore, the DDD has been designed to facilitate research. STUDY POPULATION: Inpatients as well as outpatients...... with depression, aged above 18 years, and treated in the public psychiatric hospital system were enrolled. MAIN VARIABLES: Variables include whether the patient has been thoroughly somatically examined and has been interviewed about the psychopathology by a specialist in psychiatry. The Hamilton score as well...... as an evaluation of the risk of suicide are measured before and after treatment. Whether psychiatric aftercare has been scheduled for inpatients and the rate of rehospitalization are also registered. DESCRIPTIVE DATA: The database was launched in 2011. Every year since then ~5,500 inpatients and 7,500 outpatients...

  8. 600 MW nuclear power database

    Cao Ruiding; Chen Guorong; Chen Xianfeng; Zhang Yishu

    1996-01-01

The 600 MW nuclear power database, based on ORACLE 6.0, consists of three parts, i.e. a nuclear power plant database, a nuclear power position database and a nuclear power equipment database. The database contains a great deal of technical data and pictures related to nuclear power, provided by engineering design units and individuals. The database can assist the designers of nuclear power

  9. The Neotoma Paleoecology Database

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and for developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software-interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provide real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out-of-date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community
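A hedged sketch of client-side use of such web services; the JSON layout below is a made-up stand-in, not the documented Neotoma API response format, so the real field names should be taken from the current service documentation.

```python
# Hedged sketch: the web services return data from a table or a view; here a canned
# JSON payload (shape is hypothetical, not the documented Neotoma API) stands in for
# a live response so the parsing logic can be shown offline.
import json

canned_response = json.loads("""
{
  "success": 1,
  "data": [
    {"siteName": "Example Bog", "datasetType": "pollen", "ageOlder": 12000},
    {"siteName": "Example Lake", "datasetType": "diatom", "ageOlder": 8000}
  ]
}
""")

def list_datasets(payload, dataset_type=None):
    """Filter the returned records client-side, e.g. keep only pollen datasets."""
    rows = payload.get("data", [])
    if dataset_type is not None:
        rows = [r for r in rows if r.get("datasetType") == dataset_type]
    return rows

for r in list_datasets(canned_response, dataset_type="pollen"):
    print(r["siteName"], r["ageOlder"])
```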

  10. The Danish Sarcoma Database

    Jorgensen PH

    2016-10-01

Peter Holmberg Jørgensen,1 Gunnar Schwarz Lausten,2 Alma B Pedersen3 1Tumor Section, Department of Orthopedic Surgery, Aarhus University Hospital, Aarhus, 2Tumor Section, Department of Orthopedic Surgery, Rigshospitalet, Copenhagen, 3Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, Denmark Aim: The aim of the database is to gather information about sarcomas treated in Denmark in order to continuously monitor and improve the quality of sarcoma treatment in a local, a national, and an international perspective. Study population: Patients in Denmark diagnosed with a sarcoma, both skeletal and extraskeletal, have been registered since 2009. Main variables: The database contains information about appearance of symptoms; date of receiving referral to a sarcoma center; date of first visit; whether surgery has been performed elsewhere before referral, diagnosis, and treatment; tumor characteristics such as location, size, malignancy grade, and growth pattern; details on treatment (kind of surgery, amount of radiation therapy, type and duration of chemotherapy); complications of treatment; local recurrence and metastases; and comorbidity. In addition, several quality indicators are registered in order to measure the quality of care provided by the hospitals and make comparisons between hospitals and with international standards. Descriptive data: Demographic patient-specific data such as age, sex, region of living, comorbidity, World Health Organization's International Classification of Diseases – tenth edition codes and TNM Classification of Malignant Tumours, and date of death (after yearly coupling to the Danish Civil Registration System). Data quality and completeness are currently secured. Conclusion: The Danish Sarcoma Database is population based and includes sarcomas occurring in Denmark since 2009. It is a valuable tool for monitoring sarcoma incidence and quality of treatment and its improvement, postoperative

  11. C# Database Basics

    Schmalz, Michael

    2012-01-01

    Working with data and databases in C# certainly can be daunting if you're coming from VB6, VBA, or Access. With this hands-on guide, you'll shorten the learning curve considerably as you master accessing, adding, updating, and deleting data with C#-basic skills you need if you intend to program with this language. No previous knowledge of C# is necessary. By following the examples in this book, you'll learn how to tackle several database tasks in C#, such as working with SQL Server, building data entry forms, and using data in a web service. The book's code samples will help you get started

  12. Danish Palliative Care Database

    Grønvold, Mogens; Adsersen, Mathilde; Hansen, Maiken Bang

    2016-01-01

    Aims: The aim of the Danish Palliative Care Database (DPD) is to monitor, evaluate, and improve the clinical quality of specialized palliative care (SPC) (ie, the activity of hospital-based palliative care teams/departments and hospices) in Denmark. Study population: The study population is all...... patients were registered in DPD during the 5 years 2010–2014. Of those registered, 96% had cancer. Conclusion: DPD is a national clinical quality database for SPC having clinically relevant variables and high data and patient completeness....

  13. The Danish Anaesthesia Database

    Antonsen, Kristian; Rosenstock, Charlotte Vallentin; Lundstrøm, Lars Hyldborg

    2016-01-01

    AIM OF DATABASE: The aim of the Danish Anaesthesia Database (DAD) is the nationwide collection of data on all patients undergoing anesthesia. Collected data are used for quality assurance, quality development, and serve as a basis for research projects. STUDY POPULATION: The DAD was founded in 2004....... In addition, an annual DAD report is a benchmark for departments nationwide. CONCLUSION: The DAD is covering the anesthetic process for the majority of patients undergoing anesthesia in Denmark. Data in the DAD are increasingly used for both quality and research projects....

  14. MARKS ON ART database

    van Vlierden, Marieke; Wadum, Jørgen; Wolters, Margreet

    2016-01-01

Masters' marks, monograms and quality marks are often embossed or stamped on works of art from 1300-1700. An illustrated database of these types of marks is being established at the Netherlands Institute for Art History (RKD) in The Hague.

  15. The magnet database system

    Baggett, P.; Delagi, N.; Leedy, R.; Marshall, W.; Robinson, S.L.; Tompkins, J.C.

    1991-01-01

    This paper describes the current status of MagCom, a central database of SSC magnet information that is available to all magnet scientists via network connections. The database has been designed to contain the specifications and measured values of important properties for major materials, plus configuration information (specifying which individual items were used in each cable, coil, and magnet) and the test results on completed magnets. These data will help magnet scientists to track and control the production process and to correlate the performance of magnets with the properties of their constituents

  16. Yucca Mountain digital database

    Daudt, C.R.; Hinze, W.J.

    1992-01-01

    This paper discusses the Yucca Mountain Digital Database (DDB) which is a digital, PC-based geographical database of geoscience-related characteristics of the proposed high-level waste (HLW) repository site of Yucca Mountain, Nevada. It was created to provide the US Nuclear Regulatory Commission's (NRC) Advisory Committee on Nuclear Waste (ACNW) and its staff with a visual perspective of geological, geophysical, and hydrological features at the Yucca Mountain site as discussed in the Department of Energy's (DOE) pre-licensing reports

  17. ARTI refrigerant database

    Calm, J.M.

    1998-03-15

The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to thermophysical properties, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants to assist manufacturers and those using alternative refrigerants in making comparisons and determining differences. The underlying purpose is to accelerate the phase-out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air conditioning and refrigeration equipment. It also references documents addressing the compatibility of refrigerants and lubricants with other materials.

  18. Surgery Risk Assessment (SRA) Database

    Department of Veterans Affairs — The Surgery Risk Assessment (SRA) database is part of the VA Surgical Quality Improvement Program (VASQIP). This database contains assessments of selected surgical...

  19. License - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

The Trypanosomes Database is provided under the Creative Commons Attribution-Share Alike 2.1 Japan license. If you use data from this database, please be sure to attribute this database as follows: Trypanoso...

  20. MARC and Relational Databases.

    Llorens, Jose; Trenor, Asuncion

    1993-01-01

    Discusses the use of MARC format in relational databases and addresses problems of incompatibilities. A solution is presented that is in accordance with Open Systems Interconnection (OSI) standards and is based on experiences at the library of the Universidad Politecnica de Valencia (Spain). (four references) (EA)

  1. Teaching Historians with Databases.

    Burton, Vernon

    1993-01-01

    Asserts that, although pressures to publish have detracted from the quality of teaching at the college level, recent innovations in educational technology have created opportunities for instructional improvement. Describes the use of computer-assisted instruction and databases in college-level history courses. (CFR)

  2. Literature database aid

    Wanderer, J.A.

    1991-01-01

The booklet is intended to help with the acquisition of original literature, either after a conventional literature search or, in particular, after a database search. It bridges the gap between abbreviated (short) and original (long) titles. This, together with information on the holdings of technical/scientific libraries, facilitates document delivery. 1,500 short titles are listed alphabetically. (orig.) [de

  3. Oversigt over databaser

    Krogh Graversen, Brian

This is an overview of registers that can be used to shed light on the situation and developments in the social area. The overview is the second phase of a data project whose purpose is to establish a database that can form the basis for ongoing monitoring, assessment, evaluation and research in the...

  4. LHCb distributed conditions database

    Clemencic, M

    2008-01-01

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications are using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica of the Conditions Database have been performed and the results will be summarized here

  5. Database Programming Languages

This volume contains the proceedings of the 11th International Symposium on Database Programming Languages (DBPL 2007), held in Vienna, Austria, on September 23-24, 2007. DBPL 2007 was one of 15 meetings co-located with VLDB (the International Conference on Very Large Data Bases). DBPL continues...

  6. From database to normbase

    Stamper, R.K.; Liu, Kecheng; Liu, K.; Kolkman, M.; Kolkman, M.; Klarenberg, P.; Ades, Y.; van Slooten, C.; van Slooten, F.; Ades, Y.

    1991-01-01

    After the database concept, we are ready for the normbase concept. The object is to decouple organizational and technical knowledge that are now mixed inextricably together in the application programs we write today. The underlying principle is to find a way of specifying a social system as a system

  7. Database on wind characteristics

    Hansen, K.S. [The Technical Univ. of Denmark (Denmark); Courtney, M.S. [Risoe National Lab., (Denmark)

    1999-08-01

The organisations that participated in the project comprise five research organisations: MIUU (Sweden), ECN (The Netherlands), CRES (Greece), DTU (Denmark), Risoe (Denmark) and one wind turbine manufacturer: Vestas Wind System A/S (Denmark). The overall goal was to build a database consisting of a large number of wind speed time series and to create tools for efficiently searching through the data to select interesting cases. The project resulted in a database located at DTU, Denmark, with online access through the Internet. The database contains more than 50,000 hours of measured wind speed data. A wide range of wind climates and terrain types are represented with significant amounts of time series. Data have been chosen selectively, with a deliberate over-representation of high-wind and complex-terrain cases. This makes the database ideal for wind turbine design needs but completely unsuitable for resource studies. Diversity has also been an important aim, and this is realised with data from a large range of terrain types: everything from offshore to mountain, from Norway to Greece. (EHS)

  8. Database Description - SSBD | LSDB Archive [Life Science Database Archive metadata

General information of the database: database name SSBD; contact address 2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe 650-0047, Japan, RIKEN Quantitative Biology Center, Shuichi Onami. Database classification: Other Molecular Biology Databases; dynamic databa... Organisms: Caenorhabditis elegans (Taxonomy ID: 6239) and Escherichia coli (Taxonomy ID: 562). Database description: Systems Scie... Reference: ...i Onami. Journal: Bioinformatics, April 2015, Volume 31, Issue 7.

  9. Database Description - GETDB | LSDB Archive [Life Science Database Archive metadata

General information of the database: database name GETDB; alternative name Gal4 Enhancer Trap Insertion Database; DOI 10.18908/lsdba.nbdc00236-000; creator name: Shigeo Haya... Chuo-ku, Kobe 650-0047; Tel: +81-78-306-3185; FAX: +81-78-306-3183. Database classification: Expression... Invertebrate genome database. Organism: Drosophila melanogaster (Taxonomy ID: 7227). Database des... Database maintenance site: Drosophila Genetic Resource

  10. The AMMA database

    Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim

    2010-05-01

The AMMA project includes aircraft, ground-based and ocean measurements, an intensive use of satellite data and diverse modelling studies. Therefore, the AMMA database aims at storing a great amount and a large variety of data, and at providing the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: - AMMA field campaign datasets; - historical data in West Africa from 1850 (operational networks and previous scientific programmes); - satellite products from past and future satellites, (re-)mapped on a regular latitude/longitude grid and stored in NetCDF format (CF Convention); - model outputs from atmosphere or ocean operational (re-)analyses and forecasts, and from research simulations; the outputs are processed in the same way as the satellite products. Before accessing the data, any user has to sign the AMMA data and publication policy. This charter only covers the use of data in the framework of scientific objectives and categorically excludes the redistribution of data to third parties and usage for commercial applications. Some collaboration between data producers and users, and the mention of the AMMA project in any publication, are also required. The AMMA database and the associated on-line tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris and OMP, Toulouse). Users can access data from both data centres using a single web portal. This website is composed of different modules: - Registration: forms to register, read and sign the data use charter when a user visits for the first time; - Data access interface: a user-friendly tool for building a data extraction request by selecting various criteria like location, time, parameters... The request can
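A small sketch of working with gridded products of the kind described above (regular latitude/longitude grid, CF-style metadata); the variable names and the toy grid are assumptions, not an actual AMMA product.

```python
# Hedged sketch: satellite products are (re-)mapped onto a regular latitude/longitude
# grid and stored as CF-convention NetCDF; this builds a tiny stand-in grid with
# xarray and extracts a West-African sub-region, the kind of request a data
# extraction interface would build. Variable names and values are made up.
import numpy as np
import xarray as xr

lat = np.arange(0.0, 30.0, 1.0)          # regular grid, degrees north
lon = np.arange(-20.0, 20.0, 1.0)        # degrees east
rain = xr.DataArray(
    np.random.rand(lat.size, lon.size) * 10.0,
    coords={"lat": lat, "lon": lon},
    dims=("lat", "lon"),
    attrs={"units": "mm/day", "long_name": "daily precipitation"},   # CF-style metadata
)
ds = xr.Dataset({"precip": rain})

# Sub-set by area, as a typical extraction request would (e.g. a Sahel box).
sahel = ds.sel(lat=slice(10, 20), lon=slice(-15, 10))
print(sahel["precip"].mean().item(), "mm/day over the selected box")

# ds.to_netcdf("amma_like_subset.nc")    # would write the subset back out as NetCDF
```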

  11. JDD, Inc. Database

    Miller, David A., Jr.

    2004-01-01

JDD, Inc. is a maintenance and custodial contracting company whose mission is to provide its clients in the private and government sectors "quality construction, construction management and cleaning services in the most efficient and cost effective manners" (JDD, Inc. Mission Statement). The company provides facilities support for Fort Riley in Fort Riley, Kansas and the NASA John H. Glenn Research Center at Lewis Field here in Cleveland, Ohio. JDD, Inc. is owned and operated by James Vaughn, who started as a painter at NASA Glenn and has been working here for the past seventeen years. This summer I worked under Devan Anderson, who is the safety manager for JDD, Inc. in the Logistics and Technical Information Division at Glenn Research Center. The LTID provides all transportation, secretarial and security needs and contract management of these various services for the center. As a safety manager, my mentor provides Occupational Safety and Health Administration (OSHA) compliance for all JDD, Inc. employees and handles all other issues (Environmental Protection Agency issues, workers' compensation, safety and health training) relating to job safety. My summer assignment was not considered "groundbreaking research" like that of many other summer interns in the past, but it is just as important and beneficial to JDD, Inc. I initially created a database using a Microsoft Excel program to classify and categorize data pertaining to numerous safety training certification courses instructed by our safety manager during the course of the fiscal year. This early portion of the database consisted only of data (training field index, employees who were present at these training courses and who was absent) from the training certification courses. Once I completed this phase of the database, I decided to expand the database and add as many dimensions to it as possible. Throughout the last seven weeks, I have been compiling more data from day-to-day operations and been adding the

  12. Database Description - Open TG-GATEs Pathological Image Database | LSDB Archive [Life Science Database Archive metadata

    Database name: Open TG-GATEs Pathological Image Database. Alternative name: -. DOI: 10.18908/lsdba.nbdc00954-0... Creator affiliation: ...iomedical Innovation, 7-6-8 Saito-asagi, Ibaraki-city, Osaka 567-0085, Japan (TEL: 81-72-641-9826). Database classification: Toxicogenomics Database. Organism taxonomy name: Rattus norvegi...

  13. Database Description - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Database name: Yeast Interacting Proteins Database. Alternative name: -. DOI: 10.18908/lsdba.nbdc00742-000. Creator contact: ...-ken 277-8561; Tel: +81-4-7136-3989; FAX: +81-4-7136-3979. Organism: Saccharomyces cerevisiae, Taxonomy ID: 4932. Database description: information on interactions and related information obta... Reference: Proc Natl Acad Sci U S A. 2001 Apr 10;98(8):4569-74. Epub 2001 Mar 13.

  14. The Development of a Lattice Structured Database

    Bruun, Hans

    In this project we have investigated the possibilities of building a system based on the concept algebra described in [3], [4] and [5]. The concept algebra is used for ontology specification and knowledge representation. It is a distributive lattice extended with attribution operations. One of the main...... ideas in this work is to use Birkhoff's representation theorem, so we represent distributive lattices using their dual representation: the partial order of join irreducibles. We show how to construct a concept algebra satisfying a given set of equations. The universal/initial algebra is usually too big...
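
    Birkhoff's representation theorem, invoked above, says that every finite distributive lattice is isomorphic to the lattice of down-sets of its poset of join irreducibles, ordered by inclusion (join = union, meet = intersection). The small sketch below, with an invented three-element poset, enumerates that dual representation; it only illustrates the theorem and is not the system described in the record:

        from itertools import combinations

        # A toy poset of join irreducibles: element -> the set of elements below it (including itself)
        below = {
            "a": {"a"},
            "b": {"b"},
            "c": {"a", "b", "c"},   # c lies above both a and b
        }

        def is_downset(subset):
            """A down-set contains, with every element, everything below that element."""
            return all(below[x] <= subset for x in subset)

        elements = sorted(below)
        downsets = [set(s)
                    for r in range(len(elements) + 1)
                    for s in combinations(elements, r)
                    if is_downset(set(s))]

        # The down-sets ordered by inclusion form the distributive lattice.
        for d in sorted(downsets, key=len):
            print(sorted(d))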

  15. Database management systems understanding and applying database technology

    Gorman, Michael M

    1991-01-01

    Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs). The book first takes a look at ANSI database standards and DBMS applications and components. Discussion focuses on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, the nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders on logical database, interrogation, and phy

  16. PAMDB: a comprehensive Pseudomonas aeruginosa metabolome database.

    Huang, Weiliang; Brewer, Luke K; Jones, Jace W; Nguyen, Angela T; Marcu, Ana; Wishart, David S; Oglesby-Sherrouse, Amanda G; Kane, Maureen A; Wilks, Angela

    2018-01-04

    The Pseudomonas aeruginosa Metabolome Database (PAMDB, http://pseudomonas.umaryland.edu) is a searchable, richly annotated metabolite database specific to P. aeruginosa. P. aeruginosa is a soil organism and significant opportunistic pathogen that adapts to its environment through a versatile energy metabolism network. Furthermore, P. aeruginosa is a model organism for the study of biofilm formation, quorum sensing, and bioremediation processes, each of which is dependent on unique pathways and metabolites. The PAMDB is modelled on the Escherichia coli (ECMDB), yeast (YMDB) and human (HMDB) metabolome databases and contains >4370 metabolites and 938 pathways with links to over 1260 genes and proteins. The database information was compiled from electronic databases, journal articles and mass spectrometry (MS) metabolomic data obtained in our laboratories. For each metabolite entered, we provide detailed compound descriptions, names and synonyms, structural and physiochemical information, nuclear magnetic resonance (NMR) and MS spectra, enzymes and pathway information, as well as gene and protein sequences. The database allows extensive searching via chemical names, structure and molecular weight, together with gene, protein and pathway relationships. The PAMDB and its future iterations will provide a valuable resource to biologists, natural product chemists and clinicians in identifying active compounds, potential biomarkers and clinical diagnostics. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  17. Applications of GIS and database technologies to manage a Karst Feature Database

    Gao, Y.; Tipping, R.G.; Alexander, E.C.

    2006-01-01

    This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications in both GIS and Database Management System (DBMS) have been developed for the KFD of Minnesota. These applications were used to manage and to enhance the usability of the KFD. Structured Query Language (SQL) was used to manipulate transactions of the database and to facilitate the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology of designing this DBMS is applicable to develop GIS-based databases to analyze and manage geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst and the long-term goal is to expand this database to manage and study karst features at national and global scales.
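
    The abstract notes that SQL transactions, access permissions, logs and backups keep the KFD consistent. A minimal, generic sketch of the transactional pattern is shown below; the table, columns and values are hypothetical, and SQLite merely stands in for whatever DBMS the Minnesota KFD actually uses:

        import sqlite3

        conn = sqlite3.connect("karst_features.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS karst_feature (
                            feature_id   INTEGER PRIMARY KEY,
                            feature_type TEXT NOT NULL,
                            county       TEXT,
                            depth_m      REAL)""")

        try:
            with conn:   # one transaction: committed on success, rolled back on any error
                conn.execute("INSERT INTO karst_feature (feature_type, county, depth_m) VALUES (?, ?, ?)",
                             ("sinkhole", "Fillmore", 12.5))
                conn.execute("UPDATE karst_feature SET depth_m = ? WHERE feature_type = ?",
                             (13.0, "sinkhole"))
        except sqlite3.Error as exc:
            print("transaction rolled back:", exc)

        for row in conn.execute("SELECT feature_type, county, depth_m FROM karst_feature"):
            print(row)
        conn.close()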

  18. Design research of uranium mine borehole database

    Xie Huaming; Hu Guangdao; Zhu Xianglin; Chen Dehua; Chen Miaoshun

    2008-01-01

    With energy sources in short supply, uranium exploration has been intensified, but the storage, analysis and use of uranium exploration data are currently not highly computerized in China; the data are poorly shared and used and cannot meet the needs of production and research. This would be much improved if the data were stored and managed in a database system. The conceptual structure design, logical structure design and data integrity checks are discussed according to the demands of the applications and an analysis of uranium exploration data. An application of the database is illustrated finally. (authors)
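
    Since the record mentions conceptual design, logical design and data integrity checks, a compact illustration of how such checks can be declared at the schema level may be useful; the tables, columns and constraints below are hypothetical and use SQLite syntax rather than anything from the paper:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("PRAGMA foreign_keys = ON")

        # Hypothetical logical design: one borehole has many sampled intervals.
        conn.executescript("""
        CREATE TABLE borehole (
            borehole_id   TEXT PRIMARY KEY,
            easting_m     REAL NOT NULL,
            northing_m    REAL NOT NULL,
            total_depth_m REAL NOT NULL CHECK (total_depth_m > 0)
        );
        CREATE TABLE sample_interval (
            borehole_id TEXT NOT NULL REFERENCES borehole(borehole_id),
            from_m      REAL NOT NULL,
            to_m        REAL NOT NULL,
            u_ppm       REAL CHECK (u_ppm >= 0),
            CHECK (to_m > from_m),
            PRIMARY KEY (borehole_id, from_m)
        );
        """)

        conn.execute("INSERT INTO borehole VALUES ('ZK-101', 512345.0, 3412345.0, 250.0)")
        try:
            # violates CHECK (to_m > from_m) and is rejected by the integrity check
            conn.execute("INSERT INTO sample_interval VALUES ('ZK-101', 120.0, 110.0, 35.0)")
        except sqlite3.IntegrityError as exc:
            print("integrity check rejected the row:", exc)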

  19. Database Access through Java Technologies

    Nicolae MERCIOIU

    2010-09-01

    As a high-level development environment, the Java technologies offer support for the development of distributed, platform-independent applications, providing a robust set of methods to access databases and to create software components on the server side as well as on the client side. Analyzing the evolution of the Java tools for data access, we notice that these tools evolved from simple methods permitting queries, insertions, updates and deletions of data to advanced implementations such as distributed transactions, cursors and batch files. The client-server architecture allows, through JDBC (Java Database Connectivity), the execution of SQL (Structured Query Language) instructions and the manipulation of the results in an independent and consistent manner. The JDBC API (Application Programming Interface) creates the level of abstraction needed to allow SQL queries to be issued against any DBMS (Database Management System). The native JDBC driver, the ODBC (Open Database Connectivity)-JDBC bridge, and the classes and interfaces of the JDBC API are described. The four steps needed to build a JDBC-driven application are presented briefly, emphasizing the way each step has to be accomplished and the expected results. In each step, the characteristics of the database systems and the way the JDBC programming interface adapts to each one are evaluated. The data types provided by the SQL2 and SQL3 standards are analyzed by comparison with the Java data types, emphasizing the discrepancies between them, as well as the methods of the ResultSet object that allow conversion between different types of data. Next, starting from the role of metadata and studying the Java programming interfaces that allow querying of result sets, we describe the advanced features of data mining with JDBC. As an alternative to result sets, RowSets add new functionalities that
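
    The four steps the article attributes to a JDBC-driven application (load a driver, open a connection, execute SQL, walk the result set) are language-independent. To keep the examples in this document in a single language, the sketch below mirrors those steps with Python's standard DB-API and SQLite; it is an analogue for illustration only, not the JDBC code discussed in the article:

        import sqlite3                      # step 1: the "driver" ships with the DB-API module

        conn = sqlite3.connect(":memory:")  # step 2: open a connection (JDBC: DriverManager.getConnection)
        cur = conn.cursor()

        cur.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
        cur.execute("INSERT INTO person (name) VALUES (?)", ("Ada",))
        conn.commit()

        cur.execute("SELECT id, name FROM person")    # step 3: execute an SQL statement
        for person_id, name in cur.fetchall():        # step 4: iterate the results (JDBC: ResultSet)
            print(person_id, name)

        conn.close()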

  20. A novel approach: chemical relational databases, and the role of the ISSCAN database on assessing chemical carcinogenicity.

    Benigni, Romualdo; Bossa, Cecilia; Richard, Ann M; Yang, Chihae

    2008-01-01

    Mutagenicity and carcinogenicity databases are crucial resources for toxicologists and regulators involved in chemicals risk assessment. Until recently, existing public toxicity databases have been constructed primarily as "look-up-tables" of existing data, and most often did not contain chemical structures. Concepts and technologies originated from the structure-activity relationships science have provided powerful tools to create new types of databases, where the effective linkage of chemical toxicity with chemical structure can facilitate and greatly enhance data gathering and hypothesis generation, by permitting: a) exploration across both chemical and biological domains; and b) structure-searchability through the data. This paper reviews the main public databases, together with the progress in the field of chemical relational databases, and presents the ISSCAN database on experimental chemical carcinogens.
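
    Structure-searchability of the kind described above means that records can be retrieved by chemical substructure rather than only by name. A toy illustration follows; the three SMILES strings are arbitrary, the RDKit dependency is an assumption, and this is not ISSCAN's own software:

        from rdkit import Chem   # assumed cheminformatics dependency

        # Tiny mock "database": compound name -> SMILES
        records = {
            "benzene": "c1ccccc1",
            "aniline": "Nc1ccccc1",
            "ethanol": "CCO",
        }

        query = Chem.MolFromSmarts("c1ccccc1")   # substructure query: an aromatic six-ring

        hits = [name for name, smi in records.items()
                if Chem.MolFromSmiles(smi).HasSubstructMatch(query)]
        print(hits)   # -> ['benzene', 'aniline']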

  1. De-identifying an EHR Database

    Lauesen, Søren; Pantazos, Kostas; Lippert, Søren

    2011-01-01

    We de-identified a Danish EHR database with 437,164 patients. The goal was to generate a version with real medical records, but related to artificial persons. We developed a de-identification algorithm that uses lists of named entities, simple language analysis, and special rules. Our algorithm consists of 3 steps: collect...... lists of identifiers from the database and external resources, define a replacement for each identifier, and replace identifiers in structured data and free text. Some patient records could not be safely de-identified, so the de-identified database has 323,122 patient records with an acceptable degree...... of anonymity, readability and correctness (F-measure of 95%). The algorithm has to be adjusted for each culture, language and database....
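
    The three steps of the algorithm (collect identifier lists, define a replacement for each identifier, replace identifiers in structured fields and free text) can be sketched roughly as follows. The record layout, the identifier lists and the replacement scheme are invented for illustration and do not reproduce the authors' implementation:

        import re

        # Step 1: identifier lists collected from the database and external resources (toy examples)
        known_names = {"Anna Hansen", "Peter Jensen"}
        cpr_pattern = re.compile(r"\b\d{6}-\d{4}\b")   # CPR-like number pattern

        # Step 2: a deterministic replacement for every identifier
        replacements = {name: f"PERSON_{i}" for i, name in enumerate(sorted(known_names), 1)}

        def deidentify(record):
            """Step 3: replace identifiers in structured fields and in free text."""
            out = dict(record)
            out["name"] = replacements.get(record["name"], "PERSON_UNKNOWN")
            text = record["note"]
            for name, alias in replacements.items():
                text = text.replace(name, alias)
            out["note"] = cpr_pattern.sub("CPR_REMOVED", text)
            return out

        record = {"name": "Anna Hansen",
                  "note": "Anna Hansen (CPR 010180-1234) was seen together with Peter Jensen."}
        print(deidentify(record))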

  2. Danish clinical databases: An overview

    Green, Anders

    2011-01-01

    Clinical databases contain data related to diagnostic procedures, treatments and outcomes. In 2001, a scheme was introduced for the approval, supervision and support to clinical databases in Denmark.......Clinical databases contain data related to diagnostic procedures, treatments and outcomes. In 2001, a scheme was introduced for the approval, supervision and support to clinical databases in Denmark....

  3. A database of new zeolite-like materials.

    Pophale, Ramdas; Cheeseman, Phillip A; Deem, Michael W

    2011-07-21

    We here describe a database of computationally predicted zeolite-like materials. These crystals were discovered by a Monte Carlo search for zeolite-like materials. Positions of Si atoms as well as unit cell, space group, density, and number of crystallographically unique atoms were explored in the construction of this database. The database contains over 2.6 M unique structures. Roughly 15% of these are within +30 kJ mol⁻¹ Si of α-quartz, the band in which most of the known zeolites lie. These structures have topological, geometrical, and diffraction characteristics that are similar to those of known zeolites. The database is the result of refinement by two interatomic potentials that both satisfy the Pauli exclusion principle. The database has been deposited in the publicly available PCOD database and in www.hypotheticalzeolites.net/database/deem/. This journal is © the Owner Societies 2011

  4. The GLIMS Glacier Database

    Raup, B. H.; Khalsa, S. S.; Armstrong, R.

    2007-12-01

    The Global Land Ice Measurements from Space (GLIMS) project has built a geospatial and temporal database of glacier data, composed of glacier outlines and various scalar attributes. These data are being derived primarily from satellite imagery, such as from ASTER and Landsat. Each "snapshot" of a glacier is from a specific time, and the database is designed to store multiple snapshots representative of different times. We have implemented two web-based interfaces to the database; one enables exploration of the data via interactive maps (web map server), while the other allows searches based on text-field constraints. The web map server is an Open Geospatial Consortium (OGC) compliant Web Map Server (WMS) and Web Feature Server (WFS). This means that other web sites can display glacier layers from our site over the Internet, or retrieve glacier features in vector format. All components of the system are implemented using Open Source software: Linux, PostgreSQL, PostGIS (geospatial extensions to the database), MapServer (WMS and WFS), and several supporting components such as Proj.4 (a geographic projection library) and PHP. These tools are robust and provide a flexible and powerful framework for web mapping applications. As a service to the GLIMS community, the database contains metadata on all ASTER imagery acquired over glacierized terrain. Reduced-resolution versions of the images (browse imagery) can be viewed either as a layer in the MapServer application, or overlaid on the virtual globe within Google Earth. The interactive map application allows the user to constrain by time what data appear on the map. For example, ASTER or glacier outlines from 2002 only, or from Autumn in any year, can be displayed. The system allows users to download their selected glacier data in a choice of formats. The results of a query based on spatial selection (using a mouse) or text-field constraints can be downloaded in any of these formats: ESRI shapefiles, KML (Google Earth), Map
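
    Because the map interface is an OGC-compliant WMS, glacier layers can in principle be requested by any client that can issue a GetMap query. The sketch below builds such a request with the requests library; the endpoint URL and layer name are placeholders, not the actual GLIMS service configuration:

        import requests   # assumed HTTP dependency

        # Hypothetical WMS endpoint and layer name; consult the GLIMS site for the real ones.
        WMS_URL = "https://example.org/glims/wms"
        params = {
            "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
            "LAYERS": "glacier_outlines",
            "SRS": "EPSG:4326",
            "BBOX": "-75.0,-50.0,-70.0,-45.0",   # minx,miny,maxx,maxy in lon/lat
            "WIDTH": "512", "HEIGHT": "512",
            "FORMAT": "image/png",
        }

        response = requests.get(WMS_URL, params=params, timeout=30)
        with open("glaciers.png", "wb") as fh:
            fh.write(response.content)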

  5. DataBase on Demand

    Aparicio, R Gaspar; Gomez, D; Wojcik, D; Coz, I Coterillo

    2012-01-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The database on demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle based database services. The Database on Demand (DBoD) empowers the user to perform certain actions that had been traditionally done by database administrators, DBA's, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g. presently open community version of MySQL and single instance Oracle database server. This article describes a technology approach to face this challenge, a service level agreement, the SLA that the project provides, and an evolution of possible scenarios.

  6. An Integrated Enterprise Accelerator Database for the SLC Control System

    2002-01-01

    Since its inception in the early 1980's, the SLC Control System has been driven by a highly structured memory-resident real-time database. While efficient, its rigid structure and file-based sources make it difficult to maintain and extract relevant information. The goal of transforming the sources for this database into a relational form is to enable it to be part of a Control System Enterprise Database that is an integrated central repository for SLC accelerator device and Control System data with links to other associated databases. We have taken the concepts developed for the NLC Enterprise Database and used them to create and load a relational model of the online SLC Control System database. This database contains data and structure to allow querying and reporting on beamline devices, their associations and parameters. In the future this will be extended to allow generation of EPICS and SLC database files, setup of applications and links to other databases such as accelerator maintenance, archive data, financial and personnel records, cabling information, documentation etc. The database is implemented using Oracle 8i. In the short term it will be updated daily in batch from the online SLC database. In the longer term, it will serve as the primary source for Control System static data, an R and D platform for the NLC, and contribute to SLC Control System operations

  7. UNIDIRECTIONAL REPLICATION ON HETEROGENEOUS DATABASES; REPLIKASI UNIDIRECTIONAL PADA HETEROGEN DATABASE

    Hendro Nindito; Evaristus Didik Madyatmadja; Albert Verasius Dian Sano

    2013-01-01

    The use of diverse database technologies in enterprises today cannot be avoided. Thus, technology is needed to generate information in real time. The purpose of this research is to discuss a database replication technology that can be applied in heterogeneous database environments. In this study we replicate from a Windows-based MS SQL Server database to a Linux-based Oracle database as the target. The research method used is prototyping, in which development can be done quickly and testing of working models of the...

  8. Database for environmental monitoring in nuclear facilities

    Raceanu, Mircea; Varlam, Carmen; Iliescu, Mariana; Enache, Adrian; Faurescu, Ionut

    2006-01-01

    To ensure that an assessment can be made of the impact of nuclear facilities on the local environment, a program of environmental monitoring must be established well before nuclear facility commissioning. An enormous amount of data must be stored and correlated, starting with location, meteorology and sample-type characterization, from water to different kinds of foods, through radioactivity measurement and isotopic measurement (e.g. for C-14 determination, a C-13 isotopic correction is a must). Data modelling is a well known mechanism for describing data structures at a high level of abstraction. Such models are often used to automatically create database structures, and to generate the code structures used to access the databases. This has the disadvantage of losing data constraints that might be specified in data models for data checking. An embodiment of the system of the present application includes a computer-readable memory for storing a definitional data table for defining variable symbols representing the corresponding measurable physical quantities. Developing a database system implies setting up well-established rules for how the data should be stored and accessed, what is commonly called relational database theory. This consists of guidelines on issues such as how to avoid duplicating data using the technique called normalization and how to identify the unique identifier for a database record. (authors)

  9. KALIMER database development (database configuration and design methodology)

    Jeong, Kwan Seong; Kwon, Young Min; Lee, Young Bum; Chang, Won Pyo; Hahn, Do Hee

    2001-10-01

    The KALIMER Database is an advanced database for the integrated management of Liquid Metal Reactor Design Technology Development using Web applications. The KALIMER Design database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation system, and Reserved Documents. The Results Database holds the research results from phase II of Liquid Metal Reactor Design Technology Development under the mid-term and long-term nuclear R and D programme. IOC is a linkage control system between sub-projects to share and integrate the research results for KALIMER. The 3D CAD Database is a schematic design overview for KALIMER. The Team Cooperation System informs team members of research cooperation and meetings. Finally, KALIMER Reserved Documents was developed to manage the data and documents collected since the project was accomplished. This report describes the hardware and software features and the database design methodology for KALIMER

  10. The magnet database system

    Ball, M.J.; Delagi, N.; Horton, B.; Ivey, J.C.; Leedy, R.; Li, X.; Marshall, B.; Robinson, S.L.; Tompkins, J.C.

    1992-01-01

    The Test Department of the Magnet Systems Division of the Superconducting Super Collider Laboratory (SSCL) is developing a central database of SSC magnet information that will be available to all magnet scientists at the SSCL or elsewhere, via network connections. The database contains information on the magnets' major components, configuration information (specifying which individual items were used in each cable, coil, and magnet), measurements made at major fabrication stages, and the test results on completed magnets. These data will facilitate the correlation of magnet performance with the properties of their constituents. Recent efforts have focused on the development of procedures for user-friendly access to the data, including displays in the format of the production "traveler" data sheets, standard summary reports, and a graphical interface for ad hoc queries and plots

  11. Database on aircraft accidents

    Nishio, Masahide; Koriyama, Tamio

    2013-11-01

    The Reactor Safety Subcommittee in the Nuclear Safety and Preservation Committee published 'The criteria on assessment of probability of aircraft crash into light water reactor facilities' as the standard method for evaluating probability of aircraft crash into nuclear reactor facilities in July 2002. In response to this issue, Japan Nuclear Energy Safety Organization has been collecting open information on aircraft accidents of commercial airplanes, self-defense force (SDF) airplanes and US force airplanes every year since 2003, sorting out them and developing the database of aircraft accidents for the latest 20 years to evaluate probability of aircraft crash into nuclear reactor facilities. In this report the database was revised by adding aircraft accidents in 2011 to the existing database and deleting aircraft accidents in 1991 from it, resulting in development of the revised 2012 database for the latest 20 years from 1992 to 2011. Furthermore, the flight information on commercial aircrafts was also collected to develop the flight database for the latest 20 years from 1992 to 2011 to evaluate probability of aircraft crash into reactor facilities. The method for developing the database of aircraft accidents to evaluate probability of aircraft crash into reactor facilities is based on the report 'The criteria on assessment of probability of aircraft crash into light water reactor facilities' described above. The 2012 revised database for the latest 20 years from 1992 to 2011 shows the followings. The trend of the 2012 database changes little as compared to the last year's report. (1) The data of commercial aircraft accidents is based on 'Aircraft accident investigation reports of Japan transport safety board' of Ministry of Land, Infrastructure, Transport and Tourism. The number of commercial aircraft accidents is 4 for large fixed-wing aircraft, 58 for small fixed-wing aircraft, 5 for large bladed aircraft and 99 for small bladed aircraft. The relevant accidents

  12. ARTI Refrigerant Database

    Calm, J.M.

    1992-11-09

    The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents on compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. A computerized version is available that includes retrieval software.

  13. Geologic Field Database

    Katarina Hribernik

    2002-12-01

    The purpose of the paper is to present the field data relational database, which was compiled from data gathered during thirty years of fieldwork on the Basic Geologic Map of Slovenia at a scale of 1:100,000. The database was created using MS Access software. The MS Access environment ensures its stability and effective operation while the data are changed, searched, and updated. It also enables faster, easier and user-friendly access to the field data. Last but not least, in the long term, with the data transferred into the GIS environment, it will provide the basis for a sound geologic information system that will satisfy a broad spectrum of geologists’ needs.

  14. DistiLD Database

    Palleja, Albert; Horn, Heiko; Eliasson, Sabrina

    2012-01-01

    Genome-wide association studies (GWAS) have identified thousands of single nucleotide polymorphisms (SNPs) associated with the risk of hundreds of diseases. However, there is currently no database that enables non-specialists to answer the following simple questions: which SNPs associated...... with diseases are in linkage disequilibrium (LD) with a gene of interest? Which chromosomal regions have been associated with a given disease, and which are the potentially causal genes in each region? To answer these questions, we use data from the HapMap Project to partition each chromosome into so-called LD...... blocks, so that SNPs in LD with each other are preferentially in the same block, whereas SNPs not in LD are in different blocks. By projecting SNPs and genes onto LD blocks, the DistiLD database aims to increase usage of existing GWAS results by making it easy to query and visualize disease...
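
    Projecting SNPs and genes onto LD blocks, as described above, is essentially an interval lookup: each block covers a chromosomal span and a position is assigned to the block containing it. A self-contained sketch (the block boundaries and positions are made up, not HapMap data) is:

        import bisect

        # Hypothetical LD blocks on one chromosome: (start_bp, end_bp, block_id), sorted and non-overlapping
        blocks = [(1, 50_000, "blk1"), (50_001, 120_000, "blk2"), (120_001, 200_000, "blk3")]
        starts = [b[0] for b in blocks]

        def block_of(position):
            """Return the id of the LD block containing a base-pair position, or None."""
            i = bisect.bisect_right(starts, position) - 1
            if i >= 0 and blocks[i][0] <= position <= blocks[i][1]:
                return blocks[i][2]
            return None

        snp_positions = {"rs0001": 40_210, "rs0002": 119_500, "rs0003": 250_000}
        gene_span = (45_000, 60_000)   # a gene overlapping blk1 and blk2

        print({snp: block_of(pos) for snp, pos in snp_positions.items()})
        print("gene blocks:", {block_of(gene_span[0]), block_of(gene_span[1])})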

  15. Odense Pharmacoepidemiological Database (OPED)

    Hallas, Jesper; Poulsen, Maja Hellfritzsch; Hansen, Morten Rix

    2017-01-01

    The Odense University Pharmacoepidemiological Database (OPED) is a prescription database established in 1990 by the University of Southern Denmark, covering reimbursed prescriptions from the county of Funen in Denmark and the region of Southern Denmark (1.2 million inhabitants). It is still active...... and thereby has more than 25 years' of continuous coverage. In this MiniReview, we review its history, content, quality, coverage, governance and some of its uses. OPED's data include the Danish Civil Registration Number (CPR), which enables unambiguous linkage with virtually all other health......-related registers in Denmark. Among its research uses, we review record-linkage studies of drug effects, advanced drug utilization studies, some examples of method development and use of OPED as sampling frame to recruit patients for field studies or clinical trials. With the advent of other, more comprehensive...

  16. THE EXTRAGALACTIC DISTANCE DATABASE

    Tully, R. Brent; Courtois, Helene M.; Jacobs, Bradley A.; Rizzi, Luca; Shaya, Edward J.; Makarov, Dmitry I.

    2009-01-01

    A database can be accessed on the Web at http://edd.ifa.hawaii.edu that was developed to promote access to information related to galaxy distances. The database has three functional components. First, tables from many literature sources have been gathered and enhanced with links through a distinct galaxy naming convention. Second, comparisons of results both at the levels of parameters and of techniques have begun and are continuing, leading to increasing homogeneity and consistency of distance measurements. Third, new material is presented arising from ongoing observational programs at the University of Hawaii 2.2 m telescope, radio telescopes at Green Bank, Arecibo, and Parkes and with the Hubble Space Telescope. This new observational material is made available in tandem with related material drawn from archives and passed through common analysis pipelines.

  17. Database on aircraft accidents

    Nishio, Masahide; Koriyama, Tamio

    2012-09-01

    The Reactor Safety Subcommittee in the Nuclear Safety and Preservation Committee published the report 'The criteria on assessment of probability of aircraft crash into light water reactor facilities' as the standard method for evaluating probability of aircraft crash into nuclear reactor facilities in July 2002. In response to the report, Japan Nuclear Energy Safety Organization has been collecting open information on aircraft accidents of commercial airplanes, self-defense force (SDF) airplanes and US force airplanes every year since 2003, sorting out them and developing the database of aircraft accidents for latest 20 years to evaluate probability of aircraft crash into nuclear reactor facilities. This year, the database was revised by adding aircraft accidents in 2010 to the existing database and deleting aircraft accidents in 1991 from it, resulting in development of the revised 2011 database for latest 20 years from 1991 to 2010. Furthermore, the flight information on commercial aircrafts was also collected to develop the flight database for latest 20 years from 1991 to 2010 to evaluate probability of aircraft crash into reactor facilities. The method for developing the database of aircraft accidents to evaluate probability of aircraft crash into reactor facilities is based on the report 'The criteria on assessment of probability of aircraft crash into light water reactor facilities' described above. The 2011 revised database for latest 20 years from 1991 to 2010 shows the followings. The trend of the 2011 database changes little as compared to the last year's one. (1) The data of commercial aircraft accidents is based on 'Aircraft accident investigation reports of Japan transport safety board' of Ministry of Land, Infrastructure, Transport and Tourism. 4 large fixed-wing aircraft accidents, 58 small fixed-wing aircraft accidents, 5 large bladed aircraft accidents and 114 small bladed aircraft accidents occurred. The relevant accidents for evaluating

  18. The OECD FIRE database

    Angner, A.; Berg, H.P.; Roewekamp, M.; Werner, W.; Gauvain, J.

    2007-01-01

    Realistic modelling of fire scenarios is still difficult due to the scarcity of reliable data needed for deterministic and probabilistic fire safety analysis. Therefore, it has been recognized as highly important to establish a fire event database on an international level. In consequence, several member countries of the Nuclear Energy Agency of the OECD have decided in 2000 to establish the International Fire Data Exchange Project (OECD FIRE) to encourage multilateral co-operation in the collection and analysis of data related to fire events at nuclear power plants. This paper presents the OECD FIRE project objectives, work scope and current status of the OECD FIRE database after 3 years of operation as well as first preliminary statistical insights gained from the collected data. (orig.)

  19. Migration from relational to NoSQL database

    Ghotiya, Sunita; Mandal, Juhi; Kandasamy, Saravanakumar

    2017-11-01

    Data generated by various real-time applications, social networking sites and sensor devices is very large in volume and unstructured, which makes it difficult for relational database management systems to handle. Data is a precious component of any application and needs to be analysed after being arranged in some structure. Relational databases are only able to deal with structured data, so there is a need for NoSQL database management systems, which can also deal with semi-structured data. Relational databases provide the easiest way to manage data, but as the use of NoSQL increases it is becoming necessary to migrate data from relational to NoSQL databases. Various frameworks have been proposed previously which provide mechanisms for migrating data stored in SQL warehouses, as well as middle-layer solutions which allow data to be stored in NoSQL databases to handle data that is not structured. This paper provides a literature review of some of the recent approaches proposed by various researchers to migrate data from relational to NoSQL databases. Some researchers have proposed mechanisms for the co-existence of NoSQL and relational databases. This paper provides a summary of mechanisms which can be used for mapping data stored in relational databases to NoSQL databases. Various techniques for data transformation and middle-layer solutions are summarised in the paper.
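
    The row-to-document mapping that most of the surveyed migration approaches perform can be illustrated in a few lines: each relational row becomes a document, and rows from a child table are embedded in their parent. The table names and the embedding choice below are assumptions made for the sketch, not the rules of any particular framework:

        import json
        import sqlite3

        # Source: a small relational database standing in for the SQL system being migrated
        src = sqlite3.connect(":memory:")
        src.executescript("""
        CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders   (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
        INSERT INTO customer VALUES (1, 'Asha'), (2, 'Bram');
        INSERT INTO orders   VALUES (10, 1, 99.5), (11, 1, 12.0), (12, 2, 7.25);
        """)
        src.row_factory = sqlite3.Row

        documents = []
        for cust in src.execute("SELECT * FROM customer"):
            child = src.execute("SELECT id, total FROM orders WHERE customer_id = ?", (cust["id"],))
            doc = {key: cust[key] for key in cust.keys()}
            doc["orders"] = [{k: row[k] for k in row.keys()} for row in child]   # embed child rows
            documents.append(doc)

        # The documents are now ready to be loaded into a NoSQL (document) store.
        print(json.dumps(documents, indent=2))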

  20. Database on wind characteristics - contents of database bank

    Larsen, G.C.; Hansen, K.S.

    2004-06-01

    The main objective of IEA R and D Wind Annex XVII - Database on Wind Characteristics - has been to provide wind energy planners, designers and researchers, as well as the international wind engineering community in general, with a source of actual wind field data (time series and resource data) observed in a wide range of different wind climates and terrain types. Connected to an extension of the initial Annex period, the scope for the continuation was widened to include also support to the international wind turbine standardisation efforts. The project partners are Sweden, Norway, U.S.A., The Netherlands and Denmark, with Denmark as the Operating Agent. The reporting of the continuation of Annex XVII falls into two separate parts. Part one accounts in detail for the available data in the established database bank, and part two describes various data analyses performed with the overall purpose of improving the design load cases with relevance to wind turbine structures. The present report constitutes the second part of the Annex XVII reporting. Both fatigue and extreme load aspects are dealt with, however, with the main emphasis on the latter. The work has been supported by The Ministry of Environment and Energy, Danish Energy Agency, The Netherlands Agency for Energy and the Environment (NOVEM), The Norwegian Water Resources and Energy Administration (NVE), The Swedish National Energy Administration (STEM) and The Government of the United States of America. (au)

  1. Database Perspectives on Blockchains

    Cohen, Sara; Zohar, Aviv

    2018-01-01

    Modern blockchain systems are a fresh look at the paradigm of distributed computing, applied under assumptions of large-scale public networks. They can be used to store and share information without a trusted central party. There has been much effort to develop blockchain systems for a myriad of uses, ranging from cryptocurrencies to identity control, supply chain management, etc. None of this work has directly studied the fundamental database issues that arise when using blockchains as the u...

  2. Real Time Baseball Database

    Fukue, Yasuhiro

    The author describes the system outline, features and operations of the "Nikkan Sports Realtime Baseball Database", which was developed and operated by Nikkan Sports Shimbun, K. K. The system enables numerical data on professional baseball games to be input as the games proceed and updates the data in real time, just in time. Besides serving as a supporting tool for preparing newspapers, it is also available to broadcast media and to general users through NTT Dial Q2 and other channels.

  3. ARTI refrigerant database

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1999-01-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  4. ARTI refrigerant database

    Calm, J.M.

    1996-07-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  5. ARTI refrigerant database

    Calm, J.M.

    1996-11-15

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  6. Developing customer databases.

    Rao, S K; Shenbaga, S

    2000-01-01

    There is a growing consensus among pharmaceutical companies that more product and customer-specific approaches to marketing and selling a new drug can result in substantial increases in sales. Marketers and researchers taking a proactive micro-marketing approach to identifying, profiling, and communicating with target customers are likely to facilitate such approaches and outcomes. This article provides a working framework for creating customer databases that can be effectively mined to achieve a variety of such marketing and sales force objectives.

  7. The Xeno-glycomics database (XDB): a relational database of qualitative and quantitative pig glycome repertoire.

    Park, Hae-Min; Park, Ju-Hyeong; Kim, Yoon-Woo; Kim, Kyoung-Jin; Jeong, Hee-Jin; Jang, Kyoung-Soon; Kim, Byung-Gee; Kim, Yun-Gon

    2013-11-15

    In recent years, the improvement of mass spectrometry-based glycomics techniques (i.e. highly sensitive, quantitative and high-throughput analytical tools) has enabled us to obtain a large dataset of glycans. Here we present a database named Xeno-glycomics database (XDB) that contains cell- or tissue-specific pig glycomes analyzed with mass spectrometry-based techniques, including comprehensive pig glycan information on chemical structures, mass values, types and relative quantities. It was designed as a user-friendly web-based interface that allows users to query the database according to pig tissue/cell types or glycan masses. This database will contribute to providing qualitative and quantitative information on glycomes characterized from various pig cells/organs in xenotransplantation and might eventually provide new targets in the α1,3-galactosyltransferase gene-knockout pig era. The database can be accessed on the web at http://bioinformatics.snu.ac.kr/xdb.

  8. ARTI refrigerant database

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1996-04-15

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates. Citations in this report are divided into the following topics: thermophysical properties; materials compatibility; lubricants and tribology; application data; safety; test and analysis methods; impacts; regulatory actions; substitute refrigerants; identification; absorption and adsorption; research programs; and miscellaneous documents. Information is also presented on ordering instructions for the computerized version.

  9. ARTI Refrigerant Database

    Cain, J.M. (Calm (James M.), Great Falls, VA (United States))

    1993-04-30

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents to accelerate availability of the information and will be completed or replaced in future updates.

  10. ARTI refrigerant database

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1998-08-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on many refrigerants including propane, ammonia, water, carbon dioxide, propylene, ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  11. ARTI refrigerant database

    Calm, J.M.

    1997-02-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on various refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  12. The production phase model as a tool for analysing and structuring the database of an operation management system; Das Phasenmodell der Produktion als ein Werkzeug fuer Analyse und Strukturierung der Datenbasis eines Betriebsmanagementsystems

    Rauprich, G.; Polke, B.

    1997-12-31

    This paper presents a process analysis which was carried out with the aid of a phase model for the purpose of supporting the start-up phase of an operation management system. Besides providing a functional structure for the database, this process analysis permits a functional and consistent representation of the process at as crude or detailed a level as desired. In this way the phase model supplies a functional model of the process which serves the purposes of both the process control level and the operation control level. Once the phase model has been created, it depends largely only on the question being asked which level of detail and which properties of the relevant process elements and material and energy flows are used. [Translated from German] A process analysis carried out with the aid of the phase model supports the introduction phase of an operation management system. It provides a functional structure for the database and, in addition, permits a functional, consistent representation of the process at the operation management level at any desired degree of coarseness or detail. Phase modelling thus supplies a holistic, functional model of the process for both the process control level and the operations management level. Once the phase model has been created, it essentially depends only on the question being asked which levels of detail and which properties of the relevant process elements and material/energy flows are used. (orig./RHM)

  13. Database Description - DMPD | LSDB Archive [Life Science Database Archive metadata

    Database name: DMPD. Alternative name: Dynamic Macrophage Pathway CSML Database. DOI: 10.18908/lsdba.nbdc00558-000. Creator name: Masao Naga... (...ty of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639; Tel: +81-3-5449-5615; FAX: +83-3-5449-5442). Organism taxonomy: ...606; taxonomy name: Mammalia, Taxonomy ID: 40674. Database description: DMPD collects...

  14. JICST Factual Database: JICST Chemical Substance Safety Regulation Database

    Abe, Atsushi; Sohma, Tohru

    The JICST Chemical Substance Safety Regulation Database is based on the Database of Safety Laws for Chemical Compounds constructed by the Japan Chemical Industry Ecology-Toxicology & Information Center (JETOC), sponsored by the Science and Technology Agency, in 1987. JICST has modified the JETOC database system, added data and started the online service through JOIS-F (JICST Online Information Service-Factual database) in January 1990. The JICST database comprises eighty-three laws and fourteen hundred compounds. The authors outline the database, data items, files and search commands. An example of an online session is presented.

  15. A Study on Graph Storage Database of NOSQL

    Smita Agrawal; Atul Patel

    2016-01-01

    Big Data refers to the storage of huge volumes of both structured and unstructured data, which are so large that they are hard to process using current/traditional database tools and software technologies. The goal of Big Data storage management is to ensure a high level of data quality and availability for business intelligence and big data analytics applications. The graph database is not yet the most popular NoSQL database compared to the relational database, but it is one of the most powerful NoSQL databases, able to handle...

  16. An online database of nuclear electromagnetic moments

    Mertzimekis, T.J.; Stamou, K.; Psaltis, A.

    2016-01-01

    Measurements of nuclear magnetic dipole and electric quadrupole moments are considered quite important for the understanding of nuclear structure both near and far from the valley of stability. The recent advent of radioactive beams has resulted in a plethora of new, continuously flowing, experimental data on nuclear structure – including nuclear moments – which hinders information management. A new, dedicated, public and user-friendly online database (http://magneticmoments.info) has been created comprising experimental data on nuclear electromagnetic moments. The present database supersedes existing printed compilations, also including non-evaluated series of data and relevant metadata, while putting strong emphasis on bimonthly updates. The scope, features and extensions of the database are reported.

  17. DAD - Distributed Adamo Database system at Hermes

    Wander, W.; Dueren, M.; Ferstl, M.; Green, P.; Potterveld, D.; Welch, P.

    1996-01-01

    Software development for the HERMES experiment faces the challenges of many other experiments in modern High Energy Physics: complex data structures and relationships have to be processed at high I/O rates. Experimental control and data analysis are done in a distributed environment of CPUs with various operating systems and require access to different time-dependent databases such as calibration and geometry. Slow control and experimental control need flexible inter-process communication. Program development is done in different programming languages, where interfaces to the libraries should not restrict the capabilities of the language. The needs of handling complex data structures are fulfilled by the ADAMO entity relationship model. Mixed-language programming can be provided using the CFORTRAN package. DAD, the Distributed ADAMO Database library, was developed to provide the required I/O and database functionality. (author)

  18. SmallSat Database

    Petropulos, Dolores; Bittner, David; Murawski, Robert; Golden, Bert

    2015-01-01

    The SmallSat has an unrealized potential in both private industry and the federal government. Currently over 70 companies, 50 universities and 17 governmental agencies are involved in SmallSat research and development. In 1994, the U.S. Army Missile and Defense mapped the moon using smallSat imagery. Since then Smart Phones have introduced this imagery to the people of the world as diverse industries watched this trend. The deployment cost of smallSats is also greatly reduced compared to traditional satellites due to the fact that multiple units can be deployed in a single mission. Imaging payloads have become more sophisticated, smaller and lighter. In addition, the growth of small technology obtained from private industries has led to the more widespread use of smallSats. This includes greater revisit rates in imagery, significantly lower costs, the ability to update technology more frequently and the ability to decrease vulnerability to enemy attacks. The popularity of smallSats shows a changing mentality in this fast-paced world of tomorrow. What impact has this created on the NASA communication networks now and in future years? In this project, we are developing the SmallSat Relational Database which can support a simulation of smallSats within the NASA SCaN Compatibility Environment for Networks and Integrated Communications (SCENIC) Modeling and Simulation Lab. The NASA Space Communications and Navigation (SCaN) Program can use this modeling to project required network support needs in the next 10 to 15 years. The SmallSat Relational Database could model smallSats just as the other SCaN databases model the more traditional larger satellites, with a few exceptions. One being that the smallSat Database is designed to be built-to-order. The SmallSat database holds various hardware configurations that can be used to model a smallSat. It will require significant effort to develop as the research material can only be populated by hand to obtain the unique data

  19. Databases of surface wave dispersion

    L. Boschi

    2005-06-01

    Observations of seismic surface waves provide the most important constraint on the elastic properties of the Earth’s lithosphere and upper mantle. Two databases of fundamental mode surface wave dispersion were recently compiled and published by groups at Harvard (Ekström et al., 1997) and Utrecht/Oxford (Trampert and Woodhouse, 1995, 2001), and later employed in 3-d global tomographic studies. Although based on similar sets of seismic records, the two databases show some significant discrepancies. We derive phase velocity maps from both, and compare them to quantify the discrepancies and assess the relative quality of the data; in this endeavour, we take careful account of the effects of regularization and parametrization. At short periods, where Love waves are mostly sensitive to crustal structure and thickness, we refer our comparison to a map of the Earth’s crust derived from independent data. On the assumption that second-order effects like seismic anisotropy and scattering can be neglected, we find the measurements of Ekström et al. (1997) of better quality; those of Trampert and Woodhouse (2001) result in phase velocity maps of much higher spatial frequency and, accordingly, more difficult to explain and justify geophysically. The discrepancy is partly explained by the more conservative a priori selection of data implemented by Ekström et al. (1997). Nevertheless, it becomes more significant with decreasing period, which indicates that it could also be traced to the different measurement techniques employed by the authors.

  20. Axiomatic Specification of Database Domain Statics

    Wieringa, Roelf J.

    1987-01-01

    In the past ten years, much work has been done to add more structure to database models than what is represented by a mere collection of flat relations (Albano & Cardelli [1985], Albano et al. [1986], Borgida et al. [1984], Brodie [1984], Brodie & Ridjanovic [1984], Brodie & Silva [1982], Codd

  1. Analysis of large databases in vascular surgery.

    Nguyen, Louis L; Barshes, Neal R

    2010-09-01

    Large databases can be a rich source of clinical and administrative information on broad populations. These datasets are characterized by demographic and clinical data for over 1000 patients from multiple institutions. Since they are often collected and funded for other purposes, their use for secondary analysis increases their utility at relatively low costs. Advantages of large databases as a source include the very large numbers of available patients and their related medical information. Disadvantages include lack of detailed clinical information and absence of causal descriptions. Researchers working with large databases should also be mindful of data structure design and inherent limitations to large databases, such as treatment bias and systemic sampling errors. Notwithstanding these limitations, several important studies have been published in vascular care using large databases. They represent timely, "real-world" analyses of questions that may be too difficult or costly to address using prospective randomized methods. Large databases will be an increasingly important analytical resource as we focus on improving national health care efficacy in the setting of limited resources.

  2. Disbiome database: linking the microbiome to disease.

    Janssens, Yorick; Nielandt, Joachim; Bronselaer, Antoon; Debunne, Nathan; Verbeke, Frederick; Wynendaele, Evelien; Van Immerseel, Filip; Vandewynckel, Yves-Paul; De Tré, Guy; De Spiegeleer, Bart

    2018-06-04

    Recent research has provided fascinating indications and evidence that host health is linked to its microbial inhabitants. Due to the development of high-throughput sequencing technologies, more and more data covering microbial composition changes in different disease types are emerging. However, this information is dispersed over a wide variety of medical and biomedical disciplines. Disbiome is a database which collects and presents published microbiota-disease information in a standardized way. The diseases are classified using the MedDRA classification system and the micro-organisms are linked to their NCBI and SILVA taxonomies. Finally, each study included in the Disbiome database is assessed for its reporting quality using a standardized questionnaire. Disbiome is the first database giving a clear, concise and up-to-date overview of microbial composition differences in diseases, together with the relevant information of the studies published. The strength of this database lies in the combination of references to other databases, which enables both specific and diverse search strategies within the Disbiome database, and human annotation, which ensures a simple and structured presentation of the available data.
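    As a rough illustration of what one such standardized association might look like, the sketch below uses field names that are guesses for illustration, not the actual Disbiome data model.

        # Illustrative sketch only; not the real Disbiome schema.
        from dataclasses import dataclass

        @dataclass
        class MicrobiomeDiseaseAssociation:
            disease_meddra: str    # disease expressed as a MedDRA term
            organism: str          # micro-organism name
            ncbi_taxon_id: int     # cross-reference to the NCBI Taxonomy
            silva_accession: str   # cross-reference to SILVA (16S-based entries)
            direction: str         # 'elevated' or 'reduced' relative to controls
            source_pmid: str       # publication the observation was extracted from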

  3. Multilevel security for relational databases

    Faragallah, Osama S; El-Samie, Fathi E Abd

    2014-01-01

    Table of contents:
      Concepts of Database Security
      Database Concepts
      Relational Database Security Concepts
      Access Control in Relational Databases
        Discretionary Access Control
        Mandatory Access Control
        Role-Based Access Control
      Work Objectives
      Book Organization
      Basic Concept of Multilevel Database Security
      Introduction
      Multilevel Database Relations
      Polyinstantiation
        Invisible Polyinstantiation
        Visible Polyinstantiation
        Types of Polyinstantiation
      Architectural Consideration
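    Polyinstantiation, which several of the listed sections cover, means that one real-world entity can appear as several tuples with the same primary key but different security labels, so users at different clearance levels see different (possibly deliberately misleading) versions of the data. A minimal sketch of the read-down rule, with invented labels and rows:

        # Toy illustration of polyinstantiation and the read-down rule; real MLS
        # databases enforce this with mandatory access control, not application code.
        LEVELS = {"UNCLASSIFIED": 0, "SECRET": 1}

        # Two tuples share the key 'SAT-7' but carry different labels: the
        # classified payload is hidden from low-cleared users by a cover story.
        relation = [
            {"mission": "SAT-7", "payload": "weather imager", "label": "UNCLASSIFIED"},
            {"mission": "SAT-7", "payload": "sigint package", "label": "SECRET"},
        ]

        def select(user_level):
            """Return only the tuples whose label the user's clearance dominates."""
            return [row for row in relation if LEVELS[row["label"]] <= LEVELS[user_level]]

        print(select("UNCLASSIFIED"))  # sees only the cover-story tuple
        print(select("SECRET"))        # sees both instances of SAT-7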

  4. The Danish fetal medicine database

    Ekelund, Charlotte Kvist; Kopp, Tine Iskov; Tabor, Ann

    2016-01-01

    trimester ultrasound scan performed at all public hospitals in Denmark are registered in the database. Main variables/descriptive data: Data on maternal characteristics, ultrasonic, and biochemical variables are continuously sent from the fetal medicine units’ Astraia databases to the central database via...... analyses are sent to the database. Conclusion: It has been possible to establish a fetal medicine database, which monitors first-trimester screening for chromosomal abnormalities and second-trimester screening for major fetal malformations with the input from already collected data. The database......

  5. XML: James Webb Space Telescope Database Issues, Lessons, and Status

    Detter, Ryan; Mooney, Michael; Fatig, Curtis

    2003-01-01

    This paper will present the current concept of using Extensible Markup Language (XML) as the underlying structure for the James Webb Space Telescope (JWST) database. The purpose of using XML is to provide a JWST database that is independent of any portion of the ground system, yet still compatible with the various systems using a variety of different structures. The testing of the JWST Flight Software (FSW) started in 2002, yet the launch is scheduled for 2011 with a planned 5-year mission and a 5-year follow-on option. The initial database and ground system elements, including the commands, telemetry, and ground system tools, will be used for 19 years, plus post-mission activities. During the Integration and Test (I&T) phases of the JWST development, 24 distinct, geographically dispersed laboratories will have local database tools with an XML database. Each of these laboratories' database tools will be used for exporting and importing data both locally and to a central database system, inputting data to the database certification process, and providing various reports. A centralized certified database repository will be maintained by the Space Telescope Science Institute (STScI) in Baltimore, Maryland, USA. One of the challenges for the database is to be flexible enough to allow for the upgrade, addition or changing of individual items without affecting the entire ground system. Also, using XML should allow for altering the import and export formats needed by the various elements, tracking the verification/validation of each database item, allowing many organizations to provide database inputs, and merging the many existing database processes into one central database structure throughout the JWST program. Many National Aeronautics and Space Administration (NASA) projects have attempted to take advantage of open source and commercial technology. Often this causes a greater reliance on the use of Commercial-Off-The-Shelf (COTS), which is often limiting
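    As an illustration of the kind of flexibility described, the snippet below parses a small, entirely hypothetical XML telemetry definition with Python's standard library; the element and attribute names are assumptions, not the actual JWST database schema.

        # Hypothetical telemetry-definition fragment; tags and attributes are invented.
        import xml.etree.ElementTree as ET

        sample = """
        <telemetry version="0.1">
          <parameter mnemonic="FSW_TEMP_01" type="float" units="degC">
            <description>Focal plane electronics temperature</description>
          </parameter>
          <parameter mnemonic="FSW_MODE" type="enum">
            <description>Flight software operating mode</description>
          </parameter>
        </telemetry>
        """

        root = ET.fromstring(sample)
        for p in root.findall("parameter"):
            print(p.get("mnemonic"), p.get("type"), "-", p.findtext("description", default=""))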

  6. Profiling of Escherichia coli Chromosome database.

    Yamazaki, Yukiko; Niki, Hironori; Kato, Jun-ichi

    2008-01-01

    The Profiling of Escherichia coli Chromosome (PEC) database (http://www.shigen.nig.ac.jp/ecoli/pec/) is designed to allow E. coli researchers to efficiently access information from functional genomics studies. The database contains two principal types of data: gene essentiality and a large collection of E. coli genetic research resources. The essentiality data are based on data compilation from published single-gene essentiality studies and on cell growth studies of large-deletion mutants. Using the circular and linear viewers for both whole genomes and the minimal genome, users can not only gain an overview of the genome structure but also retrieve information on contigs, gene products, mutants, deletions, and so forth. In particular, genome-wide exhaustive mutants are an essential resource for studying E. coli gene functions. Although the genomic database was constructed independently from the genetic resources database, users may seamlessly access both types of data. In addition to these data, the PEC database also provides a summary of homologous genes of other bacterial genomes and of protein structure information, with a comprehensive interface. The PEC is thus a convenient and useful platform for contemporary E. coli researchers.

  7. Social Capital Database

    Paldam, Martin; Svendsen, Gert Tinggaard

    2005-01-01

      This report has two purposes: The first purpose is to present our 4-page questionnaire, which measures social capital. It is close to the main definitions of social capital and contains the most successful measures from the literature. Also it is easy to apply as discussed. The second purpose...... is to present the social capital database we have collected for 21 countries using the questionnaire. We do this by comparing the level of social capital in the countries covered. That is, the report compares the marginals from the 21 surveys....

  8. Nuclear database management systems

    Stone, C.; Sutton, R.

    1996-01-01

    The authors are developing software tools for accessing and visualizing nuclear data. MacNuclide was the first software application produced by their group. This application incorporates novel database management and visualization tools into an intuitive interface. The nuclide chart is used to access properties and to display results of searches. Selecting a nuclide in the chart displays a level scheme with tables of basic, radioactive decay, and other properties. All level schemes are interactive, allowing the user to modify the display, move between nuclides, and display entire daughter decay chains

  9. EMU Lessons Learned Database

    Matthews, Kevin M., Jr.; Crocker, Lori; Cupples, J. Scott

    2011-01-01

    As manned space exploration takes on the task of traveling beyond low Earth orbit, many problems arise that must be solved in order to make the journey possible. One major task is protecting humans from the harsh space environment. The current method of protecting astronauts during Extravehicular Activity (EVA) is through use of the specially designed Extravehicular Mobility Unit (EMU). As more rigorous EVA conditions need to be endured at new destinations, the suit will need to be tailored and improved in order to accommodate the astronaut. The objective behind the EMU Lessons Learned Database (LLD) is to create a tool that will assist in the development of next-generation EMUs, along with maintenance and improvement of the current EMU, by compiling data from Failure Investigation and Analysis Reports (FIARs), which contain information on past suit failures. FIARs use a system of codes that give more information on the aspects of a failure, but someone unfamiliar with the EMU will be unable to decipher the information. A goal of the EMU LLD is not only to compile the information, but to present it in a user-friendly, organized, searchable database accessible to users at all levels of familiarity with the EMU, newcomers and veterans alike. The EMU LLD originally started as an Excel database, which allowed easy navigation and analysis of the data through pivot charts. Creating an entry requires access to the Problem Reporting And Corrective Action database (PRACA), which contains the original FIAR data for all hardware. FIAR data are then transferred to, defined, and formatted in the LLD. Work is being done to create a web-based version of the LLD in order to increase accessibility across Johnson Space Center (JSC), which includes converting entries from Excel to the HTML format. FIARs related to the EMU have been completed in the Excel version, and focus has now shifted to expanding FIAR data in the LLD to include EVA tools and support hardware such as
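    A hedged sketch of the Excel-to-web step described above, using pandas; the workbook name, sheet name and column names are assumptions, not the actual LLD layout.

        # Illustrative only: file, sheet and columns are invented for this sketch.
        import pandas as pd

        fiars = pd.read_excel("emu_lld.xlsx", sheet_name="FIARs")

        # Summarize failures per hardware item, much as a pivot chart would.
        summary = fiars.pivot_table(index="hardware_item",
                                    columns="failure_code",
                                    values="fiar_number",
                                    aggfunc="count",
                                    fill_value=0)

        # Export a web-friendly table for the HTML version of the LLD.
        summary.to_html("emu_lld_summary.html")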

  10. Harmonization of Databases

    Charlifue, Susan; Tate, Denise; Biering-Sorensen, Fin

    2016-01-01

    strengths and weaknesses. Efforts to provide a uniform approach to data collection are also reviewed. The databases reviewed offer different approaches to capture important clinical information on SCI. They vary on size, purpose, data points, inclusion of standard outcomes, and technical requirements. Each...... a carefully crafted implementation plan that lists purposes, cost, resources required, and policies to guide such development to establishing a framework for dissemination of data and findings. For the present, taking advantage of the vast but different data already collected over many decades may require...

  11. Search Databases and Statistics

    Refsgaard, Jan C; Munk, Stephanie; Jensen, Lars J

    2016-01-01

    having strengths and weaknesses that must be considered for the individual needs. These are reviewed in this chapter. Equally critical for generating highly confident output datasets is the application of sound statistical criteria to limit the inclusion of incorrect peptide identifications from database...... searches. Additionally, careful filtering and use of appropriate statistical tests on the output datasets affects the quality of all downstream analyses and interpretation of the data. Our considerations and general practices on these aspects of phosphoproteomics data processing are presented here....
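    One widely used statistical criterion for limiting incorrect peptide identifications, shown here only as a hedged sketch (the chapter's own recommendations may differ), is a target-decoy false discovery rate cut-off:

        def fdr_filter(psms, max_fdr=0.01):
            """psms: (score, is_decoy) pairs from a database search; higher score = better.
            Keep target identifications above the most permissive score cut-off whose
            estimated FDR (#decoys / #targets among accepted hits) stays <= max_fdr."""
            ranked = sorted(psms, key=lambda p: p[0], reverse=True)
            targets = decoys = 0
            cutoff = None
            for score, is_decoy in ranked:
                if is_decoy:
                    decoys += 1
                else:
                    targets += 1
                if targets and decoys / targets <= max_fdr:
                    cutoff = score  # FDR still acceptable down to this score
            if cutoff is None:
                return []
            return [(s, d) for s, d in ranked if not d and s >= cutoff]

        # e.g. accept target peptide-spectrum matches at an estimated 1% FDR
        accepted = fdr_filter([(95.2, False), (90.1, False), (88.7, True), (70.3, False)])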

  12. Dansk kolorektal Cancer Database

    Harling, Henrik; Nickelsen, Thomas

    2005-01-01

    The Danish Colorectal Cancer Database was established in 1994 with the purpose of monitoring whether diagnostic and surgical principles specified in the evidence-based national guidelines of good clinical practice were followed. Twelve clinical indicators have been listed by the Danish Colorectal...... Cancer Group, and the performance of each hospital surgical department with respect to these indicators is reported annually. In addition, the register contains a large collection of data that provide valuable information on the influence of comorbidity and lifestyle factors on disease outcome...

  13. Usability in Scientific Databases

    Ana-Maria Suduc

    2012-07-01

    Full Text Available Usability, most often defined as the ease of use and acceptability of a system, affects users' performance and their job satisfaction when working with a machine. Therefore, usability is a very important aspect which must be considered in the process of system development. The paper presents numerical data related to the history of scientific research on the usability of information systems, as reflected in the results returned by three important scientific databases, Science Direct, ACM Digital Library and IEEE Xplore Digital Library, for different queries related to this field.

  14. Generalized Database Management System Support for Numeric Database Environments.

    Dominick, Wayne D.; Weathers, Peggy G.

    1982-01-01

    This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…

  15. The YH database: the first Asian diploid genome database

    Li, Guoqing; Ma, Lijia; Song, Chao

    2009-01-01

    genome consensus. The YH database is currently one of the three personal genome databases, organizing the original data and analysis results in a user-friendly interface; it is an endeavor to achieve the fundamental goals of establishing personal medicine. The database is available at http://yh.genomics.org.cn....

  16. Study of developing a database of energy statistics

    Park, T.S. [Korea Energy Economics Institute, Euiwang (Korea, Republic of)

    1997-08-01

    An integrated energy database should be prepared in advance to manage energy statistics comprehensively. However, since developing an integrated energy database requires much manpower and budget, it is difficult to establish one within a short period of time. Therefore, as a first stage of work on the energy database, this study aims to draw up methods for analyzing existing statistical data lists and consolidating insufficient data, and at the same time to analyze general concepts and the data structure of the database. I also studied the data content and items of the energy databases operated by international energy-related organizations such as the IEA and APEC, and by Japan and the USA, as overseas cases, as well as domestic conditions in energy databases and the hardware operating systems of Japanese databases. I analyzed the compilation system of Korean energy databases, discussed the KEDB system, which is representative of total energy databases, and present design concepts for new energy databases. In addition, by analyzing the Korean energy statistical data and comparing them with the OECD/IEA system, I present directions for establishing future Korean energy databases and their contents, the data that should be collected as supply and demand statistics, and the organization needed for data collection. 26 refs., 15 figs., 11 tabs.

  17. Asbestos Exposure Assessment Database

    Arcot, Divya K.

    2010-01-01

    Exposure to particular hazardous materials in a work environment is dangerous to the employees who work directly with or around the materials as well as those who come in contact with them indirectly. In order to maintain a national standard for safe working environments and protect worker health, the Occupational Safety and Health Administration (OSHA) has set forth numerous precautionary regulations. NASA has been proactive in adhering to these regulations by implementing standards which are often stricter than regulation limits and administering frequent health risk assessments. The primary objective of this project is to create the infrastructure for an Asbestos Exposure Assessment Database specific to NASA Johnson Space Center (JSC) which will compile all of the exposure assessment data into a well-organized, navigable format. The data includes Sample Types, Sample Durations, Crafts of those from whom samples were collected, Job Performance Requirement (JPR) numbers, Phase Contrast Microscopy (PCM) and Transmission Electron Microscopy (TEM) results and qualifiers, Personal Protective Equipment (PPE), and names of industrial hygienists who performed the monitoring. This database will allow NASA to provide OSHA with specific information demonstrating that JSC's work procedures are protective enough to minimize the risk of future disease from the exposures. The data has been collected by the NASA contractors Computer Sciences Corporation (CSC) and Wyle Laboratories. The personal exposure samples were collected from devices worn by laborers working at JSC and by building occupants located in asbestos-containing buildings.
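    A minimal sketch of how the fields listed above could be laid out as a single table; column names and types are assumptions for illustration, not the actual JSC design.

        # Illustrative schema only; not the actual JSC asbestos exposure database.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""
        CREATE TABLE exposure_sample (
            sample_id     INTEGER PRIMARY KEY,
            sample_type   TEXT,   -- personal or area sample
            duration_min  REAL,   -- sample duration
            craft         TEXT,   -- trade of the worker sampled
            jpr_number    TEXT,   -- Job Performance Requirement
            pcm_result    REAL,   -- fibers/cc by Phase Contrast Microscopy
            pcm_qualifier TEXT,
            tem_result    REAL,   -- structures/cc by Transmission Electron Microscopy
            tem_qualifier TEXT,
            ppe           TEXT,   -- Personal Protective Equipment worn
            hygienist     TEXT    -- industrial hygienist who performed the monitoring
        )
        """)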

  18. Brasilia’s Database Administrators

    Jane Adriana

    2016-06-01

    Full Text Available Database administration has gained an essential role in the management of new database technologies. Different data models, beyond the traditional relational database, are being created to support enormous data volumes. These new models are called NoSQL (Not only SQL) databases. The adoption of best practices and procedures has become essential for the operation of database management systems. Thus, this paper investigates some of the techniques and tools used by database administrators. The study highlights features and particularities of databases within the area of Brasilia, the capital of Brazil. The results point to which new technologies regarding database management are currently the most relevant, as well as the central issues in this area.

  19. National Transportation Atlas Databases : 2012.

    2012-01-01

    The National Transportation Atlas Databases 2012 (NTAD2012) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...

  20. National Transportation Atlas Databases : 2011.

    2011-01-01

    The National Transportation Atlas Databases 2011 (NTAD2011) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...