WorldWideScience

Sample records for tool elist database

  1. System administrator's manual (SAM) for the enhanced logistics intratheater support tool (ELIST) database instance segment version 8.1.0.0 for solaris 7.; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.

    2002-01-01

    This document is the System Administrator's Manual (SAM) for the Enhanced Logistics Intratheater Support Tool (ELIST) Database Instance Segment. It covers errors that can arise during the segment's installation and deinstallation, and it outlines appropriate recovery actions. It also tells how to change the password for the SYSTEM account of the database instance after the instance is created, and it discusses the creation of a suitable database instance for ELIST by means other than the installation of the segment. The latter subject is covered in more depth than its introductory discussion in the Installation Procedures (IP) for the Enhanced Logistics Intratheater Support Tool (ELIST) Global Data Segment, Database Instance Segment, Database Fill Segment, Database Segment, Database Utility Segment, Software Segment, and Reference Data Segment (referred to in portions of this document as the ELIST IP). The information in this document is expected to be of use only rarely. Other than errors arising from the failure to follow instructions, difficulties are not expected to be encountered during the installation or deinstallation of the segment. By the same token, the need to create a database instance for ELIST by means other than the installation of the segment is expected to be the exception, rather than the rule. Most administrators will only need to be aware of the help that is provided in this document and will probably not actually need to read and make use of it.

  2. System administrator's manual (SAM) for the enhanced logistics intratheater support tool (ELIST) database segment version 8.1.0.0 for solaris 7.; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.

    2002-01-01

    This document is the System Administrator's Manual (SAM) for the Enhanced Logistics Intratheater Support Tool (ELIST) Database Segment. It covers errors that can arise during the segment's installation and deinstallation, and it outlines appropriate recovery actions. It also tells how to extend the database storage available to Oracle if a datastore becomes filled during the use of ELIST. The latter subject builds on some of the actions that must be performed when installing this segment, as documented in the Installation Procedures (IP) for the Enhanced Logistics Intratheater Support Tool (ELIST) Global Data Segment, Database Instance Segment, Database Fill Segment, Database Segment, Database Utility Segment, Software Segment, and Reference Data Segment (referred to in portions of this document as the ELIST IP). The information in this document is expected to be of use only rarely. Other than errors arising from the failure to follow instructions, difficulties are not expected to be encountered during the installation or deinstallation of the segment. The need to extend database storage likewise typically arises infrequently. Most administrators will only need to be aware of the help that is provided in this document and will probably not actually need to read and make use of it.

  3. Software test plan/description/report (STP/STD/STR) for the enhanced logistics intratheater support tool (ELIST) global data segment. Version 8.1.0.0, Database Instance Segment Version 8.1.0.0, ...[elided] and Reference Data Segment Version 8.1.0.0 for Solaris 7; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.; Absil-Mills, M.; Jacobs, K.

    2002-01-01

    This document is the Software Test Plan/Description/Report (STP/STD/STR) for the DII COE Enhanced Logistics Intratheater Support Tool (ELIST) mission application. It combines in one document the information normally presented separately in a Software Test Plan, a Software Test Description, and a Software Test Report; it also presents this information in one place for all the segments of the ELIST mission application. The primary purpose of this document is to show that ELIST has been tested by the developer and found, by that testing, to install, deinstall, and work properly. The information presented here is detailed enough to allow the reader to repeat the testing independently. The remainder of this document is organized as follows. Section 1.1 identifies the ELIST mission application. Section 2 is the list of all documents referenced in this document. Section 3, the Software Test Plan, outlines the testing methodology and scope, the latter by way of a concise summary of the tests performed. Section 4 presents detailed descriptions of the tests, along with the expected and observed results; that section therefore combines the information normally found in a Software Test Description and a Software Test Report. The remaining small sections present supplementary information. Throughout this document, the phrase ELIST IP refers to the Installation Procedures (IP) for the Enhanced Logistics Intratheater Support Tool (ELIST) Global Data Segment, Database Instance Segment, Database Fill Segment, Database Segment, Database Utility Segment, Software Segment, and Reference Data Segment.

  4. Installation procedures (IP) for the enhanced logistics intratheater support tool (ELIST) global data segment version 8.1.0.0, database instance segment version 8.1.0.0, ...[elided] and reference data segment version 8.1.0.0 for solaris 7.; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.

    2002-01-01

    This document is the Installation Procedures (IP) for the DII COE Enhanced Logistics Intratheater Support Tool (ELIST) mission application. It tells how to install and deinstall the seven segments of the mission application.

  5. Introduction to the enhanced logistics intratheater support tool (ELIST) mission application and its segments : global data segment version 8.1.0.0, database instance segment version 8.1.0.0, ...[elided] and reference data segment version 8.1.0.0 for solaris 7.; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.

    2002-01-01

    The ELIST mission application simulates and evaluates the feasibility of intratheater transportation logistics primarily for the theater portion of a course of action. It performs a discrete event simulation of a series of movement requirements over a constrained infrastructure network using specified transportation assets. ELIST addresses the question of whether transportation infrastructures and lift allocations are adequate to support the movements of specific force structures and supplies to their destinations on time.

  6. ELIST v.8.1 : User's Manual.; TOPICAL

    International Nuclear Information System (INIS)

    Van Groningen, C. N.; Blachowicz, D.; Braun, M. D.; Clemmons, M. A.; Simunich, K. L.; Timmerman, D.; VanderZee, H.; Widing, M. A.

    2002-01-01

    This user's manual documents the capabilities and functions of the Enhanced Logistics Intratheater Support Tool (ELIST) software application. Steps for using the Expanded Time Phase Force Deployment Data (ETPFDD) Editor (ETEdit), which is included in ELIST but is also a stand-alone software application, are contained in a separate document. ELIST is a discrete event simulation tool developed for use by military planners in both the continental United States (CONUS) and outside the continental United States (OCONUS). It simulates the reception, staging, onward movement, and integration (RSOI) of military personnel and equipment from all services within, between, or among countries. ELIST not only runs a simulation, but it also provides the capability to edit asset sets, networks, and scenarios. These capabilities show how various changes can affect the outcome of a simulation. Further, ELIST incorporates topographic maps on which the network is displayed. The system also allows planners to simulate scenarios at the vehicle level. Prior to the implementation of ELIST, planners were able to simulate military deployment from the point of departure (origin) to the point of arrival in the theater (the port of debarkation). Since the development and implementation of ELIST, however, planners can simulate military deployment from the point of departure (airport or seaport), through the staging area, through the theater-staging base, to the final destination. A typical scenario might be set up to transport personnel and cargo to a location by aircraft or ship. Upon arrival at the airport or seaport, the cargo would be sent to a staging area where it would be set up and transferred to a vehicle, or in the case of petroleum, oil, and lubricants (POL), a pipeline. The vehicle then would transport the cargo to the theater-staging base where it would "marry up" with the main body of personnel. From this point, the cargo and the main body would be transported to the final destination.

  7. ELIST8: simulating military deployments in Java

    International Nuclear Information System (INIS)

    Van Groningen, C. N.; Blachowicz, D.; Braun, M. D.; Simunich, K. L.; Widing, M. A.

    2002-01-01

    Planning for the transportation of large amounts of equipment, troops, and supplies presents a complex problem. Many options, including modes of transportation, vehicles, facilities, routes, and timing, must be considered. The amount of data involved in generating and analyzing a course of action (e.g., detailed information about military units, logistical infrastructures, and vehicles) is enormous. Software tools are critical in defining and analyzing these plans. Argonne National Laboratory has developed ELIST (Enhanced Logistics Intra-theater Support Tool), a simulation-based decision support system, to assist military planners in determining the logistical feasibility of an intra-theater course of action. The current version of ELIST (v.8) contains a discrete event simulation developed using the Java programming language. Argonne selected Java because of its object-oriented framework, which has greatly facilitated entity and process development within the simulation, and because it fulfills a primary requirement for multi-platform execution. This paper describes the model, including setup and analysis, a high-level architectural design, and an evaluation of Java

  8. Brede Tools and Federating Online Neuroinformatics Databases

    DEFF Research Database (Denmark)

    Nielsen, Finn Årup

    2014-01-01

    As open science neuroinformatics databases, the Brede Database and Brede Wiki seek to make distribution and federation of their content as easy and transparent as possible. The databases rely on simple formats and allow other online tools to reuse their content. This paper describes the possible interconnections on different levels between the Brede tools and other databases.

  9. Programming database tools for the casual user

    International Nuclear Information System (INIS)

    Katz, R. A.; Griffiths, C.

    1990-01-01

    The AGS Distributed Control System (AGSDCS) uses a relational database management system (INTERBASE) for the storage of all data associated with the control of the particle accelerator complex. This includes the static data which describes the component devices of the complex, as well as data for application program startup and data records that are used in analysis. Due to licensing restrictions, it was necessary to develop tools that allow programs requiring database access to be unconcerned with whether or not they were running on a licensed node. An in-house database server program was written, using Apollo mailbox communication protocols, that allows application programs to access the INTERBASE database via calls to this server. Initially, the tools used by the server to actually access the database were written using the GDML C host-language interface. Through an evolutionary learning process, these tools have been converted to Dynamic SQL. Additionally, these tools have been extracted from the exclusive province of the database server and placed in their own library. This enables application programs to use these same tools on a licensed node without using the database server and without having to modify the application code. The syntax of the C calls remains the same.
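    The key idea in the abstract above, shared tools that present the same call syntax whether the database is reached directly or through a server process, can be sketched as follows. This is an illustrative Python sketch using SQLite; the function names and table are hypothetical and do not reflect the actual AGSDCS API.

    ```python
    # Sketch of the shared-library pattern: application code calls the same
    # function whether it runs on a licensed node (direct access) or on an
    # unlicensed node (access mediated by a server). Names are illustrative.
    import sqlite3

    def query_direct(sql, params=()):
        """Run the query against a local connection (licensed node)."""
        with sqlite3.connect(":memory:") as conn:
            # Populate a tiny example table standing in for device data.
            conn.execute("CREATE TABLE device (name TEXT, value REAL)")
            conn.execute("INSERT INTO device VALUES ('magnet1', 3.5)")
            return conn.execute(sql, params).fetchall()

    def query_via_server(sql, params=()):
        """Stand-in for forwarding the same request through a server process;
        here it simply delegates, keeping the call syntax identical."""
        return query_direct(sql, params)

    # Application code is unchanged either way:
    rows = query_direct("SELECT value FROM device WHERE name = ?", ("magnet1",))
    print(rows)  # → [(3.5,)]
    ```

    The point of the pattern is that swapping `query_direct` for `query_via_server` requires no change to the calling code.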

  10. Nanoinformatics: Emerging Databases and Available Tools

    Directory of Open Access Journals (Sweden)

    Suresh Panneerselvam

    2014-04-01

    Nanotechnology has arisen as a key player in the field of nanomedicine. Although the use of engineered nanoparticles is rapidly increasing, safety assessment is also important for the beneficial use of new nanomaterials. Considering that the experimental assessment of new nanomaterials is costly and laborious, in silico approaches hold promise. Several major challenges in nanotechnology indicate a need for nanoinformatics. New database initiatives such as ISA-TAB-Nano, caNanoLab, and Nanomaterial Registry will help in data sharing and developing data standards, and, as the amount of nanomaterials data grows, will provide a way to develop methods and tools specific to the nanolevel. In this review, we describe emerging databases and tools that should aid in the progress of nanotechnology research.

  11. Chemical databases evaluated by order theoretical tools.

    Science.gov (United States)

    Voigt, Kristina; Brüggemann, Rainer; Pudenz, Stefan

    2004-10-01

    Data on environmental chemicals are urgently needed to comply with the future chemicals policy in the European Union. The availability of data on parameters and chemicals can be evaluated by chemometrical and environmetrical methods. Different mathematical and statistical methods are taken into account in this paper. The emphasis is placed on a new, discrete mathematical method called METEOR (method of evaluation by order theory). Application of the Hasse diagram technique (HDT) to the complete data matrix, comprising 12 objects (databases) x 27 attributes (parameters + chemicals), reveals that ECOTOX (ECO), the environmental fate database (EFD), and EXTOXNET (EXT), also called multi-database databases, rank best. Most specialised single databases are found in a minimal position in the Hasse diagram; these are the biocatalysis/biodegradation database (BID), the pesticide database (PES), and UmweltInfo (UMW). The aggregation of environmental parameters and chemicals (equal weight) leads to a slimmer data matrix on the attribute side. However, no significant differences are found in the "best" and "worst" objects. The whole approach indicates a rather bad situation in terms of the availability of data on existing chemicals and hence an alarming signal concerning the new and existing chemicals policies of the EEC.
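    The Hasse diagram technique described above rests on a componentwise dominance relation: one database ranks above another only if it is at least as good in every attribute. A minimal sketch of computing the diagram's edges (cover relations); the attribute vectors here are fabricated for illustration, not taken from the paper:

    ```python
    # Partial-order (Hasse diagram) comparison of objects by attribute vectors.
    # Attribute values below are illustrative, not the paper's data.
    def dominates(a, b):
        """True if a >= b in every component and a != b (strict dominance)."""
        return all(x >= y for x, y in zip(a, b)) and a != b

    databases = {
        "ECO": (1, 1, 1),
        "EFD": (1, 1, 0),
        "BID": (0, 1, 0),
        "UMW": (0, 0, 0),
    }

    # Cover relations: dominance pairs with no third object strictly between.
    covers = []
    for upper, va in databases.items():
        for lower, vb in databases.items():
            if dominates(va, vb) and not any(
                dominates(va, vm) and dominates(vm, vb)
                for m, vm in databases.items() if m not in (upper, lower)
            ):
                covers.append((upper, lower))

    print(sorted(covers))  # → [('BID', 'UMW'), ('ECO', 'EFD'), ('EFD', 'BID')]
    ```

    In the resulting diagram, objects dominated by everything (here "UMW") sit in a minimal position, which is the sense in which the specialised single databases rank worst.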

  12. Northern Forest Futures reporting tools and database guide

    Science.gov (United States)

    Patrick D. Miles; Robert J. Huggett; W. Keith Moser

    2015-01-01

    The Northern Forest Futures database (NFFDB) supports the reporting of both current and projected future forest conditions for the 20 states that make up the U.S. North, an area bounded by Maine, Maryland, Missouri, and Minnesota. The NFFDB database and attendant reporting tools are available to the public as a Microsoft Access™ database. The...

  13. Documentation of databases in the Wilmar Planning tool

    International Nuclear Information System (INIS)

    Kiviluoma, J.; Meibom, P.

    2006-01-01

    The Wilmar Planning tool consists of a number of databases and models as shown in Figure 1. This report documents the design of the following subparts of the Wilmar Planning tool: 1. The Scenario database holding the scenario trees generated from the Scenario Tree Creation model. 2. The Input database holding input data to the Joint Market model and the Long-term model apart from the scenario trees. 3. The output database containing the results of a Joint Market model run. The Wilmar Planning Tool is developed in the project Wind Power Integration in Liberalised Electricity Markets (WILMAR) supported by EU (contract ENK5-CT-2002-00663). (LN)

  14. Plant databases and data analysis tools

    Science.gov (United States)

    It is anticipated that the coming years will see the generation of large datasets including diagnostic markers in several plant species with emphasis on crop plants. To use these datasets effectively in any plant breeding program, it is essential to have the information available via public database...

  15. Coupling databases and advanced analytic tools (R)

    OpenAIRE

    Seakomo, Saviour Sedem Kofi

    2014-01-01

    Today, several contemporary organizations collect various kinds of data, creating large data repositories. But the capacity to perform advanced analytics over these large amounts of data stored in databases remains a significant challenge for statistical software (R, S, SAS, SPSS, etc.) and data management systems (DBMSs). This is because while statistical software provides comprehensive analytics and modelling functionalities, it can only handle limited amounts of data. The data management sys...

  16. Using XFRACAS as a PVROM database tool.

    Energy Technology Data Exchange (ETDEWEB)

    Hamman, Colin Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-04-01

    ReliaSoft's XFRACAS is a tool chosen by Sandia National Laboratories (SNL) for the purpose of collecting and organizing photovoltaic (PV) system field data that is to be used in assessing PV system reliability. It is a Web-based, closed-loop, incident (failure) reporting, analysis, and corrective action system software package designed for the acquisition, management, and analysis of quality and reliability data from multiple sources. The ability to export PV system times-to-failure and times-to-suspension for ready analysis by ReliaSoft's Weibull++ and RGA was a primary consideration in choosing XFRACAS as a reliability data collection tool.

  17. Analysis of functionality free CASE-tools databases design

    Directory of Open Access Journals (Sweden)

    A. V. Gavrilov

    2016-01-01

    The introduction of database-design CASE technologies into the educational process requires significant costs for the institution to purchase software. A possible solution could be the use of free software analogues. At the same time, this kind of substitution should be based on an even-handed comparison of the functional characteristics and operational features of these programs. The purpose of this article is a review of free and non-profit CASE-tools for database design, as well as their classification on the basis of an analysis of their functionality. When writing this article, materials from the official websites of the tool developers were used. Evaluation of the functional characteristics of CASE-tools for database design was made exclusively empirically, through direct work with the software products. Analysis of the tools' functionality allows two categories of CASE-tools for database design to be distinguished. The first category includes systems with a basic set of features and tools. The most important basic functions of these systems are: management of connections to database servers, visual tools to create and modify database objects (tables, views, triggers, procedures), the ability to enter and edit data in table mode, user and privilege management tools, an SQL-code editor, and means of exporting/importing data. CASE-systems in the first category can be used to design and develop simple databases, to manage data, and as a means of administering a database server. A distinctive feature of the second category of CASE-tools for database design (full-featured systems) is the presence of a visual designer, allowing the construction of a database model and the automatic creation of the database on the server based on that model. CASE-systems in this category can be used for the design and development of databases of any structural complexity, as well as a database server administration tool. The article concluded that the

  18. EST databases and web tools for EST projects.

    Science.gov (United States)

    Shen, Yao-Qing; O'Brien, Emmet; Koski, Liisa; Lang, B Franz; Burger, Gertraud

    2009-01-01

    This chapter outlines key considerations for constructing and implementing an EST database. Instead of showing the technological details step by step, emphasis is put on the design of an EST database suited to the specific needs of EST projects and how to choose the most suitable tools. Using TBestDB as an example, we illustrate the essential factors to be considered for database construction and the steps for data population and annotation. This process employs technologies such as PostgreSQL, Perl, and PHP to build the database and interface, and tools such as AutoFACT for data processing and annotation. We discuss these in comparison to other available technologies and tools, and explain the reasons for our choices.

  19. Database tools for enhanced analysis of TMX-U data

    International Nuclear Information System (INIS)

    Stewart, M.E.; Carter, M.R.; Casper, T.A.; Meyer, W.H.; Perkins, D.E.; Whitney, D.M.

    1986-01-01

    A commercial database software package has been used to create several databases and tools that assist and enhance the ability of experimental physicists to analyze data from the Tandem Mirror Experiment-Upgrade (TMX-U) experiment. This software runs on a DEC-20 computer in M-Division's User Service Center at Lawrence Livermore National Laboratory (LLNL), where data can be analyzed off line from the main TMX-U acquisition computers. When combined with interactive data analysis programs, these tools provide the capability to do batch-style processing or interactive data analysis on the computers in the USC or the supercomputers of the National Magnetic Fusion Energy Computer Center (NMFECC) in addition to the normal processing done by the TMX-U acquisition system. One database tool provides highly reduced data for searching and correlation analysis of several diagnostic signals within a single shot or over many shots. A second database tool provides retrieval and storage of unreduced data for use in detailed analysis of one or more diagnostic signals. We will show how these database tools form the core of an evolving off-line data analysis environment on the USC computers.

  20. A database of immunoglobulins with integrated tools: DIGIT.

    KAUST Repository

    Chailyan, Anna

    2011-11-10

    The DIGIT (Database of ImmunoGlobulins with Integrated Tools) database (http://biocomputing.it/digit) is an integrated resource storing sequences of annotated immunoglobulin variable domains and enriched with tools for searching and analyzing them. The annotations in the database include information on the type of antigen, the respective germline sequences and on pairing information between light and heavy chains. Other annotations, such as the identification of the complementarity determining regions, assignment of their structural class and identification of mutations with respect to the germline, are computed on the fly and can also be obtained for user-submitted sequences. The system allows customized BLAST searches and automatic building of 3D models of the domains to be performed.

  1. Database of Legal Terms for Communicative and Knowledge Information Tools

    DEFF Research Database (Denmark)

    Nielsen, Sandro

    2014-01-01

    Online law dictionaries based on traditional linguistic and text-linguistic approaches do not fully satisfy the needs for help that lawyers, students, and professional translators have in specific types of situations. This state of affairs can be remedied by re-assessing the practical and theoretical foundations of online dictionaries in light of the technical options available for online information tools combined with modern lexicographic principles. The above discussion indicates that the legal database is a repository of structured data serving online dictionaries that search for data in databases ... or specific legal matters in cognitive situations. The theoretical foundation and practical implications of this type of online law dictionary enable lexicographers to make dictionaries that satisfy the needs of lawyers, students, and translators for practical information tools.

  2. 2016 update of the PRIDE database and its related tools.

    Science.gov (United States)

    Vizcaíno, Juan Antonio; Csordas, Attila; del-Toro, Noemi; Dianes, José A; Griss, Johannes; Lavidas, Ilias; Mayer, Gerhard; Perez-Riverol, Yasset; Reisinger, Florian; Ternent, Tobias; Xu, Qing-Wei; Wang, Rui; Hermjakob, Henning

    2016-01-04

    The PRoteomics IDEntifications (PRIDE) database is one of the world-leading data repositories of mass spectrometry (MS)-based proteomics data. Since the beginning of 2014, PRIDE Archive (http://www.ebi.ac.uk/pride/archive/) has been the new PRIDE archival system, replacing the original PRIDE database. Here we summarize the developments in PRIDE resources and related tools since the previous update manuscript in the Database Issue in 2013. PRIDE Archive constitutes a complete redevelopment of the original PRIDE, comprising a new storage backend, data submission system, and web interface, among other components. PRIDE Archive supports the most widely used PSI (Proteomics Standards Initiative) data standard formats (mzML and mzIdentML) and implements the data requirements and guidelines of the ProteomeXchange Consortium. The wide adoption of ProteomeXchange within the community has triggered an unprecedented increase in the number of submitted data sets (around 150 data sets per month). We outline some statistics on the current PRIDE Archive data contents. We also report on the status of the PRIDE related stand-alone tools: PRIDE Inspector, PRIDE Converter 2, and the ProteomeXchange submission tool. Finally, we give a brief update on the resources under development, 'PRIDE Cluster' and 'PRIDE Proteomes', which provide a complementary view and quality-scored information of the peptide and protein identification data available in PRIDE Archive. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. Web Exploration Tools for a Fast Federated Optical Survey Database

    Science.gov (United States)

    Humphreys, Roberta M.

    2000-01-01

    We implemented several new web-based tools to improve the efficiency and versatility of access to the APS Catalog of the POSS I (Palomar Observatory-National Geographic Sky Survey) and its associated image database. The most important addition was a federated database system (FDBS) to link the APS Catalog and image database into one Internet-accessible database. With the FDBS, queries and transactions on the integrated database are performed as if it were a single database. We installed Myriad, the FDBS developed by Professor Jaideep Srivastava and members of his group in the University of Minnesota Computer Science Department. It is the first system to provide schema integration, query processing and optimization, and transaction management capabilities in a single framework. The attached figure illustrates the Myriad architecture. The FDBS permits horizontal access to the data, not just vertical. For example, for the APS, queries can be made not only by sky position but also by any parameter present in either of the databases. APS users will be able to produce an image of all the blue galaxies and stellar sources for comparison with X-ray source error ellipses from AXAF (Advanced X-ray Astrophysics Facility; Chandra), for example. The FDBS is now available as a beta release with the appropriate query forms at our web site. While much of our time was occupied with adapting Myriad to the APS environment, we also made major changes in StarBase, our DBMS for the Catalog, at the web interface to improve its efficiency for issuing and processing queries. StarBase is now three times faster for large queries. Improvements were also made at the web end of the image database for faster access, although work still needs to be done on the image database itself for more efficient return with the FDBS. During the past few years, we made several improvements to the database pipeline that creates the individual plate databases queried by StarBase. The changes include improved positions

  4. New tools and improvements in the Exoplanet Transit Database

    Directory of Open Access Journals (Sweden)

    Pejcha O.

    2011-02-01

    The comprehensive collection of available light curves, the prediction possibilities, and the online model-fitting procedure available via the Exoplanet Transit Database (ETD) have become very popular in the community. In this paper we summarize the changes made to the ETD during the last year, including the addition of Kepler candidates to the prediction section, the modeling of an unknown planet in the model-fit section, and some other small improvements. None of these new tools are described in the main ETD paper.

  5. More Data and Better Tools for the GLIMS Glacier Database

    Science.gov (United States)

    Raup, B. H.; Armstrong, R. L.; Cogley, J. G.; Hock, R.

    2015-12-01

    Earth's glaciers are changing rapidly in response to a changing climate, and this has implications for people in numerous ways, such as increased hazards from glacial lake outburst floods, changes to water resources, and increasing sea level. To understand these changes, it is vitally important to monitor glaciers through time, measuring their areal extent, changes in volume, flow velocities, snow lines, elevation distribution, and changes to associated water bodies. The glacier database of the Global Land Ice Measurements from Space (GLIMS) initiative is the only multi-temporal glacier database capable of tracking all these glacier measurements and providing them to the scientific community and broader public. This contribution presents recent results in 1) expansion of the GLIMS Glacier Database in geographic coverage by drawing on the Randolph Glacier Inventory (RGI) and other new data sets; 2) new tools for visualizing and downloading GLIMS data in a choice of formats and data models; 3) a new data model for handling multiple glacier records through time while avoiding double-counting of glacier number or area; and 4) a new system of collaboration between all members of the glacier mapping community to streamline the process of meeting various community needs. The result of this work promises to be an improved glacier data repository that will be useful for tracking changes in water resources, hazards, and mass budgets of the world's glaciers.

  6. Astronomical database and VO-tools of Nikolaev Astronomical Observatory

    Science.gov (United States)

    Mazhaev, A. E.; Protsyuk, Yu. I.

    2010-05-01

    search centre and a search radius. The search results are output to a main window of Aladin in textual and graphical form using XML and the Simple Object Access Protocol (SOAP). In this way, the NVO image server is integrated with other astronomical servers, using a special configuration file. The user may conveniently request information from many servers using the same server selector of Aladin, although the servers are located in different countries. Aladin has a wide range of special tools for data analysis and handling, including connection with other standalone applications. As a conclusion, we should note that a research team of a data centre, which provides the infrastructure for data output to the internet, is responsible for creation of corresponding archives. Therefore, each observatory or data centre has to provide access to its archives in accordance with the IVOA standards and a resolution adopted by the IAU XXV General Assembly #B.1, titled: Public Access to Astronomical Archives. The research team of NAO copes successfully with this task and continues to develop the NVO. Using our databases and VO-tools, we also take part in development of the Ukrainian Virtual Observatory (UkrVO). All three main parts of the NVO are used as prototypes for the UkrVO. Informational resources provided by other astronomical institutions from Ukraine will be included in corresponding databases and VO interfaces.
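    A positional query of the kind described above, a search centre plus a search radius, is conventionally expressed in Virtual Observatory services as an HTTP GET request with `RA`, `DEC`, and `SR` parameters (the IVOA Simple Cone Search convention). A minimal sketch of building such a request; the endpoint URL is hypothetical:

    ```python
    # Build a cone-search request URL: centre (RA, DEC) in degrees plus a
    # search radius SR in degrees, per the IVOA Simple Cone Search convention.
    # The base URL below is a placeholder, not a real NVO endpoint.
    from urllib.parse import urlencode

    def cone_search_url(base, ra_deg, dec_deg, radius_deg):
        """Return a GET URL querying `base` for sources within `radius_deg`
        of the centre (ra_deg, dec_deg)."""
        params = urlencode({"RA": ra_deg, "DEC": dec_deg, "SR": radius_deg})
        return f"{base}?{params}"

    url = cone_search_url("http://example.org/nvo/cone", 83.633, 22.014, 0.1)
    print(url)  # → http://example.org/nvo/cone?RA=83.633&DEC=22.014&SR=0.1
    ```

    A compliant service answers such a request with a VOTable (XML) document, which is the form in which tools like Aladin ingest the results.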

  7. Tools for loading MEDLINE into a local relational database

    Science.gov (United States)

    Oliver, Diane E; Bhalotia, Gaurav; Schwartz, Ariel S; Altman, Russ B; Hearst, Marti A

    2004-01-01

    Background Researchers who use MEDLINE for text mining, information extraction, or natural language processing may benefit from having a copy of MEDLINE that they can manage locally. The National Library of Medicine (NLM) distributes MEDLINE in eXtensible Markup Language (XML)-formatted text files, but it is difficult to query MEDLINE in that format. We have developed software tools to parse the MEDLINE data files and load their contents into a relational database. Although the task is conceptually straightforward, the size and scope of MEDLINE make the task nontrivial. Given the increasing importance of text analysis in biology and medicine, we believe a local installation of MEDLINE will provide helpful computing infrastructure for researchers. Results We developed three software packages that parse and load MEDLINE, and ran each package to install separate instances of the MEDLINE database. For each installation, we collected data on loading time and disk-space utilization to provide examples of the process in different settings. Settings differed in terms of commercial database-management system (IBM DB2 or Oracle 9i), processor (Intel or Sun), programming language of installation software (Java or Perl), and methods employed in different versions of the software. The loading times for the three installations were 76 hours, 196 hours, and 132 hours, and disk-space utilization was 46.3 GB, 37.7 GB, and 31.6 GB, respectively. Loading times varied due to a variety of differences among the systems. Loading time also depended on whether data were written to intermediate files or not, and on whether input files were processed in sequence or in parallel. Disk-space utilization depended on the number of MEDLINE files processed, amount of indexing, and whether abstracts were stored as character large objects or truncated. 
Conclusions Relational database (RDBMS) technology supports indexing and querying of very large datasets, and can accommodate a locally stored version

  8. Automated tools for cross-referencing large databases. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Clapp, N.E.; Green, P.L.; Bell, D. [and others]

    1997-05-01

    A Cooperative Research and Development Agreement (CRADA) was funded with TRESP Associates, Inc., to develop a limited prototype software package operating on one platform (e.g., a personal computer, small workstation, or other selected device) to demonstrate the concepts of using an automated database application to improve the process of detecting fraud and abuse of the welfare system. An analysis was performed on Tennessee's welfare administration system. This analysis was undertaken to determine if the incidence of welfare waste, fraud, and abuse could be reduced and if the administrative process could be improved to reduce benefits overpayment errors. The analysis revealed a general inability to obtain timely data to support the verification of a welfare recipient's economic status and eligibility for benefits. It has been concluded that the provision of more modern computer-based tools and the establishment of electronic links to other state and federal data sources could increase staff efficiency, reduce the incidence of out-of-date information provided to welfare assistance staff, and make much of the new data required available in real time. Electronic data links have been proposed to allow near-real-time access to data residing in databases located in other states and at federal agency data repositories. The ability to provide these improvements to the local office staff would require the provision of additional computers, software, and electronic data links within each of the offices and the establishment of approved methods of accessing remote databases and transferring potentially sensitive data. In addition, investigations will be required to ascertain if existing laws would allow such data transfers, and if not, what changed or new laws would be required. 
The benefits, in both cost and efficiency, to the state of Tennessee of having electronically-enhanced welfare system administration and control are expected to result in a rapid return on investment.

  9. Global Tsunami Database: Adding Geologic Deposits, Proxies, and Tools

    Science.gov (United States)

    Brocko, V. R.; Varner, J.

    2007-12-01

    A result of collaboration between NOAA's National Geophysical Data Center (NGDC) and the Cooperative Institute for Research in Environmental Sciences (CIRES), the Global Tsunami Database includes instrumental records, human observations, and now, information inferred from the geologic record. Deep Ocean Assessment and Reporting of Tsunamis (DART) data, historical reports, and information gleaned from published tsunami deposit research build a multi-faceted view of tsunami hazards and their history around the world. Tsunami history provides clues to what might happen in the future, including frequency of occurrence and maximum wave heights. However, instrumental and written records commonly span too little time to reveal the full range of a region's tsunami hazard. The sedimentary deposits of tsunamis, identified with the aid of modern analogs, increasingly complement instrumental and human observations. By adding the component of tsunamis inferred from the geologic record, the Global Tsunami Database extends the record of tsunamis backward in time. Deposit locations, their estimated age and descriptions of the deposits themselves fill in the tsunami record. Tsunamis inferred from proxies, such as evidence for coseismic subsidence, are included to estimate recurrence intervals, but are flagged to highlight the absence of a physical deposit. Authors may submit their own descriptions and upload digital versions of publications. Users may sort by any populated field, including event, location, region, age of deposit, author, publication type (e.g., to extract information from peer-reviewed publications only), grain size, composition, and presence/absence of plant material. Users may find tsunami deposit references for a given location, event or author; search for particular properties of tsunami deposits; and even identify potential collaborators. Users may also download public-domain documents. 
Data and information may be viewed using tools designed to extract and

  10. The Web-Database Connection Tools for Sharing Information on the Campus Intranet.

    Science.gov (United States)

    Thibeault, Nancy E.

    This paper evaluates four tools for creating World Wide Web pages that interface with Microsoft Access databases: DB Gateway, Internet Database Assistant (IDBA), Microsoft Internet Database Connector (IDC), and Cold Fusion. The system requirements and features of each tool are discussed. A sample application, "The Virtual Help Desk"…

  11. Database constraints applied to metabolic pathway reconstruction tools.

    Science.gov (United States)

    Vilaplana, Jordi; Solsona, Francesc; Teixido, Ivan; Usié, Anabel; Karathia, Hiren; Alves, Rui; Mateo, Jordi

    2014-01-01

    Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database and, based on this study, tuned the configurable parameters of the database server to maximize the performance of communication to and from the database system. Different database technologies were analyzed: we started with a public relational SQL database, MySQL, and then implemented the same database in a MapReduce-based system, HBase. The results indicated that the standard configuration of MySQL gives acceptable performance for small or medium-sized databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes.

  12. Video Databases: An Emerging Tool in Business Education

    Science.gov (United States)

    MacKinnon, Gregory; Vibert, Conor

    2014-01-01

    A video database of business-leader interviews has been implemented in the assignment work of students in a Bachelor of Business Administration program at a primarily-undergraduate liberal arts university. This action research study was designed to determine the most suitable assignment work to associate with the database in a Business Strategy…

  13. The UKNG database: a simple audit tool for interventional neuroradiology

    International Nuclear Information System (INIS)

    Millar, J.S.; Burke, M.

    2007-01-01

    The UK Neurointerventional Group (UKNG) has developed a unified database for the purposes of recording, analysis and clinical audit of neuroangiography and neurointerventional procedures. It has been in use since January 2002. The database utilizes an Access platform (Microsoft) comprising separate but linked programs for data collection and analysis. The program that analyses aneurysm therapy has been designed to mirror the criteria used in the International Subarachnoid Aneurysm Trial (ISAT). Data entered into the main database immediately update the analysis program producing clinical outcome scores in the form of a report. Our local database (Wessex) now contains records on more than 1,750 patients including nearly 350 aneurysm coilings and a total of approximately 500 neurointerventional, vascular procedures. Every time a new piece of information is added to the main database the reporting database is automatically updated which allows 'real-time' audit and analysis of one's clinical practice. The clinical outcome scores for aneurysm treatment are presented in such a way that we can directly compare our results with the 'Clinical Standard' set by ISAT. This database provides a unique opportunity to monitor and review practice at national level. The UKNG wishes to share this database with the wider neurointerventional community and a copy of the software can be obtained free of charge from the authors. (orig.)

  14. The UKNG database: a simple audit tool for interventional neuroradiology.

    Science.gov (United States)

    Millar, J S; Burke, M

    2007-06-01

    The UK Neurointerventional Group (UKNG) has developed a unified database for the purposes of recording, analysis and clinical audit of neuroangiography and neurointerventional procedures. It has been in use since January 2002. The database utilizes an Access platform (Microsoft) comprising separate but linked programs for data collection and analysis. The program that analyses aneurysm therapy has been designed to mirror the criteria used in the International Subarachnoid Aneurysm Trial (ISAT). Data entered into the main database immediately update the analysis program producing clinical outcome scores in the form of a report. Our local database (Wessex) now contains records on more than 1,750 patients including nearly 350 aneurysm coilings and a total of approximately 500 neurointerventional, vascular procedures. Every time a new piece of information is added to the main database the reporting database is automatically updated which allows 'real-time' audit and analysis of one's clinical practice. The clinical outcome scores for aneurysm treatment are presented in such a way that we can directly compare our results with the 'Clinical Standard' set by ISAT. This database provides a unique opportunity to monitor and review practice at national level. The UKNG wishes to share this database with the wider neurointerventional community and a copy of the software can be obtained free of charge from the authors.

  15. The German Landslide Database: A Tool to Analyze Infrastructure Exposure

    Science.gov (United States)

    Damm, Bodo; Klose, Martin

    2015-04-01

    The Federal Republic of Germany has long been among the few European countries that lack a national landslide database. Systematic collection and inventory of landslide data over broad geographic areas and for different types of critical infrastructure was thus largely lacking until recently. This has changed in recent years with the launch of a database initiative aimed at closing the data gap at national level. The present contribution reports on this database project, which is focused on the development of a comprehensive pool of landslide data for systematic analysis of landslide hazard impacts in Germany. The major purpose of the database is to store and provide detailed scientific data on all types of landslides affecting critical infrastructures (transportation systems, industrial facilities, etc.) and urban areas. Over the last 15 years, the database has evolved to cover large parts of Germany and now offers data sets for more than 4,200 landslides with over 13,000 individual data files. Data collection follows a bottom-up approach that involves in-depth archive work and acquisition of data through close collaboration with infrastructure agencies and municipal offices. This makes it possible to develop a database that stores geospatial landslide information and detailed data sets on landslide causes and impacts as well as hazard mitigation. The database is currently being migrated to a spatial database system in PostgreSQL/PostGIS. This contribution gives an overview of the database content and its application in landslide impact research. It deals with the underlying strategy of data collection and presents the types of data and their quality to support damage statistics and analyses of infrastructure exposure. The contribution refers to different case studies and regional investigations in the German Central Uplands.

  16. Interaction between policy measures. Analysis tool in the MURE database

    Energy Technology Data Exchange (ETDEWEB)

    Boonekamp, P.G.M. [ECN Policy Studies, Petten (Netherlands); Faberi, S. [Institute of Studies for the Integration of Systems ISIS, Rome (Italy)

    2013-12-15

    The ODYSSEE database on energy efficiency indicators (www.odyssee-indicators.org) has been set up to enable the monitoring and evaluation of realised energy efficiency improvements and related energy savings. The database covers the 27 EU countries as well as Norway and Croatia and data are available from 1990 on. This report describes how sets of mutually consistent impacts for packages as well as individual policy measures can be determined in the MURE database (MURE is the French abbreviation for Mesures d'Utilisation Rationnelle de l'Energie).

  17. Using Web Database Tools To Facilitate the Construction of Knowledge in Online Courses.

    Science.gov (United States)

    McNeil, Sara G.; Robin, Bernard R.

    This paper presents an overview of database tools that dynamically generate World Wide Web materials and focuses on the use of these tools to support research activities, as well as teaching and learning. Database applications have been used in classrooms to support learning activities for over a decade, but, although business and e-commerce have…

  18. The database management system: A topic and a tool

    Science.gov (United States)

    Plummer, O. R.

    1984-01-01

    Data structures and database management systems are common tools employed to deal with the administrative information of a university. An understanding of these topics is needed by a much wider audience, ranging from those interested in computer-aided design and manufacturing to those using microcomputers. These tools are becoming increasingly valuable to academic programs as they develop comprehensive computer support systems. The wide use of these tools relies upon the relational data model as a foundation. Experience with the use of the IPAD RIM5.0 program is described.

  19. TabSQL: a MySQL tool to facilitate mapping user data to public databases.

    Science.gov (United States)

    Xia, Xiao-Qin; McClelland, Michael; Wang, Yipeng

    2010-06-23

    With advances in high-throughput genomics and proteomics, it is challenging for biologists to deal with large data files and to map their data to annotations in public databases. We developed TabSQL, a MySQL-based application tool, for viewing, filtering and querying data files with large numbers of rows. TabSQL provides functions for downloading and installing table files from public databases including the Gene Ontology database (GO), the Ensembl databases, and genome databases from the UCSC genome bioinformatics site. Any other database that provides tab-delimited flat files can also be imported. The downloaded gene annotation tables can be queried together with users' data in TabSQL using either a graphic interface or command line. TabSQL allows queries across the user's data and public databases without programming. It is a convenient tool for biologists to annotate and enrich their data.

  20. An online database for plant image analysis software tools

    OpenAIRE

    Lobet, Guillaume; Draye, Xavier; Périlleux, Claire

    2013-01-01

    Background: Recent years have seen an increase in methods for plant phenotyping using image analyses. These methods require new software solutions for data extraction and treatment. These solutions are instrumental in supporting various research pipelines, ranging from the localisation of cellular compounds to the quantification of tree canopies. However, due to the variety of existing tools and the lack of central repository, it is challenging for researchers to identify the software that is...

  1. The Development of a Benchmark Tool for NoSQL Databases

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2013-07-01

    Full Text Available The aim of this article is to describe a proposed benchmark methodology and software application targeted at measuring the performance of both SQL and NoSQL databases. These represent the results obtained during PhD research (actually part of a larger application intended for NoSQL database management). A reason for aiming at this particular subject is the near-complete lack of benchmarking tools for NoSQL databases, except for YCSB [1] and a benchmark tool made specifically to compare Redis to RavenDB. While there are several well-known benchmarking systems for classical relational databases (starting with the canonical TPC-C, TPC-E and TPC-H), on the other side of the database world such tools are mostly missing and seriously needed.

  2. Tools for the creation of IMS database designs from Entity-Relationship diagrams

    Energy Technology Data Exchange (ETDEWEB)

    Margrave, G.; Lusk, E.L.; Overbeek, R.A.

    1983-01-01

    This paper presents an overview of a system of software tools that can be used in preparing database designs. The design technique is based on the Entity-Relationship model - the tools allow a designer to conveniently develop an Entity-Relationship model, from which an Extended Entity-Relationship model is produced. The extended Entity-Relationship model is used as input to tools that generate the basic design appropriate to the target database management system. This paper reports on only the tools to generate an IMS design from the extended Entity-Relationship model.

  3. Databases and web tools for cancer genomics study.

    Science.gov (United States)

    Yang, Yadong; Dong, Xunong; Xie, Bingbing; Ding, Nan; Chen, Juan; Li, Yongjun; Zhang, Qian; Qu, Hongzhu; Fang, Xiangdong

    2015-02-01

    Publicly-accessible resources have promoted the advance of scientific discovery. The era of genomics and big data has brought the need for collaboration and data sharing in order to make effective use of this new knowledge. Here, we describe the web resources for cancer genomics research and rate them on the basis of the diversity of cancer types, sample size, omics data comprehensiveness, and user experience. The resources reviewed include data repository and analysis tools; and we hope such introduction will promote the awareness and facilitate the usage of these resources in the cancer research community. Copyright © 2015 The Authors. Production and hosting by Elsevier Ltd. All rights reserved.

  4. Contingency Contractor Optimization Phase 3 Sustainment Database Design Document - Contingency Contractor Optimization Tool - Prototype

    Energy Technology Data Exchange (ETDEWEB)

    Frazier, Christopher Rawls; Durfee, Justin David; Bandlow, Alisa; Gearhart, Jared Lee; Jones, Katherine A

    2016-05-01

    The Contingency Contractor Optimization Tool – Prototype (CCOT-P) database is used to store input and output data for the linear program model described in [1]. The database supports queries that retrieve these data, as well as the updating and insertion of new input data.

  5. Databases and Associated Bioinformatic Tools in Studies of Food Allergens, Epitopes and Haptens – a Review

    Directory of Open Access Journals (Sweden)

    Bucholska Justyna

    2018-06-01

    Full Text Available Allergies and/or food intolerances are a growing problem of the modern world. Difficulties associated with the correct diagnosis of food allergies result in the need to classify the factors causing allergies and allergens themselves. Therefore, internet databases and other bioinformatic tools play a special role in deepening knowledge of biologically-important compounds. Internet repositories, as a source of information on different chemical compounds, including those related to allergy and intolerance, are increasingly being used by scientists. Bioinformatic methods play a significant role in biological and medical sciences, and their importance in food science is increasing. This study aimed at presenting selected databases and tools of bioinformatic analysis useful in research on food allergies, allergens (11 databases), epitopes (7 databases), and haptens (2 databases). It also presents examples of the application of computer methods in studies related to allergies.

  6. The Princeton Protein Orthology Database (P-POD: a comparative genomics analysis tool for biologists.

    Directory of Open Access Journals (Sweden)

    Sven Heinicke

    2007-08-01

    Full Text Available Many biological databases that provide comparative genomics information and tools are now available on the internet. While certainly quite useful, to our knowledge none of the existing databases combine results from multiple comparative genomics methods with manually curated information from the literature. Here we describe the Princeton Protein Orthology Database (P-POD, http://ortholog.princeton.edu), a user-friendly database system that allows users to find and visualize the phylogenetic relationships among predicted orthologs (based on the OrthoMCL method) to a query gene from any of eight eukaryotic organisms, and to see the orthologs in a wider evolutionary context (based on the Jaccard clustering method). In addition to the phylogenetic information, the database contains experimental results manually collected from the literature that can be compared to the computational analyses, as well as links to relevant human disease and gene information via the OMIM, model organism, and sequence databases. Our aim is for the P-POD resource to be extremely useful to typical experimental biologists wanting to learn more about the evolutionary context of their favorite genes. P-POD is based on the commonly used Generic Model Organism Database (GMOD) schema and can be downloaded in its entirety for installation on one's own system. Thus, bioinformaticians and software developers may also find P-POD useful because they can use the P-POD database infrastructure when developing their own comparative genomics resources and database tools.

  7. ODG: Omics database generator - a tool for generating, querying, and analyzing multi-omics comparative databases to facilitate biological understanding.

    Science.gov (United States)

    Guhlin, Joseph; Silverstein, Kevin A T; Zhou, Peng; Tiffin, Peter; Young, Nevin D

    2017-08-10

    Rapid generation of omics data in recent years has resulted in vast amounts of disconnected datasets without systemic integration and knowledge building, while individual groups have made customized, annotated datasets available on the web with few ways to link them to in-lab datasets. With so many research groups generating their own data, the ability to relate it to the larger genomic and comparative genomic context is becoming increasingly crucial to make full use of the data. The Omics Database Generator (ODG) allows users to create customized databases that utilize published genomics data integrated with experimental data which can be queried using a flexible graph database. When provided with omics and experimental data, ODG will create a comparative, multi-dimensional graph database. ODG can import definitions and annotations from other sources such as InterProScan, the Gene Ontology, ENZYME, UniPathway, and others. This annotation data can be especially useful for studying new or understudied species for which transcripts have only been predicted, and rapidly give additional layers of annotation to predicted genes. In better studied species, ODG can perform syntenic annotation translations or rapidly identify characteristics of a set of genes or nucleotide locations, such as hits from an association study. ODG provides a web-based user-interface for configuring the data import and for querying the database. Queries can also be run from the command-line and the database can be queried directly through programming language hooks available for most languages. ODG supports most common genomic formats as well as a generic, easy-to-use tab-separated-value format for user-provided annotations. ODG is a user-friendly database generation and query tool that adapts to the supplied data to produce a comparative genomic database or multi-layered annotation database. 
ODG provides rapid comparative genomic annotation and is therefore particularly useful for non-model or

  8. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  9. Multiple imputation as one tool to provide longitudinal databases for modelling human height and weight development.

    Science.gov (United States)

    Aßmann, C

    2016-06-01

    Besides the large effort invested in field work, the provision of valid databases requires statistical and informational infrastructure to enable long-term access to longitudinal data sets on height, weight and related issues. To foster use of longitudinal data sets within the scientific community, the provision of valid databases also has to address data-protection regulations. It is, therefore, of major importance to prevent the identification of individuals from publicly available databases. To reach this goal, one possible strategy is to provide a synthetic database to the public, allowing strategies for data analysis to be pretested. Such synthetic databases can be established using multiple-imputation tools; given approval of the strategy, verification is then based on the original data. Multiple imputation by chained equations is illustrated as a means of producing synthetic databases, as it captures a wide range of statistical interdependencies. Missing values, which typically occur within longitudinal databases for reasons of item non-response, can also be addressed via multiple imputation when providing databases. The provision of synthetic databases using multiple-imputation techniques is one possible strategy to ensure data protection, increase the visibility of longitudinal databases and enhance their analytical potential.

  10. jSPyDB, an open source database-independent tool for data management

    International Nuclear Information System (INIS)

    Pierro, Giuseppe Antonio; Cavallari, Francesca; Di Guida, Salvatore; Innocente, Vincenzo

    2011-01-01

    Nowadays, the number of commercial tools for accessing databases, built on Java or .Net, is increasing. However, many of these applications have several drawbacks: usually they are not open-source, they provide interfaces only for a specific kind of database, and they are platform-dependent and very CPU- and memory-intensive. jSPyDB is a free web-based tool written in Python and Javascript. It relies on jQuery and Python libraries, and is intended to provide a simple handle on different database technologies inside a local web browser. Such a tool, exploiting fast access libraries such as SQLAlchemy, is easy to install and configure. The design of this tool envisages three layers. The front-end client side in the local web browser communicates with a back-end server. Only the server is able to connect to the different databases for the purposes of performing data definition and manipulation. The server makes the data available to the client, so that the user can display and handle them safely. Moreover, thanks to jQuery libraries, this tool supports export of data in different formats, such as XML and JSON. Finally, by using a set of pre-defined functions, users are allowed to create customized views for better data visualization. In this way, we optimize the performance of database servers by avoiding short connections and concurrent sessions. In addition, security is enforced, since users are not given the possibility to execute arbitrary SQL statements directly.

  11. Database tools for enhanced analysis of TMX-U data. Revision 1

    International Nuclear Information System (INIS)

    Stewart, M.E.; Carter, M.R.; Casper, T.A.; Meyer, W.H.; Perkins, D.E.; Whitney, D.M.

    1986-01-01

    A commercial database software package has been used to create several databases and tools that assist and enhance the ability of experimental physicists to analyze data from the Tandem Mirror Experiment-Upgrade (TMX-U) experiment. This software runs on a DEC-20 computer in M-Division's User Service Center at Lawrence Livermore National Laboratory (LLNL), where data can be analyzed offline from the main TMX-U acquisition computers. When combined with interactive data analysis programs, these tools provide the capability to do batch-style processing or interactive data analysis on the computers in the USC or the supercomputers of the National Magnetic Fusion Energy Computer Center (NMFECC) in addition to the normal processing done by the TMX-U acquisition system. One database tool provides highly reduced data for searching and correlation analysis of several diagnostic signals within a single shot or over many shots. A second database tool provides retrieval and storage of unreduced data for use in detailed analysis of one or more diagnostic signals. We will show how these database tools form the core of an evolving offline data analysis environment on the USC computers

  12. Tools and Databases of the KOMICS Web Portal for Preprocessing, Mining, and Dissemination of Metabolomics Data

    Directory of Open Access Journals (Sweden)

    Nozomu Sakurai

    2014-01-01

    Full Text Available A metabolome—the collection of comprehensive quantitative data on metabolites in an organism—has been increasingly utilized for applications such as data-intensive systems biology, disease diagnostics, biomarker discovery, and assessment of food quality. A considerable number of tools and databases have been developed to date for the analysis of data generated by various combinations of chromatography and mass spectrometry. We report here a web portal named KOMICS (The Kazusa Metabolomics Portal), where the tools and databases that we developed are available for free to academic users. KOMICS includes the tools and databases for preprocessing, mining, visualization, and publication of metabolomics data. Improvements in the annotation of unknown metabolites and dissemination of comprehensive metabolomic data are the primary aims behind the development of this portal. For this purpose, PowerGet and FragmentAlign include a manual curation function for the results of metabolite feature alignments. A metadata-specific wiki-based database, Metabolonote, functions as a hub of web resources related to the submitters' work. This feature is expected to increase citation of the submitters' work, thereby promoting data publication. As an example of the practical use of KOMICS, a workflow for a study on Jatropha curcas is presented. The tools and databases available at KOMICS should contribute to enhanced production, interpretation, and utilization of metabolomic Big Data.

  13. Tools and databases of the KOMICS web portal for preprocessing, mining, and dissemination of metabolomics data.

    Science.gov (United States)

    Sakurai, Nozomu; Ara, Takeshi; Enomoto, Mitsuo; Motegi, Takeshi; Morishita, Yoshihiko; Kurabayashi, Atsushi; Iijima, Yoko; Ogata, Yoshiyuki; Nakajima, Daisuke; Suzuki, Hideyuki; Shibata, Daisuke

    2014-01-01

    A metabolome--the collection of comprehensive quantitative data on metabolites in an organism--has been increasingly utilized for applications such as data-intensive systems biology, disease diagnostics, biomarker discovery, and assessment of food quality. A considerable number of tools and databases have been developed to date for the analysis of data generated by various combinations of chromatography and mass spectrometry. We report here a web portal named KOMICS (The Kazusa Metabolomics Portal), where the tools and databases that we developed are available for free to academic users. KOMICS includes the tools and databases for preprocessing, mining, visualization, and publication of metabolomics data. Improvements in the annotation of unknown metabolites and dissemination of comprehensive metabolomic data are the primary aims behind the development of this portal. For this purpose, PowerGet and FragmentAlign include a manual curation function for the results of metabolite feature alignments. A metadata-specific wiki-based database, Metabolonote, functions as a hub of web resources related to the submitters' work. This feature is expected to increase citation of the submitters' work, thereby promoting data publication. As an example of the practical use of KOMICS, a workflow for a study on Jatropha curcas is presented. The tools and databases available at KOMICS should contribute to enhanced production, interpretation, and utilization of metabolomic Big Data.

  14. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  15. The PRoteomics IDEntifications (PRIDE) database and associated tools: status in 2013.

    Science.gov (United States)

    Vizcaíno, Juan Antonio; Côté, Richard G; Csordas, Attila; Dianes, José A; Fabregat, Antonio; Foster, Joseph M; Griss, Johannes; Alpi, Emanuele; Birim, Melih; Contell, Javier; O'Kelly, Gavin; Schoenegger, Andreas; Ovelleiro, David; Pérez-Riverol, Yasset; Reisinger, Florian; Ríos, Daniel; Wang, Rui; Hermjakob, Henning

    2013-01-01

    The PRoteomics IDEntifications (PRIDE, http://www.ebi.ac.uk/pride) database at the European Bioinformatics Institute is one of the most prominent data repositories of mass spectrometry (MS)-based proteomics data. Here, we summarize recent developments in the PRIDE database and related tools. First, we provide up-to-date statistics in data content, splitting the figures by groups of organisms and species, including peptide and protein identifications, and post-translational modifications. We then describe the tools that are part of the PRIDE submission pipeline, especially the recently developed PRIDE Converter 2 (new submission tool) and PRIDE Inspector (visualization and analysis tool). We also give an update about the integration of PRIDE with other MS proteomics resources in the context of the ProteomeXchange consortium. Finally, we briefly review the quality control efforts that are ongoing at present and outline our future plans.

  16. Global search tool for the Advanced Photon Source Integrated Relational Model of Installed Systems (IRMIS) database

    International Nuclear Information System (INIS)

    Quock, D.E.R.; Cianciarulo, M.B.

    2007-01-01

    The Integrated Relational Model of Installed Systems (IRMIS) is a relational database tool that has been implemented at the Advanced Photon Source to maintain an updated account of approximately 600 control system software applications, 400,000 process variables, and 30,000 control system hardware components. To effectively display this large amount of control system information to operators and engineers, IRMIS was initially built with nine Web-based viewers: Applications Organizing Index, IOC, PLC, Component Type, Installed Components, Network, Controls Spares, Process Variables, and Cables. However, since each viewer is designed to provide details from only one major category of the control system, the necessity for a one-stop global search tool for the entire database became apparent. The user requirements for extremely fast database search time and ease of navigation through search results led to the choice of Asynchronous JavaScript and XML (AJAX) technology in the implementation of the IRMIS global search tool. Unique features of the global search tool include a two-tier level of displayed search results, and a database data integrity validation and reporting mechanism.

  17. jSPyDB, an open source database-independent tool for data management

    CERN Document Server

    Pierro, Giuseppe Antonio

    2010-01-01

    Nowadays, the number of commercial tools available for accessing databases, built on Java or .Net, is increasing. However, many of these applications have several drawbacks: usually they are not open source, they provide interfaces only to a specific kind of database, and they are platform-dependent and consume a lot of CPU and memory. jSPyDB is a free web-based tool written in Python and JavaScript. It relies on jQuery and Python libraries, and is intended to provide a simple handler to different database technologies inside a local web browser. Exploiting fast-access libraries such as SQLAlchemy, the tool is easy to install and configure. The design of this tool envisages three layers. The front-end client side in the local web browser communicates with a backend server. Only the server is able to connect to the different databases for the purposes of performing data definition and manipulation. The server makes the data available to the client, so that the user can display and handle them safely. ...

  18. Exposure Modeling Tools and Databases for Consideration for Relevance to the Amended TSCA (ISES)

    Science.gov (United States)

    The Agency’s Office of Research and Development (ORD) maintains a number of ongoing exposure modeling tools and databases. These efforts are anticipated to be useful in supporting ongoing implementation of the amended Toxic Substances Control Act (TSCA). Under ORD’s Chemic...

  19. Online analytical processing (OLAP): a fast and effective data mining tool for gene expression databases.

    Science.gov (United States)

    Alkharouf, Nadim W; Jamison, D Curtis; Matthews, Benjamin F

    2005-06-30

    Gene expression databases contain a wealth of information, but current data mining tools are limited in their speed and effectiveness in extracting meaningful biological knowledge from them. Online analytical processing (OLAP) can be used as a supplement to cluster analysis for fast and effective data mining of gene expression databases. We used Analysis Services 2000, a product that ships with SQLServer2000, to construct an OLAP cube that was used to mine a time series experiment designed to identify genes associated with resistance of soybean to the soybean cyst nematode, a devastating pest of soybean. The data for these experiments is stored in the soybean genomics and microarray database (SGMD). A number of candidate resistance genes and pathways were found. Compared to traditional cluster analysis of gene expression data, OLAP was more effective and faster in finding biologically meaningful information. OLAP is available from a number of vendors and can work with any relational database management system through OLE DB.
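The roll-up behaviour at the heart of an OLAP cube can be imitated in a few lines: aggregates are precomputed for every combination of dimension levels, so any slice or roll-up becomes a dictionary lookup rather than a table scan. This is only a toy sketch; the gene names, timepoints, and values are invented and bear no relation to the SGMD data.

```python
from collections import defaultdict
from itertools import product

facts = [  # (gene, timepoint_h, replicate, expression) -- invented values
    ("cyst-R1", 6, 1, 2.0), ("cyst-R1", 6, 2, 4.0),
    ("cyst-R1", 12, 1, 8.0), ("ctrl-G2", 6, 1, 1.0),
]

def rollup(facts):
    """Precompute mean expression for every combination of dimension levels,
    using None as the 'ALL' level of an OLAP roll-up."""
    sums = defaultdict(lambda: [0.0, 0])
    for gene, tp, _rep, value in facts:
        # Each fact contributes to four cells: (gene, tp), (gene, ALL),
        # (ALL, tp), and (ALL, ALL).
        for g, t in product((gene, None), (tp, None)):
            cell = sums[(g, t)]
            cell[0] += value
            cell[1] += 1
    return {k: s / n for k, (s, n) in sums.items()}

cube = rollup(facts)
print(cube[("cyst-R1", 6)])  # slice: one gene at one timepoint -> 3.0
print(cube[(None, None)])    # grand mean over all facts -> 3.75
```

A real OLAP server such as Analysis Services does this at scale with stored aggregations, which is why it outpaces ad hoc queries against the raw expression tables.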

  20. A web-based data visualization tool for the MIMIC-II database.

    Science.gov (United States)

    Lee, Joon; Ribey, Evan; Wallace, James R

    2016-02-04

    Although MIMIC-II, a public intensive care database, has been recognized as an invaluable resource for many medical researchers worldwide, becoming a proficient MIMIC-II researcher requires knowledge of SQL programming and an understanding of the MIMIC-II database schema. These are challenging requirements especially for health researchers and clinicians who may have limited computer proficiency. In order to overcome this challenge, our objective was to create an interactive, web-based MIMIC-II data visualization tool that first-time MIMIC-II users can easily use to explore the database. The tool offers two main features: Explore and Compare. The Explore feature enables the user to select a patient cohort within MIMIC-II and visualize the distributions of various administrative, demographic, and clinical variables within the selected cohort. The Compare feature enables the user to select two patient cohorts and visually compare them with respect to a variety of variables. The tool is also helpful to experienced MIMIC-II researchers who can use it to substantially accelerate the cumbersome and time-consuming steps of writing SQL queries and manually visualizing extracted data. Any interested researcher can use the MIMIC-II data visualization tool for free to quickly and conveniently conduct a preliminary investigation on MIMIC-II with a few mouse clicks. Researchers can also use the tool to learn the characteristics of the MIMIC-II patients. Since it is still impossible to conduct multivariable regression inside the tool, future work includes adding analytics capabilities. Also, the next version of the tool will aim to utilize MIMIC-III which contains more data.
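The Compare feature's basic idea, placing distributions of the same variable for two cohorts side by side, can be sketched as follows. The cohorts and ages below are invented, and this is not the tool's actual implementation.

```python
from statistics import mean, median

def summarize(values):
    """Basic distribution summary for one variable within a cohort."""
    return {"n": len(values), "mean": round(mean(values), 1),
            "median": median(values)}

# Hypothetical cohorts a user might select in the tool's Compare view.
cohort_a = {"age": [72, 80, 65, 77]}   # e.g. an ICU sepsis cohort
cohort_b = {"age": [54, 61, 58, 49]}   # e.g. elective-surgery admissions

for var in cohort_a:
    print(var, summarize(cohort_a[var]), "vs", summarize(cohort_b[var]))
```

The visualization tool performs the equivalent of this summary for each selected variable and renders the result graphically, sparing the user the SQL extraction step.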

  1. Alaska Geochemical Database - Mineral Exploration Tool for the 21st Century - PDF of presentation

    Science.gov (United States)

    Granitto, Matthew; Schmidt, Jeanine M.; Labay, Keith A.; Shew, Nora B.; Gamble, Bruce M.

    2012-01-01

    The U.S. Geological Survey has created a geochemical database of geologic material samples collected in Alaska. This database is readily accessible to anyone with access to the Internet. Designed as a tool for mineral or environmental assessment, land management, or mineral exploration, the initial version of the Alaska Geochemical Database - U.S. Geological Survey Data Series 637 - contains geochemical, geologic, and geospatial data for 264,158 samples collected from 1962-2009: 108,909 rock samples; 92,701 sediment samples; 48,209 heavy-mineral-concentrate samples; 6,869 soil samples; and 7,470 mineral samples. In addition, the Alaska Geochemical Database contains mineralogic data for 18,138 nonmagnetic-fraction heavy mineral concentrates, making it the first U.S. Geological Survey database of this scope that contains both geochemical and mineralogic data. Examples from the Alaska Range will illustrate potential uses of the Alaska Geochemical Database in mineral exploration. Data from the Alaska Geochemical Database have been extensively checked for accuracy of sample media description, sample site location, and analytical method using U.S. Geological Survey sample-submittal archives and U.S. Geological Survey publications (plus field notebooks and sample site compilation base maps from the Alaska Technical Data Unit in Anchorage, Alaska). The database is also the repository for nearly all previously released U.S. Geological Survey Alaska geochemical datasets. Although the Alaska Geochemical Database is a fully relational database in Microsoft® Access 2003 and 2010 formats, these same data are also provided as a series of spreadsheet files in Microsoft® Excel 2003 and 2010 formats, and as ASCII text files. A DVD version of the Alaska Geochemical Database was released in October 2011, as U.S. Geological Survey Data Series 637, and data downloads are available at http://pubs.usgs.gov/ds/637/. Also, all Alaska Geochemical Database data have been incorporated into

  2. A tool for the LHCb MWPC production monitoring the LNF on-line database

    CERN Document Server

    Sarti, A

    2006-01-01

    The LHCb Muon detector system is made of 1368 Multi Wire Proportional Chambers (MWPCs). These chambers differ in size, readout and construction materials and are built in various countries: their assembly takes place at different sites and involves many steps. To ensure production quality and to keep track of panel and chamber production, an on-line database has been set up. This web-based tool allows easy sharing of information: the database is updated through a web interface, and updates are easy and reliable. The interface supports information sharing among the different sites and helps optimize the production rate by allowing feedback on production needs and quality between the sites contributing to the production chain. Planning is based on information about material availability. Quality tests and cross-checks are made easier, as is overall production monitoring. The tool has been developed using the MySQL, PHP, ROOT and Perl software languages.

  3. GPCALMA: A Tool For Mammography With A GRID-Connected Distributed Database

    Science.gov (United States)

    Bottigli, U.; Cerello, P.; Cheran, S.; Delogu, P.; Fantacci, M. E.; Fauci, F.; Golosio, B.; Lauria, A.; Lopez Torres, E.; Magro, R.; Masala, G. L.; Oliva, P.; Palmiero, R.; Raso, G.; Retico, A.; Stumbo, S.; Tangaro, S.

    2003-09-01

    The GPCALMA (Grid Platform for Computer Assisted Library for MAmmography) collaboration involves several physics departments, INFN (National Institute of Nuclear Physics) sections, and Italian hospitals. The aim of this collaboration is to develop a tool that can help radiologists in the early detection of breast cancer. GPCALMA has built a large distributed database of digitised mammographic images (about 5500 images corresponding to 1650 patients) and developed CAD (Computer Aided Detection) software which is integrated in a station that can also be used to acquire new images, serve as an archive, and perform statistical analysis. The images (18×24 cm², digitised by a CCD linear scanner with an 85 μm pitch and 4096 gray levels) are completely described: pathological images have a consistent characterization with the radiologist's diagnosis and histological data, while non-pathological ones correspond to patients with a follow-up of at least three years. The distributed database is realized through the connection of all the hospitals and research centers using GRID technology. In each hospital, local patients' digital images are stored in the local database. Using the GRID connection, GPCALMA will allow each node to work on distributed database data as well as local database data. Using its database, the GPCALMA tools perform several analyses. Texture analysis, i.e. automated classification into adipose, dense or glandular texture, can be provided by the system. The GPCALMA software also allows classification of pathological features, in particular analysis of massive lesions (both opacities and spiculated lesions) and of microcalcification clusters. The detection of pathological features is performed using neural network software that selects areas showing a given "suspicion level" of lesion occurrence. The performance of the GPCALMA system will be presented in terms of ROC (Receiver Operating Characteristic) curves. The results of the GPCALMA system as "second reader" will also

  4. TSN Database Australia, a new tool to monitor antimicrobial resistance in Australia.

    Science.gov (United States)

    Turnidge, John; McCarthy, Laurence R; Master, Ronald N; Kepner, Douglas E; Weslock, James

    2003-01-01

    An electronic network of Australian microbiology laboratories was established to monitor the emergence and occurrence of antimicrobial resistance among clinically relevant bacteria. It is believed that the network collected approximately 42 per cent of all antibacterial susceptibility test results generated by Australian laboratories. The network comprised 94 hospitals and 9 private commercial laboratories. Selected data elements were extracted and electronically transmitted to a central location. Upon receipt, all data were first normalised and then examined for errors. Duplicate results for the same patient were identified to prevent skewing of the data toward resistance. All data passing quality assessment were staged for a new database release, which occurred monthly. Unusual test results were validated prior to their inclusion in the database. Using an Internet-based query tool, individual institutions could query their own data, but could only query aggregated data for regional or national analyses: neither individual patient results nor the results of any individual institution other than their own could be examined. As of March 2002, TSN Database Australia contained 14,648,752 test results from 2,000,394 strains (453 different taxa) and 1,213,605 patients. Since the same database concept has been established in 10 other countries (the United States of America, Europe, and Canada), observations made in Australia may be compared with those observed elsewhere in the world. This article describes TSN in greater detail, along with the query tool and some of the analyses that are possible.
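The de-duplication step mentioned above can be sketched as keeping only the first isolate per patient, organism, and antibiotic, so that repeat cultures from one patient do not inflate resistance rates. The records below are invented.

```python
def first_isolate_only(records):
    """Keep the first result per (patient, organism, antibiotic);
    records is an iterable of dicts assumed sorted by collection date."""
    seen = set()
    kept = []
    for rec in records:
        key = (rec["patient"], rec["organism"], rec["antibiotic"])
        if key not in seen:
            seen.add(key)
            kept.append(rec)
    return kept

records = [  # two cultures from P1 would otherwise count resistance twice
    {"patient": "P1", "organism": "S. aureus", "antibiotic": "oxacillin", "result": "R"},
    {"patient": "P1", "organism": "S. aureus", "antibiotic": "oxacillin", "result": "R"},
    {"patient": "P2", "organism": "S. aureus", "antibiotic": "oxacillin", "result": "S"},
]
kept = first_isolate_only(records)
print(len(kept), sum(r["result"] == "R" for r in kept))  # -> 2 1
```

Without the filter the resistance rate here would read 67 per cent; with it, the per-patient rate of 50 per cent is reported.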

  5. Major histocompatibility complex linked databases and prediction tools for designing vaccines.

    Science.gov (United States)

    Singh, Satarudra Prakash; Mishra, Bhartendu Nath

    2016-03-01

    Presently, the major histocompatibility complex (MHC) is receiving considerable interest owing to its remarkable role in antigen presentation and vaccine design. The specific databases and prediction approaches related to MHC sequences, structures and binding/nonbinding peptides have been aggressively developed in the past two decades with their own benchmarks and standards. Before using these databases and prediction tools, it is important to analyze why and how the tools are constructed along with their strengths and limitations. The current review presents insights into web-based immunological bioinformatics resources that include searchable databases of MHC sequences, epitopes and prediction tools that are linked to MHC based vaccine design, including population coverage analysis. In T cell epitope forecasts, MHC class I binding predictions are very accurate for most of the identified MHC alleles. However, these predictions could be further improved by integrating proteasome cleavage (in conjugation with transporter associated with antigen processing (TAP) binding) prediction, as well as T cell receptor binding prediction. On the other hand, MHC class II restricted epitope predictions display relatively low accuracy compared to MHC class I. To date, pan-specific tools have been developed, which not only deliver significantly improved predictions in terms of accuracy, but also in terms of the coverage of MHC alleles and supertypes. In addition, structural modeling and simulation systems for peptide-MHC complexes enable the molecular-level investigation of immune processes. Finally, epitope prediction tools, and their assessments and guidelines, have been presented to immunologists for the design of novel vaccines and diagnostics. Copyright © 2015 American Society for Histocompatibility and Immunogenetics. Published by Elsevier Inc. All rights reserved.

  6. Overview of Nuclear Physics Data: Databases, Web Applications and Teaching Tools

    Science.gov (United States)

    McCutchan, Elizabeth

    2017-01-01

    The mission of the United States Nuclear Data Program (USNDP) is to provide current, accurate, and authoritative data for use in pure and applied areas of nuclear science and engineering. This is accomplished by compiling, evaluating, and disseminating extensive datasets. Our main products include the Evaluated Nuclear Structure File (ENSDF) containing information on nuclear structure and decay properties and the Evaluated Nuclear Data File (ENDF) containing information on neutron-induced reactions. The National Nuclear Data Center (NNDC), through the website www.nndc.bnl.gov, provides web-based retrieval systems for these and many other databases. In addition, the NNDC hosts several on-line physics tools, useful for calculating various quantities relating to basic nuclear physics. In this talk, I will first introduce the quantities which are evaluated and recommended in our databases. I will then outline the searching capabilities which allow one to quickly and efficiently retrieve data. Finally, I will demonstrate how the database searches and web applications can provide effective teaching tools concerning the structure of nuclei and how they interact. Work supported by the Office of Nuclear Physics, Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886.

  7. PRIDE and "Database on Demand" as valuable tools for computational proteomics.

    Science.gov (United States)

    Vizcaíno, Juan Antonio; Reisinger, Florian; Côté, Richard; Martens, Lennart

    2011-01-01

    The Proteomics Identifications Database (PRIDE, http://www.ebi.ac.uk/pride) provides users with the ability to explore and compare mass spectrometry-based proteomics experiments that reveal details of the protein expression found in a broad range of taxonomic groups, tissues, and disease states. A PRIDE experiment typically includes identifications of proteins, peptides, and protein modifications. Additionally, many of the submitted experiments also include the mass spectra that provide the evidence for these identifications. Finally, one of the strongest advantages of PRIDE in comparison with other proteomics repositories is the amount of metadata it contains, a key point to put the above-mentioned data in biological and/or technical context. Several informatics tools have been developed in support of the PRIDE database. The most recent one is called "Database on Demand" (DoD), which allows custom sequence databases to be built in order to optimize the results from search engines. We describe the use of DoD in this chapter. Additionally, in order to show the potential of PRIDE as a source for data mining, we also explore complex queries using federated BioMart queries to integrate PRIDE data with other resources, such as Ensembl, Reactome, or UniProt.

  8. Comparison of hospital databases on antibiotic consumption in France, for a single management tool.

    Science.gov (United States)

    Henard, S; Boussat, S; Demoré, B; Clément, S; Lecompte, T; May, T; Rabaud, C

    2014-07-01

    The surveillance of antibiotic use in hospitals and of data on resistance is an essential measure for antibiotic stewardship. There are 3 national systems in France to collect data on antibiotic use: DREES, ICATB, and ATB RAISIN. We compared these databases and drafted recommendations for the creation of an optimized database of information on antibiotic use, available to all concerned personnel: healthcare authorities, healthcare facilities, and healthcare professionals. We processed and analyzed the 3 databases (2008 data), and surveyed users. The qualitative analysis demonstrated major discrepancies in terms of objectives, healthcare facilities, participation rate, units of consumption, conditions for collection, consolidation, and control of data, and delay before availability of results. The quantitative analysis revealed that the consumption data for a given healthcare facility differed from one database to another, challenging the reliability of data collection. We specified user expectations: to compare consumption and resistance data, to carry out benchmarking, to obtain data on the prescribing habits in healthcare units, or to help understand results. The study results demonstrated the need for a reliable, single, and automated tool to manage data on antibiotic consumption compared with resistance data on several levels (national, regional, healthcare facility, healthcare units), providing rapid local feedback and educational benchmarking. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  9. Migration check tool: automatic plan verification following treatment management systems upgrade and database migration.

    Science.gov (United States)

    Hadley, Scott W; White, Dale; Chen, Xiaoping; Moran, Jean M; Keranen, Wayne M

    2013-11-04

    Software upgrades of the treatment management system (TMS) sometimes require that all data be migrated from one version of the database to another. It is necessary to verify that the data are correctly migrated to assure patient safety. It is impossible to verify by hand the thousands of parameters that go into each patient's radiation therapy treatment plan. Repeating pretreatment QA is costly, time-consuming, and may be inadequate in detecting errors that are introduced during the migration. In this work we investigate the use of an automatic Plan Comparison Tool to verify that plan data have been correctly migrated to a new version of a TMS database from an older version. We developed software to query and compare treatment plans between different versions of the TMS. The same plan in the two TMS systems is translated into an XML schema. A plan comparison module takes the two XML schemas as input and reports any differences in parameters between the two versions of the same plan by applying a schema mapping. A console application is used to query the database to obtain a list of active or in-preparation plans to be tested. It then runs in batch mode to compare all the plans, and a report of success or failure of the comparison is saved for review. This software tool was used as part of a software upgrade and database migration from Varian's Aria 8.9 to Aria 11 TMS. Parameters were compared for 358 treatment plans in 89 minutes. This direct comparison of all plan parameters in the migrated TMS against the previous TMS surpasses current QA methods that rely on repeating pretreatment QA measurements or labor-intensive and fallible hand comparisons.
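A minimal sketch of the comparison module's idea: each exported plan is flattened from XML into path-to-value pairs, and any parameter that differs between the two exports is reported. The element names are hypothetical, not Varian's actual schema, and sibling elements are assumed to have unique tags.

```python
import xml.etree.ElementTree as ET

def flatten(elem, prefix=""):
    """Flatten an XML element tree into {path: text-value} pairs."""
    path = f"{prefix}/{elem.tag}"
    out = {}
    if elem.text and elem.text.strip():
        out[path] = elem.text.strip()
    for child in elem:
        out.update(flatten(child, path))
    return out

def diff_plans(xml_old, xml_new):
    """Report every parameter whose value differs between the two exports."""
    old = flatten(ET.fromstring(xml_old))
    new = flatten(ET.fromstring(xml_new))
    return {p: (old.get(p), new.get(p))
            for p in old.keys() | new.keys() if old.get(p) != new.get(p)}

# Invented plan fragments standing in for the two TMS exports.
old = "<plan><beam><mu>120.0</mu><gantry>180</gantry></beam></plan>"
new = "<plan><beam><mu>120.0</mu><gantry>181</gantry></beam></plan>"
print(diff_plans(old, new))  # -> {'/plan/beam/gantry': ('180', '181')}
```

Run in batch over every active plan, an exhaustive field-by-field diff like this is what replaces repeating pretreatment QA measurements after the migration.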

  10. Interoperability of the CDPP tools and databases through the EPN-TAP protocol

    Science.gov (United States)

    Gangloff, M.; Génot, V.; André, N.; Erard, S.; Cecconi, B.; Jourdane, N.; Indurain, M.; Bouchemit, M.; Blelly, P.-L.; Rouillard, A. P.; Marchaudon, A.; Beigbeder, L.; Budnik, E.; Glorian, J.-M.

    2017-09-01

    The French Plasma Physics Data Centre (CDPP, http://cdpp.eu) has distributed and valorized natural plasma data for nearly 20 years. The CDPP has been involved for many years in the development and implementation of interoperability standards like SPASE, IVOA and IPDA. In the frame of the VESPA work package of Europlanet H2020, the CDPP has developed an EPN-TAP compatible server which distributes observational time series from the AMDA database, illumination maps of the 67P comet, and simulation results from the IPIM model. An EPN-TAP compatible interface was also added to AMDA, 3DView and PropagationTool.

  11. Fifteen years SIB Swiss Institute of Bioinformatics: life science databases, tools and support.

    Science.gov (United States)

    Stockinger, Heinz; Altenhoff, Adrian M; Arnold, Konstantin; Bairoch, Amos; Bastian, Frederic; Bergmann, Sven; Bougueleret, Lydie; Bucher, Philipp; Delorenzi, Mauro; Lane, Lydie; Le Mercier, Philippe; Lisacek, Frédérique; Michielin, Olivier; Palagi, Patricia M; Rougemont, Jacques; Schwede, Torsten; von Mering, Christian; van Nimwegen, Erik; Walther, Daniel; Xenarios, Ioannis; Zavolan, Mihaela; Zdobnov, Evgeny M; Zoete, Vincent; Appel, Ron D

    2014-07-01

    The SIB Swiss Institute of Bioinformatics (www.isb-sib.ch) was created in 1998 as an institution to foster excellence in bioinformatics. It is renowned worldwide for its databases and software tools, such as UniProtKB/Swiss-Prot, PROSITE, SWISS-MODEL, STRING, etc., that are all accessible on ExPASy.org, SIB's Bioinformatics Resource Portal. This article provides an overview of the scientific and training resources SIB has consistently been offering to the life science community for more than 15 years. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Knowledge base technology for CT-DIMS: Report 1. [CT-DIMS (Cutting Tool - Database and Information Management System)

    Energy Technology Data Exchange (ETDEWEB)

    Kelley, E.E.

    1993-05-01

    This report discusses progress on the Cutting Tool-Database and Information Management System (CT-DIMS) project being conducted by the University of Illinois Urbana-Champaign (UIUC) under contract to the Department of Energy. This project was initiated in October 1991 by UIUC. The Knowledge-Based Engineering Systems Research Laboratory (KBESRL) at UIUC is developing knowledge base technology and prototype software for the presentation and manipulation of the cutting tool databases at Allied-Signal Inc., Kansas City Division (KCD). The graphical tool selection capability being developed for CT-DIMS in the Intelligent Design Environment for Engineering Automation (IDEEA) will provide a concurrent environment for simultaneous access to tool databases, tool standard libraries, and cutting tool knowledge.

  13. GarlicESTdb: an online database and mining tool for garlic EST sequences

    Directory of Open Access Journals (Sweden)

    Choi Sang-Haeng

    2009-05-01

    Full Text Available Abstract Background Allium sativum, commonly known as garlic, is a species in the onion genus (Allium), a large and diverse genus containing over 1,250 species. Its close relatives include chives, onion, leek and shallot. Garlic has been used throughout recorded history for culinary and medicinal purposes and for its health benefits. Currently, interest in garlic is increasing rapidly owing to its nutritional and pharmaceutical value, including effects on high blood pressure, cholesterol, atherosclerosis and cancer. Despite this, no comprehensive database of garlic Expressed Sequence Tags (ESTs) has been available for gene discovery and future genome annotation efforts. We therefore developed a new garlic database and applications to enable comprehensive analysis of garlic gene expression. Description GarlicESTdb is an integrated database and mining tool for large-scale garlic (Allium sativum) EST sequencing. A total of 21,595 ESTs collected from an in-house cDNA library were used to construct the database. The analysis pipeline is an automated system written in Java and consists of the following components: automatic preprocessing of EST reads, assembly of raw sequences, annotation of the assembled sequences, storage of the analyzed information in MySQL databases, and graphic display of all processed data. A web application was implemented with the latest J2EE (Java 2 Platform, Enterprise Edition) software technology (JSP/EJB/Java Servlet) for browsing and querying the database and for creating dynamic web pages on the client side; for mapping annotated enzymes to KEGG pathways, the AJAX framework was also used in part. The online resources, such as putative annotations, single nucleotide polymorphism (SNP) and tandem repeat data sets, can be searched by text, explored on the website, queried using BLAST, and downloaded. To archive more significant BLAST results, a curation system was introduced with which biologists can easily edit best-hit annotation

  14. The Rat Genome Database 2013—data, tools and users

    Science.gov (United States)

    Hayman, G. Thomas; Wang, Shur-Jen; Smith, Jennifer R.; Lowry, Timothy F.; Nigam, Rajni; Petri, Victoria; de Pons, Jeff; Dwinell, Melinda R.; Shimoyama, Mary; Munzenmaier, Diane H.; Worthey, Elizabeth A.; Jacob, Howard J.

    2013-01-01

    The Rat Genome Database (RGD) was started >10 years ago to provide a core genomic resource for rat researchers. Currently, RGD combines genetic, genomic, pathway, phenotype and strain information with a focus on disease. RGD users are provided with access to structured and curated data from the molecular level through the organismal level. Those users access RGD from all over the world. End users are not only rat researchers but also researchers working with mouse and human data. Translational research is supported by RGD’s comparative genetics/genomics data in disease portals, in GBrowse, in VCMap and on gene report pages. The impact of RGD also goes beyond the traditional biomedical researcher, as the influence of RGD reaches bioinformaticians, tool developers and curators. Import of RGD data into other publicly available databases expands the influence of RGD to a larger set of end users than those who avail themselves of the RGD website. The value of RGD continues to grow as more types of data and more tools are added, while reaching more types of end users. PMID:23434633

  15. The hydrogeological well database TANGRAM©: a tool for data processing to support groundwater assessment

    Directory of Open Access Journals (Sweden)

    Tullia Bonomi

    2014-06-01

    Full Text Available At the Department of Earth and Environmental Sciences of the University of Milano-Bicocca (DISAT-UNIMIB), a hydrogeological well database called TANGRAM© has been developed and published online at www.TANGRAM.samit.unimib.it, building on an earlier 1989 DOS version. This package can be used to store, display, and process all data related to water wells, including administrative information, well characteristics, stratigraphic logs, water levels, pumping rates, and other hydrogeological information. Currently, the database contains more than 39,200 wells located in the Italian regions of Lombardy (90%), Piedmont (9%) and Valle d'Aosta (1%). TANGRAM© has been created both as a research tool and as a tool for public administrations that have projects in common with DISAT-UNIMIB. Indeed, transferring well data from paper into TANGRAM© offers both an easier and more robust way to correlate hydrogeological data and a more organized management of the administrative information. Some administrations use TANGRAM© regularly as a tool for well data management (Brescia Province, ARPA Valle d'Aosta). An innovative aspect of the database is the quantitative extraction of stratigraphic data. In the part of the software intended for research purposes, all well logs are translated into 8-digit alphanumeric codes, with the user composing the code by interpreting the description at each stratigraphic level. The stratigraphic well data can thus be coded, quantified and processed. This is made possible by attributing a weight to the digits of the code for textures. The program calculates the weighted percentage of the chosen lithology, as related to each individual layer. These extractions are the starting point for subsequent hydrogeological studies: well head protection areas, reconstruction of flow dynamics, preparation of quarry plans, and hydrogeological flow and transport models. The results of a two-dimensional distribution of coarse
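The weighted extraction of lithology percentages described above can be sketched as follows. The coding scheme, digit weights, and well log below are illustrative assumptions, not TANGRAM©'s actual internal format:

```python
# Illustrative sketch of TANGRAM-style quantitative extraction of
# stratigraphic data: each layer's texture digits map to weights, and a
# thickness-weighted percentage of a chosen lithology is computed over
# the well log. Weights and codes are hypothetical, for illustration only.

# Hypothetical weights: the first texture digit dominates the description.
DIGIT_WEIGHTS = [0.6, 0.3, 0.1]

def lithology_fraction(texture_digits, target):
    """Weighted fraction of `target` lithology in one coded layer.

    texture_digits: ordered texture codes, most abundant first
    target: lithology code of interest (e.g. 'G' for gravel)
    """
    weights = DIGIT_WEIGHTS[:len(texture_digits)]
    total = sum(weights)
    hit = sum(w for d, w in zip(texture_digits, weights) if d == target)
    return hit / total

def coarse_percentage(layers, coarse_codes=('G', 'S')):
    """Thickness-weighted percentage of coarse lithologies over a well log."""
    total_thickness = sum(t for t, _ in layers)
    coarse = 0.0
    for thickness, digits in layers:
        frac = sum(lithology_fraction(digits, c) for c in coarse_codes)
        coarse += thickness * frac
    return 100.0 * coarse / total_thickness

# A toy well log: (thickness in metres, texture digits) per layer.
log = [(5.0, ['G', 'S', 'C']),   # gravel with sand and some clay
       (3.0, ['C', 'S']),        # clay with sand
       (2.0, ['G'])]             # clean gravel
```

Such per-layer fractions are what feed the subsequent studies the abstract lists, e.g. reconstructing the spatial distribution of coarse deposits.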

  16. Accessing the SEED genome databases via Web services API: tools for programmers.

    Science.gov (United States)

    Disz, Terry; Akhter, Sajia; Cuevas, Daniel; Olson, Robert; Overbeek, Ross; Vonstein, Veronika; Stevens, Rick; Edwards, Robert A

    2010-06-14

    The SEED integrates many publicly available genome sequences into a single resource. The database contains accurate and up-to-date annotations based on the subsystems concept that leverages clustering between genomes and other clues to accurately and efficiently annotate microbial genomes. The backend is used as the foundation for many genome annotation tools, such as the Rapid Annotation using Subsystems Technology (RAST) server for whole genome annotation, the metagenomics RAST server for random community genome annotations, and the annotation clearinghouse for exchanging annotations from different resources. In addition to a web user interface, the SEED also provides Web services based API for programmatic access to the data in the SEED, allowing the development of third-party tools and mash-ups. The currently exposed Web services encompass over forty different methods for accessing data related to microbial genome annotations. The Web services provide comprehensive access to the database back end, allowing any programmer access to the most consistent and accurate genome annotations available. The Web services are deployed using a platform independent service-oriented approach that allows the user to choose the most suitable programming platform for their application. Example code demonstrate that Web services can be used to access the SEED using common bioinformatics programming languages such as Perl, Python, and Java. We present a novel approach to access the SEED database. Using Web services, a robust API for access to genomics data is provided, without requiring large volume downloads all at once. The API ensures timely access to the most current datasets available, including the new genomes as soon as they come online.
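The style of programmatic access described above can be sketched in Python. The endpoint URL, method name, and response shape here are hypothetical placeholders standing in for the real SEED service definitions, which are documented with the API itself:

```python
# Sketch of Web-services-style access in the spirit of the SEED API.
# BASE_URL, the method name, and the JSON layout are invented for
# illustration; consult the SEED API documentation for the real methods.
import json
from urllib.parse import urlencode

BASE_URL = "https://servers.example.org/seed_api"  # placeholder host

def build_query_url(method, **params):
    """Compose a query URL for a hypothetical JSON-over-HTTP SEED method."""
    return f"{BASE_URL}/{method}?{urlencode(sorted(params.items()))}"

def parse_annotations(raw_json):
    """Extract (feature_id, function) pairs from a hypothetical response."""
    payload = json.loads(raw_json)
    return [(f["id"], f["function"]) for f in payload.get("features", [])]

url = build_query_url("genome_features", genome="83333.1", type="peg")
# urllib.request.urlopen(url).read() would perform the actual call;
# here a canned response stands in for the network round trip:
sample = '{"features": [{"id": "fig|83333.1.peg.1", "function": "thrA"}]}'
```

The point of such an API is exactly what the abstract states: small, targeted queries instead of large-volume downloads all at once.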

  17. solQTL: a tool for QTL analysis, visualization and linking to genomes at SGN database

    Directory of Open Access Journals (Sweden)

    van der Knaap Esther

    2010-10-01

    Full Text Available Abstract Background A common approach to understanding the genetic basis of complex traits is the identification of associated quantitative trait loci (QTLs). Fine mapping QTLs requires several generations of backcrosses and analysis of large populations, which is a time-consuming and costly effort. Furthermore, as entire genomes are sequenced and an increasing amount of genetic and expression data is generated, a challenge remains: linking phenotypic variation to the underlying genomic variation. To identify candidate genes and understand the molecular basis underlying the phenotypic variation of traits, bioinformatic approaches are needed to exploit information such as genetic maps, expression data and whole genome sequences of organisms in biological databases. Description The Sol Genomics Network (SGN, http://solgenomics.net) is a primary repository for phenotypic, genetic, genomic, expression and metabolic data for the Solanaceae family and other related Asterid species, and houses a variety of bioinformatics tools. SGN has implemented a new approach to QTL data organization, storage, analysis, and cross-linking with other relevant data in internal and external databases. The new QTL module, solQTL, http://solgenomics.net/qtl/, employs a user-friendly web interface for uploading raw phenotype and genotype data to the database, R/qtl mapping software for on-the-fly QTL analysis, and algorithms for online visualization and cross-referencing of QTLs to relevant datasets and tools such as the SGN Comparative Map Viewer and Genome Browser. Here, we describe the development of the solQTL module and demonstrate its application. Conclusions solQTL allows Solanaceae researchers to upload raw genotype and phenotype data to SGN, perform QTL analysis and dynamically cross-link to relevant genetic, expression and genome annotations. Exploration and synthesis of the relevant data is expected to help facilitate identification of candidate genes.

  18. Accessing the SEED Genome Databases via Web Services API: Tools for Programmers

    Directory of Open Access Journals (Sweden)

    Vonstein Veronika

    2010-06-01

    Full Text Available Abstract Background The SEED integrates many publicly available genome sequences into a single resource. The database contains accurate and up-to-date annotations based on the subsystems concept, which leverages clustering between genomes and other clues to accurately and efficiently annotate microbial genomes. The backend is used as the foundation for many genome annotation tools, such as the Rapid Annotation using Subsystems Technology (RAST) server for whole genome annotation, the metagenomics RAST server for random community genome annotations, and the annotation clearinghouse for exchanging annotations from different resources. In addition to a web user interface, the SEED also provides a Web services-based API for programmatic access to the data in the SEED, allowing the development of third-party tools and mash-ups. Results The currently exposed Web services encompass over forty different methods for accessing data related to microbial genome annotations. The Web services provide comprehensive access to the database back end, allowing any programmer access to the most consistent and accurate genome annotations available. The Web services are deployed using a platform-independent, service-oriented approach that allows the user to choose the most suitable programming platform for their application. Example code demonstrates that the Web services can be used to access the SEED using common bioinformatics programming languages such as Perl, Python, and Java. Conclusions We present a novel approach to accessing the SEED database. Using Web services, a robust API for access to genomics data is provided without requiring large-volume downloads all at once. The API ensures timely access to the most current datasets available, including new genomes as soon as they come online.

  19. Visualising biological data: a semantic approach to tool and database integration

    Directory of Open Access Journals (Sweden)

    Marsh James

    2009-06-01

    Full Text Available Abstract Motivation In the biological sciences, the need to analyse vast amounts of information has become commonplace. Such large-scale analyses often involve drawing together data from a variety of different databases, held remotely on the internet or locally on in-house servers. Supporting these tasks are ad hoc collections of data-manipulation tools, scripting languages and visualisation software, which are often combined in arcane ways to create cumbersome systems that have been customised for a particular purpose, and are consequently not readily adaptable to other uses. For many day-to-day bioinformatics tasks, the sizes of current databases, and the scale of the analyses necessary, now demand increasing levels of automation; nevertheless, the unique experience and intuition of human researchers is still required to interpret the end results in any meaningful biological way. Putting humans in the loop requires tools to support real-time interaction with these vast and complex data-sets. Numerous tools do exist for this purpose, but many do not have optimal interfaces, most are effectively isolated from other tools and databases owing to incompatible data formats, and many have limited real-time performance when applied to realistically large data-sets: much of the user's cognitive capacity is therefore focused on controlling the software and manipulating esoteric file formats rather than on performing the research. Methods To confront these issues, harnessing expertise in human-computer interaction (HCI), high-performance rendering and distributed systems, and guided by bioinformaticians and end-user biologists, we are building reusable software components that, together, create a toolkit that is both architecturally sound from a computing point of view, and addresses both user and developer requirements.
Key to the system's usability is its direct exploitation of semantics, which, crucially, gives individual components knowledge of their

  20. Bovine Genome Database: new tools for gleaning function from the Bos taurus genome.

    Science.gov (United States)

    Elsik, Christine G; Unni, Deepak R; Diesh, Colin M; Tayal, Aditi; Emery, Marianne L; Nguyen, Hung N; Hagen, Darren E

    2016-01-04

    We report an update of the Bovine Genome Database (BGD) (http://BovineGenome.org). The goal of BGD is to support bovine genomics research by providing genome annotation and data mining tools. We have developed new genome and annotation browsers using JBrowse and WebApollo for two Bos taurus genome assemblies, the reference genome assembly (UMD3.1.1) and the alternate genome assembly (Btau_4.6.1). Annotation tools have been customized to highlight priority genes for annotation, and to aid annotators in selecting gene evidence tracks from 91 tissue-specific RNA-seq datasets. We have also developed BovineMine, based on the InterMine data warehousing system, to integrate the bovine genome, annotation, QTL, SNP and expression data with external sources of orthology, gene ontology, gene interaction and pathway information. BovineMine provides powerful query-building tools, as well as customized query templates, and allows users to analyze and download genome-wide datasets. With BovineMine, bovine researchers can use orthology to leverage the curated gene pathways of model organisms, such as human, mouse and rat. BovineMine will be especially useful for gene ontology and pathway analyses in conjunction with GWAS and QTL studies. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  1. CellMiner: a relational database and query tool for the NCI-60 cancer cell lines

    Directory of Open Access Journals (Sweden)

    Reinhold William C

    2009-06-01

    Full Text Available Abstract Background Advances in high-throughput omic technologies have made it possible to profile cells in a large number of ways at the DNA, RNA, protein, chromosomal, functional, and pharmacological levels. A persistent problem is that some classes of molecular data are labeled with gene identifiers, others with transcript or protein identifiers, and still others with chromosomal locations. What has lagged behind is the ability to integrate the resulting data to uncover complex relationships and patterns. Those issues are reflected in full form by molecular profile data on the panel of 60 diverse human cancer cell lines (the NCI-60) used since 1990 by the U.S. National Cancer Institute to screen compounds for anticancer activity. To our knowledge, CellMiner is the first online database resource for integration of the diverse molecular types of NCI-60 and related metadata. Description CellMiner enables scientists to perform advanced querying of molecular information on the NCI-60 (and additional types) through a single web interface. CellMiner is a freely available tool that organizes and stores raw and normalized data representing multiple types of molecular characterizations at the DNA, RNA, protein, and pharmacological levels. Annotations for each project, along with associated metadata on the samples and datasets, are stored in a MySQL database and linked to the molecular profile data. Data can be queried and downloaded along with comprehensive information on experimental and analytic methods for each data set. A Data Intersection tool allows selection of a list of genes (proteins) in common between two or more data sets and outputs the data for those genes (proteins) in the respective sets. In addition to its role as an integrative resource for the NCI-60, the CellMiner package also serves as a shell for incorporation of molecular profile data on other cell or tissue sample types. Conclusion CellMiner is a relational database tool for
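The Data Intersection operation described above amounts to set logic over gene-indexed datasets. The following is a minimal illustration with invented dataset names and values, not CellMiner's actual implementation:

```python
# Minimal sketch of a "Data Intersection" step: select the genes common
# to two or more datasets and output each dataset's values for them.
# Dataset names and values below are invented for illustration.

def intersect_datasets(datasets):
    """datasets: dict mapping dataset name -> {gene: value}.

    Returns (common_genes, table) where table maps each dataset name to
    its values for the common genes, in sorted gene order."""
    gene_sets = [set(d) for d in datasets.values()]
    common = sorted(set.intersection(*gene_sets))
    table = {name: [d[g] for g in common] for name, d in datasets.items()}
    return common, table

expression = {"TP53": 2.1, "EGFR": 0.4, "MYC": 1.7}
copy_number = {"TP53": -0.2, "MYC": 0.9, "KRAS": 0.1}

common, table = intersect_datasets({"expression": expression,
                                    "copy_number": copy_number})
# common -> ['MYC', 'TP53']
```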

  2. REALIZING BUSINESS PROCESS MANAGEMENT BY HELP OF A PROCESS MAPPING DATABASE TOOL

    CERN Document Server

    Vergili, Ceren

    2016-01-01

    In a typical business sector, processes are the building blocks of achievement. A considerable percentage of these processes are business processes, which means that business sectors are in need of a management discipline. Business Process Management (BPM) is a discipline that combines modelling, automation, execution, control, measurement, and optimization of processes by considering enterprise goals, spanning systems, employees, customers, and partners. CERN's EN – HE – HM section wishes to apply the BPM discipline appropriately to improve the technical, administrative and managerial actions necessary to supply and maintain suitable CERN industrial transport, handling and lifting equipment. For this reason, a Process Mapping Database Tool was created to develop a common understanding of how the section members can visualize their processes, agree on quality standards, and decide how to improve. It provides management support by establishing Process Charts...

  3. The Danish Ventral Hernia Database – a valuable tool for quality assessment and research

    Directory of Open Access Journals (Sweden)

    Helgstrand F

    2016-10-01

    Full Text Available Frederik Helgstrand,1 Lars Nannestad Jorgensen2 1Department of Surgery, Køge Hospital, University of Copenhagen, Køge, Denmark; 2Digestive Disease Center, Bispebjerg Hospital, University of Copenhagen, Copenhagen NV, Denmark Aim: The Danish Ventral Hernia Database (DVHD) provides national surveillance of current surgical practice and clinical postoperative outcomes. The intention is to reduce postoperative morbidity and hernia recurrence, evaluate new treatment strategies, and facilitate nationwide implementation of evidence-based treatment strategies. This paper describes the design and purpose of DVHD. Study population: Adult (≥18 years) patients with a Danish Civil Registration Number undergoing surgery under elective or emergency conditions for ventral hernia in a Danish surgical department from 2007 and beyond. A total of 80% of all ventral hernia repairs performed in Denmark were reported to the DVHD. Main variables: Demographic data (age, sex, and center), detailed hernia description (e.g., type, size, surgical priority), and technical aspects (open/laparoscopic and mesh-related factors) of the surgical repair are recorded. Data registration is mandatory. Data may be merged with other Danish health registries and information from patient questionnaires or clinical examinations. Descriptive data: More than 37,000 operations have been registered. Data have demonstrated high agreement with patient files. The data allow technical proposals for surgical improvement with special emphasis on reduced incidences of postoperative complications, hernia recurrence, and chronic pain. Conclusion: DVHD is a prospective and mandatory registration system for Danish surgeons. It has collected a high number of operations and is an excellent tool for observing changes over time, including adjustment for several confounders. This national database registry has impacted clinical practice in Denmark and led to a high number of scientific publications.

  4. Carbohydrate Structure Database: tools for statistical analysis of bacterial, plant and fungal glycomes

    Science.gov (United States)

    Egorova, K.S.; Kondakova, A.N.; Toukach, Ph.V.

    2015-01-01

    Carbohydrates are biological building blocks participating in diverse and crucial processes at both the cellular and organism levels. They protect individual cells, establish intracellular interactions, take part in the immune reaction and participate in many other processes. Glycosylation is considered one of the most important modifications of proteins and other biologically active molecules. Still, the data on the enzymatic machinery involved in carbohydrate synthesis and processing are scattered, and progress in its study is hindered by the vast bulk of accumulated genetic information not supported by any experimental evidence for the functions of the proteins encoded by these genes. In this article, we present novel instruments for statistical analysis of glycomes in taxa. These tools may be helpful for investigating carbohydrate-related enzymatic activities in various groups of organisms and for comparison of their carbohydrate content. The instruments are developed on the Carbohydrate Structure Database (CSDB) platform and are available freely on the CSDB website at http://csdb.glycoscience.ru. Database URL: http://csdb.glycoscience.ru PMID:26337239
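One basic form of the statistical glycome comparison described above is counting residue frequencies within a taxon. The sketch below is illustrative only; structures are reduced to simple residue lists, which is far coarser than CSDB's actual structure encoding:

```python
# Sketch of a simple glycome statistic: relative monosaccharide residue
# frequencies across a taxon's known structures. Structures here are
# plain residue lists, invented for illustration; CSDB stores far richer
# structural encodings (linkages, anomeric configurations, etc.).
from collections import Counter

def residue_frequencies(structures):
    """Relative frequency of each residue across a taxon's structures."""
    counts = Counter(r for s in structures for r in s)
    total = sum(counts.values())
    return {r: c / total for r, c in counts.items()}

# Toy "taxon" with two recorded structures.
bacteria = [["Glc", "GlcNAc", "Rha"], ["Rha", "Rha", "Gal"]]
freq = residue_frequencies(bacteria)
```

Comparing such frequency tables between taxa is the kind of carbohydrate-content comparison the abstract refers to.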

  5. The CRISPRdb database and tools to display CRISPRs and to generate dictionaries of spacers and repeats

    Directory of Open Access Journals (Sweden)

    Vergnaud Gilles

    2007-05-01

    Full Text Available Abstract Background In Archaea and Bacteria, the repeated elements called CRISPRs, for "clustered regularly interspaced short palindromic repeats", are believed to participate in the defence against viruses. Short sequences called spacers are stored in between the repeated elements. In the current model, motifs comprising spacers and repeats may target an invading DNA and lead to its degradation through a proposed mechanism similar to RNA interference. Analysis of intra-species polymorphism shows that new motifs (one spacer and one repeated element) are added in a polarised fashion. Although their principal characteristics have been described, a lot remains to be discovered about the way CRISPRs are created and evolve. As new genome sequences become available, it appears necessary to develop automated scanning tools to make CRISPR-related information available and to facilitate additional investigations. Description We have produced a program, CRISPRFinder, which identifies CRISPRs and extracts the repeated and unique sequences. Using this software, a database has been constructed and is automatically updated monthly from newly released genome sequences. Additional tools were created to allow the alignment of flanking sequences in search of similarities between different loci and to build dictionaries of unique sequences. To date, almost six hundred CRISPRs have been identified in 475 published genomes. Two Archaea out of thirty-seven, and about half of the Bacteria, do not possess a CRISPR. Fine analysis of repeated sequences strongly supports the current view that new motifs are added at one end of the CRISPR, adjacent to the putative promoter. Conclusion It is hoped that the availability of a public database, regularly updated and queryable on the web, will help in further dissecting and understanding CRISPR structure and the evolution of flanking sequences.
Subsequent analyses of the intra-species CRISPR polymorphism will be facilitated by CRISPRFinder and the
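The repeat/spacer organization described above can be illustrated with a few lines of code. CRISPRFinder itself discovers the repeats de novo; this sketch only shows how spacers sit between copies of a known repeated element:

```python
# Illustrative extraction of spacers from a CRISPR locus, given the
# repeated element. This is not CRISPRFinder's algorithm (which detects
# repeats de novo); it only demonstrates the repeat/spacer structure.

def extract_spacers(locus, repeat):
    """Split a CRISPR locus on its repeat and return the spacer sequences."""
    parts = locus.split(repeat)
    # Spacers are the non-empty fragments lying between two repeats.
    return [p for p in parts[1:-1] if p]

# A toy locus: repeat - spacer - repeat - spacer - repeat.
repeat = "GTTTTAGAGC"
locus = repeat + "ACGTACGT" + repeat + "TTGGCCAA" + repeat
```

Dictionaries of unique sequences, as mentioned in the abstract, are then built from exactly such spacer collections across genomes.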

  6. Dietary MicroRNA Database (DMD): An Archive Database and Analytic Tool for Food-Borne microRNAs.

    Science.gov (United States)

    Chiang, Kevin; Shu, Jiang; Zempleni, Janos; Cui, Juan

    2015-01-01

    With the advent of high-throughput technology, a huge amount of microRNA information has been added to the growing body of knowledge for non-coding RNAs. Here we present the Dietary MicroRNA Database (DMD), the first repository for archiving and analyzing the published and novel microRNAs discovered in dietary resources. Currently there are fifteen types of dietary species, such as apple, grape, cow milk, and cow fat, included in the database, originating from 9 plant and 5 animal species. The annotation for each entry, a mature microRNA indexed as DM0000*, covers the mature sequence, genome location, hairpin structure of the parental pre-microRNA, cross-species sequence comparison, disease relevance, and experimentally validated gene targets. Furthermore, several functional analyses, including target prediction, pathway enrichment and gene network construction, have been integrated into the system, enabling users to generate functional insights by viewing the functional pathways and building protein-protein interaction networks associated with each microRNA. Another unique feature of DMD is a feature generator with which a total of 411 descriptive attributes can be calculated for any given microRNA based on its sequence and structure. DMD would be particularly useful for research groups studying microRNA regulation from a nutrition point of view. The database can be accessed at http://sbbi.unl.edu/dmd/.
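A feature generator of the kind described above computes descriptive attributes directly from a sequence. The handful of attributes below (length, GC content, base frequencies) is an illustrative subset standing in for DMD's 411 sequence- and structure-based features:

```python
# Sketch of a microRNA feature generator: compute simple descriptive
# attributes from a mature sequence. These few attributes are only
# illustrative; DMD's actual generator covers 411 features, including
# structure-based ones this sketch does not attempt.

def sequence_features(seq):
    seq = seq.upper().replace("T", "U")  # work in the RNA alphabet
    n = len(seq)
    gc = sum(seq.count(b) for b in "GC")
    feats = {"length": n, "gc_content": gc / n}
    for base in "AUGC":
        feats[f"freq_{base}"] = seq.count(base) / n
    return feats

mir = "UGAGGUAGUAGGUUGUAUAGUU"  # let-7-like sequence, for illustration
feats = sequence_features(mir)
```

Vectors of such attributes are what downstream tools (classifiers, similarity searches) consume.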

  7. Dietary MicroRNA Database (DMD: An Archive Database and Analytic Tool for Food-Borne microRNAs.

    Directory of Open Access Journals (Sweden)

    Kevin Chiang

    Full Text Available With the advent of high-throughput technology, a huge amount of microRNA information has been added to the growing body of knowledge for non-coding RNAs. Here we present the Dietary MicroRNA Database (DMD), the first repository for archiving and analyzing the published and novel microRNAs discovered in dietary resources. Currently there are fifteen types of dietary species, such as apple, grape, cow milk, and cow fat, included in the database, originating from 9 plant and 5 animal species. The annotation for each entry, a mature microRNA indexed as DM0000*, covers the mature sequence, genome location, hairpin structure of the parental pre-microRNA, cross-species sequence comparison, disease relevance, and experimentally validated gene targets. Furthermore, several functional analyses, including target prediction, pathway enrichment and gene network construction, have been integrated into the system, enabling users to generate functional insights by viewing the functional pathways and building protein-protein interaction networks associated with each microRNA. Another unique feature of DMD is a feature generator with which a total of 411 descriptive attributes can be calculated for any given microRNA based on its sequence and structure. DMD would be particularly useful for research groups studying microRNA regulation from a nutrition point of view. The database can be accessed at http://sbbi.unl.edu/dmd/.

  8. GENIUS: A tool for multi-disciplinary and multi-scalar databases

    Science.gov (United States)

    Bonhomme, M.; Masson, V.; Adolphe, L.; Faraut, S.

    2013-12-01

    Cities are responsible for the majority of energy consumption on the planet. As a consequence, research regarding energy use in the urban context has been increasing over the last decades. Recently, the interrelationship between the city, energy consumption and the urban microclimate has appeared as a key component of urban sustainability. To be accurate, such studies must take into account a multidisciplinary urban context, and modelling tools need high-definition data. Nevertheless, at the city scale, input data are either imprecise or only available for small areas. In particular, there is a lack of information about building footprints, roof slopes, envelope materials, etc. Moreover, the existing data do not allow researchers to explore prospective issues such as climate change or future urban development. For this reason, we developed a new tool called GENIUS (GENerator of Interactive Urban blockS) to build high-definition and evolutionary maps from available databases. GENIUS creates maps composed of archetypical neighbourhoods provided as shape-files of polygons with additional information (height, age, use, thermal insulation, etc.). These archetypical neighbourhoods correspond to seven types of urban blocks that can be found in most European cities, and the types can be compared with Stewart and Oke's Local Climate Zones (LCZ). The first step of our method is to transform an existing map into an 'archetypical map'. To do this, the urban database of the IGN (French Geographical Institute) was used. The maps were divided into cells of 250 metres resolution. For each cell, about 40 morphological indicators were calculated. Seven groups of blocks were then identified by means of Principal Component Analysis. GENIUS databases are also able to evolve through time. As a matter of fact, the initial map is transformed, year after year, by taking into account changes in density and urban history. In that sense, GENIUS communicates with NEDUM, a model developed by the CIRED (International
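The cell-typing step described above can be sketched as assigning each cell's indicator vector to its closest archetype. The real method derives its seven block types via Principal Component Analysis; nearest-centroid matching with two invented indicators is used here only as a simplified stand-in:

```python
# Simplified stand-in for GENIUS-style block typing: each 250 m cell is
# described by morphological indicators and assigned to the nearest
# archetypal urban block. The actual method groups cells by PCA over
# ~40 indicators; the archetypes and indicators below are invented.

def nearest_archetype(cell, archetypes):
    """cell: indicator vector; archetypes: {name: centroid vector}."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(archetypes, key=lambda name: dist2(cell, archetypes[name]))

# Two toy indicators: mean building height (m), built density (0..1).
archetypes = {
    "historic centre":  (18.0, 0.65),
    "detached housing": (7.0, 0.20),
    "tower blocks":     (35.0, 0.30),
}
cell = (16.0, 0.55)
```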

  9. Discerning molecular interactions: A comprehensive review on biomolecular interaction databases and network analysis tools.

    Science.gov (United States)

    Miryala, Sravan Kumar; Anbarasu, Anand; Ramaiah, Sudha

    2018-02-05

    Computational analysis of biomolecular interaction networks is now gaining importance as a way to understand the functions of novel genes and proteins. Gene interaction (GI) network analysis and protein-protein interaction (PPI) network analysis play a major role in predicting the functionality of interacting genes or proteins and give insight into the functional relationships and evolutionary conservation of interactions among genes. An interaction network is a graphical representation of the gene/protein interactome, where each gene/protein is a node and each interaction between genes/proteins is an edge. In this review, we discuss the popular open-source databases that serve as data repositories for searching and collecting protein/gene interaction data, as well as the tools available for generating interaction networks, visualization and network analysis. Various network analysis approaches, such as topological and clustering approaches for studying network properties, and functional enrichment servers that illustrate the functions and pathways of genes and proteins, are also discussed. Hence, the distinctive attribute of this review is that it not only provides an overview of tools and web servers for gene and protein-protein interaction (PPI) network analysis but also shows how to extract useful and meaningful information from interaction networks. Copyright © 2017 Elsevier B.V. All rights reserved.
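The node-and-edge representation described above maps directly onto an adjacency list, and the simplest topological property, node degree, falls out immediately. A minimal sketch with invented interaction pairs:

```python
# An interaction network as described above: each gene/protein is a node
# and each interaction an edge. Minimal adjacency-list sketch; the PPI
# pairs below are invented for illustration.
from collections import defaultdict

def build_network(interactions):
    """Undirected adjacency list from (a, b) interaction pairs."""
    adj = defaultdict(set)
    for a, b in interactions:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def degree(adj, node):
    return len(adj[node])

def hubs(adj, min_degree=2):
    """Topological analysis in miniature: nodes with many interactions."""
    return sorted(n for n in adj if degree(adj, n) >= min_degree)

ppi = [("TP53", "MDM2"), ("TP53", "BRCA1"),
       ("BRCA1", "RAD51"), ("TP53", "EP300")]
net = build_network(ppi)
```

Degree distributions and hub detection are the entry point to the topological and clustering analyses the review surveys.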

  10. DevMouse, the mouse developmental methylome database and analysis tools.

    Science.gov (United States)

    Liu, Hongbo; Zhu, Rangfei; Lv, Jie; He, Hongjuan; Yang, Lin; Huang, Zhijun; Su, Jianzhong; Zhang, Yan; Yu, Shihuan; Wu, Qiong

    2014-01-01

    DNA methylation undergoes dynamic changes during mouse development and plays crucial roles in embryogenesis, cell-lineage determination and genomic imprinting. Bisulfite sequencing enables profiling of mouse developmental methylomes on an unprecedented scale; however, integrating and mining these data are challenges for experimental biologists. Therefore, we developed DevMouse, which focuses on the efficient storage of DNA methylomes in temporal order and quantitative analysis of methylation dynamics during mouse development. The latest release of DevMouse incorporates 32 normalized and temporally ordered methylomes across 15 developmental stages and related genome information. A flexible query engine is developed for acquisition of methylation profiles for genes, microRNAs, long non-coding RNAs and genomic intervals of interest across selected developmental stages. To facilitate in-depth mining of these profiles, DevMouse offers online analysis tools for the quantification of methylation variation, identification of differentially methylated genes, hierarchical clustering, gene function annotation and enrichment. Moreover, a configurable MethyBrowser is provided to view the base-resolution methylomes under a genomic context. In brief, DevMouse hosts comprehensive mouse developmental methylome data and provides online tools to explore the relationships of DNA methylation and development. Database URL: http://www.devmouse.org/
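The quantification of methylation variation across temporally ordered stages, as described above, can be sketched simply. The variation measure (max minus min) and the threshold are illustrative choices, not DevMouse's actual statistics:

```python
# Sketch of quantifying methylation dynamics across temporally ordered
# developmental stages, in the spirit of DevMouse's variation analysis.
# The variation measure (range) and threshold are illustrative choices;
# gene names and levels are invented.

def methylation_variation(profile):
    """profile: methylation levels (0..1) in temporal order."""
    return max(profile) - min(profile)

def differentially_methylated(profiles, threshold=0.3):
    """Genes whose methylation level changes more than `threshold`."""
    return sorted(g for g, p in profiles.items()
                  if methylation_variation(p) > threshold)

profiles = {
    "Dnmt3b": [0.2, 0.5, 0.8, 0.9],    # gains methylation over development
    "Actb":   [0.1, 0.12, 0.1, 0.11],  # stays stable
}
```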

  11. International Reactor Physics Handbook Database and Analysis Tool (IDAT) - IDAT user manual

    International Nuclear Information System (INIS)

    2013-01-01

    The IRPhEP Database and Analysis Tool (IDAT) was first released in 2013 and is included on the DVD. This database and its corresponding user interface allow easy access to handbook information. Selected information from each configuration was entered into IDAT, such as the measurements performed, benchmark values, calculated values and materials specifications of the benchmark. In many cases this is supplemented with calculated data such as neutron balance data, spectra data, k-eff nuclear data sensitivities, and spatial reaction rate plots. IDAT accomplishes two main objectives: 1. Allow users to search the handbook for experimental configurations that satisfy their input criteria. 2. Allow users to trend results and identify suitable benchmark experiments for their application. IDAT provides the user with access to several categories of calculated data, including: - 1-group neutron balance data for each configuration, with individual isotope contributions in the reactor system. - Flux and other reaction rate spectra in a 299-group energy scheme. Plotting capabilities were implemented in IDAT, allowing the user to compare the spectra of selected configurations in the original fine energy structure or on any user-defined broader energy structure. - Sensitivity coefficients (percent changes of k-effective due to an elementary change of basic nuclear data) for the major nuclides and nuclear processes in a 238-group energy structure. IDAT is actively being developed. Those approved to access the online version of the handbook will also have access to an online version of IDAT. As May 2013 marks the first release, IDAT may contain data entry errors and omissions. The handbook remains the primary source of reactor physics benchmark data. A copy of the IDAT user's manual is attached to this document. A copy of the IRPhE Handbook can be obtained on request at http://www.oecd-nea.org/science/wprs/irphe/irphe-handbook/form.html
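Sensitivity coefficients of the kind stored in IDAT are typically used in first-order perturbation estimates: the relative change in k-effective is the sensitivity-weighted sum of relative nuclear-data changes. The coefficients and perturbations below are invented for illustration:

```python
# Illustrative first-order use of k-eff sensitivity coefficients such as
# those IDAT stores: dk/k ~ sum over reactions of S * (dsigma/sigma).
# The coefficient and perturbation values below are invented, not taken
# from any handbook configuration.

def delta_k_over_k(sensitivities, perturbations):
    """First-order estimate of the relative change in k-effective.

    sensitivities: {reaction: S} with S = (dk/k) / (dsigma/sigma)
    perturbations: {reaction: relative cross-section change dsigma/sigma}
    """
    return sum(S * perturbations.get(reaction, 0.0)
               for reaction, S in sensitivities.items())

S = {"U235_fission": 0.45, "U238_capture": -0.12, "H1_elastic": 0.08}
d = {"U235_fission": 0.01, "U238_capture": 0.02}  # +1% and +2% changes
dk = delta_k_over_k(S, d)
```

This is how trending against benchmark experiments, as in IDAT's second objective, propagates nuclear-data uncertainties into k-eff.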

  12. ProBiS tools (algorithm, database, and web servers) for predicting and modeling of biologically interesting proteins.

    Science.gov (United States)

    Konc, Janez; Janežič, Dušanka

    2017-09-01

    ProBiS (Protein Binding Sites) Tools consist of algorithm, database, and web servers for prediction of binding sites and protein ligands based on the detection of structurally similar binding sites in the Protein Data Bank. In this article, we review the operations that ProBiS Tools perform, provide comments on the evolution of the tools, and give some implementation details. We review some of its applications to biologically interesting proteins. ProBiS Tools are freely available at http://probis.cmm.ki.si and http://probis.nih.gov. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Canis mtDNA HV1 database: a web-based tool for collecting and surveying Canis mtDNA HV1 haplotype in public database.

    Science.gov (United States)

    Thai, Quan Ke; Chung, Dung Anh; Tran, Hoang-Dung

    2017-06-26

    Canine and wolf mitochondrial DNA haplotypes, which can be used for forensic or phylogenetic analyses, have been defined in various schemes depending on the region analyzed. In recent studies, the 582 bp fragment of the HV1 region is most commonly used. 317 different canine HV1 haplotypes have been reported in the rapidly growing public database GenBank. These reported haplotypes contain several inconsistencies in their haplotype information. To overcome this issue, we have developed a Canis mtDNA HV1 database. This database collects data on the HV1 582 bp region of dog mitochondrial DNA from GenBank to screen and correct the inconsistencies. It also supports users in detecting novel mutation profiles and assigning new haplotypes. The Canis mtDNA HV1 database (CHD) contains 5567 nucleotide entries originating from 15 subspecies of the species Canis lupus. Of these entries, 3646 were haplotypes and were grouped into 804 distinct sequences. 319 sequences were recognized as previously assigned haplotypes, while the remaining 485 sequences had new mutation profiles and were marked as new haplotype candidates awaiting further analysis for haplotype assignment. Of the 3646 nucleotide entries, only 414 were annotated with correct haplotype information, while 3232 had insufficient or lacking haplotype information and were corrected or modified before being stored in the CHD. The CHD can be accessed at http://chd.vnbiology.com . It provides sequences, haplotype information, and a web-based tool for mtDNA HV1 haplotyping. The CHD is updated monthly and supplies all data for download. The Canis mtDNA HV1 database contains information about canine mitochondrial DNA HV1 sequences with reconciled annotation. It serves as a tool for detecting inconsistencies in GenBank and helps identify new HV1 haplotypes. Thus, it supports the scientific community in naming new HV1 haplotypes and in reconciling existing annotation of HV1 582 bp sequences.
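
    The screening idea described in this record — deriving a mutation profile for an HV1 sequence relative to a reference and checking it against known profiles — can be sketched as follows. The sequences, positions, and haplotype names below are invented for illustration and are not actual CHD data.

```python
# Hypothetical sketch of haplotype screening by mutation profile.
# Toy sequences and the haplotype table are invented, not CHD data.

def mutation_profile(reference, sequence):
    """Positions (1-based) and bases where sequence differs from reference."""
    assert len(reference) == len(sequence)
    return tuple((i + 1, b)
                 for i, (a, b) in enumerate(zip(reference, sequence))
                 if a != b)

# Known profiles mapped to assigned haplotype names (invented)
known_haplotypes = {((3, "T"),): "HT-001"}

ref = "ACGTACGT"
query = "ACTTACGT"  # differs from ref at position 3 (G -> T)
profile = mutation_profile(ref, query)
print(known_haplotypes.get(profile, "new haplotype candidate"))  # -> HT-001
```

    A query whose profile is absent from the table would fall through to "new haplotype candidate", mirroring the CHD's distinction between previously assigned haplotypes and candidates awaiting assignment.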

  14. Unraveling the web of viroinformatics: computational tools and databases in virus research.

    Science.gov (United States)

    Sharma, Deepak; Priyadarshini, Pragya; Vrati, Sudhanshu

    2015-02-01

    The beginning of the second century of research in the field of virology (the first virus was discovered in 1898) was marked by its amalgamation with bioinformatics, resulting in the birth of a new domain--viroinformatics. The availability of more than 100 Web servers and databases embracing all or specific viruses (for example, dengue virus, influenza virus, hepatitis virus, human immunodeficiency virus [HIV], hemorrhagic fever virus [HFV], human papillomavirus [HPV], West Nile virus, etc.) as well as distinct applications (comparative/diversity analysis, viral recombination, small interfering RNA [siRNA]/short hairpin RNA [shRNA]/microRNA [miRNA] studies, RNA folding, protein-protein interaction, structural analysis, and phylotyping and genotyping) will definitely aid the development of effective drugs and vaccines. However, information about their access and utility is not available at any single source or on any single platform. Therefore, a compendium of various computational tools and resources dedicated specifically to virology is presented in this article. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  15. Best Basis Inventory Maintenance Tool (BBIM) Database Description and User Guide

    Energy Technology Data Exchange (ETDEWEB)

    TRAN, T.T.

    2000-03-27

    The Best Basis Inventory Maintenance Tool (BBIM) is a computer database application with built-in calculations that model the chemical composition of the Hanford tank wastes in terms of three fundamental parameters: (1) Analyte concentration; (2) Waste density; and (3) Waste volume. Using these parameters, the BBIM is able to calculate, for all of the Best-Basis constituents in each of the 177 tanks: (1) Total tank waste inventories; (2) Phase-based inventories; and (3) Phase-based concentrations. Calculations are handled differently depending upon the pedigree or type of the underlying data; for example, the input concentration could be in solid units, such as "ug/g", or in liquid units, such as "ug/mL". In each of these cases, there would be slight variations to the basic inventory calculation formula (Concentration × Density × Volume). In addition to calculating inventories, the BBIM also documents the source of the underlying data and how the calculations were performed. An enhancement is planned for 1Q00 to account for wastes transferred between tanks. When this is in place, the BBIM will be able to reflect ongoing Tank Farm operations, and will continuously (with a slight lag behind operational activities) maintain the documented best-basis inventory.
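
    The basic inventory calculation described in this record (Concentration × Density × Volume, with the density factor dropped for liquid-basis concentrations) can be sketched in a few lines. The function name, unit choices, and example values are illustrative assumptions, not the actual BBIM implementation.

```python
# Hypothetical sketch of the BBIM-style inventory calculation; names and
# values are illustrative, not the actual BBIM code.

def tank_inventory_kg(concentration, density_g_per_ml, volume_kl, solid_basis):
    """Total analyte inventory in kg for one waste phase.

    concentration: ug/g if solid_basis else ug/mL
    density_g_per_ml: waste density (g/mL); used only for solid-basis data
    volume_kl: waste phase volume in kiloliters (1 kL = 1e6 mL)
    """
    volume_ml = volume_kl * 1e6
    if solid_basis:
        # ug/g * g/mL * mL -> ug
        micrograms = concentration * density_g_per_ml * volume_ml
    else:
        # ug/mL * mL -> ug
        micrograms = concentration * volume_ml
    return micrograms / 1e9  # ug -> kg

# Example: 50 ug/g analyte, density 1.5 g/mL, 1000 kL of sludge
print(tank_inventory_kg(50.0, 1.5, 1000.0, solid_basis=True))  # -> 75.0
```
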

  16. U-Index, a dataset and an impact metric for informatics tools and databases

    Science.gov (United States)

    Callahan, Alison; Winnenburg, Rainer; Shah, Nigam H

    2018-01-01

    Measuring the usage of informatics resources such as software tools and databases is essential to quantifying their impact, value and return on investment. We have developed a publicly available dataset of informatics resource publications and their citation network, along with an associated metric (u-Index) to measure informatics resources’ impact over time. Our dataset differentiates the context in which citations occur to distinguish between ‘awareness’ and ‘usage’, and uses a citing universe of open access publications to derive citation counts for quantifying impact. Resources with a high ratio of usage citations to awareness citations are likely to be widely used by others and have a high u-Index score. We have pre-calculated the u-Index for nearly 100,000 informatics resources. We demonstrate how the u-Index can be used to track informatics resource impact over time. The method of calculating the u-Index metric, the pre-computed u-Index values, and the dataset we compiled to calculate the u-Index are publicly available. PMID:29557976
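
    The core idea in this record — resources with a high ratio of usage citations to awareness citations score higher — can be illustrated with a simplified ratio. The u-Index itself is defined in the cited paper; the formula below is an assumption for illustration only.

```python
# Simplified usage-vs-awareness citation ratio in the spirit of the
# u-Index; the actual metric's definition is in the cited paper.

def usage_ratio(usage_citations, awareness_citations):
    """Fraction of citations that indicate actual use of a resource."""
    total = usage_citations + awareness_citations
    return usage_citations / total if total else 0.0

# A resource cited 30 times in a usage context vs. 10 times for awareness
print(usage_ratio(30, 10))  # -> 0.75
```
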

  17. The Use of PDB database as a Tool for Biochemistry Active Learning of Undergraduate Students

    Directory of Open Access Journals (Sweden)

    T.M.F. Günther et al.

    2017-07-01

    .50% and 66.60% also referred to other databases. Surprisingly, the PDB number was mentioned less often (12.83%). Conclusions: Scientifically curated information in the PDB is an excellent digital resource, with tools capable of improving Biochemistry skills through the active learning process.

  18. The TNO Vehicle Dynamics Database (VDD): a tool for comparison of vehicle dynamics behaviour

    NARCIS (Netherlands)

    Kleuskens, R.J.A.

    1996-01-01

    TNO has started a database containing data of vehicle dynamics properties of passenger cars, based on the results of eight objective ISO driving tests. From this database, car manufacturers and their suppliers can get the required information for design and marketing purposes in a fast and

  19. The alpha/beta-Hydrolase Fold 3DM Database (ABHDB) as a Tool for Protein Engineering

    NARCIS (Netherlands)

    Kourist, R.; Jochens, H.; Bartsch, S.; Kuipers, R.K.P.; Padhi, S.K.; Gall, M.; Bottcher, D.; Joosten, H.J.; Bornscheuer, U.T.

    2010-01-01

    Aligning the haystack to expose the needle: The 3DM method was used to generate a comprehensive database of the α/β-hydrolase fold enzyme superfamily. This database facilitates the analysis of structure–function relationships and enables novel insights into this superfamily to be made. In addition

  20. An empirical modeling tool and glass property database in development of US-DOE radioactive waste glasses

    International Nuclear Information System (INIS)

    Muller, I.; Gan, H.

    1997-01-01

    An integrated glass database has been developed at the Vitreous State Laboratory of Catholic University of America. The major objective of this tool was to support glass formulation using the MAWS approach (Minimum Additives Waste Stabilization). An empirical modeling capability, based on the properties of over 1000 glasses in the database, was also developed to help formulate glasses from waste streams under multiple user-imposed constraints. The use of this modeling capability, the performance of resulting models in predicting properties of waste glasses, and the correlation of simple structural theories to glass properties are the subjects of this paper. (authors)

  1. The International Experimental Thermal Hydraulic Systems database – TIETHYS: A new NEA validation tool

    Energy Technology Data Exchange (ETDEWEB)

    Rohatgi, Upendra S.

    2018-07-22

    Nuclear reactor codes require validation with appropriate data representing the plant for specific scenarios. The thermal-hydraulic data is scattered in different locations and in different formats. Some of the data is in danger of being lost. A relational database is being developed to organize the international thermal-hydraulic test data for various reactor concepts and different scenarios. At the reactor system level, the data is organized to include separate effect tests and integral effect tests for specific scenarios and corresponding phenomena. The database relies on the phenomena identification sections of expert-developed PIRTs. The database will provide a summary of appropriate data, a review of facility information, test descriptions, instrumentation, references for the experimental data, and some examples of application of the data for validation. The current database platform includes scenarios for PWR, BWR, VVER, and specific benchmarks for CFD modelling data, and is to be expanded to include references for molten salt reactors. There are placeholders for high-temperature gas-cooled reactors, CANDU, and liquid metal reactors. This relational database is called The International Experimental Thermal Hydraulic Systems (TIETHYS) database; it currently resides at the Nuclear Energy Agency (NEA) of the OECD and is freely open to public access. Going forward, the database will be extended to include additional links and data as they become available. https://www.oecd-nea.org/tiethysweb/

  2. Online Analytical Processing (OLAP): A Fast and Effective Data Mining Tool for Gene Expression Databases

    Directory of Open Access Journals (Sweden)

    Nadim W. Alkharouf

    2005-01-01

    soybean cyst nematode, a devastating pest of soybean. The data for these experiments is stored in the soybean genomics and microarray database (SGMD). A number of candidate resistance genes and pathways were found. Compared to traditional cluster analysis of gene expression data, OLAP was more effective and faster in finding biologically meaningful information. OLAP is available from a number of vendors and can work with any relational database management system through OLE DB.
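
    The OLAP-style aggregation applied to gene expression data in this record can be illustrated in miniature: expression facts indexed by several dimensions, rolled up along one of them. The records below are invented; a real OLAP engine operates on multidimensional cubes through OLE DB rather than plain Python.

```python
# Toy OLAP-style roll-up over gene expression facts; the data are invented.
from collections import defaultdict

facts = [
    # (gene, treatment, hours_post_infection, expression)
    ("G1", "infected", 6, 2.0),
    ("G1", "infected", 12, 4.0),
    ("G1", "control", 6, 1.0),
    ("G2", "infected", 6, 0.5),
]

# Roll up the time dimension: mean expression per (gene, treatment) cell
cube = defaultdict(list)
for gene, treatment, hours, expr in facts:
    cube[(gene, treatment)].append(expr)

summary = {cell: sum(v) / len(v) for cell, v in cube.items()}
print(summary[("G1", "infected")])  # -> 3.0
```
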

  3. ARACHNID: A prototype object-oriented database tool for distributed systems

    Science.gov (United States)

    Younger, Herbert; Oreilly, John; Frogner, Bjorn

    1994-01-01

    This paper discusses the results of a Phase 2 SBIR project sponsored by NASA and performed by MIMD Systems, Inc. A major objective of this project was to develop specific concepts for improved performance in accessing large databases. An object-oriented and distributed approach was used for the general design, while a geographical decomposition was used as a specific solution. The resulting software framework is called ARACHNID. The Faint Source Catalog developed by NASA was the initial database testbed. This is a database of many giga-bytes, where an order of magnitude improvement in query speed is being sought. This database contains faint infrared point sources obtained from telescope measurements of the sky. A geographical decomposition of this database is an attractive approach to dividing it into pieces. Each piece can then be searched on individual processors with only a weak data linkage between the processors being required. As a further demonstration of the concepts implemented in ARACHNID, a tourist information system is discussed. This version of ARACHNID is the commercial result of the project. It is a distributed, networked, database application where speed, maintenance, and reliability are important considerations. This paper focuses on the design concepts and technologies that form the basis for ARACHNID.

  4. Application of GIS and OLAP technologies on IBGE databases as a decision support tool for the county administration

    Directory of Open Access Journals (Sweden)

    REGO, E. A.

    2008-06-01

    Full Text Available This paper describes the development of a Decision Support System for any Brazilian county, free of any costs. It combines data warehouse, OLAP, and GIS technologies with the IBGE database to give the user a query-building tool that presents the results as maps and/or tables in a simple and efficient way.

  5. MetalS(3), a database-mining tool for the identification of structurally similar metal sites.

    Science.gov (United States)

    Valasatava, Yana; Rosato, Antonio; Cavallaro, Gabriele; Andreini, Claudia

    2014-08-01

    We have developed a database search tool to identify metal sites having structural similarity to a query metal site structure within the MetalPDB database of minimal functional sites (MFSs) contained in metal-binding biological macromolecules. MFSs describe the local environment around the metal(s) independently of the larger context of the macromolecular structure. Such a local environment has a determinant role in tuning the chemical reactivity of the metal, ultimately contributing to the functional properties of the whole system. The database search tool, which we called MetalS(3) (Metal Sites Similarity Search), can be accessed through a Web interface at http://metalweb.cerm.unifi.it/tools/metals3/ . MetalS(3) uses a suitably adapted version of an algorithm that we previously developed to systematically compare the structure of the query metal site with each MFS in MetalPDB. For each MFS, the best superposition is kept. All these superpositions are then ranked according to the MetalS(3) scoring function and are presented to the user in tabular form. The user can interact with the output Web page to visualize the structural alignment or the sequence alignment derived from it. Options to filter the results are available. Test calculations show that the MetalS(3) output correlates well with expectations from protein homology considerations. Furthermore, we describe some usage scenarios that highlight the usefulness of MetalS(3) to obtain mechanistic and functional hints regardless of homology.

  6. SurvExpress: an online biomarker validation tool and database for cancer gene expression data using survival analysis.

    Science.gov (United States)

    Aguirre-Gamboa, Raul; Gomez-Rueda, Hugo; Martínez-Ledesma, Emmanuel; Martínez-Torteya, Antonio; Chacolla-Huaringa, Rafael; Rodriguez-Barrientos, Alberto; Tamez-Peña, José G; Treviño, Victor

    2013-01-01

    Validation of multi-gene biomarkers for clinical outcomes is one of the most important issues for cancer prognosis. An important source of information for virtual validation is the high number of available cancer datasets. Nevertheless, assessing the prognostic performance of a gene expression signature along datasets is a difficult task for Biologists and Physicians and also time-consuming for Statisticians and Bioinformaticians. Therefore, to facilitate performance comparisons and validations of survival biomarkers for cancer outcomes, we developed SurvExpress, a cancer-wide gene expression database with clinical outcomes and a web-based tool that provides survival analysis and risk assessment of cancer datasets. The main input of SurvExpress is only the biomarker gene list. We generated a cancer database collecting more than 20,000 samples and 130 datasets with censored clinical information covering tumors over 20 tissues. We implemented a web interface to perform biomarker validation and comparisons in this database, where a multivariate survival analysis can be accomplished in about one minute. We show the utility and simplicity of SurvExpress in two biomarker applications for breast and lung cancer. Compared to other tools, SurvExpress is the largest, most versatile, and quickest free tool available. SurvExpress web can be accessed in http://bioinformatica.mty.itesm.mx/SurvExpress (a tutorial is included). The website was implemented in JSP, JavaScript, MySQL, and R.
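
    A common ingredient of survival-analysis tools like the one in this record is splitting samples into high- and low-risk groups around the median risk score before comparing survival curves. The sketch below assumes invented scores and is a simplification, not the SurvExpress implementation (which performs full multivariate survival analysis).

```python
# Simplified median split into risk groups; sample IDs and scores invented.
scores = {"s1": 0.9, "s2": 0.2, "s3": 0.7, "s4": 0.1}

ordered = sorted(scores.values())
mid = len(ordered) // 2
median = (ordered[mid - 1] + ordered[mid]) / 2  # even-length case

groups = {sid: ("high" if s > median else "low") for sid, s in scores.items()}
print(groups["s1"], groups["s2"])  # -> high low
```
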

  7. TiPs: a database of therapeutic targets in pathogens and associated tools.

    KAUST Repository

    Lepore, Rosalba

    2013-05-21

    MOTIVATION: The need for new drugs and new targets is particularly compelling in an era that is witnessing an alarming increase of drug resistance in human pathogens. The identification of new targets of known drugs is a promising approach, which has proven successful in several cases. Here, we describe a database that includes information on 5153 putative drug-target pairs for 150 human pathogens derived from available drug-target crystallographic complexes. AVAILABILITY AND IMPLEMENTATION: The TiPs database is freely available at http://biocomputing.it/tips. CONTACT: anna.tramontano@uniroma1.it or allegra.via@uniroma1.it.

  8. The Current Status of Germplum Database: a Tool for Characterization of Plum Genetic Resources in Romania

    Directory of Open Access Journals (Sweden)

    Monica Harta

    2016-11-01

    Full Text Available In Romania, Prunus genetic resources are kept in collections of varieties, populations and biotypes, mainly located in research and development institutes or fruit growing stations and, in the last years, by some private enterprises. Creating the experimental model for the Germplum database based on phenotypic descriptors and SSR molecular markers analysis is an important and topical objective for the efficient characterization of genetic resources and also for establishing a public-private partnership for the effective management of plum germplasm resources in Romania. The technical development of the Germplum database was completed and data will be added continuously after characterizing each new accession.

  9. Cambridge Structural Database as a tool for studies of general structural features of organic molecular crystals

    International Nuclear Information System (INIS)

    Kuleshova, Lyudmila N; Antipin, Mikhail Yu

    1999-01-01

    The review surveys and generalises data on the use of the Cambridge Structural Database (CSD) for studying and revealing general structural features of organic molecular crystals. It is demonstrated that software and facilities of the CSD allow one to test the applicability of a number of known concepts of organic crystal chemistry (the principle of close packing, the frequency of occurrence of space groups, the preferred formation of centrosymmetrical molecular crystals, etc.) on the basis of abundant statistical data. Examples of the use of the Cambridge Structural Database in engineering of molecular crystals and in the systematic search for compounds with specified properties are given. The bibliography includes 122 references.

  10. Development and Exploration of a Regional Stormwater BMP Performance Database to Parameterize an Integrated Decision Support Tool (i-DST)

    Science.gov (United States)

    Bell, C.; Li, Y.; Lopez, E.; Hogue, T. S.

    2017-12-01

    Decision support tools that quantitatively estimate the cost and performance of infrastructure alternatives are valuable for urban planners. Such a tool is needed to aid in planning stormwater projects to meet diverse goals such as the regulation of stormwater runoff and its pollutants, minimization of economic costs, and maximization of environmental and social benefits in the communities served by the infrastructure. This work gives a brief overview of an integrated decision support tool, called i-DST, that is currently being developed to serve this need. This presentation focuses on the development of a default database for the i-DST that parameterizes water quality treatment efficiency of stormwater best management practices (BMPs) by region. Parameterizing the i-DST by region will allow the tool to perform accurate simulations in all parts of the United States. A national dataset of BMP performance is analyzed to determine which of a series of candidate regionalizations explains the most variance in the national dataset. The data used in the regionalization analysis comes from the International Stormwater BMP Database and data gleaned from an ongoing systematic review of peer-reviewed and gray literature. In addition to identifying a regionalization scheme for water quality performance parameters in the i-DST, our review process will also provide example methods and protocols for systematic reviews in the field of Earth Science.
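
    The regionalization analysis described in this record — picking the candidate grouping that explains the most variance in BMP performance — can be sketched as a between/within variance comparison. The removal efficiencies and groupings below are invented for illustration.

```python
# Sketch of regionalization selection by variance explained; data invented.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def explained_fraction(values, labels):
    """Fraction of total variance explained by group membership."""
    total = variance(values)
    groups = {}
    for v, g in zip(values, labels):
        groups.setdefault(g, []).append(v)
    # size-weighted mean of within-group variances
    within = sum(len(v) * variance(v) for v in groups.values()) / len(values)
    return 1 - within / total

removal = [90, 85, 40, 35]              # pollutant removal efficiency (%)
by_climate = ["wet", "wet", "dry", "dry"]  # candidate regionalization
by_random = ["a", "b", "a", "b"]           # uninformative grouping

# The climate-based grouping explains far more of the variance
print(explained_fraction(removal, by_climate) >
      explained_fraction(removal, by_random))  # -> True
```
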

  11. Update of the Diatom EST Database: a new tool for digital transcriptomics

    Science.gov (United States)

    Maheswari, Uma; Mock, Thomas; Armbrust, E. Virginia; Bowler, Chris

    2009-01-01

    The Diatom Expressed Sequence Tag (EST) Database was constructed to provide integral access to ESTs from these ecologically and evolutionarily interesting microalgae. It has now been updated with 130 000 Phaeodactylum tricornutum ESTs from 16 cDNA libraries and 77 000 Thalassiosira pseudonana ESTs from seven libraries, derived from cells grown in different nutrient and stress regimes. The updated relational database incorporates results from statistical analyses such as log-likelihood ratios and hierarchical clustering, which help to identify differentially expressed genes under different conditions, and allow similarities in gene expression in different libraries to be investigated in a functional context. The database also incorporates links to the recently sequenced genomes of P. tricornutum and T. pseudonana, enabling an easy cross-talk between the expression pattern of diatom orthologs and the genome browsers. These improvements will facilitate exploration of diatom responses to conditions of ecological relevance and will aid gene function identification of diatom-specific genes and in silico gene prediction in this largely unexplored class of eukaryotes. The updated Diatom EST Database is available at http://www.biologie.ens.fr/diatomics/EST3. PMID:19029140

  12. Data extraction tool and colocation database for satellite and model product evaluation (Invited)

    Science.gov (United States)

    Ansari, S.; Zhang, H.; Privette, J. L.; Del Greco, S.; Urzen, M.; Pan, Y.; Cook, R. B.; Wilson, B. E.; Wei, Y.

    2009-12-01

    The Satellite Product Evaluation Center (SPEC) is an ongoing project to integrate operational monitoring of data products from satellite and model analysis, with support for quantitative calibration, validation and algorithm improvement. The system uniquely allows scientists and others to rapidly access, subset, visualize, statistically compare and download multi-temporal data from multiple in situ, satellite, weather radar and model sources without reference to native data and metadata formats, packaging or physical location. Although still in initial development, the SPEC database and services will contain a wealth of integrated data for evaluation, validation, and discovery science activities across many different disciplines. The SPEC data extraction architecture departs from traditional dataset and research driven approaches through the use of standards and relational database technology. The NetCDF for Java API is used as a framework for data decoding and abstraction. The data are treated as generic feature types (such as Grid or Swath) as defined by the NetCDF Climate and Forecast (CF) metadata conventions. Colocation data for various field measurement networks, such as the Climate Reference Network (CRN) and Ameriflux network, are extracted offline, from local disk or distributed sources. The resulting data subsets are loaded into a relational database for fast access. URL-based (Representational State Transfer (REST)) web services are provided for simple database access to application programmers and scientists. SPEC supports broad NOAA, U.S. Global Change Research Program (USGCRP) and World Climate Research Programme (WCRP) initiatives including the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and NOAA’s Climate Data Record (CDR) programs. SPEC is a collaboration between NOAA’s National Climatic Data Center (NCDC) and DOE’s Oak Ridge National Laboratory (ORNL). In this presentation we will describe the data extraction

  13. ePORT, NASA's Computer Database Program for System Safety Risk Management Oversight (Electronic Project Online Risk Tool)

    Science.gov (United States)

    Johnson, Paul W.

    2008-01-01

    ePORT (electronic Project Online Risk Tool) provides a systematic approach to using an electronic database program to manage a program/project risk management process. This presentation will briefly cover standard risk management procedures, then thoroughly cover NASA's risk management tool called ePORT. ePORT is a web-based risk management program that provides a common framework to capture and manage risks, independent of a program/project's size and budget. By providing standardized evaluation criteria for common management reporting, ePORT improves Product Line, Center, and Corporate Management insight, simplifies program/project manager reporting, and maintains an archive of data for historical reference.

  14. A plant resource and experiment management system based on the Golm Plant Database as a basic tool for omics research

    Directory of Open Access Journals (Sweden)

    Selbig Joachim

    2008-05-01

    Full Text Available Abstract Background For omics experiments, detailed characterisation of experimental material with respect to its genetic features, its cultivation history and its treatment history is a requirement for analyses by bioinformatics tools and for publication needs. Furthermore, meta-analysis of several experiments in systems biology based approaches make it necessary to store this information in a standardised manner, preferentially in relational databases. In the Golm Plant Database System, we devised a data management system based on a classical Laboratory Information Management System combined with web-based user interfaces for data entry and retrieval to collect this information in an academic environment. Results The database system contains modules representing the genetic features of the germplasm, the experimental conditions and the sampling details. In the germplasm module, genetically identical lines of biological material are generated by defined workflows, starting with the import workflow, followed by further workflows like genetic modification (transformation, vegetative or sexual reproduction. The latter workflows link lines and thus create pedigrees. For experiments, plant objects are generated from plant lines and united in so-called cultures, to which the cultivation conditions are linked. Materials and methods for each cultivation step are stored in a separate ACCESS database of the plant cultivation unit. For all cultures and thus every plant object, each cultivation site and the culture's arrival time at a site are logged by a barcode-scanner based system. Thus, for each plant object, all site-related parameters, e.g. automatically logged climate data, are available. These life history data and genetic information for the plant objects are linked to analytical results by the sampling module, which links sample components to plant object identifiers. This workflow uses controlled vocabulary for organs and treatments. Unique

  15. A database and tool, IM Browser, for exploring and integrating emerging gene and protein interaction data for Drosophila

    Directory of Open Access Journals (Sweden)

    Parrish Jodi R

    2006-04-01

    Full Text Available Abstract Background Biological processes are mediated by networks of interacting genes and proteins. Efforts to map and understand these networks are resulting in the proliferation of interaction data derived from both experimental and computational techniques for a number of organisms. The volume of this data combined with the variety of specific forms it can take has created a need for comprehensive databases that include all of the available data sets, and for exploration tools to facilitate data integration and analysis. One powerful paradigm for the navigation and analysis of interaction data is an interaction graph or map that represents proteins or genes as nodes linked by interactions. Several programs have been developed for graphical representation and analysis of interaction data, yet there remains a need for alternative programs that can provide casual users with rapid, easy access to many existing and emerging data sets. Description Here we describe a comprehensive database of Drosophila gene and protein interactions collected from a variety of sources, including low and high throughput screens, genetic interactions, and computational predictions. We also present a program for exploring multiple interaction data sets and for combining data from different sources. The program, referred to as the Interaction Map (IM) Browser, is a web-based application for searching and visualizing interaction data stored in a relational database system. Use of the application requires no downloads and minimal user configuration or training, thereby enabling rapid initial access to interaction data. IM Browser was designed to readily accommodate and integrate new types of interaction data as they become available. Moreover, all information associated with interaction measurements or predictions and the genes or proteins involved are accessible to the user. This allows combined searches and analyses based on either common or technique-specific attributes
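
    The interaction-map paradigm described in this record — genes or proteins as nodes, interactions as edges carrying technique-specific attributes — can be illustrated with a toy graph. The gene names, edge attributes, and helper function below are invented and are not IM Browser's actual data model.

```python
# Toy interaction map: nodes linked by edges tagged with their data source,
# so queries can filter on technique-specific attributes. Data invented.

edges = [
    ("dpp", "tkv", {"source": "yeast two-hybrid"}),
    ("dpp", "put", {"source": "genetic interaction"}),
    ("tkv", "mad", {"source": "computational prediction"}),
]

def neighbors(node, source=None):
    """Interaction partners of node, optionally filtered by data source."""
    out = set()
    for a, b, attrs in edges:
        if source is not None and attrs["source"] != source:
            continue
        if a == node:
            out.add(b)
        elif b == node:
            out.add(a)
    return out

print(sorted(neighbors("dpp")))  # -> ['put', 'tkv']
```
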

  16. Database with web interface and search engine as a diagnostics tool for electromagnetic calorimeter

    CERN Document Server

    Paluoja, Priit

    2017-01-01

    During the 2016 data collection, the Compact Muon Solenoid Data Acquisition (CMS DAQ) system showed very good reliability. Nevertheless, the high complexity of the hardware and software involved is, by its nature, prone to occasional problems. As a CMS subdetector, the electromagnetic calorimeter (ECAL) is affected in the same way. Some of the issues are not predictable and can appear more than once during the year, such as components becoming noisy, power shortcuts, or failing communication between machines. The chain detection-diagnosis-intervention must be as fast as possible to minimise the downtime of the detector. The aim of this project was to create diagnostic software for the ECAL crew, consisting of a database and a web interface that allows one to search, add, and edit the contents of the database.

  17. Genealogical databases as a tool for extending follow-up in clinical reviews.

    Science.gov (United States)

    Ho, Thuy-Van; Chowdhury, Naweed; Kandl, Christopher; Hoover, Cindy; Robinson, Ann; Hoover, Larry

    2016-08-01

    Long-term follow-up in clinical reviews often presents significant difficulty with conventional medical records alone. Publicly accessible genealogical databases such as Ancestry.com provide another avenue for obtaining extended follow-up and added outcome information. No previous studies have described the use of genealogical databases in the follow-up of individual patients. Ancestry.com, the largest genealogical database in the United States, houses extensive demographic data on an increasing number of Americans. In a recent retrospective review of esthesioneuroblastoma patients treated at our institution, we used this resource to ascertain the outcomes of patients otherwise lost to follow-up. Additional information such as quality of life and supplemental treatments the patient may have received at home was obtained through direct contact with living relatives. The use of Ancestry.com resulted in a 25% increase (20 months) in follow-up duration as well as incorporation of an additional 7 patients in our study (18%) who would otherwise not have had adequate hospital chart data for inclusion. Many patients within this subset had more advanced disease or were remotely located from our institution. As such, exclusion of these outliers can impact the quality of subsequent outcome analysis. Online genealogical databases provide a unique resource of public information that is acceptable to institutional review boards for patient follow-up in clinical reviews. Utilization of Ancestry.com data led to significant improvement in follow-up duration and increased the number of patients with sufficient data that could be included in our retrospective study. © 2016 ARS-AAOA, LLC.

  18. NCBI GEO: mining millions of expression profiles—database and tools

    OpenAIRE

    Barrett, Tanya; Suzek, Tugba O.; Troup, Dennis B.; Wilhite, Stephen E.; Ngau, Wing-Chi; Ledoux, Pierre; Rudnev, Dmitry; Lash, Alex E.; Fujibuchi, Wataru; Edgar, Ron

    2004-01-01

    The Gene Expression Omnibus (GEO) at the National Center for Biotechnology Information (NCBI) is the largest fully public repository for high-throughput molecular abundance data, primarily gene expression data. The database has a flexible and open design that allows the submission, storage and retrieval of many data types. These data include microarray-based experiments measuring the abundance of mRNA, genomic DNA and protein molecules, as well as non-array-based technologies such as serial a...

  19. NCBI GEO: mining tens of millions of expression profiles—database and tools update

    OpenAIRE

    Barrett, Tanya; Troup, Dennis B.; Wilhite, Stephen E.; Ledoux, Pierre; Rudnev, Dmitry; Evangelista, Carlos; Kim, Irene F.; Soboleva, Alexandra; Tomashevsky, Maxim; Edgar, Ron

    2006-01-01

    The Gene Expression Omnibus (GEO) repository at the National Center for Biotechnology Information (NCBI) archives and freely disseminates microarray and other forms of high-throughput data generated by the scientific community. The database has a minimum information about a microarray experiment (MIAME)-compliant infrastructure that captures fully annotated raw and processed data. Several data deposit options and formats are supported, including web forms, spreadsheets, XML and Simple Omnibus...

  20. The GLIMS geospatial glacier database: A new tool for studying glacier change

    Science.gov (United States)

    Raup, Bruce; Racoviteanu, Adina; Khalsa, Siri Jodha Singh; Helm, Christopher; Armstrong, Richard; Arnaud, Yves

    2007-03-01

    The Global Land Ice Measurements from Space (GLIMS) project is a cooperative effort of over sixty institutions world-wide with the goal of inventorying a majority of the world's estimated 160 000 glaciers. Each institution (called a Regional Center, or RC) oversees the analysis of satellite imagery for a particular region containing glacier ice. Data received by the GLIMS team at the National Snow and Ice Data Center (NSIDC) in Boulder, Colorado are ingested into a spatially enabled database (PostGIS) and made available via a website featuring an interactive map and a Web Map Service (WMS). The WMS, an Open Geospatial Consortium (OGC)-compliant web interface, makes GLIMS glacier data available to other data servers. The GLIMS Glacier Database is accessible on the World Wide Web at http://nsidc.org/glims/. There, users can browse custom maps, display various data layers, query information within the GLIMS database, and download query results in different GIS-compatible formats. Map layers include glacier outlines, footprints of ASTER satellite optical images acquired over glaciers, and Regional Center information. The glacier and ASTER footprint layers may be queried for scalar attribute data, such as analyst name and date of contribution for the glacier data, and acquisition time and browse imagery for the ASTER footprint layer. We present an example analysis of change in Cordillera Blanca glaciers, as determined by comparing data in the GLIMS Glacier Database to historical data. Results show marked changes in that system over the last 30 years, but also point out the need for establishing clear protocols for glacier monitoring from remote-sensing data.
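
A WMS such as the one described above is driven by HTTP requests with standardized OGC parameters. As an illustrative sketch only (the endpoint path and layer name below are hypothetical, not taken from the actual GLIMS service), a WMS 1.1.1 GetMap request URL could be assembled like this:

```python
from urllib.parse import urlencode

def build_wms_getmap_url(base_url, layer, bbox, width, height):
    """Assemble an OGC WMS 1.1.1 GetMap request URL.

    bbox is (min_lon, min_lat, max_lon, max_lat) in EPSG:4326.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and layer name, for illustration only.
url = build_wms_getmap_url(
    "https://example.org/glims/wms", "glacier_outlines",
    (-78.0, -10.0, -77.0, -8.5), 512, 512)
print(url)
```

Any OGC-compliant client can consume such a request, which is what makes the GLIMS data reachable from other data servers.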

  1. Scientific Meetings Database: A New Tool for CTBT-Related International Cooperation

    Energy Technology Data Exchange (ETDEWEB)

    Knapik, Jerzy F.; Girven, Mary L.

    1999-08-20

    The mission of international cooperation is defined in the Comprehensive Nuclear-Test-Ban Treaty (CTBT). Ways and means of implementation were the subject of discussion during the International Cooperation Workshop held in Vienna in November 1998, and during the Regional Workshop for CTBTO International Cooperation held in Cairo, Egypt in June 1999. In particular, a database of ''Scientific and Technical Meetings Directly or Indirectly Related to CTBT Verification-Related Technologies'' was developed by the CTBTO PrepCom/PTS/International Cooperation section and integrated into the organization's various web sites in cooperation with the U.S. Department of Energy CTBT Research and Development Program. This database, whose structure and use are described in this paper, is meant to assist the CTBT-related scientific community in identifying worldwide expertise in the CTBT verification-related technologies and should help experts, particularly those of less technologically advanced States Signatories, to strengthen contacts and to pursue international cooperation under the Treaty regime. Specific opportunities for international cooperation, in particular those provided by active participation in the use and further development of this database, are also presented.

  2. NPASS: natural product activity and species source database for natural product research, discovery and tool development.

    Science.gov (United States)

    Zeng, Xian; Zhang, Peng; He, Weidong; Qin, Chu; Chen, Shangying; Tao, Lin; Wang, Yali; Tan, Ying; Gao, Dan; Wang, Bohua; Chen, Zhe; Chen, Weiping; Jiang, Yu Yang; Chen, Yu Zong

    2018-01-04

    There has been renewed interest in the exploration of natural products (NPs) for drug discovery, and continuous investigation of the therapeutic claims and mechanisms of traditional and herbal medicines. In-silico methods have been employed to facilitate these studies. These studies, and the optimization of in-silico algorithms for NP applications, can be aided by quantitative activity and species source data for the NPs. A number of databases collectively provide the structural and other information of ∼470 000 NPs, including qualitative activity information for many NPs, but experimental activity values are available for only ∼4000 NPs. There is a need for the activity and species source data of more NPs. We therefore developed a new database, NPASS (Natural Product Activity and Species Source), to complement other databases by providing the experimental activity values and species sources of 35 032 NPs from 25 041 species targeting 5863 targets (2946 proteins, 1352 microbial species and 1227 cell-lines). NPASS contains 446 552 quantitative activity records (e.g. IC50, Ki, EC50, GI50 or MIC, mainly in units of nM) of 222 092 NP-target pairs and 288 002 NP-species pairs. NPASS, http://bidd2.nus.edu.sg/NPASS/, is freely accessible, with its contents searchable by keyword, physicochemical property range, structural similarity, species and target search facilities. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
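
Quantitative activity values in nanomolar units, like the IC50 records above, are commonly compared on a logarithmic scale. A minimal sketch of that standard conversion (not part of NPASS itself): pIC50 = -log10(IC50 in molar), which for a value in nM is 9 - log10(IC50_nM).

```python
import math

def pic50_from_nm(ic50_nm):
    """Convert an IC50 in nanomolar to pIC50 = -log10(IC50 in molar)."""
    if ic50_nm <= 0:
        raise ValueError("IC50 must be positive")
    return 9.0 - math.log10(ic50_nm)

print(pic50_from_nm(100.0))  # 100 nM -> pIC50 of 7.0
```

A higher pIC50 means higher potency, which makes records with very different raw concentrations directly comparable.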

  3. Development of a Software Tool for Calculating Transmission Line Parameters and Updating Related Databases

    Science.gov (United States)

    Xiu, Wanjing; Liao, Yuan

    2014-12-01

    Transmission lines are essential components of electric power grids. Diverse power system applications and simulation-based studies require transmission line parameters, including series resistance, reactance, and shunt susceptance; accurate parameters are pivotal in ensuring the accuracy of analyses and reliable system operation. Commercial software packages for performing power system studies usually have their own databases that store the power system model, including line parameters. When there is a physical system model change, the corresponding component in the database of the software packages needs to be modified. Manually updating line parameters is tedious and error-prone. This paper proposes a solution for streamlining the calculation of line parameters and the updating of their values in the respective software databases. The algorithms used for calculating the values of line parameters are described. The software developed for implementing the solution is described, and typical results are presented. The proposed solution was developed for a utility and has the potential to be put into use by other utilities.
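
Power system databases typically store line parameters in per-unit rather than ohmic form, so a conversion step sits between calculated values and the database update. A hedged sketch of that standard conversion (the base values below are illustrative, not from the paper): with Z_base = kV² / MVA, series impedance divides by Z_base and shunt susceptance multiplies by it.

```python
def line_params_per_unit(r_ohm, x_ohm, b_siemens, base_kv, base_mva):
    """Convert ohmic series R, X and shunt susceptance B to per-unit.

    Z_base = base_kv**2 / base_mva (ohms); impedances divide by it,
    admittances multiply by it.
    """
    z_base = base_kv ** 2 / base_mva
    return r_ohm / z_base, x_ohm / z_base, b_siemens * z_base

# Illustrative 230 kV line on a 100 MVA base.
r_pu, x_pu, b_pu = line_params_per_unit(5.0, 50.0, 3.0e-4, 230.0, 100.0)
print(r_pu, x_pu, b_pu)
```

Keeping this conversion in one routine is exactly the kind of step that manual database editing gets wrong.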

  4. World Ocean Database as a dissemination tool for distributed quality controlled ocean profile data

    Science.gov (United States)

    Reagan, J. R.; Boyer, T.; Locarnini, R. A.; Zweng, M.; Paver, C.; Smolyar, I.; Garcia, H. E.; Baranova, O.

    2016-02-01

    The World Ocean Database (WOD) is the largest publicly available uniform-format quality-controlled database for ocean profile data (temperature, salinity, oxygen, nutrients, carbon variables, biological variables). The WOD is a basis for many oceanographic and climate studies. Climate studies in particular depend on high-quality data to separate the climate change signal from noise. With over 14 million ocean profiles from ship-based and autonomous instruments, and growing, the task of identifying high-quality measurements and applying automatic and manual quality control within WOD is large and ongoing. The International Quality Controlled Oceanographic Database (IQuOD) project aims to make publicly available an internationally agreed-upon set of temperature (and eventually salinity) profile data with quality control suitable for climate studies. The IQuOD project will subject data within the WOD (and newly acquired historical data) to rigorous expert quality control following agreed-upon standards. The IQuOD dataset will be available through the WOD distribution system with its own set of IQuOD quality flags. Original values, bias corrections, and instrument-based uncertainties will be included. This will allow a researcher to request and download a standardized quality dataset for climate research, relieving the researcher of the need to perform basic quality control and allowing for comparison of results whereby data and quality control are a constant rather than a variable. How and in what form IQuOD will be disseminated through WOD, and the relation of IQuOD to the overall WOD, will be discussed.
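
For the downstream researcher, using such a flagged dataset mostly amounts to filtering observations on their quality flags. A minimal sketch (the flag convention here, with 0 meaning accepted, is illustrative and not the official WOD/IQuOD scheme):

```python
def accepted_values(profile, accept_flags=frozenset({0})):
    """Keep only (depth, value) pairs whose quality flag is accepted.

    profile is a list of (depth_m, value, flag) tuples.
    """
    return [(d, v) for d, v, f in profile if f in accept_flags]

# Toy temperature profile; the 20 m level carries a bad-data flag.
profile = [(0.0, 18.2, 0), (10.0, 17.9, 0), (20.0, 99.9, 4)]
print(accepted_values(profile))  # the flagged-bad record is dropped
```

Because the flags ship with the data, every study filtering this way applies exactly the same quality control, which is the comparability the abstract describes.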

  5. Metadata Dictionary Database: A Proposed Tool for Academic Library Metadata Management

    Science.gov (United States)

    Southwick, Silvia B.; Lampert, Cory

    2011-01-01

    This article proposes a metadata dictionary (MDD) be used as a tool for metadata management. The MDD is a repository of critical data necessary for managing metadata to create "shareable" digital collections. An operational definition of metadata management is provided. The authors explore activities involved in metadata management in…

  6. ProteoLens: a visual analytic tool for multi-scale database-driven biological network data mining.

    Science.gov (United States)

    Huan, Tianxiao; Sivachenko, Andrey Y; Harrison, Scott H; Chen, Jake Y

    2008-08-12

    New systems biology studies require researchers to understand how interplay among myriads of biomolecular entities is orchestrated in order to achieve high-level cellular and physiological functions. Many software tools have been developed in the past decade to help researchers visually navigate large networks of biomolecular interactions with built-in template-based query capabilities. To further advance researchers' ability to interrogate global physiological states of cells through multi-scale visual network explorations, new visualization software tools still need to be developed to empower the analysis. A robust visual data analysis platform, driven by database management systems and able to perform bi-directional processing from data to visualizations with declarative querying capabilities, is needed. We developed ProteoLens as a Java-based visual analytic software tool for creating, annotating and exploring multi-scale biological networks. It supports direct database connectivity to either Oracle or PostgreSQL database tables/views, on which SQL statements using both Data Definition Language (DDL) and Data Manipulation Language (DML) may be specified. The robust query languages embedded directly within the visualization software help users to bring their network data into a visualization context for annotation and exploration. ProteoLens supports graph/network represented data in standard Graph Modeling Language (GML) formats, and this enables interoperation with a wide range of other visual layout tools. The architectural design of ProteoLens enables the de-coupling of complex network data visualization tasks into two distinct phases: 1) creating network data association rules, which are mapping rules between network node IDs or edge IDs and data attributes such as functional annotations, expression levels, scores, synonyms and descriptions; and 2) applying network data association rules to build the network and perform the visual annotation of graph nodes and edges.
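
The association-rule idea described above, mapping node IDs to data attributes before visualization, can be sketched in a few lines (names and attribute keys below are hypothetical; ProteoLens itself is Java- and SQL-driven):

```python
def apply_association_rule(nodes, rule, attribute):
    """Attach attribute values to network nodes by ID.

    nodes:     dict of node_id -> attribute dict
    rule:      dict of node_id -> value (e.g. an expression level)
    attribute: name under which to store the value on each node
    """
    for node_id, value in rule.items():
        if node_id in nodes:
            nodes[node_id][attribute] = value
    return nodes

# Toy two-node network; only TP53 has a measured expression value.
nodes = {"TP53": {}, "MDM2": {}}
annotated = apply_association_rule(nodes, {"TP53": 2.4}, "expression")
print(annotated)
```

Separating the rule (the mapping) from its application is what lets the same network be re-annotated with scores, synonyms, or expression levels without rebuilding it.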

  7. The Fluka Linebuilder and Element Database: Tools for Building Complex Models of Accelerator Beam Lines

    CERN Document Server

    Mereghetti, A; Cerutti, F; Versaci, R; Vlachoudis, V

    2012-01-01

    Extended FLUKA models of accelerator beam lines can be extremely complex: heavy to manipulate, poorly versatile and prone to mismatched positioning. We developed a framework capable of creating the FLUKA model of an arbitrary portion of a given accelerator, starting from the optics configuration and a little additional information provided by the user. The framework includes a builder (LineBuilder), an element database and a series of configuration and analysis scripts. The LineBuilder is a Python program aimed at dynamically assembling complex FLUKA models of accelerator beam lines: positions, magnetic fields and scorings are automatically set up, and geometry details such as apertures of collimators, tilting and misalignment of elements, beam pipes and tunnel geometries can be entered at the user's will. The element database (FEDB) is a collection of detailed FLUKA geometry models of machine elements. This framework has been widely used for recent LHC and SPS beam-machine interaction studies at CERN, and led to a dra...
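
The core placement task such a builder automates, laying out elements sequentially along the beam line from optics data, can be illustrated with a toy one-dimensional layout (the element names and lengths below are invented; this is not the actual LineBuilder code):

```python
def layout_beamline(elements):
    """Place elements end-to-end along the longitudinal coordinate s.

    elements: list of (name, length_m) in beam-line order.
    Returns a list of (name, s_start, s_end) placements.
    """
    placements, s = [], 0.0
    for name, length in elements:
        placements.append((name, s, s + length))
        s += length
    return placements

# Hypothetical element sequence, for illustration only.
line = layout_beamline([("drift1", 2.0), ("quad1", 3.1), ("drift2", 1.5)])
print(line)
```

Doing this placement programmatically, rather than by hand, is precisely what avoids the mismatched positioning the abstract mentions.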

  8. The New Zealand Food Composition Database: A useful tool for assessing New Zealanders' nutrient intake.

    Science.gov (United States)

    Sivakumaran, Subathira; Huffman, Lee; Sivakumaran, Sivalingam

    2018-01-01

    A country-specific food composition database is useful for assessing nutrient intake reliably in national nutrition surveys, research studies and clinical practice. The New Zealand Food Composition Database (NZFCDB) programme seeks to maintain relevant and up-to-date food records that reflect the composition of foods commonly consumed in New Zealand, following Food and Agriculture Organization of the United Nations/International Network of Food Data Systems (FAO/INFOODS) guidelines. Food composition data (FCD) for up to 87 core components of approximately 600 foods have been added to NZFCDB since 2010. These foods include those identified as providing key nutrients in the 2008/09 New Zealand Adult Nutrition Survey. Nutrient data are obtained by analysis of composite samples or are calculated from analytical data. Currently >2500 foods in 22 food groups are freely available in various NZFCDB output products on the website: www.foodcomposition.co.nz. NZFCDB is the main source of FCD for estimating nutrient intake in New Zealand nutrition surveys. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Automating testbed documentation and database access using World Wide Web (WWW) tools

    Science.gov (United States)

    Ames, Charles; Auernheimer, Brent; Lee, Young H.

    1994-01-01

    A method for providing uniform transparent access to disparate distributed information systems was demonstrated. A prototype testing interface was developed to access documentation and information using publicly available hypermedia tools. The prototype gives testers a uniform, platform-independent user interface to on-line documentation, user manuals, and mission-specific test and operations data. Mosaic was the common user interface, and HTML (Hypertext Markup Language) provided hypertext capability.

  10. Building, running and dismantling the world's largest scientific instrument with the same database tools

    CERN Document Server

    Billen, R; CERN. Geneva. SPS and LHC Division

    2001-01-01

    Many people have heard of CERN, the European Organisation for Nuclear Research, and its enormous scientific masterpiece LEP, the Large Electron Positron collider. This is a 27-kilometer long particle accelerator designed to peek deeply inside the structure of matter in the framework of fundamental research. Despite the millions of Internet users, few of them know that the World Wide Web was invented at CERN in 1989, the same year that LEP was commissioned. Even fewer people know that CERN was among the first European organisations to have purchased the Oracle RDBMS back in 1983 and effectively put it in use for mission critical data management applications. Since that date, Oracle databases have been used extensively at CERN and in particular for technical and scientific data. This paper gives an overview of the use of Oracle throughout the lifecycle of CERN's flagship: the construction, exploitation and dismantling of LEP.

  11. PGAdb-builder: A web service tool for creating pan-genome allele database for molecular fine typing.

    Science.gov (United States)

    Liu, Yen-Yi; Chiou, Chien-Shun; Chen, Chih-Chieh

    2016-11-08

    With the advance of next-generation sequencing techniques, whole genome sequencing (WGS) is expected to become the optimal method for molecular subtyping of bacterial isolates. To use WGS as a general subtyping method for disease outbreak investigation and surveillance, the layout of WGS-based typing must be comparable among laboratories. Whole genome multilocus sequence typing (wgMLST) is an approach that achieves this requirement. To apply wgMLST as a standard subtyping approach, a pan-genome allele database (PGAdb) for the population of a bacterial organism must first be established. We present a free web service tool, PGAdb-builder (http://wgmlstdb.imst.nsysu.edu.tw), for the construction of bacterial PGAdbs. The effectiveness of PGAdb-builder was tested by constructing a pan-genome allele database for Salmonella enterica serovar Typhimurium, with the database then being applied to create a wgMLST tree for a panel of epidemiologically well-characterized S. Typhimurium isolates. The performance of the wgMLST-based approach in discerning between epidemiologically related and non-related isolates was as high as that of the SNP-based approach used in Leekitcharoenphon's study.
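
In wgMLST, each isolate is reduced to an allele profile, one allele number per locus drawn from the pan-genome allele database, and isolates are compared by counting loci at which their alleles differ. A minimal sketch of that distance (illustrative, not PGAdb-builder's implementation; locus names are invented):

```python
def allele_distance(profile_a, profile_b):
    """Count shared loci at which two isolates carry different alleles.

    Profiles are dicts of locus -> allele number; loci missing from
    either profile are ignored.
    """
    shared = profile_a.keys() & profile_b.keys()
    return sum(1 for locus in shared if profile_a[locus] != profile_b[locus])

a = {"locus1": 1, "locus2": 3, "locus3": 7}
b = {"locus1": 1, "locus2": 4, "locus3": 7}
print(allele_distance(a, b))  # differ at one locus -> 1
```

A tree built from the pairwise matrix of such distances is the wgMLST tree used to discern related from non-related isolates.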

  12. PROCARB: A Database of Known and Modelled Carbohydrate-Binding Protein Structures with Sequence-Based Prediction Tools

    Directory of Open Access Journals (Sweden)

    Adeel Malik

    2010-01-01

    Understanding the three-dimensional structures of proteins that interact with carbohydrates covalently (glycoproteins) as well as noncovalently (protein-carbohydrate complexes) is essential to many biological processes and plays a significant role in normal and disease-associated functions. It is important to have a central repository of knowledge about these protein-carbohydrate complexes as well as preprocessed data for predicted structures. This can be significantly enhanced by de novo tools that can predict carbohydrate-binding sites for proteins in the absence of an experimentally known binding-site structure. PROCARB is an open-access database comprising three independently working components, namely: (i) the Core PROCARB module, consisting of three-dimensional structures of protein-carbohydrate complexes taken from the Protein Data Bank (PDB); (ii) the Homology Models module, consisting of manually developed three-dimensional models of N-linked and O-linked glycoproteins of unknown three-dimensional structure; and (iii) the CBS-Pred prediction module, consisting of web servers to predict carbohydrate-binding sites using a single sequence or a server-generated PSSM. Several precomputed structural and functional properties of the complexes are also included in the database for quick analysis. In particular, information about function, secondary structure, solvent accessibility, hydrogen bonds, literature references, and so forth is included. In addition, each protein in the database is mapped to Uniprot, Pfam, PDB, and so forth.

  13. Pictorial materials database: 1200 combinations of pigments, dyes, binders and varnishes designed as a tool for heritage science and conservation

    Science.gov (United States)

    Cavaleri, Tiziana; Buscaglia, Paola; Migliorini, Simonetta; Nervo, Marco; Piccablotto, Gabriele; Piccirillo, Anna; Pisani, Marco; Puglisi, Davide; Vaudan, Dario; Zucco, Massimo

    2017-06-01

    The conservation of artworks requires a profound knowledge about pictorial materials, their chemical and physical properties and their interaction and/or degradation processes. For this reason, pictorial materials databases are widely used to study and investigate cultural heritage. At Centre for Conservation and Restoration La Venaria Reale, we prepared a set of about 1200 mock-ups with 173 different pigments and/or dyes, used across all the historical times or as products for conservation, four binders, two varnishes and four different materials for underdrawings. In collaboration with the Laboratorio Analisi Scientifiche of Regione Autonoma Valle d'Aosta, the National Institute of Metrological Research and the Department of Architecture and Design of the Polytechnic of Turin, we created a scientific database that is now available online (http://www.centrorestaurovenaria.it/en/areas/diagnostic/pictorial-materials-database) designed as a tool for heritage science and conservation. Here, we present a focus on materials for pictorial retouching where the hyperspectral imaging application, conducted with a prototype of new technology, allowed to provide a list of pigments that could be more suitable for conservation treatments and pictorial retouching. Then we present the case study of the industrial painting Notte Barbara (1962) by Pinot Gallizio where the use of the database including modern and contemporary art materials showed to be very useful and where the fibre optics reflectance spectroscopy technique was decisive for pigment identification purpose. Later in this research, the mock-ups will be exploited to study degradation processes, e.g., the lightfastness, or the possible formation of interaction products, e.g., metal carboxylates.

  14. Human transporter database: comprehensive knowledge and discovery tools in the human transporter genes.

    Directory of Open Access Journals (Sweden)

    Adam Y Ye

    Transporters are essential in the homeostatic exchange of endogenous and exogenous substances at the systemic, organ, cellular, and subcellular levels. Gene mutations of transporters are often related to pharmacogenetic traits. Recent developments in high-throughput technologies in genomics, transcriptomics and proteomics allow in-depth studies of transporter genes in normal cellular processes and diverse disease conditions. The flood of high-throughput data has resulted in an urgent need for an updated knowledgebase with curated, organized, and annotated human transporters in an easily accessible form. Using a pipeline combining automated keyword queries, sequence similarity searches and manual curation of transporters, we collected 1,555 human non-redundant transporter genes to develop the Human Transporter Database (HTD, http://htd.cbi.pku.edu.cn). Based on the extensive annotations, global properties of the transporter genes were illustrated, such as expression patterns and polymorphisms in relationship with their ligands. We noted that the human transporters were enriched in many fundamental biological processes such as oxidative phosphorylation and cardiac muscle contraction, and significantly associated with Mendelian and complex diseases such as epilepsy and sudden infant death syndrome. Overall, HTD provides a well-organized interface to facilitate research communities in searching detailed molecular and genetic information on transporters for the development of personalized medicine.

  15. Event detection using population-based health care databases in randomized clinical trials: a novel research tool in interventional cardiology

    Directory of Open Access Journals (Sweden)

    Thuesen L

    2013-09-01

    Leif Thuesen,1 Lisette Okkels Jensen,2 Hans Henrik Tilsted,3 Michael Mæng,1 Christian Terkelsen,1 Per Thayssen,2 Jan Ravkilde,3 Evald Høj Christiansen,1 Hans Erik Bøtker,1 Morten Madsen,4 Jens F Lassen1 1Department of Cardiology, Aarhus University Hospital, Skejby, Denmark; 2Department of Cardiology, Odense University Hospital, Odense, Denmark; 3Department of Cardiology, Aalborg University Hospital, Aalborg, Denmark; 4Department of Clinical Epidemiology, Aarhus University Hospital, Skejby, Denmark Aim: To describe a new research tool, designed to reflect routine clinical practice and relying on population-based health care databases to detect clinical events in randomized clinical trials. Background: Randomized clinical trials often focus on short-term efficacy and safety in a controlled environment. Trial follow-up may be linked with study-related investigations and differ from routine clinical practice. Because treatment and control in randomized trials differ from daily practice, trial results may have reduced general applicability and may be of limited value in clinical decision-making. Further, it is economically very costly to conduct randomized clinical trials. Methods and results: Population-based health care databases collect data continuously and prospectively, and make it possible to monitor lifelong outcomes of cardiac interventions in large numbers of patients. This strengthens external validity by eliminating the effects of study-related monitoring or diagnostic tests. Further, follow-up data can be obtained at low expense. Importantly, data sources encompassing a complete population are likely to reflect clinical practice. Because population-based health care databases collect data for quality-control and administrative purposes unrelated to scientific investigations, certain biases, such as nonresponse bias, recall bias, and bias from losses to follow-up, can be avoided. Conclusion: Event detection using population

  16. Upgrades to the GALEX photon tools for production of databases of GALEX UV photometry and variability at short timescales

    Science.gov (United States)

    Million, Chase

    This project will make use of gPhoton, a database and software suite of individual photons detected by the Galaxy Evolution Explorer (GALEX) UV survey telescope mission that has recently become available at the Mikulski Archive for Space Telescopes, making studies of astrophysical phenomena observed by this mission on short time scales (seconds to minutes) feasible for the first time since the end of the mission. This project will extend the capabilities of gPhoton to make it computationally feasible to conduct studies of this type across the whole mission. That new capability will then be leveraged to search for, analyze, and build a comprehensive catalog of short time domain behavior for a particularly interesting class of object: dwarf flares. The frequency, energy, and nature of these energetic stellar events have important consequences for the habitability of orbiting exoplanets. Their dynamic origin is also not currently fully understood, and the catalog produced by this work will provide an important observational resource in wavelengths and energies not available in other surveys. The proposed work will create a new software tool, in the form of a source extraction stage of gPhoton, that makes it computationally feasible to systematically and comprehensively mine the entire GALEX data set on short time domains. The proposed work will also generate and archive a new high-level database product derived from the legacy GALEX mission data set, in the form of a catalog of short time domain UV lightcurves of flares, extending the value and utility of the GALEX data beyond what currently exists. The proposed work is therefore responsive to ``1.3.3 Astrophysical databases and development of new data products/analysis tools'' of this solicitation and falls under ADAP Research Area ``10. Astrophysical Databases.'' The proposed work also includes a scientific focus on the structure and evolution of main sequence stars with consequences for exoplanets and is therefore responsive to
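
Flare searches in short-cadence UV lightcurves of the kind described above often start with flagging bins whose flux rises well above the quiescent level. A crude sketch of such a cut (a simple median-based threshold under invented data, not the project's actual pipeline):

```python
import statistics

def flare_candidates(times, fluxes, factor=3.0):
    """Flag times whose flux exceeds `factor` times the median flux.

    A crude quiescent-level cut; real searches model per-bin noise
    and require several consecutive elevated bins.
    """
    quiescent = statistics.median(fluxes)
    return [t for t, f in zip(times, fluxes) if f > factor * quiescent]

# Toy lightcurve (seconds, counts/s) with one flare-like excursion.
times = [0, 10, 20, 30, 40, 50]
fluxes = [10.0, 10.5, 9.8, 10.2, 42.0, 10.1]
print(flare_candidates(times, fluxes))  # only the 40 s bin is flagged
```

Candidates surviving such a cut would then be characterized (energy, duration) before entering a flare catalog.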

  17. Special issue on enabling open and interoperable access to Planetary Science and Heliophysics databases and tools

    Science.gov (United States)

    2018-01-01

    The large amount of data generated by modern space missions calls for a change in the organization of data distribution and access procedures. Although long-term archives exist for telescopic and space-borne observations, high-level functions need to be developed on top of these repositories to make Planetary Science and Heliophysics data more accessible and to favor interoperability. Results of simulations and reference laboratory data also need to be integrated to support and interpret the observations. Interoperable software and interfaces have recently been developed in many scientific domains. The Virtual Observatory (VO) interoperable standards developed for Astronomy by the International Virtual Observatory Alliance (IVOA) can be adapted to Planetary Sciences, as demonstrated by the VESPA (Virtual European Solar and Planetary Access) team within the Europlanet-H2020-RI project. Other communities have developed their own standards: GIS (Geographic Information System) for Earth and planetary surface tools, SPASE (Space Physics Archive Search and Extract) for space plasma, PDS4 (NASA Planetary Data System, version 4) and IPDA (International Planetary Data Alliance) for planetary mission archives, etc., and an effort to make them all interoperable is starting, including automated workflows to process related data from different sources.

  18. LEGER: knowledge database and visualization tool for comparative genomics of pathogenic and non-pathogenic Listeria species.

    Science.gov (United States)

    Dieterich, Guido; Kärst, Uwe; Fischer, Elmar; Wehland, Jürgen; Jänsch, Lothar

    2006-01-01

    Listeria species are ubiquitous in the environment and often contaminate foods because they grow under conditions used for food preservation. Listeria monocytogenes, the human and animal pathogen, causes listeriosis, an infection with a high mortality rate in risk groups such as immune-compromised individuals. Furthermore, L. monocytogenes is a model organism for the study of intracellular bacterial pathogens. The publication of its genome sequence and that of the non-pathogenic species Listeria innocua initiated numerous comparative studies and efforts to sequence all species comprising the genus. The proteome database LEGER (http://leger2.gbf.de/cgi-bin/expLeger.pl) was developed to support functional genome analyses by combining information obtained from bioinformatics methods and from public databases to improve the original annotations. LEGER offers three unique key features: (i) it is the first comprehensive information system focusing on the functional assignment of genes and proteins; (ii) integrated visualization tools, KEGG pathway and Genome Viewer, alleviate the functional exploration of complex data; and (iii) LEGER presents results of systematic post-genome studies, thus facilitating analyses combining computational and experimental results. Moreover, LEGER provides an unpublished membrane proteome analysis of L. innocua and in total visualizes experimentally validated information about the subcellular localizations of 789 different listerial proteins.

  19. Research Electronic Data Capture (REDCap®) used as an audit tool with a built-in database.

    Science.gov (United States)

    Kragelund, Signe H; Kjærsgaard, Mona; Jensen-Fangel, Søren; Leth, Rita A; Ank, Nina

    2018-04-09

    The aim of this study was to develop an audit tool with a built-in database using Research Electronic Data Capture (REDCap®) as part of an antimicrobial stewardship program at a regional hospital in the Central Denmark Region, and to analyse the need, if any, to involve more than one expert in the evaluation of cases of antimicrobial treatment, and the level of agreement among the experts. Patients treated with systemic antimicrobials in the period from 1 September 2015 to 31 August 2016 were included, in total 722 cases. Data were collected retrospectively and entered manually. The audit was based on seven flow charts regarding: 1) initiation of antimicrobial treatment; 2) infection; 3) prescription and administration of antimicrobials; 4) discontinuation of antimicrobials; 5) reassessment within 48 hours after the first prescription of antimicrobials; 6) microbiological sampling in the period between suspicion of infection and the first administration of antimicrobials; and 7) microbiological results. The audit was based on automatic calculations drawing on the entered data and on expert assessments. Initially, two experts completed the audit, and in the cases in which they disagreed, a third expert was consulted. In 31.9% of the cases, the two experts agreed on all elements of the audit. In 66.2%, the two experts reached agreement by discussing the cases. Finally, 1.9% of the cases were completed in cooperation with a third expert. The experts assessed 3,406 flow charts of which they agreed on 75.8%. We succeeded in creating an audit tool with a built-in database that facilitates independent expert evaluation using REDCap. We found a large inter-observer difference that needs to be considered when constructing a project based on expert judgements. Our two experts agreed on most of the flow charts after discussion, whereas the third expert's intervention did not have any influence on the overall assessment. Copyright © 2018. Published by Elsevier Inc.
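    The inter-observer agreement reported above can be quantified beyond raw percentages. A small sketch computing percent agreement and Cohen's kappa (which corrects for chance agreement) for two raters follows; the ratings are invented illustrative data, not cases from the study.

```python
# Sketch: agreement between two expert raters, as raw percent agreement
# and Cohen's kappa. Ratings below are invented, not study data.

from collections import Counter

def percent_agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    n = len(a)
    p_o = percent_agreement(a, b)                 # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)  # chance
    return (p_o - p_e) / (1 - p_e)

expert1 = ["ok", "ok", "fail", "ok", "fail", "ok"]
expert2 = ["ok", "fail", "fail", "ok", "fail", "ok"]
print(percent_agreement(expert1, expert2))  # 5 of 6 cases agree
print(cohens_kappa(expert1, expert2))
```

    Kappa is the more informative figure when the rating categories are imbalanced, since raw agreement can look high by chance alone.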

  20. Peptidase specificity from the substrate cleavage collection in the MEROPS database and a tool to measure cleavage site conservation.

    Science.gov (United States)

    Rawlings, Neil D

    2016-03-01

    One peptidase can usually be distinguished from another biochemically by its action on proteins, peptides and synthetic substrates. Since 1996, the MEROPS database (http://merops.sanger.ac.uk) has accumulated a collection of cleavages in substrates that now amounts to 66,615 cleavages. The total number of peptidases for which at least one cleavage is known is 1700 out of a total of 2457 different peptidases. This paper describes how the cleavages are obtained from the scientific literature, how they are annotated and how cleavages in peptides and proteins are cross-referenced to entries in the UniProt protein sequence database. The specificity profiles of 556 peptidases are shown for which ten or more substrate cleavages are known. However, it has been proposed that at least 40 cleavages in disparate proteins are required for specificity analysis to be meaningful, and only 163 peptidases (6.6%) fulfil this criterion. Also described are the various displays shown on the website to aid with the understanding of peptidase specificity, which are derived from the substrate cleavage collection. These displays include a logo, a distribution matrix, and tables to summarize which amino acids or groups of amino acids are acceptable (or not acceptable) in each substrate binding pocket. For each protein substrate, there is a display to show how it is processed and degraded. Also described are tools on the website to help with the assessment of the physiological relevance of cleavages in a substrate. These tools rely on the hypothesis that a cleavage site that is conserved in orthologues is likely to be physiologically relevant, and alignments of substrate protein sequences are made utilizing the UniRef50 database, in which the sequences within each entry are 50% or more identical. Conservation in this case means substitutions are permitted only if the amino acid is known to occupy the same substrate binding pocket in at least one other substrate cleaved by the same peptidase.
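    The specificity profiles described above are built by tallying which residue occupies each substrate binding pocket across known cleavages. A minimal sketch of such a count matrix follows; the cleavage strings, pocket naming, and data layout are illustrative assumptions, not the MEROPS schema.

```python
# Sketch: a simple specificity count matrix from peptidase cleavage
# sites, tallying residues per binding pocket (P4-P4' in
# Schechter-Berger notation). Data below is invented.

from collections import defaultdict

POCKETS = ["P4", "P3", "P2", "P1", "P1'", "P2'", "P3'", "P4'"]

def specificity_matrix(cleavages):
    """cleavages: 8-residue strings, scissile bond between positions 4 and 5."""
    matrix = defaultdict(lambda: defaultdict(int))
    for site in cleavages:
        assert len(site) == 8
        for pocket, residue in zip(POCKETS, site):
            matrix[pocket][residue] += 1
    return matrix

# Two hypothetical cleavage sites ("|" marks the scissile bond):
sites = ["AGPR|SLYQ".replace("|", ""), "VGPK|SMFN".replace("|", "")]
m = specificity_matrix(sites)
print(dict(m["P1"]))  # residue counts in the P1 pocket
```

    A sequence logo or acceptability table is then a rendering of exactly this kind of per-pocket count data.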

  1. The Chemical Aquatic Fate and Effects database (CAFE), a tool that supports assessments of chemical spills in aquatic environments.

    Science.gov (United States)

    Bejarano, Adriana C; Farr, James K; Jenne, Polly; Chu, Valerie; Hielscher, Al

    2016-06-01

    The Chemical Aquatic Fate and Effects (CAFE) database is a centralized repository that allows for rapid and unrestricted access to data. Information in CAFE is integrated into a user-friendly tool with modules containing fate and effects data for 32 377 and 4498 chemicals, respectively. Toxicity data are summarized in the form of species sensitivity distributions (SSDs) with associated 1st and 5th percentile hazard concentrations (HCs). An assessment of data availability relative to reported chemical incidents showed that CAFE had fate and toxicity data for 32 and 20 chemicals, respectively, of 55 chemicals reported in the US National Response Center database (2000-2014), and fate and toxicity data for 86 and 103, respectively, of 205 chemicals reported by the National Oceanic and Atmospheric Administration (2003-2014). Modeled environmental concentrations of 2 hypothetical spills (acrylonitrile, 625 barrels; and denatured ethanol, 857 barrels) were used to demonstrate CAFE's practical application. Most species in the 24-h SSD could be potentially impacted by acrylonitrile and denatured ethanol during the first 35 min and 15 h post spill, respectively, with concentrations falling below their HC5s (17 mg/L and 2676 mg/L) at 45 min and 60 h post spill, respectively. Comparisons of CAFE-based versus published HC5 values for 100 chemicals showed that nearly half of values were within a 2-fold difference, with a relatively small number of comparisons exceeding a 10-fold difference. The development of CAFE facilitates access to relevant environmental information, with potential uses likely expanding beyond those related to assessment of spills in aquatic environments. Environ Toxicol Chem 2016;35:1576-1586. © 2015 SETAC.
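    An SSD with its HC5, as used above, can be sketched by fitting a log-normal distribution to species toxicity values and reading off the 5th percentile. The toxicity values below are invented, not CAFE records, and real SSD fitting typically also reports confidence bounds.

```python
# Sketch: a species sensitivity distribution (SSD) fitted as a
# log-normal to per-species toxicity values, and the 5th-percentile
# hazard concentration (HC5) read from it. Values are invented.

import math
from statistics import NormalDist

def hc_percentile(toxicity_mg_l, p):
    logs = [math.log(x) for x in toxicity_mg_l]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / (len(logs) - 1))
    return math.exp(NormalDist(mu, sigma).inv_cdf(p))

# Hypothetical LC50-style values for several species (mg/L):
tox = [1.2, 3.5, 8.0, 15.0, 40.0, 110.0]
hc5 = hc_percentile(tox, 0.05)
print(round(hc5, 3))  # concentration expected to protect ~95% of species
```

    Comparing a modeled spill concentration against this HC5 over time is exactly the kind of screening calculation the abstract's spill scenarios describe.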

  2. RSSsite: a reference database and prediction tool for the identification of cryptic Recombination Signal Sequences in human and murine genomes.

    Science.gov (United States)

    Merelli, Ivan; Guffanti, Alessandro; Fabbri, Marco; Cocito, Andrea; Furia, Laura; Grazini, Ursula; Bonnal, Raoul J; Milanesi, Luciano; McBlane, Fraser

    2010-07-01

    Recombination signal sequences (RSSs) flanking V, D and J gene segments are recognized and cut by the VDJ recombinase during development of B and T lymphocytes. All RSSs are composed of seven conserved nucleotides, followed by a spacer (containing either 12 +/- 1 or 23 +/- 1 poorly conserved nucleotides) and a conserved nonamer. Errors in V(D)J recombination, including cleavage of cryptic RSSs outside the immunoglobulin and T cell receptor loci, are associated with oncogenic translocations observed in some lymphoid malignancies. We present in this paper the RSSsite web server, which is available from the address http://www.itb.cnr.it/rss. RSSsite consists of a web-accessible database, RSSdb, for the identification of pre-computed potential RSSs, and of the related search tool, DnaGrab, which allows the scoring of potential RSSs in user-supplied sequences. This latter algorithm makes use of probability models, which can be recast as Bayesian networks, taking into account correlations between groups of positions of a sequence, developed starting from specific reference sets of RSSs. In validation laboratory experiments, we selected 33 predicted cryptic RSSs (cRSSs) from 11 chromosomal regions outside the immunoglobulin and TCR loci for functional testing.
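    Scoring potential RSSs, as DnaGrab does, reduces at its simplest to evaluating a candidate sequence against a probability model learned from known sites. The sketch below uses an independent-position position weight matrix rather than the correlation-aware Bayesian network models of RSSsite, and the counts are invented.

```python
# Sketch: log-odds scoring of a candidate signal sequence against a
# position weight matrix (PWM). RSSsite's DnaGrab uses richer models
# capturing inter-position correlations; this is the simplest member
# of that model family. Counts are invented.

import math

def pwm_from_counts(counts, pseudocount=1.0, background=0.25):
    """counts: list (one per position) of dicts base -> observed count."""
    pwm = []
    for col in counts:
        total = sum(col.values()) + 4 * pseudocount
        pwm.append({b: math.log((col.get(b, 0) + pseudocount) / total / background)
                    for b in "ACGT"})
    return pwm

def score(pwm, seq):
    return sum(col[b] for col, b in zip(pwm, seq))

# Toy 3-position fragment model (real RSS heptamers are longer):
counts = [{"C": 9, "A": 1}, {"A": 10}, {"C": 8, "T": 2}]
pwm = pwm_from_counts(counts)
print(score(pwm, "CAC") > score(pwm, "GGG"))  # -> True
```

    A genome scan then slides this scorer along the sequence and flags windows whose score exceeds a calibrated threshold, which is conceptually what identifying cRSS candidates involves.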

  3. Web-based tools for data analysis and quality assurance on a life-history trait database of plants of Northwest Europe

    NARCIS (Netherlands)

    Stadler, Michael; Ahlers, Dirk; Bekker, Rene M.; Finke, Jens; Kunzmann, Dierk; Sonnenschein, Michael

    2006-01-01

    Data mining techniques have rarely been used in ecology. To address the specific needs of scientists analysing data from a plant trait database developed during the LEDA project, a web-based data mining tool has been developed. This paper presents the DIONE data miner and the project it has

  4. GRAFLAB 2.3 for UNIX - A MATLAB database, plotting, and analysis tool: User's guide

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, W.N.

    1998-03-01

    This report is a user's manual for GRAFLAB, which is a new database, analysis, and plotting package that has been written entirely in the MATLAB programming language. GRAFLAB is currently used for data reduction, analysis, and archival. GRAFLAB was written to replace GRAFAID, which is a FORTRAN database, analysis, and plotting package that runs on VAX/VMS.

  5. Progress on statistical learning systems as data mining tools for the creation of automatic databases in Fusion environments

    Energy Technology Data Exchange (ETDEWEB)

    Vega, J., E-mail: jesus.vega@ciemat.e [JET-EFDA, Culham Science Center, OX14 3DB, Abingdon (United Kingdom); Asociacion EURATOM/CIEMAT para Fusion. Avda. Complutense, 22, 28040 Madrid (Spain); Murari, A. [JET-EFDA, Culham Science Center, OX14 3DB, Abingdon (United Kingdom); Associazione EURATOM-ENEA per la Fusione, Consorzio RFX, 4-35127 Padova (Italy); Ratta, G.A.; Gonzalez, S. [JET-EFDA, Culham Science Center, OX14 3DB, Abingdon (United Kingdom); Asociacion EURATOM/CIEMAT para Fusion. Avda. Complutense, 22, 28040 Madrid (Spain); Dormido-Canto, S. [JET-EFDA, Culham Science Center, OX14 3DB, Abingdon (United Kingdom); Dpto. Informatica y Automatica, UNED, Madrid (Spain)

    2010-07-15

    Nowadays, processing all the information in a fusion database is a much more important issue than acquiring more data. Although fusion devices typically produce tens of thousands of discharges, specialized databases for physics studies are normally limited to a few tens of shots. This is because these databases are almost always generated manually, which is a very time consuming and unreliable activity. The development of automatic methods to create specialized databases ensures, first, the reduction of human effort to identify and locate physical events; second, the standardization of criteria (reducing the vulnerability to human errors); and, third, the improvement of statistical relevance. Classification and regression techniques have been used for these purposes. The objective has been the automatic recognition of physical events (which can appear in a random and/or infrequent way) in waveforms and video movies. Results are shown for the JET database.

  6. IDEAS international contamination database: a compilation of published internal contamination cases. A tool for the internal dosimetry community

    International Nuclear Information System (INIS)

    Hurtgen, C.

    2007-01-01

    The aim of the IDEAS project was to develop General Guidelines for the Assessment of Internal Dose from Monitoring Data. The project was divided into 5 Work Packages for the major tasks. Work Package 1, entitled Collection of incorporation cases, was devoted to the collection of data by means of bibliographic research (a survey of the open literature), contacting and collecting data from specific organisations, and using information from existing databases on incorporation cases. To ensure that the guidelines would be applicable to a wide range of practical situations, a database of cases of internal contamination including monitoring data suitable for dose assessment was compiled. The IDEAS Bibliography database and the IDEAS Internal Contamination database were prepared, and some reference cases were selected for use in Work Package 3. The other Work Packages of the IDEAS project (WP-2 Preparation of evaluation software, WP-3 Evaluation of incorporation cases, WP-4 Development of the general guidelines and WP-5 Practical testing of general guidelines) have been described in detail elsewhere and can be found on the IDEAS website. References from the open literature containing information on cases of internal contamination, from which intakes and committed doses could be assessed, were compiled into a database. The IDEAS Bibliography Database includes references to papers which might (but were not certain to) contain such information, or which included references to papers which contained such information. This database contains the usual bibliographical information: authors' name(s), year of publication, title of publication and the journal or report number. Up to now, a comprehensive Bibliography Database containing 563 references has been compiled. Not surprisingly, more than half of the references are from the Health Physics and Radiation Protection Dosimetry journals. The next step was for the partners of the IDEAS project to obtain the references

  7. Federal databases

    International Nuclear Information System (INIS)

    Welch, M.J.; Welles, B.W.

    1988-01-01

    Accident statistics on all modes of transportation are available as risk assessment analytical tools through several federal agencies. This paper reports on an examination of those accident databases, carried out through personal contact with the federal staff responsible for administering the database programs. This activity, sponsored by the Department of Energy through Sandia National Laboratories, is an overview of the national accident data on highway, rail, air, and marine shipping. For each mode, the definition or reporting requirements of an accident are determined and the method of entering the accident data into the database is established. Availability of the database to others, ease of access, costs, and whom to contact were prime questions put to each of the database program managers. Additionally, how each agency uses the accident data was of major interest.

  8. DMPD: Ubiquitin: tool and target for intracellular NF-kappaB inhibitors. [Dynamic Macrophage Pathway CSML Database

    Lifescience Database Archive (English)

    Full Text Available Ubiquitin: tool and target for intracellular NF-kappaB inhibitors. PubmedID 16982211.

  9. The needs for chemistry standards, database tools and data curation at the chemical-biology interface (SLAS meeting)

    Science.gov (United States)

    This presentation will highlight known challenges with the production of high quality chemical databases and outline recent efforts made to address these challenges. Specific examples will be provided illustrating these challenges within the U.S. Environmental Protection Agency ...

  10. THE NASA AMES PAH IR SPECTROSCOPIC DATABASE VERSION 2.00: UPDATED CONTENT, WEB SITE, AND ON(OFF)LINE TOOLS

    Energy Technology Data Exchange (ETDEWEB)

    Boersma, C.; Mattioda, A. L.; Allamandola, L. J. [NASA Ames Research Center, MS 245-6, Moffett Field, CA 94035 (United States); Bauschlicher, C. W. Jr.; Ricca, A. [NASA Ames Research Center, MS 230-3, Moffett Field, CA 94035 (United States); Cami, J.; Peeters, E.; De Armas, F. Sánchez; Saborido, G. Puerta [SETI Institute, 189 Bernardo Avenue 100, Mountain View, CA 94043 (United States); Hudgins, D. M., E-mail: Christiaan.Boersma@nasa.gov [NASA Headquarters, MS 3Y28, 300 E St. SW, Washington, DC 20546 (United States)

    2014-03-01

    A significantly updated version of the NASA Ames PAH IR Spectroscopic Database, the first major revision since its release in 2010, is presented. The current version, version 2.00, contains 700 computational and 75 experimental spectra compared, respectively, with 583 and 60 in the initial release. The spectra span the 2.5-4000 μm (4000-2.5 cm⁻¹) range. New tools are available on the site that allow one to analyze spectra in the database and compare them with imported astronomical spectra, as well as a suite of IDL object classes (a collection of programs utilizing IDL's object-oriented programming capabilities) that permits offline analysis, called the AmesPAHdbIDLSuite. Most noteworthy among the additions are the extension of the computational spectroscopic database to include a number of significantly larger polycyclic aromatic hydrocarbons (PAHs), the ability to visualize the molecular atomic motions corresponding to each vibrational mode, and a new tool that allows one to perform a non-negative least-squares fit of an imported astronomical spectrum with PAH spectra in the computational database. Finally, a methodology is described in the Appendix, and implemented using the AmesPAHdbIDLSuite, that allows the user to enforce charge balance during the fitting procedure.
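    The non-negative least-squares fit mentioned above decomposes an observed spectrum into a non-negative combination of database spectra. It can be sketched as follows, in Python rather than the suite's IDL; the tiny "spectra" are invented vectors, not actual PAH database entries.

```python
# Sketch: non-negative least-squares (NNLS) decomposition of an
# observed spectrum into library spectra. The matrix below stands in
# for database spectra sampled on a common wavelength grid.

import numpy as np
from scipy.optimize import nnls

# Columns of A are "database spectra" (invented):
A = np.array([[1.0, 0.0, 0.2],
              [0.5, 1.0, 0.1],
              [0.0, 0.5, 1.0],
              [0.2, 0.1, 0.8]])

# An "observed" spectrum built from the first two components:
b = 2.0 * A[:, 0] + 0.5 * A[:, 1]

weights, residual = nnls(A, b)
print(np.round(weights, 6))  # contribution of each library spectrum
print(round(residual, 6))    # fit residual
```

    The non-negativity constraint is what makes the recovered weights physically interpretable as abundances of each PAH contributor.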

  11. Towards secondary use of heterogeneous radio-oncological data for retrospective clinical trials: service-oriented connection of a central research database with image analysis tools

    Science.gov (United States)

    Bougatf, Nina; Bendl, Rolf; Debus, Jürgen

    2015-03-01

    Our overall objective is the utilization of heterogeneous and distributed radio-oncological data in retrospective clinical trials. Previously, we successfully introduced a central research database for the collection of heterogeneous data from distributed systems. The next step is the integration of image analysis tools into the standard retrieval process, so that analyses for complex medical questions can be run automatically and are thereby greatly simplified. In radiation oncology, recurrence analysis is a central approach for the evaluation of therapeutic concepts. However, various analysis steps have to be performed, such as image registration, dose transformation and dose statistics. In this paper we show the integration of image analysis tools into the standard retrieval process by connecting them with our central research database using a service-oriented approach. A concrete problem from recurrence analysis has been selected to prove our concept exemplarily. We implemented service-oriented data collection and analysis tools for use in a central analysis platform, which is based on a workflow management system. An analysis workflow has been designed that first identifies patients in the research database fulfilling the inclusion criteria, then collects the relevant imaging data, and finally analyzes the imaging data automatically. After successful workflow execution, the results are available for further evaluation by a physician. As a result, the central research database has been connected successfully with automatic data collection and image analysis tools, and the feasibility of our service-oriented approach has been demonstrated. In conclusion, our approach will simplify retrospective clinical trials in our department in the future.

  12. CANGS DB: a stand-alone web-based database tool for processing, managing and analyzing 454 data in biodiversity studies.

    Science.gov (United States)

    Pandey, Ram Vinay; Nolte, Viola; Boenigk, Jens; Schlötterer, Christian

    2011-06-30

    Next generation sequencing (NGS) is widely used in metagenomic and transcriptomic analyses in biodiversity. The ease of data generation provided by NGS platforms has allowed researchers to perform these analyses on their particular study systems. In particular the 454 platform has become the preferred choice for PCR amplicon based biodiversity surveys because it generates the longest sequence reads. Nevertheless, the handling and organization of massive amounts of sequencing data poses a major problem for the research community, particularly when multiple researchers are involved in data acquisition and analysis. An integrated and user-friendly tool, which performs quality control, read trimming, PCR primer removal, and data organization is desperately needed, therefore, to make data interpretation fast and manageable. We developed CANGS DB (Cleaning and Analyzing Next Generation Sequences DataBase), a flexible, stand-alone and user-friendly integrated database tool. CANGS DB is specifically designed to organize and manage the massive amount of sequencing data arising from various NGS projects. CANGS DB also provides an intuitive user interface for sequence trimming and quality control, taxonomy analysis and rarefaction analysis. Our database tool can be easily adapted to handle multiple sequencing projects in parallel with different sample information, amplicon sizes, primer sequences, and quality thresholds, which makes this software especially useful for non-bioinformaticians. Furthermore, CANGS DB is especially suited for projects where multiple users need to access the data. CANGS DB is available at http://code.google.com/p/cangsdb/. CANGS DB provides a simple and user-friendly solution to process, store and analyze 454 sequencing data. Being a local database that is accessible through a user-friendly interface, CANGS DB provides the perfect tool for collaborative amplicon based biodiversity surveys without requiring prior bioinformatics skills.

  13. CANGS DB: a stand-alone web-based database tool for processing, managing and analyzing 454 data in biodiversity studies

    Directory of Open Access Journals (Sweden)

    Schlötterer Christian

    2011-06-01

    Full Text Available Abstract Background Next generation sequencing (NGS) is widely used in metagenomic and transcriptomic analyses in biodiversity. The ease of data generation provided by NGS platforms has allowed researchers to perform these analyses on their particular study systems. In particular the 454 platform has become the preferred choice for PCR amplicon based biodiversity surveys because it generates the longest sequence reads. Nevertheless, the handling and organization of massive amounts of sequencing data poses a major problem for the research community, particularly when multiple researchers are involved in data acquisition and analysis. An integrated and user-friendly tool, which performs quality control, read trimming, PCR primer removal, and data organization is desperately needed, therefore, to make data interpretation fast and manageable. Findings We developed CANGS DB (Cleaning and Analyzing Next Generation Sequences DataBase), a flexible, stand-alone and user-friendly integrated database tool. CANGS DB is specifically designed to organize and manage the massive amount of sequencing data arising from various NGS projects. CANGS DB also provides an intuitive user interface for sequence trimming and quality control, taxonomy analysis and rarefaction analysis. Our database tool can be easily adapted to handle multiple sequencing projects in parallel with different sample information, amplicon sizes, primer sequences, and quality thresholds, which makes this software especially useful for non-bioinformaticians. Furthermore, CANGS DB is especially suited for projects where multiple users need to access the data. CANGS DB is available at http://code.google.com/p/cangsdb/. Conclusion CANGS DB provides a simple and user-friendly solution to process, store and analyze 454 sequencing data. Being a local database that is accessible through a user-friendly interface, CANGS DB provides the perfect tool for collaborative amplicon based biodiversity surveys

  14. A comprehensive aligned nifH gene database: a multipurpose tool for studies of nitrogen-fixing bacteria.

    Science.gov (United States)

    Gaby, John Christian; Buckley, Daniel H

    2014-01-01

    We describe a nitrogenase gene sequence database that facilitates analysis of the evolution and ecology of nitrogen-fixing organisms. The database contains 32 954 aligned nitrogenase nifH sequences linked to phylogenetic trees and associated sequence metadata. The database includes 185 linked multigene entries including full-length nifH, nifD, nifK and 16S ribosomal RNA (rRNA) gene sequences. Evolutionary analyses enabled by the multigene entries support an ancient horizontal transfer of nitrogenase genes between Archaea and Bacteria and provide evidence that nifH has a different history of horizontal gene transfer from the nifDK enzyme core. Further analyses show that lineages in nitrogenase cluster I and cluster III have different rates of substitution within nifD, suggesting that nifD is under different selection pressure in these two lineages. Finally, we find that the genetic divergence of nifH and 16S rRNA genes does not correlate well at the sequence dissimilarity values used commonly to define microbial species, as strains having <3% sequence dissimilarity in their 16S rRNA genes can have up to 23% dissimilarity in nifH. The nifH database has a number of uses including phylogenetic and evolutionary analyses, the design and assessment of primers/probes and the evaluation of nitrogenase sequence diversity. Database URL: http://www.css.cornell.edu/faculty/buckley/nifh.htm.
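    The 16S rRNA versus nifH comparison above rests on percent sequence dissimilarity between aligned genes. A minimal sketch of that calculation follows, with toy sequences rather than database entries; real pipelines also handle ambiguity codes and alignment quality.

```python
# Sketch: percent dissimilarity between two aligned gene sequences.
# Columns where either sequence has a gap are skipped. Toy data only.

def dissimilarity(a, b, gap="-"):
    pairs = [(x, y) for x, y in zip(a, b) if x != gap and y != gap]
    mismatches = sum(x != y for x, y in pairs)
    return 100.0 * mismatches / len(pairs)

seq16s = "ACGTACGTACGTACGTACGT"
seqnif = "ACGTACGAACGTTCGTACGA"
print(dissimilarity(seq16s, seqnif))  # -> 15.0 (3 of 20 columns differ)
```

    Applied gene by gene, this is the quantity whose poor correlation between 16S rRNA and nifH the abstract reports.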

  15. Failure database and tools for wind turbine availability and reliability analyses. The application of reliability data for selected wind turbines

    DEFF Research Database (Denmark)

    Kozine, Igor; Christensen, P.; Winther-Jensen, M.

    2000-01-01

    The objective of this project was to develop and establish a database for collecting reliability and reliability-related data, for assessing the reliability of wind turbine components and subsystems and of wind turbines as a whole, and for assessing wind turbine availability while ranking the contributions at both the component and system levels. The project resulted in a software package combining a failure database with programs for predicting WTB availability and the reliability of all the components and systems, especially the safety system. The database was established with the Microsoft Access Database Management System; the software for reliability and availability assessments was created with Visual Basic. The report consists of a description of the theoretical
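    The availability predictions described above combine per-component reliability figures. A minimal sketch using the standard MTBF/MTTR steady-state formula with series logic follows; the component values are invented and the report's actual models are more detailed.

```python
# Sketch: steady-state availability from mean time between failures
# (MTBF) and mean time to repair (MTTR), combined over components in
# series. Component figures below are invented.

def availability(mtbf_h, mttr_h):
    return mtbf_h / (mtbf_h + mttr_h)

def system_availability(components):
    """Series logic: the turbine is up only if every component is up."""
    a = 1.0
    for mtbf, mttr in components:
        a *= availability(mtbf, mttr)
    return a

turbine = [(8000.0, 24.0),   # e.g. gearbox
           (12000.0, 48.0),  # e.g. generator
           (5000.0, 8.0)]    # e.g. control/safety system
print(round(system_availability(turbine), 4))
```

    Ranking the per-component availability factors is then a direct way to see which subsystem contributes most to downtime.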

  16. Data management with a landslide inventory of the Franconian Alb (Germany) using a spatial database and GIS tools

    Science.gov (United States)

    Bemm, Stefan; Sandmeier, Christine; Wilde, Martina; Jaeger, Daniel; Schwindt, Daniel; Terhorst, Birgit

    2014-05-01

    The area of the Swabian-Franconian cuesta landscape (Southern Germany) is highly prone to landslides. This was apparent in the late spring of 2013, when numerous landslides occurred as a consequence of heavy and long-lasting rainfalls. The specific climatic situation caused numerous damages with serious impact on settlements and infrastructure. Knowledge of the spatial distribution of landslides, their processes and characteristics is important for evaluating the potential risk that can arise from mass movements in those areas. In the frame of two projects, about 400 landslides were mapped and detailed data sets were compiled between 2011 and 2014 at the Franconian Alb. The studies are related to the project "Slope stability and hazard zones in the northern Bavarian cuesta" (DFG, German Research Foundation) as well as to the LfU (The Bavarian Environment Agency) within the project "Georisks and climate change - hazard indication map Jura". The central goal of the present study is to create a spatial database for landslides. The database should contain all fundamental parameters to characterize the mass movements and should provide the potential for secure data storage and data management, as well as statistical evaluations. The spatial database was created with PostgreSQL, an object-relational database management system, and PostGIS, a spatial database extender for PostgreSQL, which provides the possibility to store spatial and geographic objects and to connect to several GIS applications, like GRASS GIS, SAGA GIS, QGIS and GDAL, a geospatial library (Obe et al. 2011). Database access for querying, importing, and exporting spatial and non-spatial data is ensured by using GUI or non-GUI connections. The database allows the use of procedural languages for writing advanced functions in the R, Python or Perl programming languages. It is possible to work directly with the (spatial) data entirety of the database in R. The inventory of the database includes (amongst others
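    The flavor of such a spatial landslide inventory can be sketched as a schema plus a spatial query. The project uses PostgreSQL/PostGIS; the self-contained SQLite stand-in below uses plain coordinate columns only to illustrate the pattern (PostGIS would store a geometry column and filter with ST_Within/ST_MakeEnvelope), and all records are invented.

```python
# Sketch: a minimal landslide-inventory table and a bounding-box query.
# SQLite stand-in for illustration; the actual project uses
# PostgreSQL/PostGIS with real geometry types. Records are invented.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE landslide (
                 id INTEGER PRIMARY KEY,
                 type TEXT,         -- e.g. rotational slide, earthflow
                 mapped_on TEXT,    -- survey date
                 x REAL, y REAL     -- easting/northing (PostGIS: geometry)
               )""")
con.executemany("INSERT INTO landslide(type, mapped_on, x, y) VALUES (?,?,?,?)",
                [("rotational slide", "2013-06-02", 4405120.0, 5524310.0),
                 ("earthflow",        "2013-06-05", 4406890.0, 5525100.0),
                 ("rock fall",        "2012-09-14", 4391200.0, 5518700.0)])

# Bounding-box query (PostGIS equivalent: WHERE ST_Within(geom, envelope)):
rows = con.execute("""SELECT type FROM landslide
                      WHERE x BETWEEN 4405000 AND 4407000
                        AND y BETWEEN 5524000 AND 5526000""").fetchall()
print([r[0] for r in rows])  # landslides inside the query window
```

    Keeping the inventory in a relational store like this is what enables the statistical evaluations and the R/GIS connections mentioned above.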

  17. JASPAR, the open access database of transcription factor-binding profiles: new content and tools in the 2008 update

    DEFF Research Database (Denmark)

    Bryne, J.C.; Valen, E.; Tang, M.H.E.

    2008-01-01

    database (the JASPAR CORE sub-database) has increased by 12 in size, and three new specialized sub-databases have been added. New functions include clustering of matrix models by similarity, generation of random matrices by sampling from selected sets of existing models, and a language-independent Web Service

  18. Database for Regional Geology, Phase 1: A Tool for Informing Regional Evaluations of Alternative Geologic Media and Decision Making

    Energy Technology Data Exchange (ETDEWEB)

    Perry, Frank Vinton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kelley, Richard E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Birdsell, Suzanne M. [Clark Univ., Worcester, MA (United States); Lugo, Alexander Bryan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dobson, Patrick [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Houseworth, James [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-11-12

    Reported is progress in the following areas: Phase 1 and 2 websites for the regional geology GIS database; terrane maps of crystalline basement rocks; inventory of shale formations in the US; and rock properties and in-situ conditions for shale estimated from sonic velocity measurements.

  19. IMI - oral biopharmaceutics tools project - evaluation of bottom-up PBPK prediction success part 1: Characterisation of the OrBiTo database of compounds.

    Science.gov (United States)

    Margolskee, Alison; Darwich, Adam S; Pepin, Xavier; Pathak, Shriram M; Bolger, Michael B; Aarons, Leon; Rostami-Hodjegan, Amin; Angstenberger, Jonas; Graf, Franziska; Laplanche, Loic; Müller, Thomas; Carlert, Sara; Daga, Pankaj; Murphy, Dónal; Tannergren, Christer; Yasin, Mohammed; Greschat-Schade, Susanne; Mück, Wolfgang; Muenster, Uwe; van der Mey, Dorina; Frank, Kerstin Julia; Lloyd, Richard; Adriaenssen, Lieve; Bevernage, Jan; De Zwart, Loeckie; Swerts, Dominique; Tistaert, Christophe; Van Den Bergh, An; Van Peer, Achiel; Beato, Stefania; Nguyen-Trung, Anh-Thu; Bennett, Joanne; McAllister, Mark; Wong, Mei; Zane, Patricia; Ollier, Céline; Vicat, Pascale; Kolhmann, Markus; Marker, Alexander; Brun, Priscilla; Mazuir, Florent; Beilles, Stéphane; Venczel, Marta; Boulenc, Xavier; Loos, Petra; Lennernäs, Hans; Abrahamsson, Bertil

    2017-01-01

    Predicting oral bioavailability (Foral) is important for estimating the systemic exposure of orally administered drugs. Physiologically-based pharmacokinetic (PBPK) modelling and simulation have recently been applied extensively in biopharmaceutics. The Oral Biopharmaceutical Tools (OrBiTo) project (Innovative Medicines Initiative) aims to develop and improve biopharmaceutical tools, including PBPK absorption models. A large-scale evaluation of PBPK models may be considered the first step. Here we characterise the OrBiTo active pharmaceutical ingredient (API) database for use in a large-scale simulation study. The OrBiTo database comprised 83 APIs and 1475 study arms. The database displayed a median logP of 3.60 (2.40-4.58), human blood-to-plasma ratio of 0.62 (0.57-0.71), and fraction unbound in plasma of 0.05 (0.01-0.17). The database mainly consisted of basic compounds (48.19%) and Biopharmaceutics Classification System class II compounds (55.81%). Median human intravenous clearance was 16.9 L/h (interquartile range: 11.6-43.6 L/h; n=23), and volume of distribution was 80.8 L (54.5-239 L; n=23). The majority of oral formulations were immediate release (IR: 87.6%). Human Foral displayed a median of 0.415 (0.203-0.724; n=22) for IR formulations. The OrBiTo database was found to be largely representative of previously published datasets. 43 of the APIs were found to satisfy the minimum inclusion criteria for the simulation exercise, although many of these have significant gaps in other key parameters, which could potentially affect the interpretability of the simulation outcome. Nevertheless, the OrBiTo simulation exercise represents a unique opportunity to perform a large-scale evaluation of the PBPK approach to predicting oral biopharmaceutics. Copyright © 2016 Elsevier B.V. All rights reserved.
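
    The summary statistics quoted above (medians with interquartile ranges) are straightforward to reproduce; a minimal sketch with Python's statistics module, using invented toy values rather than the OrBiTo data:

```python
from statistics import median, quantiles

# Toy logP values (not OrBiTo data) to show how the kind of summary
# statistics quoted above (median and interquartile range) are computed.
logp = [1.0, 2.0, 3.0, 4.0, 5.0]

med = median(logp)
q1, q2, q3 = quantiles(logp, n=4)  # quartiles; q2 equals the median
iqr = (q1, q3)                     # reported as "median (Q1-Q3)" in the record
```

    Note that `statistics.quantiles` defaults to the "exclusive" method; other conventions (e.g. the "inclusive" method) can give slightly different quartiles on small samples.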

  20. FY11 Development of Fully Coupled Repository THCM Simulation Tools Report: Thermodynamic Database Development, with Emphasis on Complex Clay Minerals

    Energy Technology Data Exchange (ETDEWEB)

    Wolery, Thomas J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Tayne, Andrew [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jove-Colon, Carlos F. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2011-07-29

    Thermodynamic data are essential for understanding and evaluating geochemical processes, such as in speciation-solubility calculations, reaction-path modeling, or reactive transport simulation. These data are required to evaluate both equilibrium states and the kinetic approach to such states (via the affinity term in rate laws). The development of thermodynamic databases for these purposes has a long history in geochemistry (e.g., Garrels and Christ, 1965; Helgeson et al., 1969; Helgeson et al., 1978; Johnson et al., 1992; Robie and Hemingway, 1995), paralleled by related and applicable work in the larger scientific community (e.g., Wagman et al., 1982, 1989; Cox et al., 1989; Barin and Platzki, 1995; Binneweis and Milke, 1999). The Yucca Mountain Project developed two qualified thermodynamic databases to model geochemical processes, including ones involving repository components such as spent fuel. The first of the two (BSC, 2007a) was for systems containing dilute aqueous solutions only; the other (BSC, 2007b) was for systems involving concentrated aqueous solutions, incorporating a model for such systems based on Pitzer's (1991) equations. A 25°C-only database with similarities to the latter was also developed for WIPP (cf. Xiong, 2005). The YMP dilute-systems database is widely used in the geochemistry community for a variety of applications involving rock/water interactions. The purpose of the present task is to improve these databases for work on the Used Fuel Disposition Project and to maintain them in a form that will support qualification for the development of future underground high-level nuclear waste disposal.

  1. Preliminary development of a diabetic foot ulcer database from a wound electronic medical record: a tool to decrease limb amputations.

    Science.gov (United States)

    Golinko, Michael S; Margolis, David J; Tal, Adit; Hoffstad, Ole; Boulton, Andrew J M; Brem, Harold

    2009-01-01

    Our objective was to create a practical standardized database of clinically relevant variables in the care of patients with diabetes and foot ulcers. Numerical clinical variables such as age, baseline laboratory values, and wound area were extracted from the wound electronic medical record (WEMR). A coding system was developed to translate narrative data, culture, and pathology reports into discrete, quantifiable variables. Using data extracted from the WEMR, a diabetic foot ulcer-specific database incorporated the following tables: (1) demographics, medical history, and baseline laboratory values; (2) vascular testing data; (3) radiology data; (4) wound characteristics; and (5) wound debridement data including pathology, culture results, and amputation data. The database contains variables that can be easily exported for analysis. Amputation was studied in 146 patients who had at least two visits (i.e., two entries in the database). Analysis revealed that 19 (13%) patients underwent 32 amputations (nine major and 23 minor) in 23 limbs. There was a decreased risk of amputation, 0.87 (0.78, 1.00), using a proportional hazards model, associated with an increased number of visits and entries in the WEMR. Further analysis revealed no significant difference in age, gender, HbA1c%, cholesterol, white blood cell count, or prealbumin at baseline, whereas hemoglobin and albumin were significantly lower in the amputee group (p < 0.05). ... safety and quality of care, allowing clinicians to more easily identify a nonhealing wound and intervene. This report describes a method of capturing data relevant to the clinical care of a patient with a diabetic foot ulcer, and may enable clinicians to adapt such a system to their own patient population.
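
    The coding system for narrative reports is not specified in the record; a minimal sketch of how free-text culture reports might be translated into discrete variables (the keyword table and code values here are hypothetical):

```python
# Hypothetical keyword-to-code table in the spirit of the coding system
# described above; the actual WEMR code values are not given in the record.
CULTURE_CODES = {
    "staphylococcus aureus": "ORG_SA",
    "pseudomonas aeruginosa": "ORG_PA",
    "no growth": "ORG_NEG",
}

def code_culture_report(text):
    """Translate a free-text culture report into discrete, quantifiable codes."""
    lowered = text.lower()
    return sorted(code for phrase, code in CULTURE_CODES.items() if phrase in lowered)

codes = code_culture_report("Moderate growth of Staphylococcus aureus after 48 h")
```

    Once narrative reports are reduced to codes like these, they can be stored alongside the numerical variables and exported as columns for survival or regression analysis.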

  2. Digital bedrock mapping at the Geological Survey of Norway: BGS SIGMA tool and in-house database structure

    Science.gov (United States)

    Gasser, Deta; Viola, Giulio; Bingen, Bernard

    2016-04-01

    Since 2010, the Geological Survey of Norway has been implementing and continuously developing a digital workflow for geological bedrock mapping in Norway, from fieldwork to final product. Our workflow is based on the ESRI ArcGIS platform, and we use rugged Windows computers in the field. Three different hardware solutions have been tested over the past 5 years (2010-2015): (1) the Panasonic Toughbook CF-19 (2.3 kg), (2) the Panasonic Toughbook CF-H2 Field (1.6 kg) and (3) the Motion Computing F5t tablet (1.5 kg). For collection of point observations in the field we mainly use the SIGMA Mobile application for ESRI ArcGIS developed by the British Geological Survey, which allows mappers to store georeferenced comments, structural measurements, sample information, photographs, sketches, log information, etc. in a Microsoft Access database. The application is freely downloadable from the BGS website. For line and polygon work we use our in-house database, which is currently under revision. Our line database consists of three feature classes: (1) bedrock boundaries, (2) bedrock lineaments, and (3) bedrock lines, with each feature class having up to 24 different attribute fields. Our polygon database consists of one feature class with 38 attribute fields, enabling storage of various information concerning lithology, stratigraphic order, age, metamorphic grade and tectonic subdivision. The polygon and line databases are coupled via topology in ESRI ArcGIS, which allows us to edit them simultaneously. This approach has been applied in two large-scale 1:50 000 bedrock mapping projects, one in the Kongsberg domain of the Sveconorwegian orogen, and the other in the greater Trondheim area (Orkanger) in the Caledonian belt. The mapping projects combined collection of high-resolution geophysical data, digital acquisition of field data, and collection of geochronological, geochemical and petrological data. During the Kongsberg project, some 25000 field observation points were collected by eight

  3. Investigating the Potential Impacts of Energy Production in the Marcellus Shale Region Using the Shale Network Database and CUAHSI-Supported Data Tools

    Science.gov (United States)

    Brazil, L.

    2017-12-01

    The Shale Network's extensive database of water quality observations enables educational experiences about the potential impacts of resource extraction with real data. Through open source tools that are developed and maintained by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI), researchers, educators, and citizens can access and analyze the very same data that the Shale Network team has used in peer-reviewed publications about the potential impacts of hydraulic fracturing on water. The development of the Shale Network database has been made possible through collection efforts led by an academic team and involving numerous individuals from government agencies, citizen science organizations, and private industry. Thus far, CUAHSI-supported data tools have been used to engage high school students, university undergraduate and graduate students, as well as citizens so that all can discover how energy production impacts the Marcellus Shale region, which includes Pennsylvania and other nearby states. This presentation will describe these data tools, how the Shale Network has used them in developing educational material, and the resources available to learn more.

  4. A survey and evaluation of Web-based tools/databases for variant analysis of TCGA data.

    Science.gov (United States)

    Zhang, Zhuo; Li, Hao; Jiang, Shuai; Li, Ruijiang; Li, Wanying; Chen, Hebing; Bo, Xiaochen

    2018-03-29

    The Cancer Genome Atlas (TCGA) is a publicly funded project that aims to catalog and discover major cancer-causing genomic alterations with the goal of creating a comprehensive 'atlas' of cancer genomic profiles. The availability of this genome-wide information provides an unprecedented opportunity to expand our knowledge of tumourigenesis. Computational analytics and mining are frequently used as effective tools for exploring this byzantine series of biological and biomedical data. However, some of the more advanced computational tools are often difficult to understand or use, thereby limiting their application by scientists who do not have a strong computational background. Hence, it is of great importance to build user-friendly interfaces that allow both computational scientists and life scientists without a computational background to gain greater biological and medical insights. To that end, this survey was designed to systematically present available Web-based tools and facilitate the use of TCGA data for cancer research.

  5. Methodological framework, analytical tool and database for the assessment of climate change impacts, adaptation and vulnerability in Denmark

    DEFF Research Database (Denmark)

    Kaspersen, Per Skougaard; Halsnæs, Kirsten; Gregg, Jay Sterling

    This report was prepared at the request of and funded by the Coordination Unit for Research in Climate Change Adaptation (KFT). The report is a milestone of the project titled “Udvikling af metodisk ramme for dansk analytisk værktøj og database over klimasårbarhed og klimatilpasning”, funded by K...... in this report should be considered work-in-progress. The analysis was conducted by DHI as part of the DSF project Centre for Regional Change of the Earth System (CRES)....

  6. Critical Care Health Informatics Collaborative (CCHIC): Data, tools and methods for reproducible research: A multi-centre UK intensive care database.

    Science.gov (United States)

    Harris, Steve; Shi, Sinan; Brealey, David; MacCallum, Niall S; Denaxas, Spiros; Perez-Suarez, David; Ercole, Ari; Watkinson, Peter; Jones, Andrew; Ashworth, Simon; Beale, Richard; Young, Duncan; Brett, Stephen; Singer, Mervyn

    2018-04-01

    To build and curate a linkable multi-centre database of high-resolution longitudinal electronic health records (EHR) from adult Intensive Care Units (ICU). To develop a set of open-source tools to make these data 'research ready' while protecting patients' privacy, with a particular focus on anonymisation. We developed a scalable EHR processing pipeline for extracting, linking, normalising, curating and anonymising EHR data. Patient and public involvement was sought from the outset, and approval to hold these data was granted by the NHS Health Research Authority's Confidentiality Advisory Group (CAG). The data are held in a certified Data Safe Haven. We followed sustainable software development principles throughout, and defined and populated a common data model that links to other clinical areas. Longitudinal EHR data were loaded into the CCHIC database from eleven adult ICUs at 5 UK teaching hospitals. From January 2014 to January 2017, this amounted to 21,930 admissions (18,074 unique patients). Typical admissions have 70 data items pertaining to admission and discharge, and a median of 1030 (IQR 481-2335) time-varying measures. Training datasets were made available through virtual machine images emulating the data processing environment. An open-source R package, cleanEHR, was developed and released that transforms the data into a square table readily analysable by most statistical packages. A simple language-agnostic configuration file allows the user to select and clean variables, and impute missing data. An audit trail makes clear the provenance of the data at all times. Making health care data available for research is problematic. CCHIC is a unique multi-centre, longitudinal and linkable resource that prioritises patient privacy through the highest standards of data security, but also provides tools to clean, organise, and anonymise the data. We believe the development of such tools is essential if we are to meet the twin requirements of
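
    cleanEHR itself is an R package; as a language-neutral illustration, the long-to-"square"-table reshaping step it performs might be sketched in Python as follows (field names and values are invented):

```python
# Sketch of reshaping long-format time-varying EHR records into a "square"
# (wide) table, one row per (patient, timestamp), one column per item.
long_rows = [
    ("pt1", "2014-01-01T00:00", "hr", 82),
    ("pt1", "2014-01-01T00:00", "sbp", 120),
    ("pt1", "2014-01-01T01:00", "hr", 90),
]

def to_square(rows):
    items = sorted({r[2] for r in rows})          # the column set
    table = {}
    for pid, ts, item, value in rows:
        # missing items stay None, mirroring a sparse square table
        table.setdefault((pid, ts), dict.fromkeys(items))[item] = value
    return items, table

items, square = to_square(long_rows)
```

    A statistical package can then consume the square table directly, with missing cells left for imputation as the record describes.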

  7. Childhood immunization rates in rural Intibucá, Honduras: an analysis of a local database tool and community health center records for assessing and improving vaccine coverage.

    Science.gov (United States)

    He, Yuan; Zarychta, Alan; Ranz, Joseph B; Carroll, Mary; Singleton, Lori M; Wilson, Paria M; Schlaudecker, Elizabeth P

    2012-12-07

    Vaccines are highly effective at preventing infectious diseases in children, and prevention is especially important in resource-limited countries where treatment is difficult to access. In Honduras, the World Health Organization (WHO) reports very high immunization rates in children. To determine whether these estimates accurately depict the immunization coverage in non-urban regions of the country, we compared the WHO data to immunization rates obtained from a local database tool and community health center records in rural Intibucá, Honduras. We used data from two sources to comprehensively evaluate immunization rates in the area: 1) census data from a local database and 2) immunization data collected at health centers. We compared these rates using logistic regression, and we compared them to publicly available WHO-reported estimates using confidence interval inclusion. We found that mean immunization rates for each vaccine were high (range 84.4 to 98.8 percent), but rates recorded at the health centers were significantly higher than those reported from the census data (p ≤ 0.001). Combining the results from both databases, the mean rates of four out of five vaccines were less than the WHO-reported rates (p < 0.05), except for the diphtheria/tetanus/pertussis vaccine (p=0.02) and the oral polio vaccine (p ...). Immunization rates in Honduras were high across data sources, though most of the rates recorded in rural Honduras were less than the WHO-reported rates. Despite geographical difficulties and barriers to access, the local database and Honduran community health workers have developed a thorough system for ensuring that children receive their immunizations on time. The successful integration of community health workers and a database within the Honduran decentralized health system may serve as a model for other immunization programs in resource-limited countries where health care is less accessible.
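
    The confidence-interval-inclusion comparison described above can be illustrated with a simple normal-approximation interval for a coverage proportion (a simplified sketch, not the authors' exact method; the numbers in the usage example are invented):

```python
from math import sqrt

def prop_ci(successes, n, z=1.96):
    """Normal-approximation 95% CI for a coverage proportion."""
    p = successes / n
    half = z * sqrt(p * (1 - p) / n)
    return p - half, p + half

def differs_from(reported, successes, n):
    """True if the reported estimate falls outside the local data's CI."""
    lo, hi = prop_ci(successes, n)
    return not (lo <= reported <= hi)

# Toy example: 90 of 100 children immunized locally vs. a reported 98% rate.
disagrees = differs_from(0.98, 90, 100)
```

    On small samples a Wilson or exact binomial interval would be preferable to the normal approximation used here, but the inclusion test itself is the same.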

  8. New UV-source catalogs, UV spectral database, UV variables and science tools from the GALEX surveys

    Science.gov (United States)

    Bianchi, Luciana; de la Vega, Alexander; Shiao, Bernard; Bohlin, Ralph

    2018-03-01

    We present a new, expanded and improved catalog of ultraviolet (UV) sources from the GALEX All-Sky Imaging Survey: GUVcat_AIS (Bianchi et al. in Astrophys. J. Suppl. Ser. 230:24, 2017). The catalog includes 83 million unique sources (duplicate measurements and rim artifacts are removed) measured in the far-UV and near-UV. With respect to previous versions (Bianchi et al. in Mon. Not. R. Astron. Soc. 411:2770, 2011a; Adv. Space Res. 53:900-991, 2014), GUVcat_AIS covers a slightly larger area, 24,790 square degrees, and includes critical corrections and improvements, as well as new tags, in particular to identify sources in the footprint of extended objects, where pipeline source detection may fail and custom photometry may be necessary. The unique-source catalog facilitates studies of the density of UV sources and matching of the UV samples with databases at other wavelengths. We also present first results from two ongoing projects, addressing respectively UV variability searches on time scales from seconds to years by mining the GALEX photon archive, and the construction of a database of ~120,000 GALEX UV spectra (range ~1300-3000 Å), including quality and calibration assessment and classification of the grism (hence serendipitous) spectral sources.

  9. SU-F-P-35: A Multi-Institutional Plan Quality Checking Tool Built On Oncospace: A Shared Radiation Oncology Database System

    International Nuclear Information System (INIS)

    Bowers, M; Robertson, S; Moore, J; Wong, J; Phillips, M; Hendrickson, K; Evans, K; McNutt, T

    2016-01-01

    Purpose: Late toxicity from radiation to critical structures limits the possible dose in radiation therapy. Perfectly conformal treatment of a target is not realizable, so the clinician must accept a certain level of collateral radiation to nearby OARs. But how much? General guidelines exist for healthy-tissue sparing which guide RT treatment planning, but are these guidelines good enough to create the optimal plan given the individualized patient anatomy? We propose a means to evaluate the planned dose level to an OAR using a multi-institutional data-store of previously treated patients, so a clinician might reconsider planning objectives. Methods: The tool is built on Oncospace, a federated data-store system, which consists of planning data import, web-based analysis tools, and a database containing: 1) DVHs: dose by percent volume delivered to each ROI for each patient previously treated and included in the database; 2) Overlap Volume Histograms (OVHs): an anatomical measure defined as the percent volume of an ROI within a given distance of the target structures. Clinicians know which OARs are important to spare. For any ROI, Oncospace knows for which patients' anatomy that ROI was harder to plan in the past (the OVH is less). The planned dose should be close to the least dose of previous patients. The tool displays the dose those OARs were subjected to, and the clinician can make a determination about the planning objectives used. Multiple institutions contribute to the Oncospace Consortium, and their DVH and OVH data are combined and color-coded in the output. Results: The Oncospace website provides a plan quality display tool which identifies harder-to-treat patients and graphically displays the dose delivered to them for comparison with the proposed plan. Conclusion: The Oncospace Consortium manages a data-store of previously treated patients which can be used for quality checking new plans. Grant funding by Elekta.
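
    The Overlap Volume Histogram defined above (percent volume of an ROI within a given distance of the target) can be sketched directly from its definition; a toy example with hypothetical voxel coordinates:

```python
from math import dist  # Euclidean distance (Python 3.8+)

def ovh(oar_pts, target_pts, thresholds):
    """Fraction of OAR voxels whose nearest-target distance is <= each threshold."""
    nearest = [min(dist(o, t) for t in target_pts) for o in oar_pts]
    return [sum(d <= thr for d in nearest) / len(nearest) for thr in thresholds]

# Toy 2D geometry: one target voxel at the origin, four OAR voxels along an axis.
fractions = ovh([(1, 0), (2, 0), (3, 0), (4, 0)], [(0, 0)], [1.0, 2.0, 4.0])
```

    Real implementations operate on 3D voxel masks and use a distance transform rather than this brute-force pairwise search, but the histogram itself is the same quantity.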

  10. SU-F-P-35: A Multi-Institutional Plan Quality Checking Tool Built On Oncospace: A Shared Radiation Oncology Database System

    Energy Technology Data Exchange (ETDEWEB)

    Bowers, M; Robertson, S; Moore, J; Wong, J [Johns Hopkins University, Baltimore, MD (United States); Phillips, M [University Washington, Seattle, WA (United States); Hendrickson, K; Evans, K [University of Washington, Seattle, WA (United States); McNutt, T [Johns Hopkins University, Severna Park, MD (United States)

    2016-06-15

    Purpose: Late toxicity from radiation to critical structures limits the possible dose in radiation therapy. Perfectly conformal treatment of a target is not realizable, so the clinician must accept a certain level of collateral radiation to nearby OARs. But how much? General guidelines exist for healthy-tissue sparing which guide RT treatment planning, but are these guidelines good enough to create the optimal plan given the individualized patient anatomy? We propose a means to evaluate the planned dose level to an OAR using a multi-institutional data-store of previously treated patients, so a clinician might reconsider planning objectives. Methods: The tool is built on Oncospace, a federated data-store system, which consists of planning data import, web-based analysis tools, and a database containing: 1) DVHs: dose by percent volume delivered to each ROI for each patient previously treated and included in the database; 2) Overlap Volume Histograms (OVHs): an anatomical measure defined as the percent volume of an ROI within a given distance of the target structures. Clinicians know which OARs are important to spare. For any ROI, Oncospace knows for which patients' anatomy that ROI was harder to plan in the past (the OVH is less). The planned dose should be close to the least dose of previous patients. The tool displays the dose those OARs were subjected to, and the clinician can make a determination about the planning objectives used. Multiple institutions contribute to the Oncospace Consortium, and their DVH and OVH data are combined and color-coded in the output. Results: The Oncospace website provides a plan quality display tool which identifies harder-to-treat patients and graphically displays the dose delivered to them for comparison with the proposed plan. Conclusion: The Oncospace Consortium manages a data-store of previously treated patients which can be used for quality checking new plans. Grant funding by Elekta.

  11. Approaching Drosophila development through proteomic tools and databases: At the hub of the post-genomic era

    OpenAIRE

    Carmena, Ana

    2009-01-01

    The past decade has witnessed an explosion in the growth of proteomics. The completion of numerous genome sequences, the development of powerful protein analytical technologies, as well as the design of innovative bioinformatics tools have marked the beginning of a new post-genomic era. Proteomics, the large-scale analysis of proteins in an organism, organ or organelle encompasses different aspects: (1) the identification, analysis of post-translational modifications and quantification of pro...

  12. Using a genetic-fuzzy algorithm as a computer aided diagnosis tool on Saudi Arabian breast cancer database.

    Science.gov (United States)

    Alharbi, Abir; Tchier, F

    2017-04-01

    Computer-aided diagnosis has become one of the major research topics in medical diagnostics. In this research paper, we focus on designing an automated computer diagnosis system by combining two major methodologies, namely fuzzy rule-based systems and evolutionary genetic algorithms, and applying them to the Saudi Arabian breast cancer diagnosis database, to be employed for assisting physicians in the early detection of breast cancers, and hence obtaining an early computerized diagnosis complementary to that of physicians. Our hybrid genetic-fuzzy algorithm has produced optimized diagnosis systems that attain high classification performance; in fact, our best three-rule system obtained 97% accuracy, with simple and readily interpretable rules, and with a good degree of confidence of 91%. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. A Novel Forensic Tool for the Characterization and Comparison of Printing Ink Evidence: Development and Evaluation of a Searchable Database Using Data Fusion of Spectrochemical Methods.

    Science.gov (United States)

    Trejos, Tatiana; Torrione, Peter; Corzo, Ruthmara; Raeva, Ana; Subedi, Kiran; Williamson, Rhett; Yoo, Jong; Almirall, Jose

    2016-05-01

    A searchable printing ink database was designed and validated as a tool to improve the chemical information gathered from the analysis of ink evidence. The database contains 319 samples from printing sources that represent some of the global diversity in toner, inkjet, offset, and intaglio inks. Five analytical methods were used to generate data to populate the searchable database: FTIR, SEM-EDS, LA-ICP-MS, DART-MS, and Py-GC-MS. The search algorithm, based on partial least-squares discriminant analysis, generates a similarity "score" used for the association between similar samples. The performance of a particular analytical method in associating similar inks was found to depend on the ink type, with LA-ICP-MS performing best, followed by the SEM-EDS and DART-MS methods, while FTIR and Py-GC-MS were less useful for association but still useful for classification purposes. Fusion of the data collected from two complementary methods (i.e., LA-ICP-MS and DART-MS) improves the classification and association of similar inks. © 2016 American Academy of Forensic Sciences.
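
    The fusion step is described only at a high level; one common way to fuse scores from two methods is to z-normalise each method's similarity scores and average them per candidate. A minimal sketch with invented scores (the paper's PLS-DA scoring itself is not reproduced here):

```python
from statistics import mean, pstdev

def znorm(scores):
    """Put one method's similarity scores on a common (z-score) scale."""
    m, s = mean(scores), pstdev(scores)
    return [(x - m) / s for x in scores]

def fuse(scores_a, scores_b):
    """Average the two methods' normalised scores per candidate sample."""
    return [(a + b) / 2 for a, b in zip(znorm(scores_a), znorm(scores_b))]

# Hypothetical similarity of three database inks to a questioned ink,
# as scored by two complementary methods.
la_icp_ms = [0.9, 0.2, 0.4]
dart_ms   = [0.8, 0.1, 0.6]
fused = fuse(la_icp_ms, dart_ms)
best = fused.index(max(fused))  # candidate with the strongest combined support
```

    Normalising before averaging keeps one method's score scale from dominating the other, which is the usual motivation for score-level fusion.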

  14. Oceans of Data: In what ways can learning research inform the development of electronic interfaces and tools for use by students accessing large scientific databases?

    Science.gov (United States)

    Krumhansl, R. A.; Foster, J.; Peach, C. L.; Busey, A.; Baker, I.

    2012-12-01

    The practice of science and engineering is being revolutionized by the development of cyberinfrastructure for accessing near real-time and archived observatory data. Large cyberinfrastructure projects have the potential to transform the way science is taught in high school classrooms, making enormous quantities of scientific data available, giving students opportunities to analyze and draw conclusions from many kinds of complex data, and providing students with experiences using state-of-the-art resources and techniques for scientific investigations. However, online interfaces to scientific data are built by scientists for scientists, and their design can significantly impede broad use by novices. Knowledge relevant to the design of student interfaces to complex scientific databases is broadly dispersed among disciplines ranging from cognitive science to computer science and cartography and is not easily accessible to designers of educational interfaces. To inform efforts at bridging scientific cyberinfrastructure to the high school classroom, Education Development Center, Inc. and the Scripps Institution of Oceanography conducted an NSF-funded 2-year interdisciplinary review of literature and expert opinion pertinent to making interfaces to large scientific databases accessible to and usable by precollege learners and their teachers. Project findings are grounded in the fundamentals of Cognitive Load Theory, visual perception, schemata formation and Universal Design for Learning. The Knowledge Status Report (KSR) presents cross-cutting and visualization-specific guidelines that highlight how interface design features can address or ameliorate the challenges novice high school students face as they navigate complex databases to find data, and construct and look for patterns in maps, graphs, animations and other data visualizations. The guidelines present ways to make scientific databases more broadly accessible by: 1) adjusting the cognitive load imposed by the user

  15. Assessment of the structural and functional impact of in-frame mutations of the DMD gene, using the tools included in the eDystrophin online database.

    Science.gov (United States)

    Nicolas, Aurélie; Lucchetti-Miganeh, Céline; Yaou, Rabah Ben; Kaplan, Jean-Claude; Chelly, Jamel; Leturcq, France; Barloy-Hubler, Frédérique; Le Rumeur, Elisabeth

    2012-07-09

    Dystrophin is a large, essential protein of skeletal and heart muscle. It is a filamentous scaffolding protein with numerous binding domains. Mutations in the DMD gene, which encodes dystrophin, mostly result in the deletion of one or several exons and cause Duchenne (DMD) and Becker (BMD) muscular dystrophies. The most common DMD mutations are frameshift mutations resulting in an absence of dystrophin from tissues. In-frame DMD mutations are less frequent and result in a protein with partial wild-type dystrophin function. The aim of this study was to highlight structural and functional modifications of dystrophin caused by in-frame mutations. We developed a dedicated database for dystrophin, the eDystrophin database. It contains 209 different non-frameshifting mutations found in 945 patients from a French cohort and previous studies. Bioinformatics tools provide models of the three-dimensional structure of the protein at deletion sites, making it possible to determine whether the mutated protein retains the typical filamentous structure of dystrophin. An analysis of the structure of mutated dystrophin molecules showed that hybrid repeats were reconstituted at the deletion site in some cases. These hybrid repeats harbored the typical triple coiled-coil structure of native repeats, which may be correlated with better function in muscle cells. This new database focuses on the dystrophin protein and its modifications due to in-frame deletions in BMD patients. The observation of hybrid repeat reconstitution in some cases provides insight into phenotype-genotype correlations in dystrophin diseases and possible strategies for gene therapy. The eDystrophin database is freely available: http://edystrophin.genouest.org/.

  16. Navigating diabetes-related immune epitope data: resources and tools provided by the Immune Epitope Database (IEDB)

    Science.gov (United States)

    Vaughan, Kerrie; Peters, Bjoern; Mallone, Roberto; von Herrath, Matthias; Roep, Bart O.; Sette, Alessandro

    2014-01-01

    Background The Immune Epitope Database (IEDB), originally focused on infectious diseases, was recently expanded to allergy, transplantation and autoimmune diseases. Here we focus on diabetes, chosen as a prototype autoimmune disease. We utilize a combined tutorial and meta-analysis format, which demonstrates how common questions related to diabetes epitopes can be answered. Results A total of 409 references are captured in the IEDB describing >2,500 epitopes from diabetes-associated antigens. The vast majority of data were derived from GAD, insulin, IA-2/PTPRN, IGRP, ZnT8, HSP, and ICA-1, and the experiments related to T cell epitopes and MHC binding far outnumber B cell assays. We illustrate how to search by specific antigens, epitopes or host. Other examples include searching for tetramers or epitopes restricted by specific alleles or assays of interest, or searching based on the clinical status of the host. Conclusions The inventory of all published diabetes epitope data facilitates its access for the scientific community. While the global collection of primary data from the literature reflects potential investigational biases present in the literature, the flexible search approach allows users to perform queries tailored to their preferences, including or excluding data as appropriate. Moreover, the analysis highlights knowledge gaps and identifies areas for future investigation. PMID:25140192

  17. MAKER2: an annotation pipeline and genome-database management tool for second-generation genome projects.

    Science.gov (United States)

    Holt, Carson; Yandell, Mark

    2011-12-22

    Second-generation sequencing technologies are precipitating major shifts with regards to what kinds of genomes are being sequenced and how they are annotated. While the first generation of genome projects focused on well-studied model organisms, many of today's projects involve exotic organisms whose genomes are largely terra incognita. This complicates their annotation, because unlike first-generation projects, there are no pre-existing 'gold-standard' gene-models with which to train gene-finders. Improvements in genome assembly and the wide availability of mRNA-seq data are also creating opportunities to update and re-annotate previously published genome annotations. Today's genome projects are thus in need of new genome annotation tools that can meet the challenges and opportunities presented by second-generation sequencing technologies. We present MAKER2, a genome annotation and data management tool designed for second-generation genome projects. MAKER2 is a multi-threaded, parallelized application that can process second-generation datasets of virtually any size. We show that MAKER2 can produce accurate annotations for novel genomes where training-data are limited, of low quality or even non-existent. MAKER2 also provides an easy means to use mRNA-seq data to improve annotation quality; and it can use these data to update legacy annotations, significantly improving their quality. We also show that MAKER2 can evaluate the quality of genome annotations, and identify and prioritize problematic annotations for manual review. MAKER2 is the first annotation engine specifically designed for second-generation genome projects. MAKER2 scales to datasets of any size, requires little in the way of training data, and can use mRNA-seq data to improve annotation quality. It can also update and manage legacy genome annotation datasets.

  18. EST2uni: an open, parallel tool for automated EST analysis and database creation, with a data mining web interface and microarray expression data integration.

    Science.gov (United States)

    Forment, Javier; Gilabert, Francisco; Robles, Antonio; Conejero, Vicente; Nuez, Fernando; Blanca, Jose M

    2008-01-07

    Expressed sequence tag (EST) collections are composed of a high number of single-pass, redundant, partial sequences, which need to be processed, clustered, and annotated to remove low-quality and vector regions, eliminate redundancy and sequencing errors, and provide biologically relevant information. In order to provide a suitable way of performing the different steps in the analysis of the ESTs, flexible computation pipelines adapted to the local needs of specific EST projects have to be developed. Furthermore, EST collections must be stored in highly structured relational databases available to researchers through user-friendly interfaces which allow efficient and complex data mining, thus offering maximum capabilities for their full exploitation. We have created EST2uni, an integrated, highly-configurable EST analysis pipeline and data mining software package that automates the pre-processing, clustering, annotation, database creation, and data mining of EST collections. The pipeline uses standard EST analysis tools and the software has a modular design to facilitate the addition of new analytical methods and their configuration. Currently implemented analyses include functional and structural annotation, SNP and microsatellite discovery, integration of previously known genetic marker data and gene expression results, and assistance in cDNA microarray design. It can be run in parallel in a PC cluster in order to reduce the time necessary for the analysis. It also creates a web site linked to the database, showing collection statistics, with complex query capabilities and tools for data mining and retrieval. The software package presented here provides an efficient and complete bioinformatics tool for the management of EST collections which is very easy to adapt to the local needs of different EST projects. The code is freely available under the GPL license and can be obtained at http://bioinf.comav.upv.es/est2uni. This site also provides detailed instructions.

  19. EST2uni: an open, parallel tool for automated EST analysis and database creation, with a data mining web interface and microarray expression data integration

    Directory of Open Access Journals (Sweden)

    Nuez Fernando

    2008-01-01

    Background Expressed sequence tag (EST) collections are composed of a high number of single-pass, redundant, partial sequences, which need to be processed, clustered, and annotated to remove low-quality and vector regions, eliminate redundancy and sequencing errors, and provide biologically relevant information. In order to provide a suitable way of performing the different steps in the analysis of the ESTs, flexible computation pipelines adapted to the local needs of specific EST projects have to be developed. Furthermore, EST collections must be stored in highly structured relational databases available to researchers through user-friendly interfaces which allow efficient and complex data mining, thus offering maximum capabilities for their full exploitation. Results We have created EST2uni, an integrated, highly-configurable EST analysis pipeline and data mining software package that automates the pre-processing, clustering, annotation, database creation, and data mining of EST collections. The pipeline uses standard EST analysis tools and the software has a modular design to facilitate the addition of new analytical methods and their configuration. Currently implemented analyses include functional and structural annotation, SNP and microsatellite discovery, integration of previously known genetic marker data and gene expression results, and assistance in cDNA microarray design. It can be run in parallel in a PC cluster in order to reduce the time necessary for the analysis. It also creates a web site linked to the database, showing collection statistics, with complex query capabilities and tools for data mining and retrieval. Conclusion The software package presented here provides an efficient and complete bioinformatics tool for the management of EST collections which is very easy to adapt to the local needs of different EST projects. The code is freely available under the GPL license and can be obtained at http://bioinf.comav.upv.es/est2uni.

  20. Sicily and southern Calabria focal mechanism database: a valuable tool for local and regional stress-field determination

    Directory of Open Access Journals (Sweden)

    Luciano Scarfì

    2013-04-01

    In this work, we present a new catalog of focal mechanisms calculated for earthquakes recorded in Sicily and southern Calabria. It comprises about 300 solutions, for events with magnitudes ranging from 2.7 to 4.8 that occurred from 1999 to 2011. We used P-wave polarities to compute the fault-plane solutions. Two main goals are achieved. First, the catalog allows the stress regime and kinematics characterizing the studied area to be depicted at a regional and more local scale. In particular, moving along the tectonic lineament that extends from the Aeolian Islands to the Ionian Sea, there is a change from a regime characterized by sub-horizontal P-axes, ca. NW-SE directed, to an extensive one in the Calabro-Peloritan Arc, where T-axes striking in a NW-SE direction prevail. Our results also show that part of the seismicity is clustered along the main active seismogenic structures, of which the focal mechanisms indicate the kinematics. Finally, in the Etna volcano area, different stress fields act at different depths due to the combination of the regional tectonics, the strong pressurization of the deep magmatic system, and the dynamics of the shallower portion of the volcano. Second, we highlight that the catalog also represents a valuable tool, through the data distribution on the internet, for further studies directed towards improving our understanding of the geodynamic complexity of the region, and for a better characterization of the seismogenic sources.

  1. DATABASE AND ANALYTICAL TOOL DEVELOPMENT FOR THE MANAGEMENT OF DATA DERIVED FROM US DOE (NETL) FUNDED FINE PARTICULATE (PM 2.5) RESEARCH

    Energy Technology Data Exchange (ETDEWEB)

    Robinson P. Khosah; Charles G. Crawford

    2006-02-11

    Advanced Technology Systems, Inc. (ATS) was contracted by the U. S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project is being conducted in two phases. Phase One includes the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. Phase Two, which is currently underway, involves the development of a platform for on-line data analysis. Phase Two includes the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now into its forty-second month of development activities.

  2. DATABASE AND ANALYTICAL TOOL DEVELOPMENT FOR THE MANAGEMENT OF DATA DERIVED FROM US DOE (NETL) FUNDED FINE PARTICULATE (PM2.5) RESEARCH

    Energy Technology Data Exchange (ETDEWEB)

    Robinson P. Khosah; Charles G. Crawford

    2003-03-13

    Advanced Technology Systems, Inc. (ATS) was contracted by the U. S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project is being conducted in two phases. Phase 1, which is currently in progress and will take twelve months to complete, will include the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. In Phase 2, which will be completed in the second year of the project, a platform for on-line data analysis will be developed. Phase 2 will include the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now into its sixth month of Phase

  3. Database and Analytical Tool Development for the Management of Data Derived from US DOE (NETL) Funded Fine Particulate (PM2.5) Research

    Energy Technology Data Exchange (ETDEWEB)

    Robinson Khosah

    2007-07-31

    Advanced Technology Systems, Inc. (ATS) was contracted by the U. S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project was conducted in two phases. Phase One included the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. Phase Two involved the development of a platform for on-line data analysis. Phase Two included the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now technically completed.

  4. Database and Analytical Tool Development for the Management of Data Derived from US DOE (NETL) Funded Fine Particulate (PM2.5) Research

    Energy Technology Data Exchange (ETDEWEB)

    Robinson P. Khosah; Frank T. Alex

    2007-02-11

    Advanced Technology Systems, Inc. (ATS) was contracted by the U. S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project is being conducted in two phases. Phase One includes the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. Phase Two, which is currently underway, involves the development of a platform for on-line data analysis. Phase Two includes the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now into its forty-eighth month of development activities.

  5. Stackfile Database

    Science.gov (United States)

    deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher

    2013-01-01

    This software provides storage, retrieval, and analysis functionality for managing satellite altimetry data. It improves on existing database software in efficiency, analysis capability, flexibility, and documentation. It offers flexibility in the type of data that can be stored, and efficient retrieval across either the spatial or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as Topex and Jason. It was, however, designed to work with a wide variety of satellite measurement data (e.g., the Gravity Recovery And Climate Experiment, GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.
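    The retrieval pattern described above (queries across either the time domain or the spatial domain) can be sketched in a few lines. This is an illustrative in-memory model only; the class and method names (`MeasurementStore`, `by_time`, `by_region`) are hypothetical and not the actual Stackfile API.

```python
from collections import namedtuple

# Hypothetical record type: one altimetry measurement tagged with time and location.
Measurement = namedtuple("Measurement", "time lat lon value")

class MeasurementStore:
    """Minimal in-memory store supporting retrieval by time range or bounding box."""
    def __init__(self):
        self._records = []

    def add(self, m):
        self._records.append(m)

    def by_time(self, t0, t1):
        # Retrieval across the time domain.
        return [m for m in self._records if t0 <= m.time <= t1]

    def by_region(self, lat0, lat1, lon0, lon1):
        # Retrieval across the spatial domain.
        return [m for m in self._records
                if lat0 <= m.lat <= lat1 and lon0 <= m.lon <= lon1]

store = MeasurementStore()
store.add(Measurement(100.0, 12.5, 45.0, 0.31))
store.add(Measurement(200.0, -3.2, 120.0, 0.28))
store.add(Measurement(300.0, 12.7, 44.8, 0.33))

print(len(store.by_time(150.0, 350.0)))              # two records fall in this time window
print(len(store.by_region(12.0, 13.0, 44.0, 46.0)))  # two records fall in this bounding box
```

    A production store would back both query paths with indexes (e.g., sorted by time, spatially binned) rather than linear scans.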

  6. Visualization of multidimensional database

    Science.gov (United States)

    Lee, Chung

    2008-01-01

    The concept of multidimensional databases has been extensively researched and widely used in actual database applications. It plays an important role in contemporary information technology, but due to the complexity of its inner structure, database design is a complicated process and users have a hard time fully understanding and using such databases. An effective visualization tool for higher-dimensional information systems helps database designers and users alike. Most visualization techniques focus on displaying dimensional data using spreadsheets and charts. This may be sufficient for databases having three or fewer dimensions, but for higher dimensions, various combinations of projection operations are needed and a full grasp of the total database architecture is very difficult. This study reviews existing visualization techniques for multidimensional databases and then proposes an alternate approach to visualize a database of any dimension by adopting the diagramming tool proposed by Kiviat for software engineering processes. In this diagramming method, each dimension is represented by one branch of concentric spikes. This paper documents a C++ based visualization tool with extensive use of the OpenGL graphics library and GUI functions. Detailed examples of actual databases demonstrate the feasibility and effectiveness of visualizing multidimensional databases.
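    The Kiviat layout mentioned above places each dimension on its own spoke around a common center. A minimal sketch of the geometry (not the paper's C++/OpenGL tool): dimension i of an n-dimensional record is drawn at angle 2πi/n, at a radius proportional to its value scaled by that dimension's maximum.

```python
import math

def kiviat_points(values, max_values):
    """Map one record's dimension values onto evenly spaced spikes.
    Returns (x, y) coordinates; connecting them yields the Kiviat polygon."""
    n = len(values)
    pts = []
    for i, (v, vmax) in enumerate(zip(values, max_values)):
        angle = 2 * math.pi * i / n   # spoke direction for dimension i
        r = v / vmax                  # radius normalized to [0, 1]
        pts.append((r * math.cos(angle), r * math.sin(angle)))
    return pts

# A 4-dimensional record scaled against per-dimension maxima:
pts = kiviat_points([2.0, 3.0, 1.0, 4.0], [4.0, 3.0, 2.0, 4.0])
print(pts[0])  # first spoke lies on the positive x-axis: (0.5, 0.0)
```

    With the point list in hand, any plotting backend can draw the spikes and the polygon; the visual shape of the polygon is what lets a viewer compare records across many dimensions at once.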

  7. Linkage between the Danish National Health Service Prescription Database, the Danish Fetal Medicine Database, and other Danish registries as a tool for the study of drug safety in pregnancy

    DEFF Research Database (Denmark)

    Pedersen, Lars H; Petersen, Olav B; Nørgaard, Mette

    2016-01-01

    A linked population-based database is being created in Denmark for research on drug safety during pregnancy. It combines information from the Danish National Health Service Prescription Database (with information on all prescriptions reimbursed in Denmark since 2004), the Danish Fetal Medicine...... Database, the Danish National Registry of Patients, and the Medical Birth Registry. The new linked database will provide validated information on malformations diagnosed both prenatally and postnatally. The cohort from 2008 to 2014 will comprise 589,000 pregnancies with information on 424,000 pregnancies...

  8. Organ Impairment—Drug–Drug Interaction Database: A Tool for Evaluating the Impact of Renal or Hepatic Impairment and Pharmacologic Inhibition on the Systemic Exposure of Drugs

    Science.gov (United States)

    Yeung, CK; Yoshida, K; Kusama, M; Zhang, H; Ragueneau-Majlessi, I; Argon, S; Li, L; Chang, P; Le, CD; Zhao, P; Zhang, L; Sugiyama, Y; Huang, S-M

    2015-01-01

    The organ impairment and drug–drug interaction (OI-DDI) database is the first rigorously assembled database of pharmacokinetic drug exposure data from publicly available renal and hepatic impairment studies presented together with the maximum change in drug exposure from drug interaction inhibition studies. The database was used to conduct a systematic comparison of the effect of renal/hepatic impairment and pharmacologic inhibition on drug exposure. Additional applications are feasible with the public availability of this database. PMID:26380158

  9. A construction of standardized near infrared hyper-spectral teeth database: a first step in the development of reliable diagnostic tool for quantification and early detection of caries

    Science.gov (United States)

    Bürmen, Miran; Usenik, Peter; Fidler, Aleš; Pernuš, Franjo; Likar, Boštjan

    2011-03-01

    Dental caries is a disease characterized by demineralization of enamel crystals leading to the penetration of bacteria into the dentin and pulp. If left untreated, the disease can lead to pain, infection and tooth loss. Early detection of enamel demineralization resulting in increased enamel porosity, commonly known as white spots, is a difficult diagnostic task. Several papers reported on near infrared (NIR) spectroscopy to be a potentially useful noninvasive spectroscopic technique for early detection of caries lesions. However, the conducted studies were mostly qualitative and did not include the critical assessment of the spectral variability of the sound and carious dental tissues and influence of the water content. Such assessment is essential for development and validation of reliable qualitative and especially quantitative diagnostic tools based on NIR spectroscopy. In order to characterize the described spectral variability, a standardized diffuse reflectance hyper-spectral database was constructed by imaging 12 extracted human teeth with natural lesions of various degrees in the spectral range from 900 to 1700 nm with spectral resolution of 10 nm. Additionally, all the teeth were imaged by digital color camera. The influence of water content on the acquired spectra was characterized by monitoring the teeth during the drying process. The images were assessed by an expert, thereby obtaining the gold standard. By analyzing the acquired spectra we were able to accurately model the spectral variability of the sound dental tissues and identify the advantages and limitations of NIR hyper-spectral imaging.

  10. Comparing genomes: databases and computational tools for comparative analysis of prokaryotic genomes - DOI: 10.3395/reciis.v1i2.Sup.105en

    Directory of Open Access Journals (Sweden)

    Marcos Catanho

    2007-12-01

    Since the 1990s, the complete genetic code of more than 600 living organisms has been deciphered, such as bacteria, yeasts, protozoan parasites, invertebrates and vertebrates, including Homo sapiens, and plants. More than 2,000 other genome projects representing medical, commercial, environmental and industrial interests, or comprising model organisms, important for the development of scientific research, are currently in progress. The achievement of complete genome sequences of numerous species combined with the tremendous progress in computation that occurred in the last few decades allowed the use of new holistic approaches in the study of genome structure, organization and evolution, as well as in the field of gene prediction and functional classification. Numerous public or proprietary databases and computational tools have been created attempting to optimize the access to this information through the web. In this review, we present the main resources available through the web for comparative analysis of prokaryotic genomes. We concentrated on the group of mycobacteria that contains important human and animal pathogens. The birth of Bioinformatics and Computational Biology and the contributions of these disciplines to the scientific development of this field are also discussed.

  11. Computational tools and resources for metabolism-related property predictions. 1. Overview of publicly available (free and commercial) databases and software.

    Science.gov (United States)

    Peach, Megan L; Zakharov, Alexey V; Liu, Ruifeng; Pugliese, Angelo; Tawa, Gregory; Wallqvist, Anders; Nicklaus, Marc C

    2012-10-01

    Metabolism has been identified as a defining factor in drug development success or failure because of its impact on many aspects of drug pharmacology, including bioavailability, half-life and toxicity. In this article, we provide an outline and descriptions of the resources for metabolism-related property predictions that are currently either freely or commercially available to the public. These resources include databases with data on, and software for prediction of, several end points: metabolite formation, sites of metabolic transformation, binding to metabolizing enzymes and metabolic stability. We attempt to place each tool in historical context and describe, wherever possible, the data it was based on. For predictions of interactions with metabolizing enzymes, we show a typical set of results for a small test set of compounds. Our aim is to give a clear overview of the areas and aspects of metabolism prediction in which the currently available resources are useful and accurate, and the areas in which they are inadequate or missing entirely.

  12. Records Management Database

    Data.gov (United States)

    US Agency for International Development — The Records Management Database is a tool created in Microsoft Access specifically for USAID use. It contains metadata in order to access and retrieve the information...

  13. OTI Activity Database

    Data.gov (United States)

    US Agency for International Development — OTI's worldwide activity database is a simple and effective information system that serves as a program management, tracking, and reporting tool. In each country,...

  14. Product Licenses Database Application

    CERN Document Server

    Tonkovikj, Petar

    2016-01-01

    The goal of this project is to organize and centralize the data about software tools available to CERN employees, as well as provide a system that would simplify the license management process by providing information about the available licenses and their expiry dates. The project development process consists of two steps: modeling the products (software tools), product licenses, legal agreements, and other data related to these entities in a relational database, and developing the front-end user interface so that the user can interact with the database. The result is an ASP.NET MVC web application with interactive views for displaying and managing the data in the underlying database.
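    The first step above, modeling products and their licenses relationally, can be sketched with an in-memory SQLite database. The schema and all names (`product`, `license`, `expires`) are illustrative assumptions, not CERN's actual data model; the query shows the kind of expiry report such a system supports.

```python
import sqlite3

# Hypothetical minimal schema: each product may hold several licenses,
# each with a seat count and an ISO expiry date.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE license (
    id INTEGER PRIMARY KEY,
    product_id INTEGER NOT NULL REFERENCES product(id),
    seats INTEGER NOT NULL,
    expires TEXT NOT NULL
);
""")
conn.execute("INSERT INTO product VALUES (1, 'AnalysisTool'), (2, 'CADSuite')")
conn.execute("""INSERT INTO license VALUES
    (1, 1, 10, '2016-06-30'),
    (2, 1, 5,  '2017-12-31'),
    (3, 2, 20, '2016-03-01')""")

# Licenses expiring before a cutoff date, joined to their product:
rows = conn.execute("""
    SELECT p.name, l.expires FROM license l
    JOIN product p ON p.id = l.product_id
    WHERE l.expires < '2016-07-01'
    ORDER BY l.expires""").fetchall()
print(rows)  # [('CADSuite', '2016-03-01'), ('AnalysisTool', '2016-06-30')]
```

    ISO-formatted date strings sort lexicographically in date order, which is why a plain string comparison suffices for the expiry filter.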

  15. Linkage between the Danish National Health Service Prescription Database, the Danish Fetal Medicine Database, and other Danish registries as a tool for the study of drug safety in pregnancy

    Directory of Open Access Journals (Sweden)

    Pedersen LH

    2016-05-01

    Lars H Pedersen,1,2 Olav B Petersen,1,2 Mette Nørgaard,3 Charlotte Ekelund,4 Lars Pedersen,3 Ann Tabor,4 Henrik T Sørensen3 1Department of Clinical Medicine, Aarhus University, 2Department of Obstetrics and Gynecology, 3Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, 4Department of Fetal Medicine, Rigshospitalet, Copenhagen, Denmark Abstract: A linked population-based database is being created in Denmark for research on drug safety during pregnancy. It combines information from the Danish National Health Service Prescription Database (with information on all prescriptions reimbursed in Denmark since 2004), the Danish Fetal Medicine Database, the Danish National Registry of Patients, and the Medical Birth Registry. The new linked database will provide validated information on malformations diagnosed both prenatally and postnatally. The cohort from 2008 to 2014 will comprise 589,000 pregnancies with information on 424,000 pregnancies resulting in live-born children, ~420,000 pregnancies undergoing prenatal ultrasound scans, 65,000 miscarriages, and 92,000 terminations. It will be updated yearly with information on ~80,000 pregnancies. The cohort will enable identification of drug exposures associated with severe malformations, not only based on malformations diagnosed after birth but also including those having led to termination of pregnancy or miscarriage. Such combined data will provide a unique source of information for research on the safety of medications used during pregnancy. Keywords: malformations, teratology, therapeutic drug monitoring, epidemiological methods, registries

  16. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  17. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases such...

  18. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  19. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  20. Students are Confident Using Federated Search Tools as much as Single Databases. A Review of: Armstrong, A. (2009). Student perceptions of federated searching vs. single database searching. Reference Services Review, 37(3), 291-303. doi:10.1108/00907320910982785

    Directory of Open Access Journals (Sweden)

    Deena Yanofsky

    2011-09-01

    Objective – To measure students’ perceptions of the ease-of-use and efficacy of a federated search tool versus a single multidisciplinary database. Design – An evaluation worksheet, employing a combination of quantitative and qualitative questions. Setting – A required, first-year English composition course taught at the University of Illinois at Chicago (UIC). Subjects – Thirty-one undergraduate students completed and submitted the worksheet. Methods – Students attended two library instruction sessions. The first session introduced participants to basic Boolean searching (using AND only), selecting appropriate keywords, and searching for books in the library catalogue. In the second library session, students were handed an evaluation worksheet and, with no introduction to the process of searching article databases, were asked to find relevant articles on a research topic of their own choosing using both a federated search tool and a single multidisciplinary database. The evaluation worksheet was divided into four sections: step-by-step instructions for accessing the single multidisciplinary database and the federated search tool; space to record search strings in both resources; space to record the titles of up to five relevant articles; and a series of quantitative and qualitative questions regarding ease-of-use, relevancy of results, overall preference (if any) between the two resources, likeliness of future use, and other preferred research tools. Half of the participants received a worksheet with instructions to search the federated search tool before the single database; the order was reversed for the other half of the students. The evaluation worksheet was designed to be completed in one hour. Participant responses to qualitative questions were analyzed, codified, and grouped into thematic categories. If a student mentioned more than one factor in responding to a question, their response was recorded in multiple categories. Main Results

  1. The LHCb configuration database

    CERN Document Server

    Abadie, L; Van Herwijnen, Eric; Jacobsson, R; Jost, B; Neufeld, N

    2005-01-01

    The aim of the LHCb configuration database is to store information about all the controllable devices of the detector. The experiment's control system (which uses PVSS) will configure, start up and monitor the detector from the information in the configuration database. The database will contain devices with their properties, connectivity and hierarchy. The ability to store and rapidly retrieve huge amounts of data, and the navigability between devices, are important requirements. We have collected use cases to ensure the completeness of the design. Using the entity relationship modelling technique we describe the use cases as classes with attributes and links. We designed the schema for the tables using relational diagrams. This methodology has been applied to the TFC (switches) and DAQ system. Other parts of the detector will follow later. The database has been implemented using Oracle to benefit from central CERN database support. The project also foresees the creation of tools to populate, maintain, and co...
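The entity-relationship design described (devices with properties, connectivity, and hierarchy) can be sketched as a small relational schema. The table and column names below are illustrative assumptions, not the actual LHCb schema, and sqlite3 stands in for Oracle purely to keep the sketch runnable:

```python
import sqlite3

# Minimal sketch of a configuration-database schema: devices with
# properties, a parent/child hierarchy, and connectivity links.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE device (
    id        INTEGER PRIMARY KEY,
    name      TEXT UNIQUE NOT NULL,
    type      TEXT NOT NULL,
    parent_id INTEGER REFERENCES device(id)   -- hierarchy
);
CREATE TABLE device_property (
    device_id INTEGER REFERENCES device(id),
    key       TEXT NOT NULL,
    value     TEXT NOT NULL
);
CREATE TABLE link (                            -- connectivity between devices
    from_id INTEGER REFERENCES device(id),
    to_id   INTEGER REFERENCES device(id)
);
""")

conn.execute("INSERT INTO device VALUES (1, 'tfc_switch_0', 'switch', NULL)")
conn.execute("INSERT INTO device VALUES (2, 'daq_node_0', 'node', 1)")
conn.execute("INSERT INTO device_property VALUES (2, 'ip', '10.0.0.2')")
conn.execute("INSERT INTO link VALUES (1, 2)")

# Navigate the hierarchy: find the children of the switch.
children = conn.execute(
    "SELECT name FROM device WHERE parent_id = 1").fetchall()
print(children)  # [('daq_node_0',)]
```

A separate property table keeps the schema stable as new device types with new attributes are added, which matches the requirement that other detector subsystems follow later.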

  2. Inorganic Crystal Structure Database (ICSD) and Standardized Data and Crystal Chemical Characterization of Inorganic Structure Types (TYPIX)—Two Tools for Inorganic Chemists and Crystallographers

    Science.gov (United States)

    Fluck, Ekkehard

    1996-01-01

    The two databases ICSD and TYPIX are described. ICSD is a comprehensive compilation of crystal structure data of inorganic compounds (about 39 000 entries). TYPIX contains 3600 critically evaluated data sets representative of structure types formed by inorganic compounds. PMID:27805158

  3. The Characterization Tool: A knowledge-based stem cell, differentiated cell, and tissue database with a web-based analysis front-end.

    NARCIS (Netherlands)

    I. Wohlers (Inken); H. Stachelscheid; J. Borstlap; K. Zeilinger; J.C. Gerlach

    2009-01-01

    In the rapidly growing field of stem cell research, there is a need for universal databases and web-based applications that provide a common knowledge base on the characteristics of stem cells, differentiated cells, and tissues by collecting, processing, and making available diverse

  4. The Prediflood database. A new tool for an integrated approach to historical floods in Catalonia (NE Iberian Peninsula), AD 1033-2013

    Science.gov (United States)

    Barriendos, Mariano; Carles Balasch Solanes, Josep; Tuset, Jordi; Lluís Ruiz-Bellet, Josep

    2014-05-01

    Available information on historical floods can improve the management of hydroclimatic hazards. This approach is useful in ungauged basins or those with short instrumental data series. On the other hand, flood risk is increasing due to both the expansion of human land occupation and the modification of rainfall patterns in the present global climatic change scenario. Within the Prediflood Project, we have designed an integrated database of historical floods in Catalonia with the aim of feeding data to: 1) meteorological reconstruction and modelling, 2) hydrological and hydraulic reconstruction, and 3) evaluation of the human impacts of these floods. The first steps of the database design focus on spatial location and on the quality of the data sources at three levels: 1) historical documentary sources and newspapers contemporary with the floods, 2) local historiography, and 3) technical reports. After the application of historiographical methodologies, more than 2,300 flood records have been added to the database so far. Although the completion of the database is still a work in progress, the first analyses are already underway and focus on the largest floods with catastrophic effects simultaneously on more than 15 catchments: November 1617, October 1787, September 1842, May 1853, September 1874, January 1898, October 1907, October 1940, September 1962, November 1982, October 1994 and others.

  5. Database Replication

    Directory of Open Access Journals (Sweden)

    Marius Cristian MAZILU

    2010-12-01

    Full Text Available For someone who has worked in an environment in which the same database is used for data entry and reporting, or perhaps managed a single database server that was utilized by too many users, the advantages brought by data replication are clear. The main purpose of this paper is to emphasize those advantages as well as presenting the different types of Database Replication and the cases in which their use is recommended.

  6. The power tool

    International Nuclear Information System (INIS)

    HAYFIELD, J.P.

    1999-01-01

    POWER Tool--Planning, Optimization, Waste Estimating and Resourcing tool, a hand-held field estimating unit and relational database software tool for optimizing disassembly and final waste form of contaminated systems and equipment

  7. Database of tsunami scenario simulations for Western Iberia: a tool for the TRIDEC Project Decision Support System for tsunami early warning

    Science.gov (United States)

    Armigliato, Alberto; Pagnoni, Gianluca; Zaniboni, Filippo; Tinti, Stefano

    2013-04-01

    TRIDEC is a EU-FP7 Project whose main goal is, in general terms, to develop suitable strategies for the management of crises possibly arising in the Earth management field. The general paradigms adopted by TRIDEC to develop those strategies include intelligent information management, the capability of managing dynamically increasing volumes and dimensionality of information in complex events, and collaborative decision making in systems that are typically very loosely coupled. The two areas where TRIDEC applies and tests its strategies are tsunami early warning and industrial subsurface development. In the field of tsunami early warning, TRIDEC aims at developing a Decision Support System (DSS) that integrates 1) a set of seismic, geodetic and marine sensors devoted to the detection and characterisation of possible tsunamigenic sources and to monitoring the time and space evolution of the generated tsunami, 2) large-volume databases of pre-computed numerical tsunami scenarios, 3) a proper overall system architecture. Two test areas are dealt with in TRIDEC: the western Iberian margin and the eastern Mediterranean. In this study, we focus on the western Iberian margin with special emphasis on the Portuguese coasts. The strategy adopted in TRIDEC plans to populate two different databases, called "Virtual Scenario Database" (VSDB) and "Matching Scenario Database" (MSDB), both of which deal only with earthquake-generated tsunamis. In the VSDB we simulate numerically few large-magnitude events generated by the major known tectonic structures in the study area. Heterogeneous slip distributions on the earthquake faults are introduced to simulate events as "realistically" as possible. The members of the VSDB represent the unknowns that the TRIDEC platform must be able to recognise and match during the early crisis management phase. On the other hand, the MSDB contains a very large number (order of thousands) of tsunami simulations performed starting from many different

  8. Specialist Bibliographic Databases

    Science.gov (United States)

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarity with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find the source selection criteria particularly useful and may apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  9. International Society of Human and Animal Mycology (ISHAM)-ITS reference DNA barcoding database--the quality controlled standard tool for routine identification of human and animal pathogenic fungi.

    Science.gov (United States)

    Irinyi, Laszlo; Serena, Carolina; Garcia-Hermoso, Dea; Arabatzis, Michael; Desnos-Ollivier, Marie; Vu, Duong; Cardinali, Gianluigi; Arthur, Ian; Normand, Anne-Cécile; Giraldo, Alejandra; da Cunha, Keith Cassia; Sandoval-Denis, Marcelo; Hendrickx, Marijke; Nishikaku, Angela Satie; de Azevedo Melo, Analy Salles; Merseguel, Karina Bellinghausen; Khan, Aziza; Parente Rocha, Juliana Alves; Sampaio, Paula; da Silva Briones, Marcelo Ribeiro; e Ferreira, Renata Carmona; de Medeiros Muniz, Mauro; Castañón-Olivares, Laura Rosio; Estrada-Barcenas, Daniel; Cassagne, Carole; Mary, Charles; Duan, Shu Yao; Kong, Fanrong; Sun, Annie Ying; Zeng, Xianyu; Zhao, Zuotao; Gantois, Nausicaa; Botterel, Françoise; Robbertse, Barbara; Schoch, Conrad; Gams, Walter; Ellis, David; Halliday, Catriona; Chen, Sharon; Sorrell, Tania C; Piarroux, Renaud; Colombo, Arnaldo L; Pais, Célia; de Hoog, Sybren; Zancopé-Oliveira, Rosely Maria; Taylor, Maria Lucia; Toriello, Conchita; de Almeida Soares, Célia Maria; Delhaes, Laurence; Stubbe, Dirk; Dromer, Françoise; Ranque, Stéphane; Guarro, Josep; Cano-Lira, Jose F; Robert, Vincent; Velegraki, Aristea; Meyer, Wieland

    2015-05-01

    Human and animal fungal pathogens are a growing threat worldwide, leading to emerging infections and creating new risks for established ones. There is a growing need for rapid and accurate identification of pathogens to enable early diagnosis and targeted antifungal therapy. Morphological and biochemical identification methods are time-consuming and require trained experts. Alternatively, molecular methods such as DNA barcoding, a powerful and easy tool for rapid monophasic identification, offer a practical approach to species identification that is less demanding in terms of taxonomic expertise. However, its widespread use is still limited by a lack of quality-controlled reference databases and the evolving recognition and definition of new fungal species/complexes. An international consortium of medical mycology laboratories was formed aiming to establish a quality-controlled ITS database under the umbrella of the ISHAM working group on "DNA barcoding of human and animal pathogenic fungi." A new database, containing 2800 ITS sequences representing 421 fungal species, providing the medical community with a freely accessible tool at http://www.isham.org/ and http://its.mycologylab.org/ to rapidly and reliably identify most agents of mycoses, was established. The generated sequences included in the new database were used to evaluate the variation and overall utility of the ITS region for the identification of pathogenic fungi at the intra- and interspecies level. The average intraspecies variation ranged from 0 to 2.25%. This highlighted selected pathogenic fungal species, such as the dermatophytes and emerging yeasts, for which additional molecular methods/genetic markers are required for their reliable identification from clinical and veterinary specimens. © The Author 2015. Published by Oxford University Press on behalf of The International Society for Human and Animal Mycology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even if clients are geographically distributed clients, if data copies are located close to clients. Despite its advantages, replication is not a straightforward technique to apply, and
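The fault-tolerance and load-distribution ideas above can be illustrated with a toy primary-copy scheme, one of several replication strategies. Everything below is a minimal sketch under that assumption, not one of the actual protocols the book covers:

```python
# Toy primary-copy replication: writes go to the primary and are propagated
# eagerly to replicas; reads can then be served by any replica.
class Replica:
    def __init__(self):
        self.data = {}

    def apply(self, key, value):
        self.data[key] = value

    def read(self, key):
        return self.data.get(key)

class Primary(Replica):
    def __init__(self, replicas):
        super().__init__()
        self.replicas = replicas

    def write(self, key, value):
        self.apply(key, value)       # commit locally first
        for r in self.replicas:      # then propagate (eager replication)
            r.apply(key, value)

replicas = [Replica(), Replica()]
primary = Primary(replicas)
primary.write("x", 42)

# The failure of one copy does not lose data: every replica can serve reads.
assert all(r.read("x") == 42 for r in replicas)
```

The hard parts the abstract alludes to (concurrent writers, network partitions, lazy propagation and its consistency trade-offs) are exactly what this sketch leaves out.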

  11. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  12. Dealer Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dealer reporting databases contain the primary data reported by federally permitted seafood dealers in the northeast. Electronic reporting was implemented May 1,...

  13. RDD Databases

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database was established to oversee documents issued in support of fishery research activities including experimental fishing permits (EFP), letters of...

  14. Snowstorm Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Snowstorm Database is a collection of over 500 snowstorms dating back to 1900 and updated operationally. Only storms having large areas of heavy snowfall (10-20...

  15. Brasilia’s Database Administrators

    Directory of Open Access Journals (Sweden)

    Jane Adriana

    2016-06-01

    Full Text Available Database administration has gained an essential role in the management of new database technologies. Different data models, beyond the traditional relational database, are being created to support enormous data volumes. These new models are called NoSQL (Not only SQL) databases. The adoption of best practices and procedures has become essential for the operation of database management systems. Thus, this paper investigates some of the techniques and tools used by database administrators. The study highlights features and particularities of databases within the area of Brasilia, the capital of Brazil. The results point to which new database management technologies are currently the most relevant, as well as the central issues in this area.

  16. National database

    DEFF Research Database (Denmark)

    Kristensen, Helen Grundtvig; Stjernø, Henrik

    1995-01-01

    Article about a national database for nursing research established at the Danish Institute for Health and Nursing Research. The aim of the database is to gather knowledge about research and development activities within nursing.

  17. Comprehensive comparison of in silico MS/MS fragmentation tools of the CASMI contest: database boosting is needed to achieve 93% accuracy.

    Science.gov (United States)

    Blaženović, Ivana; Kind, Tobias; Torbašinović, Hrvoje; Obrenović, Slobodan; Mehta, Sajjan S; Tsugawa, Hiroshi; Wermuth, Tobias; Schauer, Nicolas; Jahn, Martina; Biedendieck, Rebekka; Jahn, Dieter; Fiehn, Oliver

    2017-05-25

    In mass spectrometry-based untargeted metabolomics, rarely more than 30% of the compounds are identified. Without the true identity of these molecules it is impossible to draw conclusions about the biological mechanisms, pathway relationships and provenance of compounds. The only way at present to address this discrepancy is to use in silico fragmentation software to identify unknown compounds by comparing and ranking theoretical MS/MS fragmentations from target structures to experimental tandem mass spectra (MS/MS). We compared the performance of four publicly available in silico fragmentation algorithms (MetFragCL, CFM-ID, MAGMa+ and MS-FINDER) that participated in the 2016 CASMI challenge. We found that optimizing the use of metadata, weighting factors and the manner of combining different tools eventually defined the ultimate outcomes of each method. We comprehensively analysed how outcomes of different tools could be combined and reached a final success rate of 93% for the training data, and 87% for the challenge data, using a combination of MAGMa+, CFM-ID and compound importance information along with MS/MS matching. Matching MS/MS spectra against the MS/MS libraries without using any in silico tool yielded 60% correct hits, showing that the use of in silico methods is still important.
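One simple way to combine ranked candidate lists from several tools is a reciprocal-rank sum. This is a hedged sketch of the general idea only, not the weighting scheme the authors actually optimized, and the compound rankings below are invented for illustration:

```python
# Hypothetical candidate rankings from three in silico fragmentation tools
# (tool names taken from the abstract; the rankings themselves are invented).
rankings = {
    "MAGMa+":    ["caffeine", "theobromine", "paraxanthine"],
    "CFM-ID":    ["theobromine", "caffeine", "paraxanthine"],
    "MS-FINDER": ["caffeine", "paraxanthine", "theobromine"],
}

def combine(rankings):
    """Score each candidate by the sum of reciprocal ranks across tools."""
    scores = {}
    for ranked in rankings.values():
        for rank, candidate in enumerate(ranked, start=1):
            scores[candidate] = scores.get(candidate, 0.0) + 1.0 / rank
    return sorted(scores, key=scores.get, reverse=True)

print(combine(rankings))  # caffeine first: two top-1 votes and one rank 2
```

The paper's point is that how such outcomes are combined, together with metadata such as compound importance, determines the final success rate far more than any single tool does.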

  18. ADANS database specification

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  19. Experiment Databases

    Science.gov (United States)

    Vanschoren, Joaquin; Blockeel, Hendrik

    Next to running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queriable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.
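The core idea, storing principled descriptions of algorithm runs in a queriable database, can be sketched with a single relational table. The schema and the accuracy figures below are illustrative assumptions:

```python
import sqlite3

# Sketch of an experiment database: one row per algorithm run, then
# aggregate queries across runs for meta-analysis.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE experiment (
    algorithm TEXT, dataset TEXT, param_json TEXT, accuracy REAL)""")

runs = [
    ("decision_tree", "iris", '{"max_depth": 3}', 0.94),
    ("decision_tree", "iris", '{"max_depth": 5}', 0.95),
    ("knn",           "iris", '{"k": 5}',         0.97),
]
db.executemany("INSERT INTO experiment VALUES (?, ?, ?, ?)", runs)

# Meta-analysis query: mean accuracy per algorithm on this dataset.
best = db.execute("""
    SELECT algorithm, AVG(accuracy) AS mean_acc
    FROM experiment WHERE dataset = 'iris'
    GROUP BY algorithm ORDER BY mean_acc DESC
""").fetchall()
print(best[0][0])  # knn
```

Once results from many labs share such a schema, questions like "which algorithm wins on which kinds of datasets" become ordinary queries rather than new experiments.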

  20. SpirPro: A Spirulina proteome database and web-based tools for the analysis of protein-protein interactions at the metabolic level in Spirulina (Arthrospira) platensis C1.

    Science.gov (United States)

    Senachak, Jittisak; Cheevadhanarak, Supapon; Hongsthong, Apiradee

    2015-07-29

    Spirulina (Arthrospira) platensis is the only cyanobacterium that in addition to being studied at the molecular level and subjected to gene manipulation, can also be mass cultivated in outdoor ponds for commercial use as a food supplement. Thus, encountering environmental changes, including temperature stresses, is common during the mass production of Spirulina. The use of cyanobacteria as an experimental platform, especially for photosynthetic gene manipulation in plants and bacteria, is becoming increasingly important. Understanding the mechanisms and protein-protein interaction networks that underlie low- and high-temperature responses is relevant to Spirulina mass production. To accomplish this goal, high-throughput techniques such as OMICs analyses are used. Thus, large datasets must be collected, managed and subjected to information extraction. Therefore, databases including (i) proteomic analysis and protein-protein interaction (PPI) data and (ii) domain/motif visualization tools are required for potential use in temperature response models for plant chloroplasts and photosynthetic bacteria. A web-based repository was developed including an embedded database, SpirPro, and tools for network visualization. Proteome data were analyzed integrated with protein-protein interactions and/or metabolic pathways from KEGG. The repository provides various information, ranging from raw data (2D-gel images) to associated results, such as data from interaction and/or pathway analyses. This integration allows in silico analyses of protein-protein interactions affected at the metabolic level and, particularly, analyses of interactions between and within the affected metabolic pathways under temperature stresses for comparative proteomic analysis. The developed tool, which is coded in HTML with CSS/JavaScript and depicted in Scalable Vector Graphics (SVG), is designed for interactive analysis and exploration of the constructed network. 
SpirPro is publicly available on the web

  1. Toward automating the database design process

    International Nuclear Information System (INIS)

    Asprey, P.L.

    1979-01-01

    One organization's approach to designing complex, interrelated databases is described. The problems encountered and the techniques developed are discussed. A set of software tools to aid the designer and to produce an initial database design directly is presented. 5 figures

  2. The CATH database

    Directory of Open Access Journals (Sweden)

    Knudsen Michael

    2010-02-01

    Full Text Available Abstract The CATH database provides hierarchical classification of protein domains based on their folding patterns. Domains are obtained from protein structures deposited in the Protein Data Bank and both domain identification and subsequent classification use manual as well as automated procedures. The accompanying website http://www.cathdb.info provides an easy-to-use entry to the classification, allowing for both browsing and downloading of data. Here, we give a brief review of the database, its corresponding website and some related tools.

  3. Solubility Database

    Science.gov (United States)

    SRD 106 IUPAC-NIST Solubility Database (Web, free access)   These solubilities are compiled from 18 volumes of the International Union for Pure and Applied Chemistry (IUPAC)-NIST Solubility Data Series. The database includes liquid-liquid, solid-liquid, and gas-liquid systems. Typical solvents and solutes include water, seawater, heavy water, inorganic compounds, and a variety of organic compounds such as hydrocarbons, halogenated hydrocarbons, alcohols, acids, esters and nitrogen compounds. There are over 67,500 solubility measurements and over 1800 references.

  4. Are we safe? A tool to improve the knowledge of the risk areas: high-resolution floods database (MEDIFLOOD) for Spanish Mediterranean coast (1960 -2014)

    Science.gov (United States)

    Gil-Guirado, Salvador; Perez-Morales, Alfredo; Lopez-Martinez, Francisco; Barriendos-Vallve, Mariano

    2016-04-01

    The Mediterranean coast of the Iberian Peninsula concentrates an important part of the population and economic activities in Spain. Intensive agriculture, industry in the major urban centers, trade and tourism make this region the main center of economic dynamism and one of the areas with the highest rates of population and economic growth in southern Europe. This process accelerated after the Franco regime started to open to the outside in the early sixties of the last century. The main factor responsible for this process is the climate, with its warm temperatures and large number of sunny days, which has become the economic slogan of the area. However, this growth has happened without proper planning to reduce the impact of another climatic feature of the area: floods. Floods are the natural hazard that generates the greatest impacts in the area. One of the factors that facilitates this lack of strategic planning is the absence of a correct chronology of flood episodes. In this situation, land use plans are based on inadequate chronologies that do not reflect the real risk to the population of this area. To reduce this deficit and contribute to a more efficient zoning of the Mediterranean coast according to its flood risk, we have prepared a high-resolution flood database (MEDIFLOOD) for all the municipalities of the Spanish Mediterranean coast from 1960 to 2013. The methodology consists of exploring the archives of all newspapers with a presence in the area. The searches have been made by typing the name of each of the 180 municipalities of the Spanish coast followed by 5 key terms. Each identified flood has been classified by date and according to its level of intensity and type of damage. Additionally, we have consulted the specific bibliography to rule out any data gaps. The results are surprising and worrying. We have identified more than 3,600 cases where a municipality has been affected by floods.
These cases are grouped into more than 700

  5. RODOS database adapter

    International Nuclear Information System (INIS)

    Xie Gang

    1995-11-01

    Integrated data management is an essential aspect of many automated information systems, such as RODOS, a real-time on-line decision support system for nuclear emergency management. In particular, the application software must provide access management to different commercial database systems. This report presents the tools necessary for adapting embedded SQL applications to both HP-ALLBASE/SQL and CA-Ingres/SQL databases. The design of the database adapter and the concept of the RODOS embedded SQL syntax are discussed by considering some of the most important features of SQL functions and the identification of significant differences between SQL implementations. Finally, the software developed is described, along with the administrator's and installation guides. (orig.)
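The adapter idea, hiding vendor differences behind one interface so embedded-SQL application code stays portable, can be sketched as follows. sqlite3 stands in for HP-ALLBASE/SQL and CA-Ingres/SQL purely to make the pattern runnable, and the placeholder-rewriting detail is an assumed simplification of the kind of difference such an adapter absorbs:

```python
import sqlite3

# Adapter sketch: application code talks to one interface; each backend
# adapter hides vendor differences (here, only the placeholder style).
class SQLiteAdapter:
    placeholder = "?"

    def __init__(self):
        self.conn = sqlite3.connect(":memory:")

    def execute(self, sql, params=()):
        # Rewrite a generic '%s' placeholder to this backend's style.
        return self.conn.execute(sql.replace("%s", self.placeholder), params)

def store_reading(db, station, value):
    """Application code: unaware of which backend sits underneath."""
    db.execute("INSERT INTO reading VALUES (%s, %s)", (station, value))

db = SQLiteAdapter()
db.execute("CREATE TABLE reading (station TEXT, value REAL)")
store_reading(db, "st01", 3.7)
rows = db.execute("SELECT * FROM reading").fetchall()
print(rows)  # [('st01', 3.7)]
```

Swapping in a second adapter class with a different `placeholder` (and its own connection logic) would leave `store_reading` untouched, which is the portability property the report describes.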

  6. The Danish Sarcoma Database

    Directory of Open Access Journals (Sweden)

    Jorgensen PH

    2016-10-01

    Full Text Available Peter Holmberg Jørgensen,1 Gunnar Schwarz Lausten,2 Alma B Pedersen3 1Tumor Section, Department of Orthopedic Surgery, Aarhus University Hospital, Aarhus, 2Tumor Section, Department of Orthopedic Surgery, Rigshospitalet, Copenhagen, 3Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, Denmark Aim: The aim of the database is to gather information about sarcomas treated in Denmark in order to continuously monitor and improve the quality of sarcoma treatment in a local, a national, and an international perspective. Study population: Patients in Denmark diagnosed with a sarcoma, both skeletal and extraskeletal, have been registered since 2009. Main variables: The database contains information about appearance of symptoms; date of receiving referral to a sarcoma center; date of first visit; whether surgery has been performed elsewhere before referral, diagnosis, and treatment; tumor characteristics such as location, size, malignancy grade, and growth pattern; details on treatment (kind of surgery, amount of radiation therapy, type and duration of chemotherapy); complications of treatment; local recurrence and metastases; and comorbidity. In addition, several quality indicators are registered in order to measure the quality of care provided by the hospitals and make comparisons between hospitals and with international standards. Descriptive data: Demographic patient-specific data such as age, sex, region of living, comorbidity, World Health Organization's International Classification of Diseases – tenth edition codes and TNM Classification of Malignant Tumours, and date of death (after yearly coupling to the Danish Civil Registration System). Data quality and completeness are currently secured. Conclusion: The Danish Sarcoma Database is population based and includes sarcomas occurring in Denmark since 2009. It is a valuable tool for monitoring sarcoma incidence and quality of treatment and its improvement, postoperative

  7. Real Time Baseball Database

    Science.gov (United States)

    Fukue, Yasuhiro

    The author describes the system outline, features and operations of the "Nikkan Sports Realtime Baseball Database," which was developed and is operated by Nikkan Sports Shimbun, K. K. The system enables numerical data from professional baseball games to be input as the games proceed, with data updated in real time. Besides serving as a support tool for preparing newspapers, it is also available to broadcast media and to general users through NTT Dial Q2 and other channels.

  8. Immune epitope database analysis resource

    DEFF Research Database (Denmark)

    Kim, Yohan; Ponomarenko, Julia; Zhu, Zhanyang

    2012-01-01

    The immune epitope database analysis resource (IEDB-AR: http://tools.iedb.org) is a collection of tools for prediction and analysis of molecular targets of T- and B-cell immune responses (i.e. epitopes). Since its last publication in the NAR webserver issue in 2008, a new generation of peptide...

  9. The CAPEC Database

    DEFF Research Database (Denmark)

    Nielsen, Thomas Lund; Abildskov, Jens; Harper, Peter Mathias

    2001-01-01

    The Computer-Aided Process Engineering Center (CAPEC) database of measured data was established with the aim to promote greater data exchange in the chemical engineering community. The target properties are pure component properties, mixture properties, and special drug solubility data. The database divides pure component properties into primary, secondary, and functional properties. Mixture properties are categorized in terms of the number of components in the mixture and the number of phases present. The compounds in the database have been classified on the basis of the functional groups in the compound. This classification makes the CAPEC database a very useful tool, for example, in the development of new property models, since properties of chemically similar compounds are easily obtained. A program with efficient search and retrieval functions of properties has been developed.
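The retrieval idea behind the functional-group classification, pulling out chemically similar compounds together, can be sketched with a small inverted index. The compounds and group assignments below are invented for illustration, not CAPEC data:

```python
from collections import defaultdict

# Index compounds by functional group so that chemically similar
# compounds (those sharing a group) are easy to retrieve together.
compounds = {
    "ethanol":       {"hydroxyl"},
    "acetic acid":   {"carboxyl", "hydroxyl"},
    "ethyl acetate": {"ester"},
    "glycerol":      {"hydroxyl"},
}

index = defaultdict(set)
for name, groups in compounds.items():
    for g in groups:
        index[g].add(name)

# All compounds sharing the hydroxyl group with ethanol:
similar = sorted(index["hydroxyl"] - {"ethanol"})
print(similar)  # ['acetic acid', 'glycerol']
```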

  10. The Neotoma Paleoecology Database

    Science.gov (United States)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and for developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software-interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provide real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out-of-date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community

  11. The CEBAF Element Database

    Energy Technology Data Exchange (ETDEWEB)

    Theodore Larrieu, Christopher Slominski, Michele Joyce

    2011-03-01

    With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting control computers to building controls screens. A requirement influencing the CED design is that it provide access to not only present, but also future and past configurations of the accelerator. To accomplish this, an introspective database schema was designed that allows new elements, types, and properties to be defined on-the-fly with no changes to table structure. Used in conjunction with Oracle Workspace Manager, it allows users to query data from any time in the database history with the same tools used to query the present configuration. Users can also check out workspaces to use as staging areas for upcoming machine configurations. All access to the CED is through a well-documented Application Programming Interface (API) that is translated automatically from original C++ source code into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous.
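    The "introspective schema" idea described above can be sketched as an entity-attribute-value layout, in which element types and properties are ordinary rows rather than columns, so nothing ever requires an ALTER TABLE. The table and element names below are invented for illustration and are not the actual CED schema:

```python
import sqlite3

# Entity-attribute-value sketch: types and properties are data, not columns.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE elem_type (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE property  (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE element   (id INTEGER PRIMARY KEY, name TEXT UNIQUE,
                        type_id INTEGER REFERENCES elem_type(id));
CREATE TABLE elem_prop (elem_id INTEGER REFERENCES element(id),
                        prop_id INTEGER REFERENCES property(id),
                        value   TEXT);
""")

def define_type(name):
    # A brand-new element type is just one inserted row.
    return con.execute("INSERT INTO elem_type(name) VALUES (?)", (name,)).lastrowid

def define_property(name):
    # Likewise, a new property needs no schema change.
    return con.execute("INSERT INTO property(name) VALUES (?)", (name,)).lastrowid

def add_element(name, type_id, **props):
    eid = con.execute("INSERT INTO element(name, type_id) VALUES (?, ?)",
                      (name, type_id)).lastrowid
    for pname, value in props.items():
        pid = con.execute("SELECT id FROM property WHERE name = ?",
                          (pname,)).fetchone()[0]
        con.execute("INSERT INTO elem_prop VALUES (?, ?, ?)",
                    (eid, pid, str(value)))
    return eid

quad = define_type("quadrupole")          # defined on the fly
define_property("length_m")
define_property("field_gradient")
add_element("MQA1S01", quad, length_m=0.5, field_gradient=7.2)

row = con.execute("""SELECT p.name, ep.value FROM elem_prop ep
                     JOIN property p ON p.id = ep.prop_id
                     JOIN element  e ON e.id = ep.elem_id
                     WHERE e.name = 'MQA1S01' ORDER BY p.name""").fetchall()
```

    The historical-query capability attributed to Oracle Workspace Manager is not reproduced here; the sketch only shows why the flat value table sidesteps schema migrations.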

  12. Database on wind characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, K.S. [The Technical Univ. of Denmark (Denmark); Courtney, M.S. [Risoe National Lab., (Denmark)

    1999-08-01

    The organisations that participated in the project consist of five research organisations: MIUU (Sweden), ECN (The Netherlands), CRES (Greece), DTU (Denmark), Risoe (Denmark), and one wind turbine manufacturer: Vestas Wind System A/S (Denmark). The overall goal was to build a database consisting of a large number of wind speed time series and to create tools for efficiently searching through the data to select interesting records. The project resulted in a database located at DTU, Denmark, with online access through the Internet. The database contains more than 50,000 hours of measured wind speed data. A wide range of wind climates and terrain types is represented with significant amounts of time series. Data have been chosen selectively, with a deliberate over-representation of high-wind and complex-terrain cases. This makes the database ideal for wind turbine design needs but completely unsuitable for resource studies. Diversity has also been an important aim, and this is realised with data from a large range of terrain types: everything from offshore to mountain, from Norway to Greece. (EHS)

  13. LHCb distributed conditions database

    International Nuclear Information System (INIS)

    Clemencic, M

    2008-01-01

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications are using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica of the Conditions Database have been performed and the results will be summarized here

  14. Small Business Innovations (Integrated Database)

    Science.gov (United States)

    1992-01-01

    Because of the diversity of NASA's information systems, it was necessary to develop DAVID as a central database management system. Under a Small Business Innovation Research (SBIR) grant, Ken Wanderman and Associates, Inc. designed software tools enabling scientists to interface with DAVID and commercial database management systems, as well as artificial intelligence programs. The software has been installed at a number of data centers and is commercially available.

  15. A Simulation Tool for Distributed Databases.

    Science.gov (United States)

    1981-09-01

    Reed’s multiversion system [RE1T8] may also be viewed as updating only copies until the commit is made. The decision to make the changes...distributed voting, and Ellis’ ring algorithm. Other, significantly different algorithms not covered in his work include Reed’s multiversion algorithm, the

  16. Q-bank phytoplasma database

    DEFF Research Database (Denmark)

    Contaldo, Nicoletta; Bertaccini, Assunta; Nicolaisen, Mogens

    2014-01-01

    The setting up of the Q-Bank database, freely available online for quarantine phytoplasmas and also for general phytoplasma identification, is described. The tool was developed in the frame of the EU-FP7 project Qbol and is linked with a new project, Q-collect, in order to make widely available the identi...

  17. Extending Database Integration Technology

    National Research Council Canada - National Science Library

    Buneman, Peter

    1999-01-01

    Formal approaches to the semantics of databases and database languages can have immediate and practical consequences in extending database integration technologies to include a vastly greater range...

  18. Open Geoscience Database

    Science.gov (United States)

    Bashev, A.

    2012-04-01

    treatment could be conducted in other programs after extracting the filtered data into a *.csv file. This makes the database understandable for non-experts. The database employs an open data format (*.csv) and widespread tools: PHP as the programming language, MySQL as the database management system, JavaScript for interaction with Google Maps, and jQuery UI for creating the user interface. The database is multilingual: association tables connect translations with elements of the database. In total, development required about 150 hours. The database still has several problems. The main problem is the reliability of the data; properly, it needs an expert system for estimating reliability, but elaborating such a system would take more resources than the database itself. The second problem is stream selection: how to select the stations that are connected with each other (for example, belong to one water stream) and indicate their sequence. Currently the interface is in English and Russian, but it can easily be translated into other languages. Some problems we have already solved. For example, the "same station" problem (sometimes the distance between stations is smaller than the positional error): when a new station is added to the database, our application automatically finds stations near that place. We also solved the problem of object and parameter types (how to regard "EC" and "electrical conductivity" as the same parameter), using associative tables. If you would like to see the interface in your language, just contact us; we will send you the list of terms and phrases for translation. The main advantage of the database is that it is totally open: everybody can see and extract the data from the database and use them for non-commercial purposes at no charge. Registered users can contribute to the database without getting paid. We hope that it will be widely used, first of all for education purposes, but
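    The "associative tables" approach mentioned above for unifying parameter names can be sketched as follows; the table contents and function name are hypothetical, not the database's actual implementation:

```python
# Sketch of an association table: every recorded label ("EC",
# "electrical conductivity", ...) maps to one canonical parameter id,
# so measurements stay comparable across contributors and languages.
PARAMETERS = {1: "electrical conductivity", 2: "pH"}

# association table: alias -> canonical parameter id
ALIASES = {
    "EC": 1,
    "electrical conductivity": 1,
    "conductivity": 1,
    "pH": 2,
}

def canonical_parameter(label: str) -> str:
    """Resolve any recorded label to its canonical parameter name."""
    pid = ALIASES.get(label.strip())
    if pid is None:
        raise KeyError(f"unknown parameter label: {label!r}")
    return PARAMETERS[pid]
```

    In a relational setting the two dictionaries would be two tables joined on the parameter id; adding a new synonym is a single inserted row.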

  19. LHCb Distributed Conditions Database

    CERN Document Server

    Clemencic, Marco

    2007-01-01

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications are using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica o...

  20. EMU Lessons Learned Database

    Science.gov (United States)

    Matthews, Kevin M., Jr.; Crocker, Lori; Cupples, J. Scott

    2011-01-01

    As manned space exploration takes on the task of traveling beyond low Earth orbit, many problems arise that must be solved in order to make the journey possible. One major task is protecting humans from the harsh space environment. The current method of protecting astronauts during Extravehicular Activity (EVA) is through use of the specially designed Extravehicular Mobility Unit (EMU). As more rigorous EVA conditions need to be endured at new destinations, the suit will need to be tailored and improved in order to accommodate the astronaut. The objective behind the EMU Lessons Learned Database (LLD) is to create a tool which will assist in the development of next-generation EMUs, along with maintenance and improvement of the current EMU, by compiling data from Failure Investigation and Analysis Reports (FIARs) which have information on past suit failures. FIARs use a system of codes that give more information on the aspects of the failure, but anyone unfamiliar with the EMU will be unable to decipher the information. A goal of the EMU LLD is to not only compile the information, but to present it in a user-friendly, organized, searchable database accessible to users at all levels of familiarity with the EMU, newcomers and veterans alike. The EMU LLD originally started as an Excel database, which allowed easy navigation and analysis of the data through pivot charts. Creating an entry requires access to the Problem Reporting And Corrective Action database (PRACA), which contains the original FIAR data for all hardware. FIAR data are then transferred to, defined, and formatted in the LLD. Work is being done to create a web-based version of the LLD in order to increase accessibility to all of Johnson Space Center (JSC), which includes converting entries from Excel to the HTML format.
FIARs related to the EMU have been completed in the Excel version, and now focus has shifted to expanding FIAR data in the LLD to include EVA tools and support hardware such as

  1. Databases and their application

    NARCIS (Netherlands)

    Grimm, E.C.; Bradshaw, R.H.W; Brewer, S.; Flantua, S.; Giesecke, T.; Lézine, A.M.; Takahara, H.; Williams, J.W.,Jr; Elias, S.A.; Mock, C.J.

    2013-01-01

    During the past 20 years, several pollen database cooperatives have been established. These databases are now constituent databases of the Neotoma Paleoecology Database, a public domain, multiproxy, relational database designed for Quaternary-Pliocene fossil data and modern surface samples. The

  2. XML databases and the semantic web

    CERN Document Server

    Thuraisingham, Bhavani

    2002-01-01

    Efficient access to data, sharing data, extracting information from data, and making use of the information have become urgent needs for today's corporations. With so much data on the Web, managing it with conventional tools is becoming almost impossible. New tools and techniques are necessary to provide interoperability as well as warehousing between multiple data sources and systems, and to extract information from the databases. XML Databases and the Semantic Web focuses on critical and new Web technologies needed for organizations to carry out transactions on the Web, to understand how to use the Web effectively, and to exchange complex documents on the Web. This reference for database administrators, database designers, and Web designers working in tandem with database technologists covers three emerging technologies of significant impact for electronic business: Extensible Markup Language (XML), semi-structured databases, and the semantic Web. The first two parts of the book explore these emerging techn...

  3. Databases of the marine metagenomics

    KAUST Repository

    Mineta, Katsuhiko

    2015-10-28

    The metagenomic data obtained from marine environments are significantly useful for understanding marine microbial communities. In comparison with the conventional amplicon-based approach to metagenomics, the recent shotgun sequencing-based approach has become a powerful tool that provides an efficient way of grasping the diversity of an entire microbial community at a sampling point in the sea. However, this approach accelerates the accumulation of metagenome data and increases data complexity. Moreover, when the metagenomic approach is used for monitoring temporal changes of marine environments at multiple locations in the seawater, metagenomic data will accumulate at an enormous speed. Because this kind of situation has started becoming a reality at many marine research institutions and stations all over the world, data management and analysis will clearly be confronted by so-called Big Data issues, such as how a database can be constructed efficiently and how useful knowledge should be extracted from a vast amount of data. In this review, we summarize all the major databases of marine metagenomes that are currently publicly available, noting that no database is devoted exclusively to marine metagenomes and that only six metagenome databases include marine metagenome data, an unexpectedly small number. We also describe reference databases that would be useful for constructing a marine metagenome database and for complementing it with important information. We then point out a number of challenges to be conquered in constructing the marine metagenome database.

  4. PostgreSQL database performance optimization

    OpenAIRE

    Wang, Qiang

    2011-01-01

    The thesis was requested by Marlevo Software Oy for a general description of the PostgreSQL database and its performance optimization techniques. Its purpose was to help new PostgreSQL users to quickly understand the system and to assist DBAs in improving database performance. The thesis was divided into two parts. The first part described PostgreSQL database optimization techniques in theory. In addition, popular tools were also introduced. This part was based on PostgreSQL documentation, r...

  5. A database for coconut crop improvement.

    Science.gov (United States)

    Rajagopal, Velamoor; Manimekalai, Ramaswamy; Devakumar, Krishnamurthy; Rajesh; Karun, Anitha; Niral, Vittal; Gopal, Murali; Aziz, Shamina; Gunasekaran, Marimuthu; Kumar, Mundappurathe Ramesh; Chandrasekar, Arumugam

    2005-12-08

    Coconut crop improvement requires a number of biotechnology and bioinformatics tools. A database containing information on CG (coconut germplasm), CCI (coconut cultivar identification), CD (coconut disease), MIFSPC (microbial information systems in plantation crops) and VO (vegetable oils) is described. The database was developed using MySQL and PostgreSQL running in Linux operating system. The database interface is developed in PHP, HTML and JAVA. http://www.bioinfcpcri.org.

  6. Dietary Supplement Ingredient Database

    Science.gov (United States)

    ... and US Department of Agriculture Dietary Supplement Ingredient Database ... values can be saved to build a small database or add to an existing database for national, ...

  7. Oracle database 12c the complete reference

    CERN Document Server

    Bryla, Bob

    2014-01-01

    Maintain a scalable, highly available enterprise platform and reduce complexity by leveraging the powerful new tools and cloud enhancements of Oracle Database 12c. This authoritative Oracle Press guide offers complete coverage of installation, configuration, tuning, and administration. Find out how to build and populate Oracle databases, perform effective queries, design applications, and secure your enterprise data

  8. A Sandia telephone database system

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, S.D.; Tolendino, L.F.

    1991-08-01

    Sandia National Laboratories, Albuquerque, may soon have more responsibility for the operation of its own telephone system. The processes that constitute providing telephone service can all be improved through the use of a central data information system. We studied these processes, determined the requirements for a database system, then designed the first stages of a system that meets our needs for work order handling, trouble reporting, and ISDN hardware assignments. The design was based on an extensive set of applications that have been used for five years to manage the Sandia secure data network. The system utilizes an Ingres database management system and is programmed using the Application-By-Forms tools.

  9. NoSQL Databases

    OpenAIRE

    PANYKO, Tomáš

    2013-01-01

    This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...

  10. Collecting Taxes Database

    Data.gov (United States)

    US Agency for International Development — The Collecting Taxes Database contains performance and structural indicators about national tax systems. The database contains quantitative revenue performance...

  11. USAID Anticorruption Projects Database

    Data.gov (United States)

    US Agency for International Development — The Anticorruption Projects Database (Database) includes information about USAID projects with anticorruption interventions implemented worldwide between 2007 and...

  12. SmallSat Database

    Science.gov (United States)

    Petropulos, Dolores; Bittner, David; Murawski, Robert; Golden, Bert

    2015-01-01

    required. When completed, it will interface with the SCENIC environment to allow modeling of smallSats. The SmallSat Relational Database can also be integrated with the SCENIC simulation modeling system that is currently in development. The SmallSat Relational Database simulation will be of great significance in assisting the NASA SCaN group to understand the impact of the smallSats that have populated low Earth orbit. What I have created and worked on this summer session 2015 is the basis for a tool that will be of value to the NASA SCaN SCENIC Simulation Environment for years to come.

  13. The Comprehensive Antibiotic Resistance Database

    Science.gov (United States)

    McArthur, Andrew G.; Waglechner, Nicholas; Nizam, Fazmin; Yan, Austin; Azad, Marisa A.; Baylay, Alison J.; Bhullar, Kirandeep; Canova, Marc J.; De Pascale, Gianfranco; Ejim, Linda; Kalan, Lindsay; King, Andrew M.; Koteva, Kalinka; Morar, Mariya; Mulvey, Michael R.; O'Brien, Jonathan S.; Pawlowski, Andrew C.; Piddock, Laura J. V.; Spanogiannopoulos, Peter; Sutherland, Arlene D.; Tang, Irene; Taylor, Patricia L.; Thaker, Maulik; Wang, Wenliang; Yan, Marie; Yu, Tennison

    2013-01-01

    The field of antibiotic drug discovery and the monitoring of new antibiotic resistance elements have yet to fully exploit the power of the genome revolution. Despite the fact that the first genomes sequenced of free living organisms were those of bacteria, there have been few specialized bioinformatic tools developed to mine the growing amount of genomic data associated with pathogens. In particular, there are few tools to study the genetics and genomics of antibiotic resistance and how it impacts bacterial populations, ecology, and the clinic. We have initiated development of such tools in the form of the Comprehensive Antibiotic Research Database (CARD; http://arpcard.mcmaster.ca). The CARD integrates disparate molecular and sequence data, provides a unique organizing principle in the form of the Antibiotic Resistance Ontology (ARO), and can quickly identify putative antibiotic resistance genes in new unannotated genome sequences. This unique platform provides an informatic tool that bridges antibiotic resistance concerns in health care, agriculture, and the environment. PMID:23650175
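    As a toy illustration of the kind of screening the CARD enables, the sketch below matches an unannotated sequence against known resistance-gene fragments by simple substring search. The sequences and names are made up, and CARD's actual tools (built on the ARO and sequence-similarity searching) are far more sophisticated:

```python
# Invented reference fragments standing in for curated resistance genes;
# real screening uses similarity search, not exact substring matching.
REFERENCE = {
    "blaTEM-like": "ATGAGTATTCAACAT",
    "aph(3')-like": "ATGGCTAAAATGAGA",
}

def screen(genome: str) -> list:
    """Return names of reference fragments found verbatim in the genome."""
    return [name for name, frag in REFERENCE.items() if frag in genome]

genome = "GGGG" + "ATGAGTATTCAACAT" + "CCCC"
hits = screen(genome)
```

    The value of a curated database in this workflow is the REFERENCE side: without an organized, ontology-linked catalogue of resistance determinants, there is nothing reliable to screen against.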

  14. Managing the BABAR Object Oriented Database

    International Nuclear Information System (INIS)

    Hasan, Adil

    2002-01-01

    The BaBar experiment stores its data in an Object Oriented federated database supplied by Objectivity/DB(tm). This database is currently 350TB in size and is expected to increase considerably as the experiment matures. Management of this database requires careful planning and specialized tools in order to make the data available to physicists in an efficient and timely manner. We discuss the operational issues and management tools that were developed during the previous run to deal with this vast quantity of data at SLAC

  15. ADASS Web Database XML Project

    Science.gov (United States)

    Barg, M. I.; Stobie, E. B.; Ferro, A. J.; O'Neil, E. J.

    In the spring of 2000, at the request of the ADASS Program Organizing Committee (POC), we began organizing information from previous ADASS conferences in an effort to create a centralized database. The beginnings of this database originated from data (invited speakers, participants, papers, etc.) extracted from HyperText Markup Language (HTML) documents from past ADASS host sites. Unfortunately, not all HTML documents are well formed and parsing them proved to be an iterative process. It was evident at the beginning that if these Web documents were organized in a standardized way, such as XML (Extensible Markup Language), the processing of this information across the Web could be automated, more efficient, and less error prone. This paper will briefly review the many programming tools available for processing XML, including Java, Perl and Python, and will explore the mapping of relational data from our MySQL database to XML.
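    The relational-to-XML mapping described above can be sketched with Python's standard library; the row contents and tag names here are illustrative, not the actual ADASS schema:

```python
import xml.etree.ElementTree as ET

# Rows as they might come back from a MySQL cursor (dicts for clarity).
rows = [
    {"year": 2000, "title": "ADASS Web Database XML Project", "author": "Barg, M. I."},
    {"year": 2000, "title": "Another Paper", "author": "Doe, J."},
]

# Map each relational row to a well-formed <paper> element.
root = ET.Element("papers")
for r in rows:
    paper = ET.SubElement(root, "paper", year=str(r["year"]))
    ET.SubElement(paper, "title").text = r["title"]
    ET.SubElement(paper, "author").text = r["author"]

xml_text = ET.tostring(root, encoding="unicode")
```

    Because the XML is generated rather than hand-written, it is well formed by construction, which is exactly the property the authors found missing in the scraped HTML.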

  16. The Danish Smoking Cessation Database

    DEFF Research Database (Denmark)

    Rasmussen, Mette; Tønnesen, Hanne

    2016-01-01

    Background: The Danish Smoking Cessation Database (SCDB) was established in 2001 as the first national healthcare register within the field of health promotion. Aim of the database: The aim of the SCDB is to document and evaluate smoking cessation (SC) interventions to assess and improve..., and prognostic factors. The outcome data are smoking status at the end of the programme and after six months, and satisfaction with the SC intervention. Validity: Approximately 80-90% of all SC clinics offering systematic face-to-face SC interventions report data to the SCDB. The data completeness of the SCDB is very high, at 95-100%. Validation checks have been implemented to ensure high data quality. Conclusion: The SCDB is a well-established clinical database and a priceless tool for monitoring and improving SC interventions in Denmark to identify the best solution to helping smokers become smoke-free.

  17. KALIMER database development

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    KALIMER database is an advanced database for the integrated management of liquid metal reactor design technology development using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under mid-term and long-term nuclear R and D. IOC is a linkage control system between sub-projects to share and integrate the research results for KALIMER. The 3D CAD database is a schematic overview of the KALIMER design structure. And the reserved documents database was developed to manage several documents and reports since project accomplishment

  18. KALIMER database development

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    KALIMER database is an advanced database for the integrated management of liquid metal reactor design technology development using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under mid-term and long-term nuclear R and D. IOC is a linkage control system between sub-projects to share and integrate the research results for KALIMER. The 3D CAD database is a schematic overview of the KALIMER design structure. And the reserved documents database was developed to manage several documents and reports since project accomplishment.

  19. TWRS technical baseline database manager definition document

    International Nuclear Information System (INIS)

    Acree, C.D.

    1997-01-01

    This document serves as a guide for using the TWRS Technical Baseline Database Management Systems Engineering (SE) support tool in performing SE activities for the Tank Waste Remediation System (TWRS). This document will provide a consistent interpretation of the relationships between the TWRS Technical Baseline Database Management software and the present TWRS SE practices. The Database Manager currently utilized is the RDD-1000 System manufactured by the Ascent Logic Corporation. In other documents, the term RDD-1000 may be used interchangeably with TWRS Technical Baseline Database Manager

  20. DEPOT database: Reference manual and user's guide

    International Nuclear Information System (INIS)

    Clancey, P.; Logg, C.

    1991-03-01

    DEPOT has been developed to provide tracking for the Stanford Linear Collider (SLC) control system equipment. For each piece of equipment entered into the database, complete location, service, maintenance, modification, certification, and radiation exposure histories can be maintained. To facilitate data entry accuracy, efficiency, and consistency, barcoding technology has been used extensively. DEPOT has been an important tool in improving the reliability of the microsystems controlling SLC. This document describes the components of the DEPOT database, the elements in the database records, and the use of the supporting programs for entering data, searching the database, and producing reports from the information

  1. Logical database design principles

    CERN Document Server

    Garmany, John; Clark, Terry

    2005-01-01

    INTRODUCTION TO LOGICAL DATABASE DESIGN: Understanding a Database; Database Architectures; Relational Databases; Creating the Database; System Development Life Cycle (SDLC); Systems Planning: Assessment and Feasibility; System Analysis: Requirements; System Analysis: Requirements Checklist; Models Tracking and Schedules; Design Modeling; Functional Decomposition Diagram; Data Flow Diagrams; Data Dictionary; Logical Structures and Decision Trees; System Design: Logical. SYSTEM DESIGN AND IMPLEMENTATION: The ER Approach; Entities and Entity Types; Attribute Domains; Attributes; Set-Valued Attributes; Weak Entities; Constraint...

  2. Oracle database systems administration

    OpenAIRE

    Šilhavý, Dominik

    2017-01-01

    Master's thesis with the name Oracle database systems administration describes problems in databases and how to solve them, which is important for database administrators. It helps them in delivering faster solutions without the need to look for or figure out solutions on their own. Thesis describes database backup and recovery methods that are closely related to problems solutions. The main goal is to provide guidance and recommendations regarding database troubles and how to solve them. It ...

  3. SSC lattice database and graphical interface

    International Nuclear Information System (INIS)

    Trahern, C.G.; Zhou, J.

    1991-11-01

    When completed, the Superconducting Super Collider will be the world's largest accelerator complex. In order to build this system on schedule, the use of database technologies will be essential. In this paper we discuss one of the database efforts underway at the SSC, the lattice database. The SSC lattice database provides a centralized source for the design of each major component of the accelerator complex. This includes the two collider rings, the High Energy Booster, Medium Energy Booster, Low Energy Booster, and the LINAC, as well as transfer and test beam lines. These designs have been created using a menagerie of programs such as SYNCH, DIMAD, MAD, TRANSPORT, MAGIC, TRACE3D, and TEAPOT. However, once a design has been completed, it is entered into a uniform database schema in the database system. In this paper we discuss the reasons for creating the lattice database and its implementation via the commercial database system SYBASE. Each lattice in the lattice database is composed of a set of tables whose data structure can describe any of the SSC accelerator lattices. In order to allow the user community access to the databases, a programmatic interface known as dbsf (for database to several formats) has been written. Dbsf creates ascii input files appropriate to the above mentioned accelerator design programs. In addition, it has a binary dataset output using the Self Describing Standard data discipline provided with the Integrated Scientific Tool Kit software tools. Finally, we discuss the graphical interfaces to the lattice database. The primary interface, known as OZ, is a simulation environment as well as a database browser
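    The role of dbsf, rendering one stored lattice record into input text for several design programs, can be sketched as follows; the element fields and the two output formats are invented for illustration and do not reproduce the real SYNCH or MAD syntax:

```python
# One lattice element as it might be retrieved from the uniform schema.
element = {"name": "QF1", "kind": "QUADRUPOLE", "length": 0.6, "k1": 0.87}

def to_mad_like(e: dict) -> str:
    # "name: KIND, L=..., K1=...;" -- a MAD-flavoured line (illustrative only).
    return f'{e["name"]}: {e["kind"]}, L={e["length"]}, K1={e["k1"]};'

def to_transport_like(e: dict) -> str:
    # Positional card style with a trailing comment (illustrative only).
    return f'{e["kind"][0]} {e["length"]} {e["k1"]} ;  ! {e["name"]}'

mad_line = to_mad_like(element)
transport_line = to_transport_like(element)
```

    The point of the design is that each target program gets its own small renderer while the lattice itself is stored once, in one schema.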

  4. Database Description - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us RED Database Description General information of database Database name RED Alternative name Rice Expression Database...enome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice Database classifi...cation Microarray, Gene Expression Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database descripti...on The Rice Expression Database (RED) is a database that aggregates the gene expr...icroarray Project and other research groups. Features and manner of utilization of database

  5. Brain Tumor Database, a free relational database for collection and analysis of brain tumor patient information.

    Science.gov (United States)

    Bergamino, Maurizio; Hamilton, David J; Castelletti, Lara; Barletta, Laura; Castellan, Lucio

    2015-03-01

    In this study, we describe the development and utilization of a relational database designed to manage the clinical and radiological data of patients with brain tumors. The Brain Tumor Database was implemented using MySQL v.5.0, while the graphical user interface was created using PHP and HTML, thus making it easily accessible through a web browser. This web-based approach allows for multiple institutions to potentially access the database. The BT Database can record brain tumor patient information (e.g. clinical features, anatomical attributes, and radiological characteristics) and be used for clinical and research purposes. Analytic tools to automatically generate statistics and different plots are provided. The BT Database is a free and powerful user-friendly tool with a wide range of possible clinical and research applications in neurology and neurosurgery. The BT Database graphical user interface source code and manual are freely available at http://tumorsdatabase.altervista.org. © The Author(s) 2013.
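    The record above describes a relational layout for patient, anatomical, and radiological data with built-in statistics. As a hedged sketch of that pattern — using sqlite3 as a stand-in for the MySQL back end, with table and column names invented for the example rather than taken from the actual BT Database schema — a minimal version might look like:

```python
# Illustrative relational layout for tumor records; sqlite3 stands in
# for MySQL, and the schema here is invented, not the BT Database's.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (id INTEGER PRIMARY KEY, label TEXT);
CREATE TABLE tumor (
    id INTEGER PRIMARY KEY,
    patient_id INTEGER REFERENCES patient(id),
    location TEXT,   -- anatomical attribute
    grade TEXT       -- clinical/radiological characteristic
);
""")
conn.execute("INSERT INTO patient VALUES (1, 'case-001')")
conn.execute("INSERT INTO tumor VALUES (1, 1, 'frontal lobe', 'II')")
conn.execute("INSERT INTO tumor VALUES (2, 1, 'cerebellum', 'I')")

# The kind of aggregate an automatic statistics tool might compute.
rows = conn.execute(
    "SELECT location, COUNT(*) FROM tumor GROUP BY location ORDER BY location"
).fetchall()
print(rows)
```

    A web front end (PHP in the actual system) would issue the same kind of SQL against the shared database, which is what makes multi-institution access straightforward.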

  6. Database Description - BodyParts3D | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available simulation and data mapping. A tool called Anatomogram is also made available to the public ( http://lifesci...e.biosciencedbc.jp/en/bodyparts3d/download.html Referenced database Foundational Model of Anatomy Entry list

  7. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us RMOS Database Description General information of database Database name RMOS Alternative nam...arch Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice Microarray Data and other Gene Expression Database...s Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description The Ric...e Microarray Opening Site is a database of comprehensive information for Rice Mic...es and manner of utilization of database You can refer to the information of the

  8. Kentucky geotechnical database.

    Science.gov (United States)

    2005-03-01

    Development of a comprehensive, dynamic geotechnical database is described. Computer software selected to program the client/server application in a Windows environment, components and structure of the geotechnical database, and primary factors cons...

  9. Directory of IAEA databases

    International Nuclear Information System (INIS)

    1991-11-01

    The first edition of the Directory of IAEA Databases is intended to describe the computerized information sources available to IAEA staff members. It contains a listing of all databases produced at the IAEA, together with information on their availability

  10. Cell Centred Database (CCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Cell Centered Database (CCDB) is a web accessible database for high resolution 2D, 3D and 4D data from light and electron microscopy, including correlated imaging.

  11. Physiological Information Database (PID)

    Science.gov (United States)

    EPA has developed a physiological information database (created using Microsoft ACCESS) intended to be used in PBPK modeling. The database contains physiological parameter values for humans from early childhood through senescence as well as similar data for laboratory animal spec...

  12. E3 Staff Database

    Data.gov (United States)

    US Agency for International Development — E3 Staff database is maintained by E3 PDMS (Professional Development & Management Services) office. The database is MySQL. It is manually updated by E3 staff as...

  13. Database Urban Europe

    NARCIS (Netherlands)

    Sleutjes, B.; de Valk, H.A.G.

    2016-01-01

    Database Urban Europe: ResSegr database on segregation in The Netherlands. Collaborative research on residential segregation in Europe 2014–2016 funded by JPI Urban Europe (Joint Programming Initiative Urban Europe).

  14. Scopus database: a review.

    Science.gov (United States)

    Burnham, Judy F

    2006-03-08

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive, but they complement each other. If a library can only afford one, the choice must be based on institutional needs.

  15. Scopus database: a review

    OpenAIRE

    Burnham, Judy F

    2006-01-01

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive, but they complement each other. If a library can only afford one, the choice must be based on institutional needs.

  16. Automated Oracle database testing

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    Ensuring database stability and steady performance in the modern world of agile computing is a major challenge. Changes at any level of the computing infrastructure (OS parameters and packages, kernel versions, database parameters and patches, or even schema changes) can potentially harm production services. This presentation shows how automatic and regular testing of Oracle databases can be achieved in such an agile environment.

  17. The CEBAF Element Database and Related Operational Software

    International Nuclear Information System (INIS)

    The newly commissioned 12GeV CEBAF accelerator relies on a flexible, scalable and comprehensive database to define the accelerator. This database delivers the configuration for CEBAF operational tools, including hardware checkout, the downloadable optics model, control screens, and much more. The presentation will describe the flexible design of the CEBAF Element Database (CED), its features and assorted use case examples.

  18. The CEBAF Element Database and Related Operational Software

    Energy Technology Data Exchange (ETDEWEB)

    Larrieu, Theodore [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Slominski, Christopher [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Keesee, Marie [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Turner, Dennison [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Joyce, Michele [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States)

    2015-09-01

    The newly commissioned 12GeV CEBAF accelerator relies on a flexible, scalable and comprehensive database to define the accelerator. This database delivers the configuration for CEBAF operational tools, including hardware checkout, the downloadable optics model, control screens, and much more. The presentation will describe the flexible design of the CEBAF Element Database (CED), its features and assorted use case examples.

  19. An extensible framework and database of infectious disease for biosurveillance.

    Science.gov (United States)

    Daughton, Ashlynn R; Priedhorsky, Reid; Fairchild, Geoffrey; Generous, Nicholas; Hengartner, Andrea; Abeyta, Esteban; Velappan, Nileena; Lillo, Antonietta; Stark, Karen; Deshpande, Alina

    2017-08-07

    Biosurveillance, a relatively young field, has recently increased in importance because of increasing emphasis on global health. Databases and tools describing particular subsets of disease are becoming increasingly common in the field. Here, we present an infectious disease database that includes diseases of biosurveillance relevance and an extensible framework for the easy expansion of the database.

  20. Electronic database of arterial aneurysms

    Directory of Open Access Journals (Sweden)

    Fabiano Luiz Erzinger

    2014-12-01

    Full Text Available Background: The creation of an electronic database facilitates the storage of information and streamlines the exchange of data, making it easier to share knowledge for future research. Objective: To construct an electronic database containing comprehensive and up-to-date clinical and surgical data on the most common arterial aneurysms, to help advance scientific research. Methods: The most important specialist textbooks and articles found in journals and on internet databases were reviewed in order to define the basic structure of the protocol. Data were computerized using the SINPE© system for integrated electronic protocols and tested in a pilot study. Results: The data entered into the system were first used to create a Master protocol, organized into a structure of top-level directories covering a large proportion of the content on vascular diseases as follows: patient history; physical examination; supplementary tests and examinations; diagnosis; treatment; and clinical course. By selecting items from the Master protocol, Specific protocols were then created for the 22 arterial sites most often involved by aneurysms. The program provides a method for collection of data on patients, including clinical characteristics (patient history and physical examination), supplementary tests and examinations, treatments received, and follow-up care after treatment. Any information of interest on these patients that is contained in the protocol can then be used to query the database and select data for studies. Conclusions: It proved possible to construct a database of clinical and surgical data on the arterial aneurysms of greatest interest and, by adapting the data to specific software, the database was integrated into the SINPE© system, thereby providing a standardized method for collection of data on these patients and tools for retrieving this information in an organized manner for use in scientific studies.

  1. YMDB: the Yeast Metabolome Database

    Science.gov (United States)

    Jewison, Timothy; Knox, Craig; Neveu, Vanessa; Djoumbou, Yannick; Guo, An Chi; Lee, Jacqueline; Liu, Philip; Mandal, Rupasri; Krishnamurthy, Ram; Sinelnikov, Igor; Wilson, Michael; Wishart, David S.

    2012-01-01

    The Yeast Metabolome Database (YMDB, http://www.ymdb.ca) is a richly annotated ‘metabolomic’ database containing detailed information about the metabolome of Saccharomyces cerevisiae. Modeled closely after the Human Metabolome Database, the YMDB contains >2000 metabolites with links to 995 different genes/proteins, including enzymes and transporters. The information in YMDB has been gathered from hundreds of books, journal articles and electronic databases. In addition to its comprehensive literature-derived data, the YMDB also contains an extensive collection of experimental intracellular and extracellular metabolite concentration data compiled from detailed Mass Spectrometry (MS) and Nuclear Magnetic Resonance (NMR) metabolomic analyses performed in our lab. This is further supplemented with thousands of NMR and MS spectra collected on pure, reference yeast metabolites. Each metabolite entry in the YMDB contains an average of 80 separate data fields including comprehensive compound description, names and synonyms, structural information, physico-chemical data, reference NMR and MS spectra, intracellular/extracellular concentrations, growth conditions and substrates, pathway information, enzyme data, gene/protein sequence data, as well as numerous hyperlinks to images, references and other public databases. Extensive searching, relational querying and data browsing tools are also provided that support text, chemical structure, spectral, molecular weight and gene/protein sequence queries. Because of S. cerevisiae's importance as a model organism for biologists and as a biofactory for industry, we believe this kind of database could have considerable appeal not only to metabolomics researchers, but also to yeast biologists, systems biologists, the industrial fermentation industry, as well as the beer, wine and spirit industry. PMID:22064855

  2. Digitizing Olin Eggen's Card Database

    Science.gov (United States)

    Crast, J.; Silvis, G.

    2017-06-01

    The goal of the Eggen Card Database Project is to recover as many of the photometric observations from Olin Eggen's Card Database as possible and preserve these observations, in digital forms that are accessible by anyone. Any observations of interest to the AAVSO will be added to the AAVSO International Database (AID). Given to the AAVSO on long-term loan by the Cerro Tololo Inter-American Observatory, the database is a collection of over 78,000 index cards holding all Eggen's observations made between 1960 and 1990. The cards were electronically scanned and the resulting 108,000 card images have been published as a series of 2,216 PDF files, which are available from the AAVSO web site. The same images are also stored in an AAVSO online database where they are indexed by star name and card content. These images can be viewed using the eggen card portal online tool. Eggen made observations using filter bands from five different photometric systems. He documented these observations using 15 different data recording formats. Each format represents a combination of filter magnitudes and color indexes. These observations are being transcribed onto spreadsheets, from which observations of value to the AAVSO are added to the AID. A total of 506 U, B, V, R, and I observations were added to the AID for the variable stars S Car and l Car. We would like the reader to search through the card database using the eggen card portal for stars of particular interest. If such stars are found and retrieval of the observations is desired, e-mail the authors, and we will be happy to help retrieve those data for the reader.

  3. Keyword Search in Databases

    CERN Document Server

    Yu, Jeffrey Xu; Chang, Lijun

    2009-01-01

    It has become highly desirable to provide users with flexible ways to query/search information over databases as simply as a Google-style keyword search. This book surveys the recent developments in keyword search over databases, and focuses on finding structural information among objects in a database using a set of keywords. Such structural information to be returned can be either trees or subgraphs representing how the objects that contain the required keywords are interconnected in a relational database or in an XML database. The structural keyword search is completely different from
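    The core idea the record describes — returning a structure that interconnects the objects matching each keyword — can be sketched in miniature. The graph, row contents, and linkage below are invented for the example; a real system operates over foreign-key links in a relational database or parent/child edges in XML:

```python
# Toy structural keyword search: rows are nodes, foreign-key links are
# edges, and a BFS finds a path interconnecting the keyword matches.
# All data here is invented for illustration.
from collections import deque

edges = {
    "author:1": ["paper:7"],
    "paper:7": ["author:1", "conf:3"],
    "conf:3": ["paper:7"],
}
content = {"author:1": "smith", "paper:7": "databases keyword", "conf:3": "vldb"}

def connect(kw1, kw2):
    """Shortest connecting path between rows containing kw1 and kw2."""
    starts = [n for n, text in content.items() if kw1 in text]
    goals = {n for n, text in content.items() if kw2 in text}
    queue = deque((s, [s]) for s in starts)
    seen = set()
    while queue:
        node, path = queue.popleft()
        if node in goals:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in edges.get(node, []):
            queue.append((nxt, path + [nxt]))
    return None

print(connect("smith", "vldb"))
```

    The returned path is the "structural information": it shows how the keyword-matching objects are joined, which is precisely what distinguishes this from flat document retrieval.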

  4. Nuclear power economic database

    International Nuclear Information System (INIS)

    Ding Xiaoming; Li Lin; Zhao Shiping

    1996-01-01

    The nuclear power economic database (NPEDB), based on ORACLE V6.0, consists of three parts: the economic database of nuclear power stations, the economic database of the nuclear fuel cycle, and the economic database of nuclear power planning and the nuclear environment. The economic database of nuclear power stations includes data on general economics, technique, capital cost, benefit, etc. The economic database of the nuclear fuel cycle includes data on technique and nuclear fuel prices. The economic database of nuclear power planning and the nuclear environment includes data on energy history, forecasts, energy balance, and electric power and energy facilities

  5. Protein sequence databases.

    Science.gov (United States)

    Apweiler, Rolf; Bairoch, Amos; Wu, Cathy H

    2004-02-01

    A variety of protein sequence databases exist, ranging from simple sequence repositories, which store data with little or no manual intervention in the creation of the records, to expertly curated universal databases that cover all species and in which the original sequence data are enhanced by the manual addition of further information in each sequence record. As the focus of researchers moves from the genome to the proteins encoded by it, these databases will play an even more important role as central comprehensive resources of protein information. Several of the leading protein sequence databases are discussed here, with special emphasis on the databases now provided by the Universal Protein Knowledgebase (UniProt) consortium.

  6. DIOS - database of formalized chemotherapeutic regimens.

    Science.gov (United States)

    Klimes, Daniel; Smid, Roman; Kubasek, Miroslav; Vyzula, Rostislav; Dušek, Ladislav

    2013-01-01

    Chemotherapeutic regimens (CHR) and their administration are routine practice in contemporary oncology. The development of a structured, electronic database of standard CHR can help the faster propagation of information about new CHR and at the same time enable assessment of adherence to them in clinical practice. The goal was to develop a standardized way to describe a regimen using XML, fill the database with currently available regimens, and develop tools to assess the adherence of the treatment to the chosen regimen, compare the dose intensity, and recognize the regimen from existing data on drug administration. The data are being inserted in cooperation with expert oncologists, and the database currently contains about 260 CHRs. Such a system can be used to enhance decision support systems and the interoperability of hospital information systems (HIS). The database and tools are available online on the internet.
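    An XML-formalized regimen of the kind described above could be processed as follows. This is a hedged sketch only: the element and attribute names are invented for the example and do not reproduce the actual DIOS schema, and the drugs/doses are illustrative values:

```python
# Illustrative XML regimen description plus a dose-intensity helper.
# Element/attribute names are invented; this is not the DIOS schema.
import xml.etree.ElementTree as ET

regimen_xml = """
<regimen name="AC" cycle_days="21">
  <drug name="doxorubicin" dose_mg_per_m2="60" day="1"/>
  <drug name="cyclophosphamide" dose_mg_per_m2="600" day="1"/>
</regimen>
"""

root = ET.fromstring(regimen_xml)

def dose_intensity(root):
    """Planned dose per drug per week (mg/m^2) from the regimen XML."""
    cycle_weeks = int(root.get("cycle_days")) / 7
    return {
        d.get("name"): float(d.get("dose_mg_per_m2")) / cycle_weeks
        for d in root.findall("drug")
    }

print(dose_intensity(root))
```

    Comparing this planned intensity against actual administration records is the kind of adherence/dose-intensity check the abstract mentions.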

  7. The 2014 Nucleic Acids Research Database Issue and an updated NAR online Molecular Biology Database Collection.

    Science.gov (United States)

    Fernández-Suárez, Xosé M; Rigden, Daniel J; Galperin, Michael Y

    2014-01-01

    The 2014 Nucleic Acids Research Database Issue includes descriptions of 58 new molecular biology databases and recent updates to 123 databases previously featured in NAR or other journals. For convenience, the issue is now divided into eight sections that reflect major subject categories. Among the highlights of this issue are six databases of the transcription factor binding sites in various organisms and updates on such popular databases as CAZy, Database of Genomic Variants (DGV), dbGaP, DrugBank, KEGG, miRBase, Pfam, Reactome, SEED, TCDB and UniProt. There is a strong block of structural databases, which includes, among others, the new RNA Bricks database, updates on PDBe, PDBsum, ArchDB, Gene3D, ModBase, Nucleic Acid Database and the recently revived iPfam database. An update on the NCBI's MMDB describes VAST+, an improved tool for protein structure comparison. Two articles highlight the development of the Structural Classification of Proteins (SCOP) database: one describes SCOPe, which automates assignment of new structures to the existing SCOP hierarchy; the other one describes the first version of SCOP2, with its more flexible approach to classifying protein structures. This issue also includes a collection of articles on bacterial taxonomy and metagenomics, which includes updates on the List of Prokaryotic Names with Standing in Nomenclature (LPSN), Ribosomal Database Project (RDP), the Silva/LTP project and several new metagenomics resources. The NAR online Molecular Biology Database Collection, http://www.oxfordjournals.org/nar/database/c/, has been expanded to 1552 databases. The entire Database Issue is freely available online on the Nucleic Acids Research website (http://nar.oxfordjournals.org/).

  8. Database Description - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us RPD Database Description General information of database Database name RPD Alternative name Rice Proteome Database...titute of Crop Science, National Agriculture and Food Research Organization Setsuko Komatsu E-mail: Database... classification Proteomics Resources Plant databases - Rice Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database... description Rice Proteome Database contains information on protei...AGE) reference maps. Features and manner of utilization of database Proteins extracted from organs and subce

  9. Database Description - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us ASTRA Database Description General information of database Database name ASTRA Alternative n...tics Journal Search: Contact address Database classification Nucleotide Sequence Databases - Gene structure,...3702 Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description The database represents classified p...mes. Features and manner of utilization of database This database enables to sear...ch and represent alternative splicing/transcriptional initiation genes and their patterns (ex: cassette) base

  10. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Trypanosomes Database Database Description General information of database Database name Trypanosomes Database...stitute of Genetics Research Organization of Information and Systems Yata 1111, Mishima, Shizuoka 411-8540, JAPAN E mail: Database... classification Protein sequence databases Organism Taxonom...y Name: Trypanosoma Taxonomy ID: 5690 Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database description The Trypanosomes database... is a database providing the comprehensive information of proteins that is effective t

  11. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Arabidopsis Phenome Database Database Description General information of database Database n...ame Arabidopsis Phenome Database Alternative name - DOI 10.18908/lsdba.nbdc01509-000 Creator Creator Name: H... BioResource Center Hiroshi Masuya Database classification Plant databases - Arabidopsis thaliana Organism T...axonomy Name: Arabidopsis thaliana Taxonomy ID: 3702 Database description The Arabidopsis thaliana phenome i...heir effective application. We developed the new Arabidopsis Phenome Database integrating two novel database

  12. CAR2 - Czech Database of Car Speech

    Directory of Open Access Journals (Sweden)

    P. Sovka

    1999-12-01

    Full Text Available This paper presents a new Czech language two-channel (stereo) speech database recorded in a car environment. The created database was designed for experiments with speech enhancement for communication purposes and for the study and design of robust speech recognition systems. Tools for automated phoneme labelling based on Baum-Welch re-estimation were realised. The noise analysis of the car background environment was done.

  13. CAR2 - Czech Database of Car Speech

    OpenAIRE

    Pollak, P.; Vopicka, J.; Hanzl, V.; Sovka, Pavel

    1999-01-01

    This paper presents a new Czech language two-channel (stereo) speech database recorded in a car environment. The created database was designed for experiments with speech enhancement for communication purposes and for the study and design of robust speech recognition systems. Tools for automated phoneme labelling based on Baum-Welch re-estimation were realised. The noise analysis of the car background environment was done.

  14. Database activities at Brookhaven National Laboratory

    International Nuclear Information System (INIS)

    Trahern, C.G.

    1995-01-01

    Brookhaven National Laboratory is a multi-disciplinary lab in the DOE system of research laboratories. Database activities are correspondingly diverse within the restrictions imposed by the dominant relational database paradigm. The authors discuss related activities and tools used in RHIC and in the other major projects at BNL. The others are the Protein Data Bank, maintained by the Chemistry department, and a Geographical Information System (GIS), a Superfund-sponsored environmental monitoring project under development in the Office of Environmental Restoration

  15. Solid Waste Projection Model: Database User's Guide

    International Nuclear Information System (INIS)

    Blackburn, C.L.

    1993-10-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for using Version 1.4 of the SWPM database: system requirements and preparation, entering and maintaining data, and performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions and does not provide instruction in the use of Paradox, the database management system in which the SWPM database is established

  16. Annotation and retrieval in protein interaction databases

    Science.gov (United States)

    Cannataro, Mario; Hiram Guzzi, Pietro; Veltri, Pierangelo

    2014-06-01

    Biological databases have been developed with a special focus on the efficient retrieval of single records or the efficient computation of specialized bioinformatics algorithms against the overall database, such as in sequence alignment. The continuous production of biological knowledge, spread across several biological databases and ontologies such as Gene Ontology, and the availability of efficient techniques to handle such knowledge, such as annotation and semantic similarity measures, enable the development of novel bioinformatics applications that explicitly use and integrate such knowledge. After introducing the annotation process and the main semantic similarity measures, this paper shows how annotations and semantic similarity can be exploited to improve the extraction and analysis of biologically relevant data from protein interaction databases. As case studies, the paper presents two novel software tools, OntoPIN and CytoSeVis, both based on the use of Gene Ontology annotations, for the advanced querying of protein interaction databases and for the enhanced visualization of protein interaction networks.
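    One of the simplest annotation-based similarity measures alluded to above is a set-overlap score between the Gene Ontology terms annotated to two proteins. The sketch below uses a plain Jaccard index; the protein-to-GO-term assignments are invented for the example (the GO identifiers themselves are real terms), and real tools typically use more sophisticated, ontology-aware measures:

```python
# Annotation-based similarity between proteins via a Jaccard index over
# their GO term sets. Term assignments here are invented for illustration.
def jaccard(a, b):
    """Jaccard index of two annotation sets, in [0.0, 1.0]."""
    return len(a & b) / len(a | b) if a | b else 0.0

annotations = {
    "P1": {"GO:0005515", "GO:0006915", "GO:0005634"},
    "P2": {"GO:0005515", "GO:0006915"},
    "P3": {"GO:0016020"},
}

# P1 and P2 share 2 of 3 distinct terms; P1 and P3 share none.
print(round(jaccard(annotations["P1"], annotations["P2"]), 3))
```

    Scores like this can rank or filter interactions retrieved from a protein interaction database, so that functionally implausible pairs are down-weighted.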

  17. Hazard Analysis Database Report

    CERN Document Server

    Grams, W H

    2000-01-01

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: Data from t...

  18. National Database of Geriatrics

    DEFF Research Database (Denmark)

    Kannegaard, Pia Nimann; Vinding, Kirsten L; Hare-Bruun, Helle

    2016-01-01

    AIM OF DATABASE: The aim of the National Database of Geriatrics is to monitor the quality of interdisciplinary diagnostics and treatment of patients admitted to a geriatric hospital unit. STUDY POPULATION: The database population consists of patients who were admitted to a geriatric hospital unit....... Geriatric patients cannot be defined by specific diagnoses. A geriatric patient is typically a frail multimorbid elderly patient with decreasing functional ability and social challenges. The database includes 14-15,000 admissions per year, and the database completeness has been stable at 90% during the past......, percentage of discharges with a rehabilitation plan, and the part of cases where an interdisciplinary conference has taken place. Data are recorded by doctors, nurses, and therapists in a database and linked to the Danish National Patient Register. DESCRIPTIVE DATA: Descriptive patient-related data include...

  19. The Danish ventral hernia database

    DEFF Research Database (Denmark)

    Helgstrand, Frederik; Jorgensen, Lars Nannestad

    2016-01-01

    and beyond. A total of 80% of all ventral hernia repairs performed in Denmark were reported to the DVHD. Main variables: Demographic data (age, sex, and center), detailed hernia description (eg, type, size, surgical priority), and technical aspects (open/laparoscopic and mesh related factors) related...... of operations and is an excellent tool for observing changes over time, including adjustment of several confounders. This national database registry has impacted on clinical practice in Denmark and led to a high number of scientific publications in recent years....

  20. AMDD: Antimicrobial Drug Database

    OpenAIRE

    Danishuddin, Mohd; Kaushal, Lalima; Hassan Baig, Mohd; Khan, Asad U.

    2012-01-01

    Drug resistance is one of the major concerns for antimicrobial chemotherapy against any particular target. Knowledge of the primary structure of antimicrobial agents and their activities is essential for rational drug design. Thus, we developed a comprehensive database, the Antimicrobial Drug Database (AMDD), of known synthetic antibacterial and antifungal compounds that were extracted from the available literature and other chemical databases, e.g., PubChem, PubChem BioAssay and ZINC. The ...

  1. Molecular Biology Database List.

    Science.gov (United States)

    Burks, C

    1999-01-01

    Molecular Biology Database List (MBDL) includes brief descriptions and pointers to Web sites for the various databases described in this issue as well as other Web sites presenting data sets relevant to molecular biology. This information is compiled into a list (http://www.oup.co.uk/nar/Volume_27/Issue_01/summary/gkc105_gml.html) which includes links both to source Web sites and to on-line versions of articles describing the databases. PMID:9847130

  2. Database principles programming performance

    CERN Document Server

    O'Neil, Patrick

    2014-01-01

    Database: Principles Programming Performance provides an introduction to the fundamental principles of database systems. This book focuses on database programming and the relationships between principles, programming, and performance.Organized into 10 chapters, this book begins with an overview of database design principles and presents a comprehensive introduction to the concepts used by a DBA. This text then provides grounding in many abstract concepts of the relational model. Other chapters introduce SQL, describing its capabilities and covering the statements and functions of the programmi

  3. LOWELL OBSERVATORY COMETARY DATABASE

    Data.gov (United States)

    National Aeronautics and Space Administration — The database presented here is comprised entirely of observations made utilizing conventional photoelectric photometers and narrowband filters isolating 5 emission...

  4. Transporter Classification Database (TCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  5. The Relational Database Dictionary

    CERN Document Server

    Date, C. J.

    2006-01-01

    Avoid misunderstandings that can affect the design, programming, and use of database systems. Whether you're using Oracle, DB2, SQL Server, MySQL, or PostgreSQL, The Relational Database Dictionary will prevent confusion about the precise meaning of database-related terms (e.g., attribute, 3NF, one-to-many correspondence, predicate, repeating group, join dependency), helping to ensure the success of your database projects. Carefully reviewed for clarity, accuracy, and completeness, this authoritative and comprehensive quick-reference contains more than 600 terms, many with examples, covering i

  6. Key health indicators database.

    Science.gov (United States)

    Menic, J L

    1990-01-01

    A new database developed by the Canadian Centre for Health Information (CCHI) contains 40 key health indicators and lets users select a range of disaggregations, categories and variables. The database can be accessed through CANSIM, Statistics Canada's electronic database and retrieval system, or through a package for personal computers. This package includes the database on diskettes, as well as software for retrieving and manipulating data and for producing graphics. A data dictionary, a user's guide and tables and graphs that highlight aspects of each indicator are also included.

  7. Intermodal Passenger Connectivity Database -

    Data.gov (United States)

    Department of Transportation — The Intermodal Passenger Connectivity Database (IPCD) is a nationwide data table of passenger transportation terminals, with data on the availability of connections...

  8. IVR EFP Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database contains trip-level reports submitted by vessels participating in Exempted Fishery projects with IVR reporting requirements.

  9. Residency Allocation Database

    Data.gov (United States)

    Department of Veterans Affairs — The Residency Allocation Database is used to determine allocation of funds for residency programs offered by Veterans Affairs Medical Centers (VAMCs). Information...

  10. Smart Location Database - Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census block...

  11. Smart Location Database - Download

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census block...

  12. Towards Sensor Database Systems

    DEFF Research Database (Denmark)

    Bonnet, Philippe; Gehrke, Johannes; Seshadri, Praveen

    2001-01-01

    . These systems lack flexibility because data is extracted in a predefined way; also, they do not scale to a large number of devices because large volumes of raw data are transferred regardless of the queries that are submitted. In our new concept of sensor database system, queries dictate which data is extracted...... from the sensors. In this paper, we define the concept of sensor databases mixing stored data represented as relations and sensor data represented as time series. Each long-running query formulated over a sensor database defines a persistent view, which is maintained during a given time interval. We...... also describe the design and implementation of the COUGAR sensor database system....
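The query-driven extraction idea in this record, where a long-running query is maintained as a persistent view over sensor readings rather than bulk-extracting raw data, can be sketched as follows (a hypothetical toy illustration only; COUGAR's actual declarative query machinery and all names used here are invented):

```python
from dataclasses import dataclass, field

@dataclass
class PersistentView:
    """Maintains the running result of a long-running query over sensor
    readings. The 'query' here is a simple threshold filter -- a stand-in
    for a declarative sensor-database query, which this sketch does not model."""
    threshold: float
    matches: list = field(default_factory=list)

    def on_reading(self, sensor_id: str, t: int, value: float) -> None:
        # Query logic runs as data arrives at the view, instead of
        # transferring all raw data regardless of the query.
        if value > self.threshold:
            self.matches.append((sensor_id, t, value))

view = PersistentView(threshold=30.0)
for reading in [("s1", 0, 21.5), ("s2", 1, 35.2), ("s1", 2, 31.0)]:
    view.on_reading(*reading)

print(view.matches)  # only the readings that satisfy the query
```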

  13. Database Publication Practices

    DEFF Research Database (Denmark)

    Bernstein, P.A.; DeWitt, D.; Heuer, A.

    2005-01-01

    There has been a growing interest in improving the publication processes for database research papers. This panel reports on recent changes in those processes and presents an initial cut at historical data for the VLDB Journal and ACM Transactions on Database Systems.

  14. Veterans Administration Databases

    Science.gov (United States)

    The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.

  15. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us RMG Database Description General information of database Database name RMG Alternative name ...raki 305-8602, Japan National Institute of Agrobiological Sciences E-mail : Database classification Nucleotide Sequence Databases Organism Taxonomy Name: Oryza sativa Japonica Group Taxonomy ID: 39947 Database description This database contains information on the rice mitochondrial genome. You ca...sis results. Features and manner of utilization of database The mitochondrial genome information can be used

  16. Database Description - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us JSNP Database Description General information of database Database name JSNP Alternative nam...n Science and Technology Agency Creator Affiliation: Contact address E-mail : Database classification Human Genes and Diseases - General polymorphism databases Organism Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database description A database of about 197,000 polymorphisms in Japanese populat... and manner of utilization of database Allele frequencies in Japanese population are also available. License

  17. Update History of This Database - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Arabidopsis Phenome Database Update History of This Database Date Update contents 2017/02/27 Arabidopsis Phenome Database English archive site is opened. - Arabidopsis Phenome Database (http://jphenome.info/?page_id=95) is opened. About This Database Database Description Download License Update History of This Database Site Policy | Contact Us Update History of This Database - Arabidopsis Phenome Database | LSDB Archive ...

  18. Update History of This Database - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us SKIP Stemcell Database Update History of This Database Date Update contents 2017/03/13 SKIP Stemcell Database English archive site is opened. 2013/03/29 SKIP Stemcell Database ( https://www.skip.med.keio.ac.jp/SKIPSearch/top?lang=en ) is opened. About This Database Database Description Download License Update History of This Database Site Policy | Contact Us Update History of This Database - SKIP Stemcell Database | LSDB Archive ...

  19. Kazusa Marker DataBase: a database for genomics, genetics, and molecular breeding in plants

    Science.gov (United States)

    Shirasawa, Kenta; Isobe, Sachiko; Tabata, Satoshi; Hirakawa, Hideki

    2014-01-01

    In order to provide useful genomic information for agronomical plants, we have established a database, the Kazusa Marker DataBase (http://marker.kazusa.or.jp). This database includes information on DNA markers, e.g., SSR and SNP markers, genetic linkage maps, and physical maps, that were developed at the Kazusa DNA Research Institute. Keyword searches for the markers, sequence data used for marker development, and experimental conditions are also available through this database. Currently, 10 plant species have been targeted: tomato (Solanum lycopersicum), pepper (Capsicum annuum), strawberry (Fragaria × ananassa), radish (Raphanus sativus), Lotus japonicus, soybean (Glycine max), peanut (Arachis hypogaea), red clover (Trifolium pratense), white clover (Trifolium repens), and eucalyptus (Eucalyptus camaldulensis). In addition, the number of plant species registered in this database will be increased as our research progresses. The Kazusa Marker DataBase will be a useful tool for both basic and applied sciences, such as genomics, genetics, and molecular breeding in crops. PMID:25320561

  20. Toward An Unstructured Mesh Database

    Science.gov (United States)

    Rezaei Mahdiraji, Alireza; Baumann, Peter

    2014-05-01

    Unstructured meshes are used in several application domains, such as earth sciences (e.g., seismology), medicine, oceanography, climate modeling, and GIS, as approximate representations of physical objects. Meshes subdivide a domain into smaller geometric elements (called cells) which are glued together by incidence relationships. The subdivision of a domain allows computational manipulation of complicated physical structures. For instance, seismologists model earthquakes using elastic wave propagation solvers on hexahedral meshes. Such a hexahedral mesh contains several hundred million grid points and millions of hexahedral cells, and each vertex node stores a multitude of data fields. To run simulations on such meshes, one needs to iterate over all the cells, iterate over the cells incident to a given cell, retrieve coordinates of cells, assign data values to cells, etc. Although meshes are used in many application domains, to the best of our knowledge there is no database vendor that supports unstructured mesh features. Currently, the main tools for querying and manipulating unstructured meshes are mesh libraries, e.g., CGAL and GRAL. Mesh libraries are dedicated libraries which include mesh algorithms and can be run on mesh representations. The libraries do not scale with dataset size, do not have a declarative query language, and need deep C++ knowledge for query implementations. Furthermore, due to high coupling between the implementations and the input file structure, the implementations are less reusable and costly to maintain. A dedicated mesh database offers the following advantages: 1) declarative querying, 2) ease of maintenance, 3) hiding mesh storage structure from applications, and 4) transparent query optimization. To design a mesh database, the first challenge is to define a suitable generic data model for unstructured meshes. We proposed the ImG-Complexes data model as a generic topological mesh data model which extends the incidence graph model to multi
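The incidence-based traversals described above (iterate over all cells, iterate over the cells incident to a given cell) can be sketched with a plain adjacency structure (an illustrative toy, not the ImG-Complexes model itself; the mesh and all names here are invented):

```python
from collections import defaultdict

# A tiny incidence sketch for an unstructured mesh: cells reference their
# vertices, and the reverse map supports "iterate over cells incident to a
# given vertex/cell" -- the traversal pattern the abstract describes.
cells = {  # cell id -> vertex ids (two triangles sharing an edge)
    "t0": (0, 1, 2),
    "t1": (1, 2, 3),
}

incident_cells = defaultdict(set)  # vertex id -> cells touching it
for cid, verts in cells.items():
    for v in verts:
        incident_cells[v].add(cid)

def neighbors(cid):
    """Cells sharing at least one vertex with `cid` (excluding itself)."""
    return {other for v in cells[cid] for other in incident_cells[v]} - {cid}

print(neighbors("t0"))  # the cells incident to t0 via shared vertices
```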

  1. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with the creation of a database design for a standard kindergarten, installation of the designed database into the database system Oracle Database 10g Express Edition, and demonstration of administration tasks in this database system. The database design was verified by means of an access application developed for this purpose.

  2. Enhanced DIII-D Data Management Through a Relational Database

    Science.gov (United States)

    Burruss, J. R.; Peng, Q.; Schachter, J.; Schissel, D. P.; Terpstra, T. B.

    2000-10-01

    A relational database is being used to serve data about DIII-D experiments. The database is optimized for queries across multiple shots, allowing for rapid data mining by SQL-literate researchers. The relational database relates different experiments and datasets, thus providing a big picture of DIII-D operations. Users are encouraged to add their own tables to the database. Summary physics quantities about DIII-D discharges are collected and stored in the database automatically. Meta-data about code runs, MDSplus usage, and visualization tool usage are collected, stored in the database, and later analyzed to improve computing. Data in the database may be accessed through programming languages such as C, Java, and IDL, or through ODBC-compliant applications such as Excel and Access. A database-driven web page also provides a convenient means for viewing database quantities through the World Wide Web. Demonstrations will be given at the poster.

  3. HIV Structural Database

    Science.gov (United States)

    SRD 102 HIV Structural Database (Web, free access)   The HIV Protease Structural Database is an archive of experimentally determined 3-D structures of Human Immunodeficiency Virus 1 (HIV-1), Human Immunodeficiency Virus 2 (HIV-2) and Simian Immunodeficiency Virus (SIV) Proteases and their complexes with inhibitors or products of substrate cleavage.

  4. Structural Ceramics Database

    Science.gov (United States)

    SRD 30 NIST Structural Ceramics Database (Web, free access)   The NIST Structural Ceramics Database (WebSCD) provides evaluated materials property data for a wide range of advanced ceramics known variously as structural ceramics, engineering ceramics, and fine ceramics.

  5. The international spinach database

    NARCIS (Netherlands)

    Treuren, van R.; Menting, F.B.J.

    2007-01-01

    The database concentrates on passport data of spinach accessions from germplasm collections worldwide. All available passport data of accessions included in the International Spinach Database are downloadable as a zipped Excel file. This zip file also contains the decoding tables, except for the FAO institutes

  6. Directory of IAEA databases

    International Nuclear Information System (INIS)

    1992-12-01

    This second edition of the Directory of IAEA Databases has been prepared within the Division of Scientific and Technical Information (NESI). Its main objective is to describe the computerized information sources available to staff members. This directory contains all databases produced at the IAEA, including databases stored on the mainframe, LANs and PCs. All IAEA Division Directors have been requested to register the existence of their databases with NESI. For the second edition, database owners were requested to review the existing entries for their databases and answer four additional questions. The four additional questions concerned the type of database (e.g. Bibliographic, Text, Statistical etc.), the category of database (e.g. Administrative, Nuclear Data etc.), the available documentation and the type of media used for distribution. In the individual entries on the following pages the answers to the first two questions (type and category) are always listed, but the answers to the second two questions (documentation and media) are only listed when the information has been made available

  7. Atomic Spectra Database (ASD)

    Science.gov (United States)

    SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access)   This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.

  8. Children's Culture Database (CCD)

    DEFF Research Database (Denmark)

    Wanting, Birgit

    A Dialogue-inspired database with documentation, network (individual and institutional profiles) and current news, paper presented at the research seminar: Electronic access to fiction, Copenhagen, November 11-13, 1996

  9. Odense Pharmacoepidemiological Database (OPED)

    DEFF Research Database (Denmark)

    Hallas, Jesper; Poulsen, Maja Hellfritzsch; Hansen, Morten Rix

    2017-01-01

    The Odense University Pharmacoepidemiological Database (OPED) is a prescription database established in 1990 by the University of Southern Denmark, covering reimbursed prescriptions from the county of Funen in Denmark and the region of Southern Denmark (1.2 million inhabitants). It is still active...

  10. Consumer Product Category Database

    Science.gov (United States)

    The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use information is compiled from multiple sources while product information is gathered from publicly available Material Safety Data Sheets (MSDS). EPA researchers are evaluating the possibility of expanding the database with additional product and use information.

  11. NoSQL database scaling

    OpenAIRE

    Žardin, Norbert

    2017-01-01

    NoSQL database scaling is a decision where system resources or financial expenses are traded for database performance or other benefits. By scaling a database, database performance and resource usage might increase or decrease, and such changes might have a negative impact on an application that uses the database. In this work it is analyzed how database scaling affects database resource usage and performance. As a result, calculations are acquired, using which database scaling types and differe...

  12. Database Access through Java Technologies

    Directory of Open Access Journals (Sweden)

    Nicolae MERCIOIU

    2010-09-01

    Full Text Available As a high-level development environment, the Java technologies offer support for the development of platform-independent distributed applications, providing a robust set of methods to access databases, used to create software components on the server side as well as on the client side. Analyzing the evolution of Java tools for data access, we notice that these tools evolved from simple methods permitting queries, insertion, update and deletion of data to advanced implementations such as distributed transactions, cursors and batch files. The client-server architecture allows, through JDBC (Java Database Connectivity), the execution of SQL (Structured Query Language) statements and the manipulation of the results in an independent and consistent manner. The JDBC API (Application Programming Interface) creates the level of abstraction needed to allow SQL queries to be issued against any DBMS (Database Management System). The native JDBC driver, the ODBC (Open Database Connectivity)-JDBC bridge, and the classes and interfaces of the JDBC API are described. The four steps needed to build a JDBC-driven application are presented briefly, emphasizing how each step is accomplished and the expected results. In each step, the characteristics of the database systems and the way the JDBC programming interface adapts to each one are evaluated. The data types provided by the SQL2 and SQL3 standards are analyzed by comparison with the Java data types, emphasizing the discrepancies between them, as well as the methods of the ResultSet object that allow conversion between different types of data. Next, starting from the role of metadata and studying the Java programming interfaces that allow querying of result sets, we describe the advanced features of data mining with JDBC. As an alternative to result sets, RowSets add new functionalities that
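The connect, execute, iterate-result-set pattern this abstract describes for JDBC has a direct analogue in Python's DB-API, sketched here with the standard-library sqlite3 module (an illustration of the pattern only, not JDBC itself, which is Java-specific; the table and values are invented):

```python
import sqlite3

# The JDBC-style pattern: obtain a connection, execute SQL statements,
# then iterate the result set -- here via Python's DB-API and an
# in-memory SQLite database.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.executemany("INSERT INTO users (name) VALUES (?)", [("Ana",), ("Bo",)])
conn.commit()

# Analogous to stepping through a JDBC ResultSet row by row.
rows = cur.execute("SELECT id, name FROM users ORDER BY id").fetchall()
print(rows)  # [(1, 'Ana'), (2, 'Bo')]

conn.close()
```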

  13. CubeSat Design Tool

    Data.gov (United States)

    National Aeronautics and Space Administration — The CubeSat Design Tool will assist CubeSat mission design teams in creating a feasible CubeSat proposal. An extensive CubeSat database contains data on CubeSat...

  14. Database Description - DGBY | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us DGBY Database Description General information of database Database name DGBY Alternative name Database... TEL: +81-29-838-8066 E-mail: Database classification Microarray Data and other Gene Expression Databases Organism Taxonomy Name: Saccharomyces cerevisiae Taxonomy ID: 4932 Database description ...-called phenomics). We uploaded these data on this website, which is designated DGBY (Database for Gene expression and function of Baker's yeast). Features and manner of utilization of database This database

  15. DataSpread: Unifying Databases and Spreadsheets.

    Science.gov (United States)

    Bendre, Mangesh; Sun, Bofan; Zhang, Ding; Zhou, Xinyan; Chang, Kevin ChenChuan; Parameswaran, Aditya

    2015-08-01

    Spreadsheet software is often the tool of choice for ad-hoc tabular data management, processing, and visualization, especially on tiny data sets. On the other hand, relational database systems offer significant power, expressivity, and efficiency over spreadsheet software for data management, while lacking in the ease of use and ad-hoc analysis capabilities. We demonstrate DataSpread, a data exploration tool that holistically unifies databases and spreadsheets. It continues to offer a Microsoft Excel-based spreadsheet front-end, while in parallel managing all the data in a back-end database, specifically, PostgreSQL. DataSpread retains all the advantages of spreadsheets, including ease of use, ad-hoc analysis and visualization capabilities, and a schema-free nature, while also adding the advantages of traditional relational databases, such as scalability and the ability to use arbitrary SQL to import, filter, or join external or internal tables and have the results appear in the spreadsheet. DataSpread needs to reason about and reconcile differences in the notions of schema, addressing of cells and tuples, and the current "pane" (which exists in spreadsheets but not in traditional databases), and support data modifications at both the front-end and the back-end. Our demonstration will center on our first and early prototype of the DataSpread, and will give the attendees a sense for the enormous data exploration capabilities offered by unifying spreadsheets and databases.

  16. GOTTCHA Database, Version 1

    Energy Technology Data Exchange (ETDEWEB)

    2015-08-03

    One major challenge in the field of shotgun metagenomics is the accurate identification of the organisms present within the community, based on classification of short sequence reads. Though microbial community profiling methods have emerged to attempt to rapidly classify the millions of reads output from contemporary sequencers, the combination of incomplete databases, similarity among otherwise divergent genomes, and the large volumes of sequencing data required for metagenome sequencing has led to unacceptably high false discovery rates (FDR). Here we present the application of a novel, gene-independent and signature-based metagenomic taxonomic profiling tool with a significantly smaller FDR, which is also capable of classifying never-before-seen genomes into the appropriate parent taxa. The algorithm is based upon three primary computational phases: (I) genomic decomposition into bit vectors, (II) bit vector intersections to identify shared regions, and (III) bit vector subtractions to remove shared regions and reveal unique, signature regions. In the Decomposition phase, genomic data are first masked to highlight only the valid (non-ambiguous) regions and then decomposed into overlapping 24-mers. The k-mers are sorted along with their start positions, de-replicated, and then prefixed to minimize data duplication. The prefixes are indexed, and an identical data structure is created for the start positions to mimic that of the k-mer data structure. During the Intersection phase -- the most computationally intensive phase, as an all-vs-all comparison is made -- the number of comparisons is first reduced by four methods: (a) prefix restriction, (b) overlap detection, (c) overlap restriction, and (d) result recording. In prefix restriction, only k-mers with the same prefix are compared. Within that group, potential overlaps of k-mer suffixes that would result in a non-empty set intersection are screened for.
If such an overlap exists, the region which intersects is
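The Decomposition phase described above (mask ambiguous bases, then emit sorted, de-replicated overlapping k-mers with their start positions) can be sketched as follows (an illustrative toy on plain strings, not GOTTCHA's bit-vector and prefix-indexed implementation; all names and the k=4 setting are invented for brevity):

```python
def decompose(seq: str, k: int = 24):
    """Decomposition-phase sketch: keep only unambiguous bases, then emit
    sorted, de-replicated overlapping k-mers with their start positions.
    GOTTCHA's real pipeline uses bit vectors and prefix indexing, omitted here."""
    valid = set("ACGT")
    kmers = {}
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if set(kmer) <= valid:            # mask out ambiguous regions (e.g. N)
            kmers.setdefault(kmer, []).append(i)
    return sorted(kmers.items())          # sorted and de-replicated

# k=4 instead of 24 just to keep the example short.
mers = decompose("ACGTACGTNACGT", k=4)
print(mers)
```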

  17. Integrated Radiation Analysis and Design Tools

    Data.gov (United States)

    National Aeronautics and Space Administration — The Integrated Radiation Analysis and Design Tools (IRADT) Project develops and maintains an integrated tool set that collects the current best practices, databases,...

  18. Immune epitope database analysis resource (IEDB-AR)

    DEFF Research Database (Denmark)

    Zhang, Qing; Wang, Peng; Kim, Yohan

    2008-01-01

    We present a new release of the immune epitope database analysis resource (IEDB-AR, http://tools.immuneepitope.org), a repository of web-based tools for the prediction and analysis of immune epitopes. New functionalities have been added to most of the previously implemented tools, and a total...

  19. SPECIES DATABASES AND THE BIOINFORMATICS REVOLUTION.

    Science.gov (United States)

    Biological databases are having a growth spurt. Much of this results from research in genetics and biodiversity, coupled with fast-paced developments in information technology. The revolution in bioinformatics, defined by Sugden and Pennisi (2000) as the "tools and techniques for...

  20. The PROSITE database, its status in 1997.

    Science.gov (United States)

    Bairoch, A; Bucher, P; Hofmann, K

    1997-01-01

    The PROSITE database consists of biologically significant patterns and profiles formulated in such a way that with appropriate computational tools it can help to determine to which known family of protein (if any) a new sequence belongs, or which known domain(s) it contains. PMID:9016539

  1. Why and How to Benchmark XML Databases

    NARCIS (Netherlands)

    A.R. Schmidt; F. Waas; M.L. Kersten (Martin); D. Florescu; M.J. Carey; I. Manolescu; R. Busse

    2001-01-01

    Benchmarks belong to the very standard repertory of tools deployed in database development. Assessing the capabilities of a system, analyzing actual and potential bottlenecks, and, naturally, comparing the pros and cons of different system architectures have become indispensable tasks

  2. Hazard Analysis Database Report

    Energy Technology Data Exchange (ETDEWEB)

    GAULT, G.W.

    1999-10-13

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for the Tank Waste Remediation System (TWRS) Final Safety Analysis Report (FSAR). The FSAR is part of the approved TWRS Authorization Basis (AB). This document describes, identifies, and defines the contents and structure of the TWRS FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The TWRS Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The database supports the preparation of Chapters 3, 4, and 5 of the TWRS FSAR and the USQ process and consists of two major, interrelated data sets: (1) Hazard Evaluation Database--Data from the results of the hazard evaluations; and (2) Hazard Topography Database--Data from the system familiarization and hazard identification.

  3. The UCSC Genome Browser Database: 2008 update

    DEFF Research Database (Denmark)

    Karolchik, D; Kuhn, R M; Baertsch, R

    2007-01-01

    The University of California, Santa Cruz, Genome Browser Database (GBD) provides integrated sequence and annotation data for a large collection of vertebrate and model organism genomes. Seventeen new assemblies have been added to the database in the past year, for a total coverage of 19 vertebrate and 21 invertebrate species as of September 2007. For each assembly, the GBD contains a collection of annotation data aligned to the genomic sequence. Highlights of this year's additions include a 28-species human-based vertebrate conservation annotation, an enhanced UCSC Genes set, and more human variation, MGC, and ENCODE data. The database is optimized for fast interactive performance with a set of web-based tools that may be used to view, manipulate, filter and download the annotation data. New toolset features include the Genome Graphs tool for displaying genome-wide data sets, session saving...

  4. Incident Information Management Tool

    CERN Document Server

    Pejovic, Vladimir

    2015-01-01

    Flaws of current incident information management at CMS and CERN are discussed. A new data model for a future incident database is proposed and briefly described. A recently developed draft version of a GIS-based tool for incident tracking is presented.

  5. JICST Factual Database(2)

    Science.gov (United States)

    Araki, Keisuke

    A computer program that builds atom-bond connection tables from nomenclature has been developed. Chemical substances, with their nomenclature and a variety of trivial names or experimental code numbers, are inputted. The chemical structures in the database are stored stereospecifically and can be searched and displayed according to stereochemistry. Source data are from the laws and regulations of Japan, RTECS of the US, and so on. The database plays a central role within the integrated fact database service of JICST and makes interrelational retrieval possible.

  6. LandIT Database

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Pedersen, Torben Bach

    2010-01-01

    and reporting purposes. This paper presents the LandIT database, which is the result of the LandIT project, an industrial collaboration project that developed technologies for communication and data integration between farming devices and systems. The LandIT database in principle is based...... on the ISOBUS standard; however, the standard is extended with additional requirements, such as gradual data aggregation and flexible exchange of farming data. This paper describes the conceptual and logical schemas of the proposed database based on a real-life farming case study....

  7. NSWC Crane Aerospace Cell Test History Database

    Science.gov (United States)

    Brown, Harry; Moore, Bruce

    1994-01-01

    The Aerospace Cell Test History Database was developed to provide project engineers and scientists ready access to the data obtained from testing of aerospace cell designs at Naval Surface Warfare Center, Crane Division. The database is intended for use by all aerospace engineers and scientists involved in the design of power systems for satellites. Specifically, the database will provide a tool for project engineers to review the progress of their test at Crane and to have ready access to data for evaluation. Additionally, the database will provide a history of test results that designers can draw upon to answer questions about cell performance under certain test conditions and aid in selection of a cell for a satellite battery. Viewgraphs are included.

  8. Database Description - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us SAHG Database Description General information of database Database name SAHG Alternative nam...h: Contact address Chie Motono Tel : +81-3-3599-8067 E-mail : Database classification Structure Databases - Protein structure Human and other Vertebrate Genomes - Human ORFs Protein sequence databases - Protein properties Organism Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database description ... 42,577 domain-structure models in ~24,900 unique human protein sequences from the RefSeq database. Features a

  9. Database Description - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available PSCDB Database Description. Database name: PSCDB. Contact: Takayuki Amemiya, National Institute of Advanced Industrial Science and Technology (AIST). Database classification: Structure Databases - Protein structure. Database description: the purpose of this database is to represent the relationship between protein structural change and ligand binding. License: CC BY-SA. Reference: PSCDB: a database for protein structural change upon ligand binding; T. Amemiya et al.

  10. Database Description - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available PLACE Database Description. Database name: PLACE. Alternative name: A Database... Contact: National Institute of Agrobiological Sciences, Kannondai, Tsukuba, Ibaraki 305-8602, Japan. Database classification: Plant databases. Organism: Tracheophyta (Taxonomy ID: 58023). Database description: PLACE is a database of motifs found in plant cis-acting regulatory DNA elements, together with variations that have been identified in these motifs in other genes or in other plant species in later publications. The database...

  11. Functionally Graded Materials Database

    Science.gov (United States)

    Kisara, Katsuto; Konno, Tomomi; Niino, Masayuki

    2008-02-01

    Functionally Graded Materials Database (hereinafter referred to as FGMs Database) was opened to the public via the Internet in October 2002, and since then it has been managed by the Japan Aerospace Exploration Agency (JAXA). As of October 2006, the database included 1,703 research information entries, along with data on 2,429 researchers, 509 institutions, and so on. Reading materials such as "Applicability of FGMs Technology to Space Plane" and "FGMs Application to Space Solar Power System (SSPS)" were prepared in FY 2004 and 2005, respectively. The English version of "FGMs Application to Space Solar Power System (SSPS)" is now under preparation. This paper explains the FGMs Database, describing the research information data, the sitemap, and how to use it. User access results and users' interests, drawn from the access analysis, are also discussed.

  12. Marine Jurisdictions Database

    National Research Council Canada - National Science Library

    Goldsmith, Roger

    1998-01-01

    The purpose of this project was to take the data gathered for the Maritime Claims chart and create a Maritime Jurisdictions digital database suitable for use with oceanographic mission planning objectives...

  13. Medicare Coverage Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicare Coverage Database (MCD) contains all National Coverage Determinations (NCDs) and Local Coverage Determinations (LCDs), local articles, and proposed NCD...

  14. Children's Culture Database (CCD)

    DEFF Research Database (Denmark)

    Wanting, Birgit

    a Dialogue inspired database with documentation, network (individual and institutional profiles) and current news , paper presented at the research seminar: Electronic access to fiction, Copenhagen, November 11-13, 1996...

  15. The Danish Melanoma Database

    DEFF Research Database (Denmark)

    Hölmich, Lisbet Rosenkrantz; Klausen, Siri; Spaun, Eva

    2016-01-01

    AIM OF DATABASE: The aim of the database is to monitor and improve the treatment and survival of melanoma patients. STUDY POPULATION: All Danish patients with cutaneous melanoma and in situ melanomas must be registered in the Danish Melanoma Database (DMD). In 2014, 2,525 patients with invasive melanoma ... tumor-node-metastasis stage. Information about the date of diagnosis, treatment, type of surgery, including safety margins, results of lymphoscintigraphy in patients for whom this was indicated (tumors > T1a), results of sentinel node biopsy, pathological evaluation hereof, and follow-up information, including recurrence, nature, and treatment hereof, is registered. In case of death, the cause and date are included. Currently, all data are entered manually; however, data catchment from the existing registries is planned to be included shortly. DESCRIPTIVE DATA: The DMD is an old research database, but new as a clinical...

  16. Danish Urogynaecological Database

    DEFF Research Database (Denmark)

    Hansen, Ulla Darling; Gradel, Kim Oren; Larsen, Michael Due

    2016-01-01

    The Danish Urogynaecological Database is established in order to ensure high quality of treatment for patients undergoing urogynecological surgery. The database contains details of all women in Denmark undergoing incontinence surgery or pelvic organ prolapse surgery, amounting to ~5,200 procedures per year. The variables are collected along the course of treatment of the patient, from the referral to a postoperative control. Main variables are prior obstetrical and gynecological history, symptoms, symptom-related quality of life, objective urogynecological findings, type of operation, complications if relevant, implants used if relevant, and 3-6-month postoperative recording of symptoms, if any. A set of clinical quality indicators is maintained by the steering committee for the database and is published in an annual report, which also contains extensive descriptive statistics. The database...

  17. Danish Gynecological Cancer Database

    DEFF Research Database (Denmark)

    Sørensen, Sarah Mejer; Bjørn, Signe Frahm; Jochumsen, Kirsten Marie

    2016-01-01

    AIM OF DATABASE: The Danish Gynecological Cancer Database (DGCD) is a nationwide clinical cancer database whose aim is to monitor the treatment quality of Danish gynecological cancer patients and to generate data for scientific purposes. DGCD also records detailed data on the diagnostic measures for gynecological cancer. STUDY POPULATION: DGCD was initiated January 1, 2005, and includes all patients treated at Danish hospitals for cancer of the ovaries, peritoneum, fallopian tubes, cervix, vulva, vagina, and uterus, including rare histological types. MAIN VARIABLES: DGCD data are organized within separate... The main limitation is the registration of oncological treatment data, which is incomplete for a large number of patients. CONCLUSION: The very complete collection of available data from multiple registries forms one of the unique strengths of DGCD compared to many other clinical databases, and provides unique possibilities for validation...

  18. Reach Address Database (RAD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Reach Address Database (RAD) stores the reach address of each Water Program feature that has been linked to the underlying surface water features (streams,...

  19. Atomicity for XML Databases

    Science.gov (United States)

    Biswas, Debmalya; Jiwane, Ashwin; Genest, Blaise

    With more and more data stored in XML databases, there is a need to provide the same level of failure resilience and robustness that users have come to expect from relational database systems. In this work, we discuss strategies to provide the transactional aspect of atomicity to XML databases. The main contribution of this paper is a novel approach for performing updates-in-place on XML databases, with the undo statements stored in the same high-level language as the update statements. Finally, we give experimental results to study the performance/storage trade-off of the updates-in-place strategy (based on our undo proposal) against the deferred-updates strategy for providing atomicity.
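The undo-statement idea above can be sketched in a few lines: record an inverse operation before each in-place mutation, and replay the log in reverse on failure. This is a minimal illustration on a toy document with a "set text" update vocabulary, not the authors' actual implementation:

```python
import xml.etree.ElementTree as ET

def atomic_update(root, updates):
    """Apply (path, new_text) updates in place; on any failure,
    replay the undo log so the document is restored unchanged."""
    undo_log = []
    try:
        for path, new_text in updates:
            node = root.find(path)
            if node is None:
                raise KeyError(path)
            # Record the inverse operation before mutating.
            undo_log.append((path, node.text))
            node.text = new_text
    except Exception:
        # Roll back: apply inverse updates in reverse order.
        for path, old_text in reversed(undo_log):
            root.find(path).text = old_text
        raise

doc = ET.fromstring("<db><item id='1'>old</item><item id='2'>keep</item></db>")
try:
    atomic_update(doc, [(".//item[@id='1']", "new"),
                        (".//item[@id='3']", "boom")])  # second update fails
except KeyError:
    pass
```

Because the undo entries use the same vocabulary as the updates, rollback needs no separate low-level recovery machinery.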

  20. Mouse Phenome Database (MPD)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Mouse Phenome Database (MPD) has characterizations of hundreds of strains of laboratory mice to facilitate translational discoveries and to assist in selection...

  1. Ganymede Crater Database

    Data.gov (United States)

    National Aeronautics and Space Administration — This web page leads to a database of images and information about the 150 major impact craters on Ganymede and is updated semi-regularly based on continuing analysis...

  2. Dissolution Methods Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — For a drug product that does not have a dissolution test method in the United States Pharmacopeia (USP), the FDA Dissolution Methods Database provides information on...

  3. Toxicity Reference Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Toxicity Reference Database (ToxRefDB) contains approximately 30 years and $2 billion worth of animal studies. ToxRefDB allows scientists and the interested...

  4. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1994-05-27

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate the phase-out of chemical compounds of environmental concern.

  5. Database for West Africa

    African Journals Online (AJOL)

    NCRS USDA, English, morphology and analytical data. ISIS ISRIC, English ... problems. The compilation of the database cannot be carried out without adequate funding. It also needs strong and firm management. It is important that all participants ...

  6. Venus Crater Database

    Data.gov (United States)

    National Aeronautics and Space Administration — This web page leads to a database of images and information about the 900 or so impact craters on the surface of Venus by diameter, latitude, and name.

  7. Kansas Cartographic Database (KCD)

    Data.gov (United States)

    Kansas Data Access and Support Center — The Kansas Cartographic Database (KCD) is an exact digital representation of selected features from the USGS 7.5 minute topographic map series. Features that are...

  8. Drycleaner Database - Region 7

    Data.gov (United States)

    U.S. Environmental Protection Agency — THIS DATA ASSET NO LONGER ACTIVE: This is metadata documentation for the Region 7 Drycleaner Database (R7DryClnDB) which tracks all Region7 drycleaners who notify...

  9. National Assessment Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The National Assessment Database stores and tracks state water quality assessment decisions, Total Maximum Daily Loads (TMDLs) and other watershed plans designed to...

  10. Rat Genome Database (RGD)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Rat Genome Database (RGD) is a collaborative effort between leading research institutions involved in rat genetic and genomic research to collect, consolidate,...

  11. Medicaid CHIP ESPC Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Environmental Scanning and Program Characteristic (ESPC) Database is in a Microsoft (MS) Access format and contains Medicaid and CHIP data, for the 50 states and...

  12. Global Volcano Locations Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NGDC maintains a database of over 1,500 volcano locations obtained from the Smithsonian Institution Global Volcanism Program, Volcanoes of the World publication. The...

  13. IVR RSA Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database contains trip-level reports submitted by vessels participating in Research Set-Aside projects with IVR reporting requirements.

  14. NLCD 2011 database

    Data.gov (United States)

    U.S. Environmental Protection Agency — National Land Cover Database 2011 (NLCD 2011) is the most recent national land cover product created by the Multi-Resolution Land Characteristics (MRLC) Consortium....

  15. Livestock Anaerobic Digester Database

    Science.gov (United States)

    The Anaerobic Digester Database provides basic information about anaerobic digesters on livestock farms in the United States, organized in Excel spreadsheets. It includes projects that are under construction, operating, or shut down.

  16. Food Habits Database (FHDBS)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NEFSC Food Habits Database has two major sources of data. The first, and most extensive, is the standard NEFSC Bottom Trawl Surveys Program. During these...

  17. 1988 Spitak Earthquake Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 1988 Spitak Earthquake database is an extensive collection of geophysical and geological data, maps, charts, images and descriptive text pertaining to the...

  18. Consumer Product Category Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use...

  19. Callisto Crater Database

    Data.gov (United States)

    National Aeronautics and Space Administration — This web page leads to a database of images and information about the 150 major impact craters on Callisto and is updated semi-regularly based on continuing analysis...

  20. Uranium Location Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — A GIS compiled locational database in Microsoft Access of ~15,000 mines with uranium occurrence or production, primarily in the western United States. The metadata...

  1. Household Products Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — This database links over 4,000 consumer brands to health effects from Material Safety Data Sheets (MSDS) provided by the manufacturers and allows scientists and...

  2. Fine Arts Database (FAD)

    Data.gov (United States)

    General Services Administration — The Fine Arts Database records information on federally owned art in the control of the GSA; this includes the location, current condition and information on artists.

  3. The Danish Urogynaecological Database

    DEFF Research Database (Denmark)

    Guldberg, Rikke; Brostrøm, Søren; Hansen, Jesper Kjær

    2013-01-01

    INTRODUCTION AND HYPOTHESIS: The Danish Urogynaecological Database (DugaBase) is a nationwide clinical database established in 2006 to monitor, ensure and improve the quality of urogynaecological surgery. We aimed to describe its establishment and completeness and to validate selected variables. This is the first study based on data from the DugaBase. METHODS: The database completeness was calculated as a comparison between urogynaecological procedures reported to the Danish National Patient Registry and to the DugaBase. Validity was assessed for selected variables from a random sample of 200 women in the DugaBase from 1 January 2009 to 31 October 2010, using medical records as a reference. RESULTS: A total of 16,509 urogynaecological procedures were registered in the DugaBase by 31 December 2010. The database completeness has increased by calendar time, from 38.2% in 2007 to 93.2% in 2010 for public...

  4. The Brain Database: A Multimedia Neuroscience Database for Research and Teaching

    OpenAIRE

    Wertheim, Steven L.

    1989-01-01

    The Brain Database is an information tool designed to aid the integration of clinical and research results in neuroanatomy and regional biochemistry. It can handle a wide range of data types, including natural images, 2- and 3-dimensional graphics, video, numeric data, and text. It is organized around three main entities: structures, substances, and processes. The database will support a wide variety of graphical interfaces; two sample interfaces have been made.

  5. SWISS-PROT: connecting biomolecular knowledge via a protein database.

    Science.gov (United States)

    Gasteiger, E; Jung, E; Bairoch, A

    2001-07-01

    With the explosive growth of biological data, the development of new means of data storage was needed. More and more often, biological information is no longer published in the conventional way, via a publication in a scientific journal, but is only deposited into a database. In the last two decades these databases have become essential tools for researchers in the biological sciences. Biological databases can be classified according to the type of information they contain. There are basically three types of sequence-related databases (nucleic acid sequences, protein sequences, and protein tertiary structures) as well as various specialized data collections. It is important to provide the users of biomolecular databases with a degree of integration between these databases, as by nature all of them are connected in a scientific sense and each one is an important piece of biological complexity. In this review we highlight our effort in connecting biological information, as demonstrated in the SWISS-PROT protein database.

  6. Saccharomyces genome database: underlying principles and organisation.

    Science.gov (United States)

    Dwight, Selina S; Balakrishnan, Rama; Christie, Karen R; Costanzo, Maria C; Dolinski, Kara; Engel, Stacia R; Feierbach, Becket; Fisk, Dianna G; Hirschman, Jodi; Hong, Eurie L; Issel-Tarver, Laurie; Nash, Robert S; Sethuraman, Anand; Starr, Barry; Theesfeld, Chandra L; Andrada, Rey; Binkley, Gail; Dong, Qing; Lane, Christopher; Schroeder, Mark; Weng, Shuai; Botstein, David; Cherry, J Michael

    2004-03-01

    A scientific database can be a powerful tool for biologists in an era where large-scale genomic analysis, combined with smaller-scale scientific results, provides new insights into the roles of genes and their products in the cell. However, the collection and assimilation of data is, in itself, not enough to make a database useful. The data must be incorporated into the database and presented to the user in an intuitive and biologically significant manner. Most importantly, this presentation must be driven by the user's point of view; that is, from a biological perspective. The success of a scientific database can therefore be measured by the response of its users - statistically, by usage numbers and, in a less quantifiable way, by its relationship with the community it serves and its ability to serve as a model for similar projects. Since its inception ten years ago, the Saccharomyces Genome Database (SGD) has seen a dramatic increase in its usage, has developed and maintained a positive working relationship with the yeast research community, and has served as a template for at least one other database. The success of SGD, as measured by these criteria, is due in large part to philosophies that have guided its mission and organisation since it was established in 1993. This paper aims to detail these philosophies and how they shape the organisation and presentation of the database.

  7. Development, deployment and operations of ATLAS databases

    International Nuclear Information System (INIS)

    Vaniachine, A. V.; von der Schmitt, J. G.

    2008-01-01

    In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on ATLAS multi-grid infrastructure. We describe development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services

  8. International Nuclear Safety Center (INSC) database

    International Nuclear Information System (INIS)

    Sofu, T.; Ley, H.; Turski, R.B.

    1997-01-01

    As an integral part of DOE's International Nuclear Safety Center (INSC) at Argonne National Laboratory, the INSC Database has been established to provide an interactively accessible information resource for the world's nuclear facilities and to promote free and open exchange of nuclear safety information among nations. The INSC Database is a comprehensive resource database aimed at a scope and level of detail suitable for safety analysis and risk evaluation for the world's nuclear power plants and facilities. It also provides an electronic forum for international collaborative safety research for the Department of Energy and its international partners. The database is intended to provide plant design information, material properties, computational tools, and results of safety analysis. Initial emphasis in data gathering is given to Soviet-designed reactors in Russia, the former Soviet Union, and Eastern Europe. The implementation is performed under the Oracle database management system, and the World Wide Web is used to serve as the access path for remote users. An interface between the Oracle database and the Web server is established through a custom designed Web-Oracle gateway which is used mainly to perform queries on the stored data in the database tables

  9. Database on Wind Characteristics

    DEFF Research Database (Denmark)

    Højstrup, J.; Ejsing Jørgensen, Hans; Lundtang Petersen, Erik

    1999-01-01

    This report describes the work and results of the project Database on Wind Characteristics, which was sponsored partly by the European Commission within the framework of the JOULE III programme under contract JOR3-CT95-0061.

  10. The NorWeST Summer Stream Temperature Model and Scenarios for the Western U.S.: A Crowd-Sourced Database and New Geospatial Tools Foster a User Community and Predict Broad Climate Warming of Rivers and Streams

    Science.gov (United States)

    Isaak, Daniel J.; Wenger, Seth J.; Peterson, Erin E.; Ver Hoef, Jay M.; Nagel, David E.; Luce, Charles H.; Hostetler, Steven W.; Dunham, Jason B.; Roper, Brett B.; Wollrab, Sherry P.; Chandler, Gwynne L.; Horan, Dona L.; Parkes-Payne, Sharon

    2017-11-01

    Thermal regimes are fundamental determinants of aquatic ecosystems, which makes description and prediction of temperatures critical during a period of rapid global change. The advent of inexpensive temperature sensors dramatically increased monitoring in recent decades, and although most monitoring is done by individuals for agency-specific purposes, collectively these efforts constitute a massive distributed sensing array that generates an untapped wealth of data. Using the framework provided by the National Hydrography Dataset, we organized temperature records from dozens of agencies in the western U.S. to create the NorWeST database that hosts >220,000,000 temperature recordings from >22,700 stream and river sites. Spatial-stream-network models were fit to a subset of those data that described mean August water temperatures (AugTw) during 63,641 monitoring site-years to develop accurate temperature models (r2 = 0.91; RMSPE = 1.10°C; MAPE = 0.72°C), assess covariate effects, and make predictions at 1 km intervals to create summer climate scenarios. AugTw averaged 14.2°C (SD = 4.0°C) during the baseline period of 1993-2011 in 343,000 km of western perennial streams but trend reconstructions also indicated warming had occurred at the rate of 0.17°C/decade (SD = 0.067°C/decade) during the 40 year period of 1976-2015. Future scenarios suggest continued warming, although variation will occur within and among river networks due to differences in local climate forcing and stream responsiveness. NorWeST scenarios and data are available online in user-friendly digital formats and are widely used to coordinate monitoring efforts among agencies, for new research, and for conservation planning.
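The model accuracy quoted above is reported as RMSPE and MAPE, both in °C (i.e., mean absolute prediction error rather than a percentage). These metrics can be reproduced in a few lines; the site temperatures below are hypothetical, not NorWeST data:

```python
import math

def rmspe(obs, pred):
    """Root-mean-squared prediction error, in the units of the data (degC here)."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mape(obs, pred):
    """Mean absolute prediction error (degC), as the abstract reports it."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

# Hypothetical mean August water temperatures (degC) at four monitoring sites.
observed  = [12.1, 14.8, 16.3, 10.5]
predicted = [12.6, 14.1, 16.0, 11.3]

print(round(rmspe(observed, predicted), 3))  # RMSPE penalizes large errors more
print(round(mape(observed, predicted), 3))   # MAPE is the typical absolute miss
```

Comparing the two reveals whether a model's error budget is dominated by a few badly predicted sites (RMSPE ≫ MAPE) or spread evenly.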

  11. Applications of GIS and database technologies to manage a Karst Feature Database

    Science.gov (United States)

    Gao, Y.; Tipping, R.G.; Alexander, E.C.

    2006-01-01

    This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications, in both GIS and a Database Management System (DBMS), have been developed for the KFD of Minnesota. These applications were used to manage and to enhance the usability of the KFD. Structured Query Language (SQL) was used to manipulate transactions of the database and to facilitate the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology used in designing this DBMS is applicable to developing GIS-based databases for analyzing and managing geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst, and the long-term goal is to expand this database to manage and study karst features at national and global scales.
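The transaction-plus-logging pattern described above can be sketched with SQLite standing in for the production DBMS; the table and column names are illustrative only, not the KFD schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # stand-in for the production DBMS
con.executescript("""
    CREATE TABLE karst_feature (id INTEGER PRIMARY KEY, type TEXT, county TEXT);
    CREATE TABLE change_log (action TEXT, feature_id INTEGER,
                             at TEXT DEFAULT CURRENT_TIMESTAMP);
""")

def insert_feature(feature_id, ftype, county):
    """Insert a feature and its data-log entry in one transaction:
    either both rows are committed or neither is."""
    with con:  # sqlite3 commits on success, rolls back on exception
        con.execute("INSERT INTO karst_feature VALUES (?, ?, ?)",
                    (feature_id, ftype, county))
        con.execute("INSERT INTO change_log (action, feature_id) VALUES (?, ?)",
                    ("insert", feature_id))

insert_feature(1, "sinkhole", "Fillmore")
try:
    insert_feature(1, "spring", "Olmsted")  # duplicate key: transaction aborts
except sqlite3.IntegrityError:
    pass

# The failed transaction left no partial log entry behind.
assert con.execute("SELECT COUNT(*) FROM change_log").fetchone()[0] == 1
```

Keeping the data log inside the same transaction as the change is what makes it usable for consistency checks and recovery later.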

  12. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Trypanosomes Database, Update History of This Database. 2014/05/07: the contact information is corrected; the features and manner of utilization of the database are corrected. 2014/02/04: the Trypanosomes Database English archive site is opened. 2011/04/04: the Trypanosomes Database ( http://www.tanpaku.org/tdb/ ) is opened. About This Database: Database Description, Download, License, Update History of This Database.

  13. Database Description - TMFunction | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ...residue (or mutant) in a protein. The experimental data are collected from the literature, both by searching the sequence database (UniProt), the structural database (PDB), and the literature database...

  14. XML: James Webb Space Telescope Database Issues, Lessons, and Status

    Science.gov (United States)

    Detter, Ryan; Mooney, Michael; Fatig, Curtis

    2003-01-01

    This paper presents the current concept of using eXtensible Markup Language (XML) as the underlying structure for the James Webb Space Telescope (JWST) database. The purpose of using XML is to provide a JWST database that is independent of any portion of the ground system, yet still compatible with the various systems using a variety of different structures. The testing of the JWST Flight Software (FSW) started in 2002, yet the launch is scheduled for 2011, with a planned 5-year mission and a 5-year follow-on option. The initial database and ground system elements, including the commands, telemetry, and ground system tools, will be used for 19 years, plus post-mission activities. During the Integration and Test (I&T) phases of JWST development, 24 distinct, geographically dispersed laboratories will have local database tools with an XML database. Each laboratory's database tools will be used for exporting and importing data both locally and to a central database system, inputting data to the database certification process, and producing various reports. A centralized certified database repository will be maintained by the Space Telescope Science Institute (STScI) in Baltimore, Maryland, USA. One of the challenges for the database is to be flexible enough to allow individual items to be upgraded, added, or changed without affecting the entire ground system. Using XML should also allow the import and export formats needed by the various elements to be altered, the verification/validation of each database item to be tracked, many organizations to provide database inputs, and the many existing database processes to be merged into one central database structure throughout the JWST program. Many National Aeronautics and Space Administration (NASA) projects have attempted to take advantage of open-source and commercial technology. Often this causes a greater reliance on the use of Commercial-Off-The-Shelf (COTS) products, which is often limiting.
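A minimal sketch of the export/import round trip such a laboratory tool might perform, using Python's standard XML library; the element and attribute names here are invented for illustration and are not the actual JWST database schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical record: a telemetry item as one laboratory might define it.
record = ET.Element("telemetry_item", attrib={"mnemonic": "TEMP_01", "version": "3"})
ET.SubElement(record, "description").text = "Primary mirror temperature"
ET.SubElement(record, "units").text = "K"

# Export: serialize to XML text, the interchange format sent to the
# central database system.
exported = ET.tostring(record, encoding="unicode")

# Import: a receiving site parses the text back into structured fields,
# independent of whatever internal storage it uses.
parsed = ET.fromstring(exported)
print(parsed.get("mnemonic"), parsed.find("units").text)
```

Because both sides agree only on the XML text, each laboratory is free to back it with whatever local tool or storage format it prefers.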

  15. O-ODM Framework for Object-Relational Databases

    Directory of Open Access Journals (Sweden)

    Carlos Alberto Rombaldo Jr

    2012-09-01

    Full Text Available Object-Relational Databases introduce new features which allow manipulating objects in databases. At present, many DBMSs offer resources to manipulate objects in the database, but most application developers just map classes to relational tables, failing to exploit the O-R model's strengths. The lack of tools that aid database design contributes to this situation. This work presents O-ODM (Object-Object Database Mapping), a persistence framework that maps objects from OO applications to database objects. Persistence frameworks have been used to aid developers by managing all access to the DBMS. This kind of tool allows developers to persist objects without solid knowledge of DBMSs and their specific languages, improving developers' productivity, mainly when a different DBMS is used. The results of some experiments using O-ODM are shown.
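The idea of a persistence framework that spares developers from writing SQL directly can be illustrated with a toy mapper; this is not the O-ODM framework itself, and all class, table, and column names below are invented:

```python
import sqlite3

class Sample:
    """Plain application object; the mapper below persists it."""
    def __init__(self, name, depth_m):
        self.name, self.depth_m = name, depth_m

class Mapper:
    """Minimal object-to-table mapper: derives the schema from the
    object's attributes so the application code never writes SQL."""
    def __init__(self, con, cls):
        self.con, self.table = con, cls.__name__.lower()
        self.fields = list(vars(cls("", 0)))  # attribute names, in order
        self.con.execute(f"CREATE TABLE {self.table} ({', '.join(self.fields)})")

    def save(self, obj):
        marks = ", ".join("?" for _ in self.fields)
        values = [getattr(obj, f) for f in self.fields]
        self.con.execute(f"INSERT INTO {self.table} VALUES ({marks})", values)

    def load_all(self):
        rows = self.con.execute(f"SELECT * FROM {self.table}")
        return [dict(zip(self.fields, row)) for row in rows]

con = sqlite3.connect(":memory:")
mapper = Mapper(con, Sample)
mapper.save(Sample("cave-7", 12.5))
print(mapper.load_all())
```

A real persistence framework adds identity tracking, relationships, and DBMS-specific dialects on top of this skeleton, which is exactly the machinery developers gain without having to learn each DBMS's language.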

  16. A database of charged cosmic rays

    Science.gov (United States)

    Maurin, D.; Melot, F.; Taillet, R.

    2014-09-01

    Aims: This paper gives a description of a new online database and associated online tools (data selection, data export, plots, etc.) for charged cosmic-ray measurements. The experimental setups (type, flight dates, techniques) from which the data originate are included in the database, along with the references to all relevant publications. Methods: The database relies on the MySQL5 engine. The web pages and queries are based on PHP, AJAX and the jquery, jquery.cluetip, jquery-ui, and table-sorter third-party libraries. Results: In this first release, we restrict ourselves to Galactic cosmic rays with Z ≤ 30 and a kinetic energy per nucleon up to a few tens of TeV/n. This corresponds to more than 200 different sub-experiments (i.e., different experiments, or data from the same experiment flying at different times) in as many publications. Conclusions: We set up a cosmic-ray database (CRDB) and provide tools to sort and visualise the data. New data can be submitted, providing the community with a collaborative tool to archive past and future cosmic-ray measurements. http://lpsc.in2p3.fr/crdb; Contact: crdatabase@lpsc.in2p3.fr

  17. Database Description - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us KOME Database Description General information of database Database name KOME Alternative name Knowledge-base... Sciences Plant Genome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice ...Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description Information about approximately ...ngth cDNA project is shown in the database. The full-length cDNA clones were collected from various tissues ...treated under various stress conditions. The database contains not only information about complete nucleotid

  18. [Validation of interaction databases in psychopharmacotherapy].

    Science.gov (United States)

    Hahn, M; Roll, S C

    2018-03-01

    Drug-drug interaction databases are an important tool for increasing drug safety in polypharmacy. Several drug interaction databases are available, but it is unclear which one shows the best results and therefore increases safety for the users of the databases and for patients. So far, there has been no validation of German drug interaction databases. We validated German drug interaction databases regarding the number of hits, mechanisms of drug interaction, references, clinical advice, and severity of the interaction. A total of 36 drug interactions published in the last 3-5 years were checked in 5 different databases. Besides the number of hits, it was also documented whether the mechanism was correct, clinical advice was given, primary literature was cited, and a severity level of the drug-drug interaction was given. All databases showed weaknesses regarding the hit rate of the tested drug interactions, with a maximum of 67.7% hits. The highest score in this validation was achieved by MediQ with 104 out of 180 points. PsiacOnline achieved 83 points, arznei-telegramm® 58, ifap index® 54, and the ABDA-database 49 points. Based on this validation, MediQ seems to be the most suitable database for the field of psychopharmacotherapy. MediQ achieved the best results in this comparison, but it also needs improvement with respect to the hit rate so that users can rely on the results and thereby increase drug therapy safety.

  19. A Sustainable Spacecraft Component Database Solution, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Numerous spacecraft component databases have been developed to support NASA, DoD, and contractor design centers and design tools. Despite the clear utility of...

  20. The CUTLASS database facilities

    International Nuclear Information System (INIS)

    Jervis, P.; Rutter, P.

    1988-09-01

    The enhancement of the CUTLASS database management system to provide improved facilities for data handling is seen as a prerequisite to its effective use for future power station data processing and control applications. This particularly applies to the larger projects such as AGR data processing system refurbishments, and the data processing systems required for the new Coal Fired Reference Design stations. In anticipation of the need for improved data handling facilities in CUTLASS, the CEGB established a User Sub-Group in the early 1980's to define the database facilities required by users. Following the endorsement of the resulting specification and a detailed design study, the database facilities have been implemented as an integral part of the CUTLASS system. This paper provides an introduction to the range of CUTLASS Database facilities, and emphasises the role of Database as the central facility around which future Kit 1 and (particularly) Kit 6 CUTLASS based data processing and control systems will be designed and implemented. (author)

  1. The World Bacterial Biogeography and Biodiversity through Databases: A Case Study of NCBI Nucleotide Database and GBIF Database

    Directory of Open Access Journals (Sweden)

    Okba Selama

    2013-01-01

    Full Text Available Databases are an essential tool and resource within the field of bioinformatics. The primary aim of this study was to generate an overview of global bacterial biodiversity and biogeography using available data from the two largest public online databases, NCBI Nucleotide and GBIF. The secondary aim was to highlight the contribution each geographic area has to each database. The basis for data analysis of this study was the metadata provided by both databases, mainly the taxonomy and the geographic area of origin of isolation of the microorganism (record). These were directly obtained from GBIF through the online interface, while E-utilities and Python were used in combination with a programmatic web service access to obtain data from the NCBI Nucleotide Database. Results indicate that the American continent, and more specifically the USA, is the top contributor, while Africa and Antarctica are less well represented. This highlights the imbalance of exploration within these areas rather than any reduction in biodiversity. This study describes a novel approach to generating global scale patterns of bacterial biodiversity and biogeography and indicates that the Proteobacteria are the most abundant and widely distributed phylum within both databases.
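The NCBI side of the data collection mentioned above was done with E-utilities and Python. As a hedged illustration, the snippet below builds an ESearch query URL for the public E-utilities endpoint without performing any network call; the search term is an invented example, not the one used in the study:

```python
from urllib.parse import urlencode

# Sketch of programmatic NCBI E-utilities access. Only the query URL is
# constructed here (no network request is made); the term is illustrative.
BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(db, term, retmax=100):
    """Build an ESearch query URL for the given Entrez database."""
    return BASE + "?" + urlencode({"db": db, "term": term, "retmax": retmax})

url = esearch_url("nucleotide", "Proteobacteria[Organism] AND Algeria", retmax=20)
print(url)
```

In a real pipeline the returned XML would then be parsed for record IDs and metadata; here we only show how the query is parameterized.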

  2. Database Description - Open TG-GATEs Pathological Image Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Open TG-GATEs Pathological Image Database Database Description General information of database Database... name Open TG-GATEs Pathological Image Database Alternative name - DOI 10.18908/lsdba.nbdc00954-0...iomedical Innovation 7-6-8, Saito-asagi, Ibaraki-city, Osaka 567-0085, Japan TEL:81-72-641-9826 Email: Database... classification Toxicogenomics Database Organism Taxonomy Name: Rattus norvegi...cus Taxonomy ID: 10116 Database description On the pathological image database, over 53,000 high-resolution

  3. The STRING database in 2011

    DEFF Research Database (Denmark)

    Szklarczyk, Damian; Franceschini, Andrea; Kuhn, Michael

    2011-01-01

    and computational prediction techniques. However, public efforts to collect and present protein interaction information have struggled to keep up with the pace of interaction discovery, partly because protein-protein interaction information can be error-prone and require considerable effort to annotate. Here, we...... present an update on the online database resource Search Tool for the Retrieval of Interacting Genes (STRING); it provides uniquely comprehensive coverage and ease of access to both experimental as well as predicted interaction information. Interactions in STRING are provided with a confidence score......, and accessory information such as protein domains and 3D structures is made available, all within a stable and consistent identifier space. New features in STRING include an interactive network viewer that can cluster networks on demand, updated on-screen previews of structural information including homology...

  4. Are ‘Agent’ Exclusion Clauses a Legitimate Application of the EU Database Directive?

    Directory of Open Access Journals (Sweden)

    Jimi Groom

    2004-03-01

    Full Text Available This article explores the implications of the implementation of the European Database Directive in the area of autonomous agents and the use of exclusion tools on the part of database owners to stop agents from accessing their works.

  5. Are Agent Exclusion Clauses a Legitimate Application of the EU Database Directive?

    OpenAIRE

    Jimi Groom

    2004-01-01

    This article explores the implications of the implementation of the European Database Directive in the area of autonomous agents and the use of exclusion tools on the part of database owners to stop agents from accessing their works.

  6. Data from clinical database on septic shock are valid

    DEFF Research Database (Denmark)

    Grønlykke, Lars; Brandstrup, Sofie Louise Rygaard; Perner, Anders

    2012-01-01

    Clinical databases have become important tools in intensive care. Disease severity and organ dysfunction scoring systems are registered in the databases, including the Simplified Acute Physiology Score II (SAPS II) and the Sequential Organ Failure Assessment (SOFA) score. The purpose of this stud...

  7. Danish Pancreatic Cancer Database

    DEFF Research Database (Denmark)

    Fristrup, Claus; Detlefsen, Sönke; Palnæs Hansen, Carsten

    2016-01-01

    AIM OF DATABASE: The Danish Pancreatic Cancer Database aims to prospectively register the epidemiology, diagnostic workup, diagnosis, treatment, and outcome of patients with pancreatic cancer in Denmark at an institutional and national level. STUDY POPULATION: Since May 1, 2011, all patients...... with microscopically verified ductal adenocarcinoma of the pancreas have been registered in the database. As of June 30, 2014, the total number of patients registered was 2,217. All data are cross-referenced with the Danish Pathology Registry and the Danish Patient Registry to ensure the completeness of registrations....... MAIN VARIABLES: The main registered variables are patient demographics, performance status, diagnostic workup, histological and/or cytological diagnosis, and clinical tumor stage. The following data on treatment are registered: type of operation, date of first adjuvant, neoadjuvant, and first...

  8. Database Vs Data Warehouse

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Data warehouse technology includes a set of concepts and methods that offer users useful information for decision making. The necessity to build a data warehouse arises from the necessity to improve the quality of information in the organization. The data, coming from different sources and having a variety of forms, both structured and unstructured, are filtered according to business rules and integrated into a single large data collection. Using informatics solutions, managers have understood that data stored in operational systems, including databases, are an informational gold mine that must be exploited. Data warehouses have been developed to answer the increasing demand for complex analyses, which could not be properly achieved with operational databases. The present paper emphasizes some of the criteria that application developers can use in order to choose between a database solution and a data warehouse solution.
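The filter-then-integrate process described above can be sketched in a few lines. This is a toy illustration, not a real ETL pipeline: the source records and the business rule (keep only positive revenue) are invented:

```python
# Minimal ETL sketch: data from multiple operational sources is filtered by
# a business rule and integrated into a single collection (the "warehouse").
crm = [{"customer": "A", "revenue": 120}, {"customer": "B", "revenue": -5}]
erp = [{"customer": "C", "revenue": 300}]

def etl(*sources, rule):
    """Extract rows from all sources, keep those passing `rule`, load into one list."""
    warehouse = []
    for source in sources:
        warehouse.extend(row for row in source if rule(row))
    return warehouse

warehouse = etl(crm, erp, rule=lambda r: r["revenue"] > 0)
print([r["customer"] for r in warehouse])  # → ['A', 'C']
```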

  9. The Danish Sarcoma Database

    DEFF Research Database (Denmark)

    Jørgensen, Peter Holmberg; Lausten, Gunnar Schwarz; Pedersen, Alma B

    2016-01-01

    AIM: The aim of the database is to gather information about sarcomas treated in Denmark in order to continuously monitor and improve the quality of sarcoma treatment from a local, a national, and an international perspective. STUDY POPULATION: Patients in Denmark diagnosed with a sarcoma, both...... skeletal and extraskeletal, have been registered since 2009. MAIN VARIABLES: The database contains information about appearance of symptoms; date of receiving referral to a sarcoma center; date of first visit; whether surgery has been performed elsewhere before referral, diagnosis, and treatment; tumor...... of Diseases - tenth edition codes and TNM Classification of Malignant Tumours, and date of death (after yearly coupling to the Danish Civil Registration System). Data quality and completeness are currently secured. CONCLUSION: The Danish Sarcoma Database is population based and includes sarcomas occurring...

  10. The PROSITE database.

    Science.gov (United States)

    Hulo, Nicolas; Bairoch, Amos; Bulliard, Virginie; Cerutti, Lorenzo; De Castro, Edouard; Langendijk-Genevaux, Petra S; Pagni, Marco; Sigrist, Christian J A

    2006-01-01

    The PROSITE database consists of a large collection of biologically meaningful signatures that are described as patterns or profiles. Each signature is linked to a documentation that provides useful biological information on the protein family, domain or functional site identified by the signature. The PROSITE database is now complemented by a series of rules that can give more precise information about specific residues. During the last 2 years, the documentation and the ScanProsite web pages were redesigned to add more functionalities. The latest version of PROSITE (release 19.11 of September 27, 2005) contains 1329 patterns and 552 profile entries. Over the past 2 years more than 200 domains have been added, and now 52% of UniProtKB/Swiss-Prot entries (release 48.1 of September 27, 2005) have a cross-reference to a PROSITE entry. The database is accessible at http://www.expasy.org/prosite/.
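PROSITE signatures are expressed in a dedicated pattern syntax. As a rough illustration of how such patterns relate to ordinary regular expressions, the sketch below converts a small subset of PROSITE-style notation ([..] alternatives, {..} exclusions, x wildcards, (n) repeats) into a Python regex; the example pattern is invented, not a real PROSITE entry:

```python
import re

# Handles only a subset of the PROSITE pattern syntax; anchors (<, >) and
# other features are omitted for brevity.
def prosite_to_regex(pattern):
    regex = []
    for elem in pattern.split("-"):
        count = ""
        m = re.match(r"^(.*)\((\d+(?:,\d+)?)\)$", elem)
        if m:  # repeat count, e.g. x(2) or x(2,4)
            elem, count = m.group(1), "{%s}" % m.group(2)
        if elem == "x":                                  # any residue
            regex.append("." + count)
        elif elem.startswith("[") and elem.endswith("]"):  # alternatives
            regex.append(elem + count)
        elif elem.startswith("{") and elem.endswith("}"):  # exclusions
            regex.append("[^" + elem[1:-1] + "]" + count)
        else:                                            # literal residue
            regex.append(elem + count)
    return "".join(regex)

rx = prosite_to_regex("[AC]-x-V-x(2)-{ED}")
print(rx)                                   # → [AC].V.{2}[^ED]
print(bool(re.search(rx, "WGAYVTTKAAQ")))   # matches the AYVTTK stretch
```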

  11. Towards Sensor Database Systems

    DEFF Research Database (Denmark)

    Bonnet, Philippe; Gehrke, Johannes; Seshadri, Praveen

    2001-01-01

    Sensor networks are being widely deployed for measurement, detection and surveillance applications. In these new applications, users issue long-running queries over a combination of stored data and sensor data. Most existing applications rely on a centralized system for collecting sensor data....... These systems lack flexibility because data is extracted in a predefined way; also, they do not scale to a large number of devices because large volumes of raw data are transferred regardless of the queries that are submitted. In our new concept of sensor database system, queries dictate which data is extracted...... from the sensors. In this paper, we define the concept of sensor databases mixing stored data represented as relations and sensor data represented as time series. Each long-running query formulated over a sensor database defines a persistent view, which is maintained during a given time interval. We...
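The idea of a long-running query over stored data plus sensor time series can be sketched as a persistent view that is updated incrementally as readings arrive. The sensor relation, readings, and threshold below are all invented for illustration:

```python
# Toy sketch of a sensor database view: a stored relation (sensor metadata)
# is combined with streamed readings, and the view is maintained as the
# stream is consumed rather than recomputed from raw data each time.
sensors = {1: "hallway", 2: "roof"}            # stored relation: id -> location

readings = [                                   # sensor time series: (time, id, value)
    (0, 1, 19.5), (0, 2, 12.1),
    (1, 1, 19.7), (1, 2, 12.0),
    (2, 1, 21.4), (2, 2, 11.8),
]

def monitor(stream, threshold):
    """Persistent view: locations whose reading ever exceeded `threshold`."""
    view = set()
    for t, sid, value in stream:
        if value > threshold:
            view.add(sensors[sid])
        yield t, frozenset(view)   # view state after each reading

final = None
for t, view in monitor(readings, 20.0):
    final = view
print(sorted(final))  # → ['hallway']
```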

  12. Database Application Schema Forensics

    Directory of Open Access Journals (Sweden)

    Hector Quintus Beyers

    2014-12-01

    Full Text Available The application schema layer of a Database Management System (DBMS) can be modified to deliver results that may warrant a forensic investigation. Table structures can be corrupted by changing the metadata of a database, or operators of the database can be altered to deliver incorrect results when used in queries. This paper discusses categories of possibilities that exist to alter the application schema, with some practical examples. Two forensic environments in which a forensic investigation can take place are introduced, and arguments are provided as to why these environments are important. Methods are presented for how these environments can be achieved for the application schema layer of a DBMS. A process is proposed for how forensic evidence should be extracted from the application schema layer of a DBMS. The application schema forensic evidence identification process can be applied to a wide range of forensic settings.
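One of the categories the paper discusses, altering the application schema so that queries return incorrect results while the base data stays intact, can be shown with a toy sqlite3 example (table and view names are invented):

```python
import sqlite3

# The base table is never touched; only the schema-layer view is redefined,
# so queries routed through the view silently drop rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO payments VALUES (?, ?)",
                 [(1, 100.0), (2, 250.0), (3, 99.0)])

# Honest view, as the application originally defined it.
conn.execute("CREATE VIEW v_payments AS SELECT id, amount FROM payments")
honest = conn.execute("SELECT SUM(amount) FROM v_payments").fetchone()[0]

# Tampered view: the application schema is modified, the data is not.
conn.execute("DROP VIEW v_payments")
conn.execute("CREATE VIEW v_payments AS "
             "SELECT id, amount FROM payments WHERE amount < 200")
tampered = conn.execute("SELECT SUM(amount) FROM v_payments").fetchone()[0]

print(honest, tampered)  # → 449.0 199.0
```

A forensic comparison of the view definition against the base table (or against a known-good schema dump) is what exposes this kind of manipulation.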

  13. DistiLD Database

    DEFF Research Database (Denmark)

    Palleja, Albert; Horn, Heiko; Eliasson, Sabrina

    2012-01-01

    Genome-wide association studies (GWAS) have identified thousands of single nucleotide polymorphisms (SNPs) associated with the risk of hundreds of diseases. However, there is currently no database that enables non-specialists to answer the following simple questions: which SNPs associated...... blocks, so that SNPs in LD with each other are preferentially in the same block, whereas SNPs not in LD are in different blocks. By projecting SNPs and genes onto LD blocks, the DistiLD database aims to increase usage of existing GWAS results by making it easy to query and visualize disease......-associated SNPs and genes in their chromosomal context. The database is available at http://distild.jensenlab.org/....
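The projection of SNPs onto LD blocks described above amounts to an interval lookup: each SNP position is assigned to the block containing it. The block boundaries and SNP identifiers below are invented for illustration:

```python
import bisect

# Non-overlapping LD blocks along one chromosome, as (start, end) pairs.
# Coordinates and rs-numbers are made up for the example.
blocks = [(0, 15000), (15000, 42000), (42000, 90000)]
starts = [b[0] for b in blocks]

def block_of(position):
    """Return the index of the LD block containing `position`, or None."""
    i = bisect.bisect_right(starts, position) - 1
    if i >= 0 and blocks[i][0] <= position < blocks[i][1]:
        return i
    return None

snps = {"rs0001": 1200, "rs0002": 15000, "rs0003": 89999}
assignment = {rsid: block_of(pos) for rsid, pos in snps.items()}
print(assignment)  # → {'rs0001': 0, 'rs0002': 1, 'rs0003': 2}
```

Genes can be projected onto blocks the same way, after which SNPs and genes sharing a block can be reported together.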

  14. A Noisy 10GB Provenance Database

    Energy Technology Data Exchange (ETDEWEB)

    Cheah, You-Wei; Plale, Beth; Kendall-Morwick, Joey; Leake, David; Ramakrishnan, Lavanya

    2011-06-06

    Provenance of scientific data is a key piece of the metadata record for the data's ongoing discovery and reuse. Provenance collection systems capture provenance on the fly, however, the protocol between application and provenance tool may not be reliable. Consequently, the provenance record can be partial, partitioned, and simply inaccurate. We use a workflow emulator that models faults to construct a large 10GB database of provenance that we know is noisy (that is, has errors). We discuss the process of generating the provenance database, and show early results on the kinds of provenance analysis enabled by the large provenance.

  15. 600 MW nuclear power database

    International Nuclear Information System (INIS)

    Cao Ruiding; Chen Guorong; Chen Xianfeng; Zhang Yishu

    1996-01-01

    The 600 MW nuclear power database, based on ORACLE 6.0, consists of three parts: a nuclear power plant database, a nuclear power position database, and a nuclear power equipment database. The database contains a great deal of technical data and images related to nuclear power, provided by engineering design units and individuals, and can assist the designers of nuclear power plants.

  16. C# Database Basics

    CERN Document Server

    Schmalz, Michael

    2012-01-01

    Working with data and databases in C# certainly can be daunting if you're coming from VB6, VBA, or Access. With this hands-on guide, you'll shorten the learning curve considerably as you master accessing, adding, updating, and deleting data with C#: basic skills you need if you intend to program with this language. No previous knowledge of C# is necessary. By following the examples in this book, you'll learn how to tackle several database tasks in C#, such as working with SQL Server, building data entry forms, and using data in a web service. The book's code samples will help you get started.

  17. The Danish Anaesthesia Database

    DEFF Research Database (Denmark)

    Antonsen, Kristian; Rosenstock, Charlotte Vallentin; Lundstrøm, Lars Hyldborg

    2016-01-01

    AIM OF DATABASE: The aim of the Danish Anaesthesia Database (DAD) is the nationwide collection of data on all patients undergoing anesthesia. Collected data are used for quality assurance, quality development, and serve as a basis for research projects. STUDY POPULATION: The DAD was founded in 2004...... direct patient-related lifestyle factors enabling a quantification of patients' comorbidity as well as variables that are strictly related to the type, duration, and safety of the anesthesia. Data and specific data combinations can be extracted within each department in order to monitor patient treatment...

  18. The Danish Depression Database

    DEFF Research Database (Denmark)

    Videbech, Poul Bror Hemming; Deleuran, Anette

    2016-01-01

    AIM OF DATABASE: The purpose of the Danish Depression Database (DDD) is to monitor and facilitate the improvement of the quality of the treatment of depression in Denmark. Furthermore, the DDD has been designed to facilitate research. STUDY POPULATION: Inpatients as well as outpatients...... with depression, aged above 18 years, and treated in the public psychiatric hospital system were enrolled. MAIN VARIABLES: Variables include whether the patient has been thoroughly somatically examined and has been interviewed about the psychopathology by a specialist in psychiatry. The Hamilton score as well...

  19. Yucca Mountain digital database

    International Nuclear Information System (INIS)

    Daudt, C.R.; Hinze, W.J.

    1992-01-01

    This paper discusses the Yucca Mountain Digital Database (DDB) which is a digital, PC-based geographical database of geoscience-related characteristics of the proposed high-level waste (HLW) repository site of Yucca Mountain, Nevada. It was created to provide the US Nuclear Regulatory Commission's (NRC) Advisory Committee on Nuclear Waste (ACNW) and its staff with a visual perspective of geological, geophysical, and hydrological features at the Yucca Mountain site as discussed in the Department of Energy's (DOE) pre-licensing reports

  20. Database Management System

    Science.gov (United States)

    1990-01-01

    In 1981 Wayne Erickson founded Microrim, Inc., a company originally focused on marketing a microcomputer version of RIM (Relational Information Manager). Dennis Comfort joined the firm and is now vice president, development. The team developed an advanced spinoff of the NASA system they had originally created, a microcomputer database management system known as R:BASE 4000. Microrim added many enhancements and developed a series of R:BASE products for various environments. R:BASE is now the second largest selling line of microcomputer database management software in the world.

  1. Rett networked database

    DEFF Research Database (Denmark)

    Grillo, Elisa; Villard, Laurent; Clarke, Angus

    2012-01-01

    underlie some (usually variant) cases. There is only limited correlation between genotype and phenotype. The Rett Networked Database (http://www.rettdatabasenetwork.org/) has been established to share clinical and genetic information. Through an "adaptor" process of data harmonization, a set of 293...... clinical items and 16 genetic items was generated; 62 clinical and 7 genetic items constitute the core dataset; 23 clinical items contain longitudinal information. The database contains information on 1838 patients from 11 countries (December 2011), with or without mutations in known genes. These numbers...

  2. Accessing and using chemical databases

    DEFF Research Database (Denmark)

    Nikolov, Nikolai Georgiev; Pavlov, Todor; Niemelä, Jay Russell

    2013-01-01

    , and dissemination. Structure and functionality of chemical databases are considered. The typical kinds of information found in a chemical database are considered-identification, structural, and associated data. Functionality of chemical databases is presented, with examples of search and access types. More details...... are included about the OASIS database and platform and the Danish (Q)SAR Database online. Various types of chemical database resources are discussed, together with a list of examples.......Computer-based representation of chemicals makes it possible to organize data in chemical databases-collections of chemical structures and associated properties. Databases are widely used wherever efficient processing of chemical information is needed, including search, storage, retrieval...

  3. Surgery Risk Assessment (SRA) Database

    Data.gov (United States)

    Department of Veterans Affairs — The Surgery Risk Assessment (SRA) database is part of the VA Surgical Quality Improvement Program (VASQIP). This database contains assessments of selected surgical...

  4. Embedded Systems Programming: Accessing Databases from Esterel

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available A current limitation in embedded controller design and programming is the lack of database support in development tools such as Esterel Studio. This article proposes a way of integrating databases and Esterel by providing two application programming interfaces (APIs) which enable the use of relational databases inside Esterel programs. As databases and Esterel programs are often executed on different machines, result sets returned as responses to database queries may be processed either locally and according to Esterel's synchrony hypothesis, or remotely along several of Esterel's execution cycles. These different scenarios are reflected in the design and usage rules of the two APIs presented in this article, which rely on Esterel's facilities for extending the language by external data types, external functions, and procedures, as well as tasks. The APIs' utility is demonstrated by means of a case study modelling an automated warehouse storage system, which is constructed using Lego Mindstorms robotics kits. The robot's controller is programmed in Esterel in a way that takes dynamic ordering information and the warehouse's floor layout into account, both of which are stored in a MySQL database.

  5. Embedded Systems Programming: Accessing Databases from Esterel

    Directory of Open Access Journals (Sweden)

    White David

    2008-01-01

    Full Text Available Abstract A current limitation in embedded controller design and programming is the lack of database support in development tools such as Esterel Studio. This article proposes a way of integrating databases and Esterel by providing two application programming interfaces (APIs) which enable the use of relational databases inside Esterel programs. As databases and Esterel programs are often executed on different machines, result sets returned as responses to database queries may be processed either locally and according to Esterel's synchrony hypothesis, or remotely along several of Esterel's execution cycles. These different scenarios are reflected in the design and usage rules of the two APIs presented in this article, which rely on Esterel's facilities for extending the language by external data types, external functions, and procedures, as well as tasks. The APIs' utility is demonstrated by means of a case study modelling an automated warehouse storage system, which is constructed using Lego Mindstorms robotics kits. The robot's controller is programmed in Esterel in a way that takes dynamic ordering information and the warehouse's floor layout into account, both of which are stored in a MySQL database.

  6. Federated Database Services for Wind Tunnel Experiment Workflows

    Directory of Open Access Journals (Sweden)

    A. Paventhan

    2006-01-01

    Full Text Available Enabling the full life cycle of scientific and engineering workflows requires robust middleware and services that support effective data management, near-realtime data movement and custom data processing. Many existing solutions exploit the database as a passive metadata catalog. In this paper, we present an approach that makes use of federation of databases to host data-centric wind tunnel application workflows. The user is able to compose customized application workflows based on database services. We provide a reference implementation that leverages typical business tools and technologies: Microsoft SQL Server for database services and Windows Workflow Foundation for workflow services. The application data and user's code are both hosted in federated databases. With the growing interest in XML Web Services in scientific Grids, and with databases beginning to support native XML types and XML Web services, we can expect the role of databases in scientific computation to grow in importance.

  7. MARKS ON ART database

    DEFF Research Database (Denmark)

    van Vlierden, Marieke; Wadum, Jørgen; Wolters, Margreet

    2016-01-01

    Maker's marks, monograms, and quality marks are often found embossed or stamped on works of art from the period 1300-1700. An illustrated database of these types of marks is being established at the Netherlands Institute for Art History (RKD) in The Hague.

  8. Relational Database and Retrieval

    African Journals Online (AJOL)

    Computer Aided Design for Soil Classification. Relational Database and Retrieval. Techniques ... also presents algorithms showing the procedure for generating various soil classifications, retrieval techniques for ... In engineering discipline, for instance, design choices are a compromise, shaped by many competing factors.

  9. Relational database telemanagement.

    Science.gov (United States)

    Swinney, A R

    1988-05-01

    Dallas-based Baylor Health Care System recognized the need for a way to control and track responses to their marketing programs. To meet the demands of data management and analysis, and build a useful database of current customers and future prospects, the marketing department developed a system to capture, store and manage these responses.

  10. From database to normbase

    NARCIS (Netherlands)

    Stamper, R.K.; Liu, Kecheng; Liu, K.; Kolkman, M.; Kolkman, M.; Klarenberg, P.; Ades, Y.; van Slooten, C.; van Slooten, F.; Ades, Y.

    1991-01-01

    After the database concept, we are ready for the normbase concept. The object is to decouple organizational and technical knowledge that are now mixed inextricably together in the application programs we write today. The underlying principle is to find a way of specifying a social system as a system

  11. The International Lactuca database

    NARCIS (Netherlands)

    Treuren, van R.; Menting, F.B.J.

    2014-01-01

    The International Lactuca Database includes accessions of species belonging to the genus Lactuca, but also a few accessions belonging to related genera. Passport data can be searched on-line or downloaded. Characterization and evaluation data can be accessed via the downloading section. Requests for

  12. Oversigt over databaser

    DEFF Research Database (Denmark)

    Krogh Graversen, Brian

    This is an overview of registers that can be used to shed light on the situation and developments in the social area. The overview is the second phase of a data project whose purpose is to establish a database that can form the basis for ongoing monitoring, investigation, evaluation, and research in the...

  13. Harmonization of Databases

    DEFF Research Database (Denmark)

    Charlifue, Susan; Tate, Denise; Biering-Sorensen, Fin

    2016-01-01

    The objectives of this article are to (1) provide an overview of existing spinal cord injury (SCI) clinical research databases-their purposes, characteristics, and accessibility to users; and (2) present a vision for future collaborations required for cross-cutting research in SCI. This vision hi...

  14. Database Description - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us RPSD Database Description General information of database Database name RPSD Alternative nam...e Rice Protein Structure Database DOI 10.18908/lsdba.nbdc00749-000 Creator Creator Name: Toshimasa Yamazaki ... Ibaraki 305-8602, Japan National Institute of Agrobiological Sciences Toshimasa Yamazaki E-mail : Databas...e classification Structure Databases - Protein structure Organism Taxonomy Name: Or...max Taxonomy ID: 3847 Database description We have determined the three-dimensional structures of the protei

  15. Database Description - GETDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us GETDB Database Description General information of database Database name GETDB Alternative n...ame Gal4 Enhancer Trap Insertion Database DOI 10.18908/lsdba.nbdc00236-000 Creator Creator Name: Shigeo Haya... Chuo-ku, Kobe 650-0047 Tel: +81-78-306-3185 FAX: +81-78-306-3183 E-mail: Database classification Expression... Invertebrate genome database Organism Taxonomy Name: Drosophila melanogaster Taxonomy ID: 7227 Database des...cription About 4,600 insertion lines of enhancer trap lines based on the Gal4-UAS

  16. Compressing DNA sequence databases with coil

    Directory of Open Access Journals (Sweden)

    Hendy Michael D

    2008-05-01

    Full Text Available Abstract Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
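The abstract describes coil's edit-tree coding only at a high level, so the sketch below illustrates the general idea rather than coil's actual algorithm: store one sequence in full and encode near-duplicates as edit operations against it, which pays off when a database holds many highly similar entries (as EST data does):

```python
import difflib

# Generic edit-based encoding using difflib; coil's real edit-tree coder is
# more sophisticated, but the storage idea is the same.
def encode(reference, sequence):
    """Encode `sequence` as a list of edit ops relative to `reference`."""
    ops = difflib.SequenceMatcher(None, reference, sequence).get_opcodes()
    # Keep only what decode needs: the tag, the reference span, and the
    # replacement text taken from `sequence`.
    return [(tag, i1, i2, sequence[j1:j2]) for tag, i1, i2, j1, j2 in ops]

def decode(reference, ops):
    """Rebuild the original sequence from the reference and edit ops."""
    out = []
    for tag, i1, i2, seg in ops:
        out.append(reference[i1:i2] if tag == "equal" else seg)
    return "".join(out)

ref = "ACGTACGTACGTACGT"
seq = "ACGTACGAACGTACGT"   # one substitution relative to ref
ops = encode(ref, seq)
assert decode(ref, ops) == seq            # lossless round trip
print(len([o for o in ops if o[0] != "equal"]))  # few non-equal ops to store
```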

  17. Database Description - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Yeast Interacting Proteins Database Database Description General information of database Database name Yeast Interacting Proteins Database Alternative name - DOI 10.18908/lsdba.nbdc00742-000 Creator C...-ken 277-8561 Tel: +81-4-7136-3989 FAX: +81-4-7136-3979 E-mail: Database classif...s cerevisiae Taxonomy ID: 4932 Database description Information on interactions and related information obta...atures and manner of utilization of database Protein-protein interaction data obtained by the comprehensive

  18. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

    Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs). The book first takes a look at ANSI database standards and DBMS applications and components. Discussions focus on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, the nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders on logical database, interrogation, and phy

  19. Firebird Database Backup by Serialized Database Table Dump

    OpenAIRE

    Ling, Maurice HT

    2007-01-01

    This paper presents a simple data dump and load utility for Firebird databases which mimics mysqldump in MySQL. This utility, fb_dump and fb_load, for dumping and loading respectively, retrieves each database table using kinterbasdb and serializes the data using the marshal module. This utility has two advantages over the standard Firebird database backup utility, gbak. Firstly, it is able to backup and restore single database tables, which might help to recover corrupted databases. Secondly, the ...
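The dump-and-load idea described above can be sketched with the standard library alone. The sketch below substitutes an in-memory SQLite database for the Firebird connection, since the real utility's kinterbasdb driver needs a running Firebird server; the table and column names are invented for illustration.

```python
import marshal
import os
import sqlite3
import tempfile

# In-memory SQLite stands in for a Firebird connection via kinterbasdb,
# so the sketch runs anywhere; the dump/load logic is the part of interest.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO person VALUES (?, ?)",
                 [(1, "Ada"), (2, "Grace")])

def dump_table(conn, table, path):
    """fb_dump idea: fetch every row of one table, serialize with marshal."""
    rows = conn.execute(f"SELECT * FROM {table}").fetchall()
    with open(path, "wb") as fh:
        marshal.dump(rows, fh)

def load_table(conn, table, path):
    """fb_load idea: reload marshalled rows into the (emptied) table."""
    with open(path, "rb") as fh:
        rows = marshal.load(fh)
    placeholders = ", ".join("?" for _ in rows[0])
    conn.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)

path = os.path.join(tempfile.mkdtemp(), "person.dump")
dump_table(conn, "person", path)
conn.execute("DELETE FROM person")   # simulate a lost/corrupted table
load_table(conn, "person", path)
print(conn.execute("SELECT * FROM person ORDER BY id").fetchall())
# [(1, 'Ada'), (2, 'Grace')]
```

Because the dump is per-table, a single damaged table can be restored without touching the rest of the database, which is the first advantage the abstract claims.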

  20. LHCb Conditions Database Operation Assistance Systems

    CERN Multimedia

    Shapoval, Illya

    2012-01-01

    The Conditions Database of the LHCb experiment (CondDB) provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger, reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues: - an extension to the automatic content validation done by the “Oracle Streams” replication technology, to trap cases when the replication was unsuccessful; - an automated distribution process for the S...

  1. An online database for brain disease research

    Directory of Open Access Journals (Sweden)

    Richman Sam

    2006-04-01

    Full Text Available Abstract Background The Stanley Medical Research Institute online genomics database (SMRIDB) is a comprehensive web-based system for understanding the genetic effects of human brain disease (i.e. bipolar disorder, schizophrenia, and depression). This database contains fully annotated clinical metadata and gene expression patterns generated within 12 controlled studies across 6 different microarray platforms. Description A thorough collection of gene expression summaries is provided, inclusive of patient demographics, disease subclasses, regulated biological pathways, and functional classifications. Conclusion The combination of database content, structure, and query speed offers researchers an efficient tool for data mining of brain disease, complete with information such as cross-platform comparisons, biomarker elucidation for target discovery, and lifestyle/demographic associations with brain diseases.

  2. Sequence History Update Tool

    Science.gov (United States)

    Khanampompan, Teerapat; Gladden, Roy; Fisher, Forest; DelGuercio, Chris

    2008-01-01

    The Sequence History Update Tool performs Web-based sequence statistics archiving for Mars Reconnaissance Orbiter (MRO). Using a single UNIX command, the software takes advantage of sequencing conventions to automatically extract the needed statistics from multiple files. This information is then used to populate a PHP database, which is then seamlessly formatted into a dynamic Web page. This tool replaces a previous tedious and error-prone process of manually editing HTML code to construct a Web-based table. Because the tool manages all of the statistics gathering and file delivery to and from multiple data sources spread across multiple servers, there is also a considerable savings of time and effort. With the Sequence History Update Tool, what previously took minutes is now done in less than 30 seconds, and the tool provides a more accurate archival record of the sequence commanding for MRO.
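The idea of exploiting naming conventions to extract statistics automatically can be sketched as follows. The filename convention, field names, and values here are hypothetical, not MRO's actual conventions:

```python
import re
from collections import defaultdict

# Hypothetical filename convention (not MRO's actual one):
# <sequence>_p<pass>_<command_count>.seq, e.g. "mro_0042_p1_317.seq"
FILENAME_RE = re.compile(r"^(?P<seq>\w+_\d+)_p(?P<pass>\d+)_(?P<commands>\d+)\.seq$")

def gather_stats(filenames):
    """Aggregate per-sequence command counts from conventionally named files."""
    stats = defaultdict(int)
    for name in filenames:
        m = FILENAME_RE.match(name)
        if m:
            stats[m.group("seq")] += int(m.group("commands"))
    return dict(stats)

files = ["mro_0042_p1_317.seq", "mro_0042_p2_12.seq", "mro_0043_p1_88.seq",
         "README.txt"]  # non-matching files are ignored
print(gather_stats(files))
# {'mro_0042': 329, 'mro_0043': 88}
```

A dictionary like this is trivial to insert into a database table or render as an HTML table, which is the automation the abstract describes replacing hand-edited HTML.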

  3. The Institute for Rock Magnetism Facility Database

    Science.gov (United States)

    Jackson, M. J.; Sølheid, P.; Bowles, J. A.; Moskowitz, B. M.; Feinberg, J. M.

    2012-12-01

    The Institute for Rock Magnetism (IRM) is one of 19 National Multi-User Facilities supported by the Instruments and Facilities program of NSF for geoscience research that requires complex, expensive and advanced instrumentation. Visiting and in-house researchers at the IRM have access to sensitive laboratory instruments for magnetometry, magnetic microscopy and Mössbauer spectroscopy, for carrying out a wide variety of experiments under a range of applied field and temperature conditions. Results are used to gain insight into a very diverse assortment of natural materials and phenomena including biomagnetism, environmental magnetism, petrofabrics, nanophase materials, shocked materials, and paleomagnetism of terrestrial and extraterrestrial materials. A comprehensive laboratory database has been in operation since 2004, storing detailed experimental data and metadata for more than 250 facility users, with measurements on over 50,000 specimens, including over one million remanence measurements and 45,000 hysteresis loops. Custom software tools provide consistent and reliable handling of basic data processing (e.g., mass normalization and unit conversion), as well as more advanced interactive analysis (e.g., deconvolution of u-channel paleomagnetic data; filtering and statistical tests for high-field nonlinearity in calculating hysteresis loop parameters; thermal fluctuation tomography using T-dependent switching-field distributions from backfield remanence measurements or hysteresis loops). Users are also able to access their data and the custom software tools remotely once they leave the IRM for their home institutions. A key advantage of an integrated database/software system for a facility like the IRM is that it provides a rapid and automatic means of combining different kinds of data measured on different instruments. An important design consideration in the development of the facility database has been structural compatibility with the community-wide Mag
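As an illustration of the "basic data processing" the abstract mentions, a minimal mass-normalization step might look like the following. The constants are standard CGS-to-SI conversions; the function and variable names are illustrative, not the IRM's actual code:

```python
# Minimal sketch of the kind of basic processing the IRM database applies
# (mass normalization and unit conversion); names and values are illustrative.

EMU_TO_AM2 = 1e-3  # 1 emu = 1e-3 A·m² (CGS to SI)

def mass_normalize(moment_emu, mass_mg):
    """Convert a measured moment in emu to specific magnetization in A·m²/kg."""
    moment_si = moment_emu * EMU_TO_AM2   # A·m²
    mass_kg = mass_mg * 1e-6              # mg → kg
    return moment_si / mass_kg            # A·m²/kg

# A 25 mg specimen carrying a 5e-4 emu remanence:
print(mass_normalize(5e-4, 25.0))  # ≈ 0.02 A·m²/kg
```

Centralizing steps like this in one pipeline is what lets measurements from different instruments be combined consistently, the key advantage the abstract highlights.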

  4. The Danish Melanoma Database

    Directory of Open Access Journals (Sweden)

    Hölmich Lr

    2016-10-01

    Full Text Available Lisbet Rosenkrantz Hölmich,1 Siri Klausen,2 Eva Spaun,3 Grethe Schmidt,4 Dorte Gad,5 Inge Marie Svane,6,7 Henrik Schmidt,8 Henrik Frank Lorentzen,9 Else Helene Ibfelt10 1Department of Plastic Surgery, 2Department of Pathology, Herlev-Gentofte Hospital, University of Copenhagen, Herlev, 3Institute of Pathology, Aarhus University Hospital, Aarhus, 4Department of Plastic and Reconstructive Surgery, Breast Surgery and Burns, Rigshospitalet – Glostrup, University of Copenhagen, Copenhagen, 5Department of Plastic Surgery, Odense University Hospital, Odense, 6Center for Cancer Immune Therapy, Department of Hematology, 7Department of Oncology, Herlev-Gentofte Hospital, University of Copenhagen, Herlev, 8Department of Oncology, 9Department of Dermatology, Aarhus University Hospital, Aarhus, 10Registry Support Centre (East – Epidemiology and Biostatistics, Research Centre for Prevention and Health, Glostrup – Rigshospitalet, University of Copenhagen, Glostrup, Denmark Aim of database: The aim of the database is to monitor and improve the treatment and survival of melanoma patients.Study population: All Danish patients with cutaneous melanoma and in situ melanomas must be registered in the Danish Melanoma Database (DMD. In 2014, 2,525 patients with invasive melanoma and 780 with in situ tumors were registered. The coverage is currently 93% compared with the Danish Pathology Register.Main variables: The main variables include demographic, clinical, and pathological characteristics, including Breslow’s tumor thickness, ± ulceration, mitoses, and tumor–node–metastasis stage. Information about the date of diagnosis, treatment, type of surgery, including safety margins, results of lymphoscintigraphy in patients for whom this was indicated (tumors > T1a, results of sentinel node biopsy, pathological evaluation hereof, and follow-up information, including recurrence, nature, and treatment hereof is registered. In case of death, the cause and date

  5. DataBase on Demand

    International Nuclear Information System (INIS)

    Aparicio, R Gaspar; Gomez, D; Wojcik, D; Coz, I Coterillo

    2012-01-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle-based database services. The Database on Demand (DBoD) empowers the user to perform certain actions that had traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines: presently the open community version of MySQL and single-instance Oracle database servers. This article describes a technology approach to face this challenge, the service level agreement (SLA) that the project provides, and an evolution of possible scenarios.

  6. DATABASES DEVELOPED IN INDIA FOR BIOLOGICAL SCIENCES

    Directory of Open Access Journals (Sweden)

    Gitanjali Yadav

    2017-09-01

    Full Text Available The complexity of biological systems requires the use of a variety of experimental methods of ever increasing sophistication to probe various cellular processes at molecular and atomic resolution. The availability of technologies for determining nucleic acid sequences of genes and atomic resolution structures of biomolecules prompted the development of major biological databases like GenBank and PDB almost four decades ago. India was one of the few countries to realize early the utility of such databases for progress in modern biology/biotechnology. The Department of Biotechnology (DBT), India, established the Biotechnology Information System (BTIS) network in the late eighties. Starting with the genome sequencing revolution at the turn of the century, the application of high-throughput sequencing technologies in biology and medicine for the analysis of genomes, transcriptomes, epigenomes and microbiomes has generated massive volumes of sequence data. The BTIS network has not only provided state-of-the-art computational infrastructure to research institutes and universities for utilizing various biological databases developed abroad in their research, it has also actively promoted research and development (R&D) projects in bioinformatics to develop a variety of biological databases in diverse areas. It is encouraging to note that a large number of biological databases and data-driven software tools developed in India have been published in leading peer-reviewed international journals like Nucleic Acids Research, Bioinformatics, Database, BMC and PLoS journals, and NPG series publications. Some of these databases are not only unique, they are also highly accessed, as reflected in the number of citations. Apart from databases developed by individual research groups, BTIS has initiated consortium projects to develop major India-centric databases on Mycobacterium tuberculosis, Rice and Mango, which can potentially have practical applications in health and agriculture. Many of these biological

  7. A user's manual for managing database system of tensile property

    International Nuclear Information System (INIS)

    Ryu, Woo Seok; Park, S. J.; Kim, D. H.; Jun, I.

    2003-06-01

    This manual is written for the management and maintenance of the tensile database system, which manages tensile property test data. A database built from tensile property test results increases the usefulness of those results: the basic data can easily be retrieved from the database when a new experiment is being prepared, and better results can be produced by comparison with the previous data. To develop the database, the application must be analyzed and designed carefully; only then can customers' varied requirements be met with the best quality. The tensile database system was developed as a web application using the Java, PL/SQL, and JSP (Java Server Pages) tools.

  8. A web-based database for EPR centers in semiconductors

    International Nuclear Information System (INIS)

    Umeda, T.; Hagiwara, S.; Katagiri, M.; Mizuochi, N.; Isoya, J.

    2006-01-01

    We develop a web-based database system for electron paramagnetic resonance (EPR) centers in semiconductors. This database is available to anyone at http://www.kc.tsukuba.ac.jp/div-media/epr/. It currently has more than 300 records of the spin-Hamiltonian parameters for major known EPR centers. One can upload own new records to the database or can use simulation tools powered by EPR-NMR(C). Here, we describe the features and objectives of this database, and mention some future plans

  9. DEPOT database: Reference manual and user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Clancey, P.; Logg, C.

    1991-03-01

    DEPOT has been developed to provide tracking for the Stanford Linear Collider (SLC) control system equipment. For each piece of equipment entered into the database, complete location, service, maintenance, modification, certification, and radiation exposure histories can be maintained. To facilitate data entry accuracy, efficiency, and consistency, barcoding technology has been used extensively. DEPOT has been an important tool in improving the reliability of the microsystems controlling SLC. This document describes the components of the DEPOT database, the elements in the database records, and the use of the supporting programs for entering data, searching the database, and producing reports from the information.

  10. Solid Waste Projection Model: Database (Version 1.3)

    International Nuclear Information System (INIS)

    Blackburn, C.L.

    1991-11-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.3 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement

  11. REPLIKASI UNIDIRECTIONAL PADA HETEROGEN DATABASE

    OpenAIRE

    Hendro Nindito; Evaristus Didik Madyatmadja; Albert Verasius Dian Sano

    2013-01-01

    The use of diverse database technologies in today's enterprises cannot be avoided, so technology is needed to generate information in real time. The purpose of this research is to discuss a database replication technology that can be applied in heterogeneous database environments. In this study we replicate from a Windows-based MS SQL Server database to a Linux-based Oracle database as the goal. The research method used is prototyping, in which development can be done quickly, and testing of working models of the...
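A naive unidirectional replication pass can be sketched with two in-memory SQLite databases standing in for the heterogeneous MS SQL Server/Oracle pair; the actual prototype would use each vendor's own driver, and the table and column names here are invented:

```python
import sqlite3

# Two in-memory SQLite databases stand in for the heterogeneous pair
# (MS SQL Server source, Oracle target); names are illustrative only.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for db in (source, target):
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

source.executemany("INSERT INTO orders VALUES (?, ?)",
                   [(1, 9.5), (2, 20.0)])

def replicate_once(src, dst, table):
    """Naive unidirectional snapshot replication: copy rows missing in dst."""
    src_rows = src.execute(f"SELECT * FROM {table}").fetchall()
    dst_ids = {r[0] for r in dst.execute(f"SELECT id FROM {table}")}
    new_rows = [r for r in src_rows if r[0] not in dst_ids]
    dst.executemany(f"INSERT INTO {table} VALUES (?, ?)", new_rows)
    dst.commit()
    return len(new_rows)

print(replicate_once(source, target, "orders"))  # 2 rows copied
source.execute("INSERT INTO orders VALUES (3, 1.25)")
print(replicate_once(source, target, "orders"))  # 1 row copied
```

A production system would ship incremental changes (triggers, change-data-capture, or log mining) rather than rescanning full snapshots, but the one-way flow of data is the same.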

  12. Database on aircraft accidents

    International Nuclear Information System (INIS)

    Nishio, Masahide; Koriyama, Tamio

    2013-11-01

    The Reactor Safety Subcommittee in the Nuclear Safety and Preservation Committee published 'The criteria on assessment of probability of aircraft crash into light water reactor facilities' as the standard method for evaluating the probability of an aircraft crash into nuclear reactor facilities in July 2002. In response to this issue, the Japan Nuclear Energy Safety Organization has been collecting open information on aircraft accidents of commercial airplanes, self-defense force (SDF) airplanes and US force airplanes every year since 2003, sorting it out and developing the database of aircraft accidents for the latest 20 years to evaluate the probability of an aircraft crash into nuclear reactor facilities. In this report the database was revised by adding aircraft accidents in 2011 to the existing database and deleting aircraft accidents in 1991 from it, resulting in the revised 2012 database covering the latest 20 years from 1992 to 2011. Furthermore, flight information on commercial aircraft was also collected to develop the flight database for the latest 20 years from 1992 to 2011 to evaluate the probability of an aircraft crash into reactor facilities. The method for developing the database of aircraft accidents is based on the report 'The criteria on assessment of probability of aircraft crash into light water reactor facilities' described above. The 2012 revised database for the latest 20 years from 1992 to 2011 shows the following. The trend of the 2012 database changes little compared with last year's report. (1) The data on commercial aircraft accidents is based on the 'Aircraft accident investigation reports of Japan transport safety board' of the Ministry of Land, Infrastructure, Transport and Tourism. The number of commercial aircraft accidents is 4 for large fixed-wing aircraft, 58 for small fixed-wing aircraft, 5 for large bladed aircraft and 99 for small bladed aircraft.
The relevant accidents

  13. Danish Palliative Care Database

    DEFF Research Database (Denmark)

    Grønvold, Mogens; Adsersen, Mathilde; Hansen, Maiken Bang

    2016-01-01

    Aims: The aim of the Danish Palliative Care Database (DPD) is to monitor, evaluate, and improve the clinical quality of specialized palliative care (SPC) (ie, the activity of hospital-based palliative care teams/departments and hospices) in Denmark. Study population: The study population is all...... patients in Denmark referred to and/or in contact with SPC after January 1, 2010. Main variables: The main variables in DPD are data about referral for patients admitted and not admitted to SPC, type of the first SPC contact, clinical and sociodemographic factors, multidisciplinary conference...... patients were registered in DPD during the 5 years 2010–2014. Of those registered, 96% had cancer. Conclusion: DPD is a national clinical quality database for SPC having clinically relevant variables and high data and patient completeness....

  14. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1992-11-09

    The database provides bibliographic citations and abstracts for publications that may be useful in the research and design of air-conditioning and refrigeration equipment. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others, as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents on the compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. A computerized version is available that includes retrieval software.

  15. Geologic Field Database

    Directory of Open Access Journals (Sweden)

    Katarina Hribernik

    2002-12-01

    Full Text Available The purpose of the paper is to present the field data relational database, which was compiled from data gathered during thirty years of fieldwork on the Basic Geologic Map of Slovenia in scale 1:100,000. The database was created using MS Access software. The MS Access environment ensures its stability and effective operation despite changing, searching, and updating the data. It also enables faster and easier user-friendly access to the field data. Last but not least, in the long term, with the data transferred into the GIS environment, it will provide the basis for the sound geologic information system that will satisfy a broad spectrum of geologists’ needs.

  16. Database on aircraft accidents

    International Nuclear Information System (INIS)

    Nishio, Masahide; Koriyama, Tamio

    2012-09-01

    The Reactor Safety Subcommittee in the Nuclear Safety and Preservation Committee published the report 'The criteria on assessment of probability of aircraft crash into light water reactor facilities' as the standard method for evaluating the probability of an aircraft crash into nuclear reactor facilities in July 2002. In response to the report, the Japan Nuclear Energy Safety Organization has been collecting open information on aircraft accidents of commercial airplanes, self-defense force (SDF) airplanes and US force airplanes every year since 2003, sorting it out and developing the database of aircraft accidents for the latest 20 years to evaluate the probability of an aircraft crash into nuclear reactor facilities. This year, the database was revised by adding aircraft accidents in 2010 to the existing database and deleting aircraft accidents in 1991 from it, resulting in the revised 2011 database covering the latest 20 years from 1991 to 2010. Furthermore, flight information on commercial aircraft was also collected to develop the flight database for the latest 20 years from 1991 to 2010 to evaluate the probability of an aircraft crash into reactor facilities. The method for developing the database of aircraft accidents is based on the report 'The criteria on assessment of probability of aircraft crash into light water reactor facilities' described above. The 2011 revised database for the latest 20 years from 1991 to 2010 shows the following. The trend of the 2011 database changes little compared with last year's database. (1) The data on commercial aircraft accidents is based on the 'Aircraft accident investigation reports of Japan transport safety board' of the Ministry of Land, Infrastructure, Transport and Tourism. 4 large fixed-wing aircraft accidents, 58 small fixed-wing aircraft accidents, 5 large bladed aircraft accidents and 114 small bladed aircraft accidents occurred. The relevant accidents for evaluating

  17. THE EXTRAGALACTIC DISTANCE DATABASE

    International Nuclear Information System (INIS)

    Tully, R. Brent; Courtois, Helene M.; Jacobs, Bradley A.; Rizzi, Luca; Shaya, Edward J.; Makarov, Dmitry I.

    2009-01-01

    A database can be accessed on the Web at http://edd.ifa.hawaii.edu that was developed to promote access to information related to galaxy distances. The database has three functional components. First, tables from many literature sources have been gathered and enhanced with links through a distinct galaxy naming convention. Second, comparisons of results both at the levels of parameters and of techniques have begun and are continuing, leading to increasing homogeneity and consistency of distance measurements. Third, new material is presented arising from ongoing observational programs at the University of Hawaii 2.2 m telescope, radio telescopes at Green Bank, Arecibo, and Parkes and with the Hubble Space Telescope. This new observational material is made available in tandem with related material drawn from archives and passed through common analysis pipelines.
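Distance compilations such as this typically report distance moduli; the standard conversion between a distance modulus μ = m - M and a distance in megaparsecs is d = 10^((μ - 25)/5). This is a textbook relation, not anything EDD-specific:

```python
import math

def modulus_to_mpc(mu):
    """Convert a distance modulus mu = m - M to a distance in megaparsecs."""
    return 10 ** ((mu - 25.0) / 5.0)

def mpc_to_modulus(d_mpc):
    """Inverse conversion, for a distance d in Mpc."""
    return 5.0 * math.log10(d_mpc) + 25.0

# mu = 30 corresponds to 10 Mpc:
print(modulus_to_mpc(30.0))   # 10.0
print(mpc_to_modulus(10.0))   # 30.0
```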

  18. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Kwon, Young Min; Lee, Young Bum; Chang, Won Pyo; Hahn, Do Hee

    2001-10-01

    The KALIMER Database is an advanced database for the integrated management of Liquid Metal Reactor design technology development using web applications. The KALIMER design database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation system, and Reserved Documents. The Results Database holds the research results produced during Phase II of Liquid Metal Reactor design technology development under the mid-term and long-term nuclear R&D program. IOC is a linkage control system between subprojects for sharing and integrating the research results for KALIMER. The 3D CAD database provides a schematic design overview of KALIMER. The Team Cooperation system informs team members of research cooperation and meetings. Finally, KALIMER Reserved Documents was developed to manage the data and documents collected since the project's accomplishment. This report describes the hardware and software features and the database design methodology for KALIMER

  19. Evolution and applications of plant pathway resources and databases

    DEFF Research Database (Denmark)

    Sucaet, Yves; Deva, Taru

    2011-01-01

    Plants are important sources of food and plant products are essential for modern human life. Plants are increasingly gaining importance as drug and fuel resources, bioremediation tools and as tools for recombinant technology. Considering these applications, database infrastructure for plant model...... systems deserves much more attention. Study of plant biological pathways, the interconnection between these pathways and plant systems biology on the whole has in general lagged behind human systems biology. In this article we review plant pathway databases and the resources that are currently available....... We lay out trends and challenges in the ongoing efforts to integrate plant pathway databases and the applications of database integration. We also discuss how progress in non-plant communities can serve as an example for the improvement of the plant pathway database landscape and thereby allow...

  20. The PEP-II project-wide database

    International Nuclear Information System (INIS)

    Chan, A.; Calish, S.; Crane, G.; MacGregor, I.; Meyer, S.; Wong, J.

    1995-05-01

    The PEP-II Project Database is a tool for monitoring the technical and documentation aspects of the construction of this accelerator. It holds the PEP-II design specifications, fabrication, and installation data in one integrated system. Key pieces of the database include the machine parameter list, magnet and vacuum fabrication data, CAD drawings, publications and documentation, survey and alignment data, and property control. The database can be extended to contain information required for the operations phase of the accelerator and detector. Features such as viewing CAD drawing graphics from the database will be implemented in the future. This central Oracle database on a UNIX server is built using ORACLE CASE tools. Users at the three collaborating laboratories (SLAC, LBL, LLNL) can access the data remotely, using various desktop computer platforms and graphical interfaces

  1. Developing customer databases.

    Science.gov (United States)

    Rao, S K; Shenbaga, S

    2000-01-01

    There is a growing consensus among pharmaceutical companies that more product and customer-specific approaches to marketing and selling a new drug can result in substantial increases in sales. Marketers and researchers taking a proactive micro-marketing approach to identifying, profiling, and communicating with target customers are likely to facilitate such approaches and outcomes. This article provides a working framework for creating customer databases that can be effectively mined to achieve a variety of such marketing and sales force objectives.

  2. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1996-07-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  3. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1999-01-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  4. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1996-11-15

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  5. Teradata Database System Optimization

    OpenAIRE

    Krejčík, Jan

    2008-01-01

    The Teradata database system is specially designed for the data warehousing environment. This thesis explores the use of Teradata in this environment and describes its characteristics and potential areas for optimization. The theoretical part is intended as user study material; it shows the main principles of Teradata system operation and describes factors significantly affecting system performance. The following sections are based on previously acquired information, which is used for analysis and ...

  6. The CYATAXO Database

    Czech Academy of Sciences Publication Activity Database

    Komárková, Jaroslava; Nedoma, Jiří

    2006-01-01

    Roč. 6, - (2006), s. 49-54 ISSN 1213-3434 R&D Projects: GA AV ČR(CZ) IAA6005308; GA AV ČR(CZ) IBS6017004 Grant - others: EC(XE) EVK2-CT-1999-00026 Institutional research plan: CEZ:AV0Z60170517 Keywords: Database CYATAXO * cyanobacteria * taxonomy * water-blooms Subject RIV: DJ - Water Pollution; Quality

  7. A student database

    OpenAIRE

    Kemaloğlu, Turgut

    1990-01-01

    Ankara : Department of Management and Graduate School of Business Administration, Bilkent Univ., 1990. Thesis (Master's) -- Bilkent University, 1990. Includes bibliographical references. This thesis is a design of a student database system which will manage the data of university students. The aim of the program is to obtain sorted lists of students according to several parameters, to obtain the frequency of grades for a specified course, to design a suitable sheet w...

  8. BDVC (Bimodal Database of Violent Content): A database of violent audio and video

    Science.gov (United States)

    Rivera Martínez, Jose Luis; Mijes Cruz, Mario Humberto; Rodríguez Vázqu, Manuel Antonio; Rodríguez Espejo, Luis; Montoya Obeso, Abraham; García Vázquez, Mireya Saraí; Ramírez Acosta, Alejandro Álvaro

    2017-09-01

    Nowadays there is a trend towards the use of unimodal databases for multimedia content description, organization and retrieval applications of a single type of content like text, voice and images; bimodal databases, in contrast, make it possible to associate two different types of content semantically, such as audio-video or image-text, among others. The generation of a bimodal audio-video database implies the creation of a connection between the multimedia content through the semantic relation that associates the actions of both types of information. This paper describes in detail the characteristics and methodology used for the creation of the bimodal database of violent content; the semantic relationship is established by the proposed concepts that describe the audiovisual information. The use of bimodal databases in applications related to audiovisual content processing increases semantic performance if and only if these applications process both types of content. This bimodal database contains 580 annotated audiovisual segments, with a duration of 28 minutes, divided into 41 classes. Bimodal databases are a tool in the generation of applications for the semantic web.

  9. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1997-02-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on various refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  10. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1998-08-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on many refrigerants including propane, ammonia, water, carbon dioxide, propylene, ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  11. MEROPS: the peptidase database.

    Science.gov (United States)

    Rawlings, Neil D; Morton, Fraser R; Kok, Chai Yin; Kong, Jun; Barrett, Alan J

    2008-01-01

    Peptidases (proteolytic enzymes or proteases), their substrates and inhibitors are of great relevance to biology, medicine and biotechnology. The MEROPS database (http://merops.sanger.ac.uk) aims to fulfil the need for an integrated source of information about these. The organizational principle of the database is a hierarchical classification in which homologous sets of peptidases and protein inhibitors are grouped into protein species, which are grouped into families and in turn grouped into clans. Important additions to the database include newly written, concise text annotations for peptidase clans and the small molecule inhibitors that are outside the scope of the standard classification; displays to show peptidase specificity compiled from our collection of known substrate cleavages; tables of peptidase-inhibitor interactions; and dynamically generated alignments of representatives of each protein species at the family level. New ways to compare peptidase and inhibitor complements between any two organisms whose genomes have been completely sequenced, or between different strains or subspecies of the same organism, have been devised.
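    The clan/family/protein-species hierarchy described above can be sketched as a small nested mapping. This is a minimal sketch, assuming hypothetical entries: the clan and family identifiers follow MEROPS-style naming, but the records themselves are illustrative, not drawn from the database.

```python
# Minimal sketch of a MEROPS-style hierarchy: clans -> families -> protein species.
# The entries below are illustrative examples, not actual MEROPS records.

merops = {
    "CA": {                                 # clan (shared fold / mechanism)
        "C1": ["papain", "cathepsin L"],    # family -> protein species
        "C2": ["calpain-1"],
    },
    "MA": {
        "M10": ["matrix metallopeptidase-2"],
    },
}

def clan_of(species):
    """Walk the hierarchy to find the clan and family of a protein species."""
    for clan, families in merops.items():
        for family, members in families.items():
            if species in members:
                return clan, family
    return None

print(clan_of("papain"))  # -> ('CA', 'C1')
```

A real client would query merops.sanger.ac.uk rather than a local dictionary; the point here is only the two-level grouping the abstract describes.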

  12. Curcumin Resource Database

    Science.gov (United States)

    Kumar, Anil; Chetia, Hasnahana; Sharma, Swagata; Kabiraj, Debajyoti; Talukdar, Narayan Chandra; Bora, Utpal

    2015-01-01

    Curcumin is one of the most intensively studied diarylheptanoids, Curcuma longa being its principal producer. Apart from this, a class of promising curcumin analogs, aptly named curcuminoids, has been generated in laboratories and is showing huge potential in fields such as medicine and food technology. The lack of a universal source of data on curcumin as well as curcuminoids has long been felt by the curcumin research community. Hence, in an attempt to address this stumbling block, we have developed the Curcumin Resource Database (CRDB), which aims to serve as a gateway-cum-repository for all relevant data and related information on curcumin and its analogs. Currently, this database encompasses 1186 curcumin analogs, 195 molecular targets, 9075 peer-reviewed publications, 489 patents and 176 varieties of C. longa obtained by extensive data mining and careful curation from numerous sources. Each data entry is identified by a unique CRDB ID (identifier). Furnished with a user-friendly web interface and an in-built search engine, CRDB provides well-curated and cross-referenced information hyperlinked with external sources. CRDB is expected to be highly useful to researchers working on structure- as well as ligand-based molecular design of curcumin analogs. Database URL: http://www.crdb.in PMID:26220923

  13. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Cain, J.M. [Calm (James M.), Great Falls, VA (United States)

    1993-04-30

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents to accelerate availability of the information and will be completed or replaced in future updates.

  14. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1996-04-15

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates. Citations in this report are divided into the following topics: thermophysical properties; materials compatibility; lubricants and tribology; application data; safety; test and analysis methods; impacts; regulatory actions; substitute refrigerants; identification; absorption and adsorption; research programs; and miscellaneous documents. Information is also presented on ordering instructions for the computerized version.

  15. Interactive database management (IDM).

    Science.gov (United States)

    Othman, R

    1995-08-01

    Interactive database management (IDM) is data editing software that provides complete data editing at the time of initial data entry, when information is 'fresh at hand'. Under the new interactive system, initial data recording is subjected to instant data editing by the interactive computer software logic. Data are immediately entered in final form to the database and are available for analysis. IDM continuously checks all variables for acceptability, completeness, and consistency. IDM does not allow form duplication. Many functions, including backups, have been automated. The interactive system can export the database to other systems. The software has been implemented for two Department of Veterans Affairs Cooperative Studies (CCSHS #5 and CSP #385), which collect data for 1400 and 1000 variables, respectively, at 28 VA medical centers. IDM is extremely user friendly and simple to operate. Researchers with no computer background can be trained quickly and easily to use the system. IDM is deployed on notebook microcomputers, making it portable for use anywhere in the hospital setting.
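    The entry-time editing IDM performs can be illustrated with a short sketch covering the three checks named above: acceptability (range checks), completeness (required fields), and consistency (cross-field rules). The field names and rules below are hypothetical, not taken from the VA studies.

```python
# Sketch of entry-time editing in the spirit of IDM: a record is checked for
# acceptability, completeness, and consistency before it enters the database.
# Field names and rules are hypothetical examples.

RULES = {
    "age": lambda v: 0 <= v <= 120,
    "systolic_bp": lambda v: 50 <= v <= 300,
}
REQUIRED = {"patient_id", "age", "systolic_bp"}

def edit_record(record):
    """Return a list of edit messages; an empty list means the record passes."""
    errors = []
    missing = REQUIRED - record.keys()
    if missing:
        errors.append(f"incomplete: missing {sorted(missing)}")
    for field, ok in RULES.items():
        if field in record and not ok(record[field]):
            errors.append(f"unacceptable value for {field}: {record[field]}")
    # consistency: a cross-field rule relating two entries
    if record.get("age", 1) < 1 and record.get("systolic_bp"):
        errors.append("inconsistent: blood pressure recorded for age < 1")
    return errors

print(edit_record({"patient_id": "A1", "age": 150, "systolic_bp": 120}))
```

Rejecting a record at entry time, while the source form is still at hand, is what distinguishes this style from batch editing after the fact.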

  16. The Cambridge Structural Database.

    Science.gov (United States)

    Groom, Colin R; Bruno, Ian J; Lightfoot, Matthew P; Ward, Suzanna C

    2016-04-01

    The Cambridge Structural Database (CSD) contains a complete record of all published organic and metal-organic small-molecule crystal structures. The database has been in operation for over 50 years and continues to be the primary means of sharing structural chemistry data and knowledge across disciplines. As well as structures that are made public to support scientific articles, it includes many structures published directly as CSD Communications. All structures are processed both computationally and by expert structural chemistry editors prior to entering the database. A key component of this processing is the reliable association of the chemical identity of the structure studied with the experimental data. This important step helps ensure that data is widely discoverable and readily reusable. Content is further enriched through selective inclusion of additional experimental data. Entries are available to anyone through free CSD community web services. Linking services developed and maintained by the CCDC, combined with the use of standard identifiers, facilitate discovery from other resources. Data can also be accessed through CCDC and third party software applications and through an application programming interface.

  17. Dynamic delivery of the National Transit Database Sampling Manual.

    Science.gov (United States)

    2013-02-01

    This project improves the National Transit Database (NTD) Sampling Manual and develops an Internet-based, WordPress-powered interactive Web tool to deliver the new NTD Sampling Manual dynamically. The new manual adds guidance and a tool for transit a...

  18. WOVOdat Progress 2012: Installable DB template for Volcano Monitoring Database

    Science.gov (United States)

    Ratdomopurbo, A.; Widiwijayanti, C.; Win, N.-T.-Z.; Chen, L.-D.; Newhall, C.

    2012-04-01

    WOVOdat is the World Organization of Volcano Observatories' (WOVO) Database of Volcanic Unrest. Volcanoes are frequently restless, but only a fraction of unrest leads to eruptions. We aim to compile the data of historical volcanic unrest and make them available as a reference tool during volcanic crises, so that observatories and other users can compare unrest episodes or look for systematics across many of them, and also to provide educational tools for teachers and students on understanding volcanic processes. Furthermore, we promote the use of relational databases in countries that are still planning to develop their own monitoring databases. We are now in the process of populating WOVOdat in collaboration with volcano observatories worldwide. Proprietary data remain at the observatories they originally came from. Therefore, users who wish to use the data for publication or to obtain detailed information about the data should contact the observatories directly. To encourage the use of relational database systems in volcano observatories with no monitoring database, the WOVOdat project is preparing an installable standalone package. This package is freely downloadable through our website (www.wovodat.org), ready to install and serve as a database system in the local domain to host various types of volcano monitoring data. The WOVOdat project is now hosted at the Earth Observatory of Singapore (Nanyang Technological University). In the current stage of data population, our website supports interaction between WOVOdat developers, observatories, and other partners in building the database, e.g. accessing the schematic design, information and documentation, and also data submission. In anticipation of the various data formats coming from different observatories, we provide interactive tools for users to convert their data into standard WOVOdat format files before uploading and storing them in the database system. We are also developing various visualization tools that will be integrated in the system to ease

  19. Wilmar Planning Tool, user guide

    International Nuclear Information System (INIS)

    Larsen, Helge V.

    2006-01-01

    This is a short user guide to the Wilmar Planning Tool developed in the project Wind Power Integration in Liberalised Electricity Markets (WILMAR) supported by EU (Contract No. ENK5-CT-2002-00663). A User Shell implemented in an Excel workbook controls the Wilmar Planning Tool. All data are contained in Access databases that communicate with various sub-models through text files that are exported from or imported to the databases. In the User Shell various scenario variables and control parameters are set, and export of model data from the input database, activation of the models, as well as import of model results to the output database are triggered from the shell. (au)

  20. JICST Factual DatabaseJICST Chemical Substance Safety Regulation Database

    Science.gov (United States)

    Abe, Atsushi; Sohma, Tohru

    JICST Chemical Substance Safety Regulation Database is based on the Database of Safety Laws for Chemical Compounds constructed by the Japan Chemical Industry Ecology-Toxicology & Information Center (JETOC), sponsored by the Science and Technology Agency in 1987. JICST modified the JETOC database system, added data and started the online service through JOIS-F (JICST Online Information Service - Factual database) in January 1990. The JICST database comprises eighty-three laws and fourteen hundred compounds. The authors outline the database, data items, files and search commands. An example of an online session is presented.

  1. Database Description - DMPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us DMPD Database Description General information of database Database name DMPD Alternative nam...e Dynamic Macrophage Pathway CSML Database DOI 10.18908/lsdba.nbdc00558-000 Creator Creator Name: Masao Naga...ty of Tokyo 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639 Tel: +81-3-5449-5615 FAX: +83-3-5449-5442 E-mail: Database...606 Taxonomy Name: Mammalia Taxonomy ID: 40674 Database description DMPD collects... pathway models of transcriptional regulation and signal transduction in CSML format for dynamic simulation base

  2. WGDB: Wood Gene Database with search interface.

    Science.gov (United States)

    Goyal, Neha; Ginwal, H S

    2014-01-01

    Wood quality can be defined in terms of a particular end use involving several traits. Over the last fifteen years researchers have assessed wood quality traits in forest trees. Wood quality was categorized as cell wall biochemical traits and fibre properties, including the microfibril angle, density and stiffness, in loblolly pine [1]. A user-friendly, open-access database named Wood Gene Database (WGDB) has been developed for describing wood genes along with protein information and published research articles. It contains 720 wood genes from species of Pinus, Deodar, and the fast-growing trees Poplar and Eucalyptus. WGDB is designed to encompass the majority of publicly accessible genes coding for cellulose, hemicellulose and lignin in tree species, which are responsive to wood formation and quality. It is an interactive platform for collecting, managing and searching the specific wood genes; it also enables data mining related to genomic information, specifically in Arabidopsis thaliana, Populus trichocarpa, Eucalyptus grandis, Pinus taeda, Pinus radiata, Cedrus deodara, and Cedrus atlantica. For user convenience, this database is cross-linked with the public databases NCBI, EMBL & Dendrome and with the search engine Google to make it more informative, and it provides the bioinformatics tools BLAST and COBALT. The database is freely available at www.wgdb.in.

  3. Wilmar Planning Tool, VBA documentation

    International Nuclear Information System (INIS)

    Larsen, Helge V.

    2006-01-01

    This is a documentation of the VBA (Visual Basic for Applications) in the Wilmar Planning Tool. VBA is used in the Wilmar User Shell (an Excel workbook) and in the three Access databases that hold input, scenario and output data. The Wilmar Planning Tool is developed in the project Wind Power Integration in Liberalised Electricity Markets (WILMAR) supported by EU (contract ENK5-CT-2002-00663). The User Shell controls the operation of the Wilmar Planning Tool. In the User Shell various control parameters are set, and then a macro in the Input Database is run that writes input files for the Joint Market Model and the Long Term Model. Afterwards these models can be started from the User Shell. Finally, the User Shell can start a macro in the Output Database that imports the output files from the models. (LN)
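    The export-run-import cycle described here can be sketched in miniature. The sketch below is a hypothetical stand-in, not the actual WILMAR interfaces: the parameter names, the text-file format, and the toy "model" are all invented for illustration of the pipeline shape (input database exports text files, a model consumes them, an output database imports the results).

```python
# Sketch of a Wilmar-style pipeline: export input -> run model -> import output.
# Parameter names, file format, and the toy "model" are invented placeholders.
import csv, io

def export_input(params):
    """Stand-in for the Input Database macro: serialize scenario parameters
    to a CSV-formatted text payload."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for key, value in params.items():
        writer.writerow([key, value])
    return buf.getvalue()

def run_model(input_text):
    """Stand-in for a model run: parse the text file and double each value."""
    rows = csv.reader(io.StringIO(input_text))
    return {key: float(value) * 2 for key, value in rows}

def import_output(results):
    """Stand-in for the Output Database import: collect model results."""
    return dict(sorted(results.items()))

params = {"wind_capacity_mw": 500, "demand_gwh": 30}   # invented scenario
output = import_output(run_model(export_input(params)))
print(output)  # -> {'demand_gwh': 60.0, 'wind_capacity_mw': 1000.0}
```

Text files as the interchange medium, as in the real tool, keep the databases and the models loosely coupled: either side can be replaced as long as the file format is preserved.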

  4. Comparing the Performance of Object Databases and ORM Tools

    CSIR Research Space (South Africa)

    Van Zyl, P

    2006-10-01

    Full Text Available Source products. Because of its general popularity, reflected by the fact that most of the large persistence mechanism providers provide persistence for Java objects, it was decided to use Java objects for our studies. A consequence of this decision... is that the OO7 Benchmark, currently available in C++, has had to be re-implemented in Java. The results reported here are based on a partial re-implementation of the benchmark. Work is currently underway to develop a fuller implementation, but the results...

  5. North American Transportation Statistics Database - Data Mining Tool

    Data.gov (United States)

    Department of Transportation — Contains tables of data for the United States, Canada, and Mexico. Data tables are divided up into 12 categories, including a country overview, transportation flows,...

  6. ASFA database: A tool for marine science researchers

    Digital Repository Service at National Institute of Oceanography (India)

    Tapaswi, M.P.


  7. PLAST: parallel local alignment search tool for database comparison

    Directory of Open Access Journals (Sweden)

    Lavenier Dominique

    2009-10-01

    Full Text Available Abstract Background Sequence similarity searching is an important and challenging task in molecular biology and next-generation sequencing should further strengthen the need for faster algorithms to process such vast amounts of data. At the same time, the internal architecture of current microprocessors is tending towards more parallelism, leading to the use of chips with two, four and more cores integrated on the same die. The main purpose of this work was to design an effective algorithm to fit with the parallel capabilities of modern microprocessors. Results A parallel algorithm for comparing large genomic banks and targeting middle-range computers has been developed and implemented in PLAST software. The algorithm exploits two key parallel features of existing and future microprocessors: the SIMD programming model (SSE instruction set) and the multithreading concept (multicore). Compared to multithreaded BLAST software, tests performed on an 8-processor server have shown speedup ranging from 3 to 6 with a similar level of accuracy. Conclusion A parallel algorithmic approach driven by the knowledge of the internal microprocessor architecture allows significant speedup to be obtained while preserving standard sensitivity for similarity search problems.

  8. PLAST: parallel local alignment search tool for database comparison.

    Science.gov (United States)

    Nguyen, Van Hoa; Lavenier, Dominique

    2009-10-12

    Sequence similarity searching is an important and challenging task in molecular biology and next-generation sequencing should further strengthen the need for faster algorithms to process such vast amounts of data. At the same time, the internal architecture of current microprocessors is tending towards more parallelism, leading to the use of chips with two, four and more cores integrated on the same die. The main purpose of this work was to design an effective algorithm to fit with the parallel capabilities of modern microprocessors. A parallel algorithm for comparing large genomic banks and targeting middle-range computers has been developed and implemented in PLAST software. The algorithm exploits two key parallel features of existing and future microprocessors: the SIMD programming model (SSE instruction set) and the multithreading concept (multicore). Compared to multithreaded BLAST software, tests performed on an 8-processor server have shown speedup ranging from 3 to 6 with a similar level of accuracy. A parallel algorithmic approach driven by the knowledge of the internal microprocessor architecture allows significant speedup to be obtained while preserving standard sensitivity for similarity search problems.
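    The multicore half of this design can be sketched in a few lines; the SIMD/SSE inner loop requires intrinsics and is not reproduced. In the sketch below, a naive identity count stands in for PLAST's real seed-and-extend alignment scoring, and a thread pool stands in for its native threads.

```python
# Sketch of bank-vs-bank comparison distributed across workers, in the spirit
# of PLAST's multithreading; the SIMD/SSE layer is not reproduced here.
from concurrent.futures import ThreadPoolExecutor

def score(a, b):
    """Naive stand-in for alignment scoring: count matching positions."""
    return sum(x == y for x, y in zip(a, b))

def best_hits(queries, bank, workers=4):
    """Find the best-scoring bank sequence for each query, with queries
    distributed across a pool of worker threads."""
    def one(query):
        return max(bank, key=lambda seq: score(query, seq))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(one, queries))

bank = ["ACGTACGT", "ACGGACGT", "TTTTTTTT"]   # toy sequence bank
print(best_hits(["ACGTACGA"], bank))  # -> ['ACGTACGT']
```

In CPython the interpreter lock limits true parallelism for pure-Python scoring, so this only mirrors the structure; PLAST's speedup comes from native threads plus SSE instructions in the scoring kernel.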

  9. Models, Tools, and Databases for Land and Waste Management Research

    Science.gov (United States)

    These publicly available resources can be used for such tasks as simulating biodegradation or remediation of contaminants such as hydrocarbons, measuring sediment accumulation at superfund sites, or assessing toxicity and risk.

  10. Developing of impact and fatigue property test database system

    International Nuclear Information System (INIS)

    Park, S. J.; Jun, I.; Kim, D. H.; Ryu, W. S.

    2003-01-01

    The impact and fatigue characteristics database systems were constructed using the data produced from impact and fatigue tests, and were designed to share the data and programs of the tensile characteristics database constructed in 2001 and of other characteristics databases to be constructed in the future. We can easily get basic data from the impact and fatigue characteristics database systems when preparing a new experiment, and can produce high-quality results by comparison with previous data. The development part must be analyzed and designed more specifically to construct the database; after that, we can offer the best quality for customers' various requirements. In this thesis, we describe the procedure of analysis, design and development of the impact and fatigue characteristics database systems, developed as web applications using the JSP (Java Server Pages) tool.

  11. Developing of corrosion and creep property test database system

    International Nuclear Information System (INIS)

    Park, S. J.; Jun, I.; Kim, J. S.; Ryu, W. S.

    2004-01-01

    The corrosion and creep characteristics database systems were constructed using the data produced from corrosion and creep tests, and were designed to share the data and programs of the tensile, impact, and fatigue characteristics databases constructed since 2001 and of other characteristics databases to be constructed in the future. We can easily get basic data from the corrosion and creep characteristics database systems when preparing a new experiment, and can produce high-quality results by comparison with previous test results. The development part must be analyzed and designed more specifically to construct the database; after that, we can offer the best quality for customers' various requirements. In this thesis, we describe the procedure of analysis, design and development of the corrosion and creep characteristics database systems, developed as web applications using the JSP (Java Server Pages) tool.

  12. Survey of Machine Learning Methods for Database Security

    Science.gov (United States)

    Kamra, Ashish; Ber, Elisa

    Application of machine learning techniques to database security is an emerging area of research. In this chapter, we present a survey of various approaches that use machine learning/data mining techniques to enhance the traditional security mechanisms of databases. There are two key database security areas in which these techniques have found applications, namely, detection of SQL Injection attacks and anomaly detection for defending against insider threats. Apart from the research prototypes and tools, various third-party commercial products are also available that provide database activity monitoring solutions by profiling database users and applications. We present a survey of such products. We end the chapter with a primer on mechanisms for responding to database anomalies.
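    Profiling database users and flagging deviations, the core of the activity-monitoring products surveyed here, can be reduced to a minimal sketch. The command log and the frequency threshold below are invented for illustration, not drawn from any product in the survey.

```python
# Sketch of user-profile-based anomaly detection for database activity:
# profile = historical frequency of each SQL command type per user; a command
# whose frequency falls below a threshold is flagged. Data are invented.
from collections import Counter

def build_profile(history):
    """Profile a user as the relative frequency of each command type."""
    counts = Counter(history)
    total = sum(counts.values())
    return {cmd: n / total for cmd, n in counts.items()}

def is_anomalous(profile, command, threshold=0.05):
    """Flag a command whose historical frequency for this user is below the
    threshold (commands never seen before have frequency 0.0)."""
    return profile.get(command, 0.0) < threshold

history = ["SELECT"] * 95 + ["INSERT"] * 5   # invented activity log
profile = build_profile(history)
print(is_anomalous(profile, "DROP"))    # -> True (never seen before)
print(is_anomalous(profile, "SELECT"))  # -> False
```

Real systems condition the profile on far more context (tables touched, time of day, result sizes), but the response pipeline is the same: profile, score, flag, then apply one of the response mechanisms the chapter closes with.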

  13. Solid Waste Projection Model: Database (Version 1.4)

    International Nuclear Information System (INIS)

    Blackburn, C.; Cillan, T.

    1993-09-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.4 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement. Those interested in using the SWPM database should refer to the SWPM Database User's Guide. This document is available from the PNL Task M Project Manager (D. L. Stiles, 509-372-4358), the PNL Task L Project Manager (L. L. Armacost, 509-372-4304), the WHC Restoration Projects Section Manager (509-372-1443), or the WHC Waste Characterization Manager (509-372-1193)

  14. An XCT image database system

    International Nuclear Information System (INIS)

    Komori, Masaru; Minato, Kotaro; Koide, Harutoshi; Hirakawa, Akina; Nakano, Yoshihisa; Itoh, Harumi; Torizuka, Kanji; Yamasaki, Tetsuo; Kuwahara, Michiyoshi.

    1984-01-01

    In this paper, an expansion of the X-ray CT (XCT) examination history database into an XCT image database is discussed. The XCT examination history database has been constructed and is used for daily examination and investigation in our hospital. This database consists of alphanumeric information (locations, diagnoses and so on) for more than 15,000 cases, and for some of them we add tree-structured image data, which has the flexibility to accommodate various types of image data. This database system is written in the MUMPS database manipulation language. (author)
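    The arrangement described, flat alphanumeric examination records with optional tree-structured image data attached, can be sketched as nested dictionaries. The field names and the record below are hypothetical, not taken from the paper.

```python
# Sketch of an XCT-style record: flat alphanumeric examination data plus an
# optional tree of image data hanging off it. Field names are hypothetical.

exam = {
    "case_id": 12345,
    "location": "head",
    "diagnosis": "normal",
    "images": {                        # tree-structured image data
        "series_1": {
            "slices": ["slice_001", "slice_002"],
            "reconstructions": {"sagittal": ["recon_001"]},
        },
    },
}

def count_images(node):
    """Recursively count image identifiers anywhere in the tree."""
    if isinstance(node, list):
        return len(node)
    if isinstance(node, dict):
        return sum(count_images(child) for child in node.values())
    return 0

print(count_images(exam["images"]))  # -> 3
```

The tree's flexibility is the point: a new image type only adds a branch, leaving the alphanumeric history records and their queries untouched.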

  15. The Danish Fetal Medicine database

    DEFF Research Database (Denmark)

    Ekelund, Charlotte; Kopp, Tine Iskov; Tabor, Ann

    2016-01-01

    trimester ultrasound scan performed at all public hospitals in Denmark are registered in the database. Main variables/descriptive data: Data on maternal characteristics, ultrasonic, and biochemical variables are continuously sent from the fetal medicine units’ Astraia databases to the central database via...... analyses are sent to the database. Conclusion: It has been possible to establish a fetal medicine database, which monitors first-trimester screening for chromosomal abnormalities and second-trimester screening for major fetal malformations with the input from already collected data. The database...

  16. Dansk kolorektal Cancer Database

    DEFF Research Database (Denmark)

    Harling, Henrik; Nickelsen, Thomas

    2005-01-01

    The Danish Colorectal Cancer Database was established in 1994 to monitor whether the diagnostic and surgical principles specified in the evidence-based national guidelines of good clinical practice were followed. Twelve clinical indicators have been listed by the Danish Colorectal... Cancer Group, and each hospital surgical department's performance on these indicators is reported annually. In addition, the register contains a large collection of data that provides valuable information on the influence of comorbidity and lifestyle factors on disease outcome...

  17. Usability in Scientific Databases

    Directory of Open Access Journals (Sweden)

    Ana-Maria Suduc

    2012-07-01

    Full Text Available Usability, most often defined as the ease of use and acceptability of a system, affects users' performance and job satisfaction when working with a machine; it is therefore an important aspect to consider during system development. The paper presents numerical data on the history of research into the usability of information systems, as reflected in three major scientific databases (Science Direct, the ACM Digital Library, and the IEEE Xplore Digital Library) under various queries related to this field.

  18. Social Capital Database

    DEFF Research Database (Denmark)

    Paldam, Martin; Svendsen, Gert Tinggaard

    2005-01-01

      This report has two purposes. The first is to present our 4-page questionnaire, which measures social capital: it is close to the main definitions of social capital, contains the most successful measures from the literature, and, as discussed, is easy to apply. The second purpose... is to present the social capital database we have collected for 21 countries using the questionnaire. We do this by comparing the level of social capital across the countries covered; that is, the report compares the marginals from the 21 surveys....

  19. Machine tool

    International Nuclear Information System (INIS)

    Kang, Myeong Sun

    1981-01-01

    This book covers machine tools, including the cutting process and machining by cutting, the theory of cutting (such as tool angle and chip formation), cutting tools such as milling cutters and drills, and an overview and introduction of the following machines and elements: spindle drives and feed drives, pivots and pivot bearings, frames, guideways and tables, drilling machines, boring machines, shapers and planers, milling machines, and machine tools for precision finishing such as lapping machines, superfinishing machines, and gear cutters.

  20. License - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available License to Use This Database (last updated 2014/02/04): You may use this database in compliance with the terms and conditions of the license described below. The license s...