WorldWideScience

Sample records for current database systems

  1. Current trends and new challenges of databases and web applications for systems driven biological research

    Directory of Open Access Journals (Sweden)

    Pradeep Kumar eSreenivasaiah

    2010-12-01

Full Text Available The dynamic and rapidly evolving nature of systems-driven research imposes special requirements on the technology, approach, design, and architecture of computational infrastructure, including databases and web applications. Several solutions have been proposed to meet these expectations, and novel methods have been developed to address the persisting problems of data integration. It is important for researchers to understand the different technologies and approaches; having familiarized themselves with the pros and cons of the existing technologies, researchers can exploit their capabilities to the fullest for integrating data. In this review we discuss the architecture, design, and key technologies underlying some of the prominent databases (DBs) and web applications. We mention their roles in the integration of biological data and investigate some of the emerging design concepts and computational technologies that are likely to have a key role in the future of systems-driven biomedical research.

  2. rasdaman Array Database: current status

    Science.gov (United States)

    Merticariu, George; Toader, Alexandru

    2015-04-01

    rasdaman (Raster Data Manager) is a Free Open Source Array Database Management System which provides functionality for storing and processing massive amounts of raster data in the form of multidimensional arrays. The user can access, process and delete the data using SQL. The key features of rasdaman are: flexibility (datasets of any dimensionality can be processed with the help of SQL queries), scalability (rasdaman's distributed architecture enables it to seamlessly run on cloud infrastructures while offering an increase in performance with the increase of computation resources), performance (real-time access, processing, mixing and filtering of arrays of any dimensionality) and reliability (legacy communication protocol replaced with a new one based on cutting edge technology - Google Protocol Buffers and ZeroMQ). Among the data with which the system works, we can count 1D time series, 2D remote sensing imagery, 3D image time series, 3D geophysical data, and 4D atmospheric and climate data. Most of these representations cannot be stored only in the form of raw arrays, as the location information of the contents is also important for having a correct geoposition on Earth. This is defined by ISO 19123 as coverage data. rasdaman provides coverage data support through the Petascope service. Extensions were added on top of rasdaman in order to provide support for the Geoscience community. The following OGC standards are currently supported: Web Map Service (WMS), Web Coverage Service (WCS), and Web Coverage Processing Service (WCPS). The Web Map Service is an extension which provides zoom and pan navigation over images provided by a map server. Starting with version 9.1, rasdaman supports WMS version 1.3. The Web Coverage Service provides capabilities for downloading multi-dimensional coverage data. Support is also provided for several extensions of this service: Subsetting Extension, Scaling Extension, and, starting with version 9.1, Transaction Extension, which
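The array operations described above can be illustrated with a toy sketch. This is not rasdaman code (its query language is rasql, and the engine is far more sophisticated); it is a plain-Python stand-in showing the kind of subsetting and aggregation an array DBMS evaluates over a 3-D image time series, modelled as nested lists indexed [t][y][x]:

```python
# Illustrative sketch only (not rasdaman's rasql engine): trim and
# aggregate operations over a 3-D image time series held as nested
# Python lists indexed [t][y][x].
def trim(cube, t0, t1, y0, y1, x0, x1):
    """Select a sub-cube, analogous to a rasql subset c[t0:t1, y0:y1, x0:x1]."""
    return [[row[x0:x1] for row in frame[y0:y1]] for frame in cube[t0:t1]]

def cell_avg(cube):
    """Aggregate all cells to one scalar, analogous to an avg aggregate."""
    cells = [v for frame in cube for row in frame for v in row]
    return sum(cells) / len(cells)

# A tiny 2x2x2 "time series" of 2-D frames.
cube = [[[1, 2], [3, 4]],
        [[5, 6], [7, 8]]]
sub = trim(cube, 0, 2, 0, 1, 0, 2)   # keep the first row of every frame
print(cell_avg(sub))                  # (1+2+5+6)/4 = 3.5
```

The point of the sketch is that the query, not the client, decides which slice of the array is materialized; the server ships only the trimmed, aggregated result.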

  3. MPlus Database system

    Energy Technology Data Exchange (ETDEWEB)

    1989-01-20

The MPlus Database program was developed to keep track of mail received. This system was developed by TRESP for the Department of Energy/Oak Ridge Operations. The MPlus Database program is a PC application, written in "dBase III+" and compiled with "Clipper" into an executable file. The files needed to run the MPlus Database program can be installed on a Bernoulli drive or a hard drive. This paper discusses the use of this database.

  4. GEISA-97 spectroscopic database system related information resources: current status and perspectives

    Science.gov (United States)

    Chursin, Alexei A.; Jacquinet-Husson, N.; Lefevre, G.; Scott, Noelle A.; Chedin, Alain

    2000-01-01

This paper presents the recently developed information content diffusion facilities, e.g., the WWW server of GEISA and the MS-DOS, Windows 95/NT, and UNIX software packages, associated with the 1997 version of the GEISA (Gestion et Etude des Informations Spectroscopiques Atmospheriques; in English: Management and Study of Atmospheric Spectroscopic Information) infrared spectroscopic databank developed at LMD (Laboratoire de Meteorologie Dynamique, France). The GEISA-97 individual lines file covers 42 molecules (96 isotopic species) and contains 1,346,266 entries between 0 and 22,656 cm-1. GEISA-97 also has a catalog of cross-sections at different temperatures and pressures for species (such as chlorofluorocarbons) with complex spectra. The current version of the GEISA-97 cross-section databank contains 4,716,743 entries related to 23 molecules between 555 and 1700 cm-1.

  5. A Quality System Database

    Science.gov (United States)

    Snell, William H.; Turner, Anne M.; Gifford, Luther; Stites, William

    2010-01-01

    A quality system database (QSD), and software to administer the database, were developed to support recording of administrative nonconformance activities that involve requirements for documentation of corrective and/or preventive actions, which can include ISO 9000 internal quality audits and customer complaints.

  6. Towards Sensor Database Systems

    DEFF Research Database (Denmark)

    Bonnet, Philippe; Gehrke, Johannes; Seshadri, Praveen

    2001-01-01

    Sensor networks are being widely deployed for measurement, detection and surveillance applications. In these new applications, users issue long-running queries over a combination of stored data and sensor data. Most existing applications rely on a centralized system for collecting sensor data....... These systems lack flexibility because data is extracted in a predefined way; also, they do not scale to a large number of devices because large volumes of raw data are transferred regardless of the queries that are submitted. In our new concept of sensor database system, queries dictate which data is extracted...... from the sensors. In this paper, we define the concept of sensor databases mixing stored data represented as relations and sensor data represented as time series. Each long-running query formulated over a sensor database defines a persistent view, which is maintained during a given time interval. We...
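The core idea of the record above, a long-running query over stored relations plus sensor time series, maintained as a persistent view, can be sketched in a few lines. All names here are hypothetical; this is not the authors' implementation, just a minimal illustration of a view that is updated incrementally as readings arrive during its time interval:

```python
# Minimal sketch (hypothetical names) of a sensor-database persistent view:
# a long-running query over a stored relation (sensor metadata) and a
# stream of time-series readings, maintained during a given time interval.
class PersistentView:
    """Keeps the running average per sensor for readings in [t_start, t_end]."""
    def __init__(self, sensors, t_start, t_end):
        self.sensors = sensors          # stored relation: id -> location
        self.t_start, self.t_end = t_start, t_end
        self.sums, self.counts = {}, {}

    def on_reading(self, sensor_id, t, value):
        # Only readings inside the view's time interval update it; the
        # query dictates which data is extracted, not the collection layer.
        if sensor_id in self.sensors and self.t_start <= t <= self.t_end:
            self.sums[sensor_id] = self.sums.get(sensor_id, 0.0) + value
            self.counts[sensor_id] = self.counts.get(sensor_id, 0) + 1

    def snapshot(self):
        # Join the maintained aggregate with the stored relation.
        return {(sid, self.sensors[sid]): self.sums[sid] / self.counts[sid]
                for sid in self.sums}

view = PersistentView({1: "gate", 2: "roof"}, t_start=0, t_end=100)
view.on_reading(1, 10, 20.0)
view.on_reading(1, 20, 22.0)
view.on_reading(2, 500, 99.0)   # outside the interval: ignored
print(view.snapshot())          # {(1, 'gate'): 21.0}
```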

  7. Database and Expert Systems Applications

    DEFF Research Database (Denmark)

    Viborg Andersen, Kim; Debenham, John; Wagner, Roland

    submissions. The papers are organized in topical sections on workflow automation, database queries, data classification and recommendation systems, information retrieval in multimedia databases, Web applications, implementational aspects of databases, multimedia databases, XML processing, security, XML...... schemata, query evaluation, semantic processing, information retrieval, temporal and spatial databases, querying XML, organisational aspects of databases, natural language processing, ontologies, Web data extraction, semantic Web, data stream management, data extraction, distributed database systems...

  8. The MAO NASU Plate Archive Database. Current Status and Perspectives

    Science.gov (United States)

    Pakuliak, L. K.; Sergeeva, T. P.

    2006-04-01

The preliminary online version of the database of the MAO NASU plate archive is built on the relational database management system MySQL. It permits easy supplementing of the database with new collections of astronegatives, provides high flexibility in constructing SQL queries for data search optimization, offers PHP Basic Authorization-protected access to the administrative interface, and supports a wide range of search parameters. The current status of the database will be reported, and a brief description of the search engine and the means of supporting database integrity will be given. Methods and means of data verification and tasks for further development will be discussed.
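The flexible SQL-based plate search mentioned above can be sketched with a toy relational query. The schema and values here are hypothetical (the actual MAO NASU schema is not described in the record), and SQLite stands in for MySQL:

```python
# Toy sketch (hypothetical schema; SQLite standing in for MySQL) of a
# parameterized plate-archive search: plates near a sky position within
# a date range, the kind of SQL query the record describes.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE plates (
    plate_id INTEGER PRIMARY KEY, collection TEXT,
    ra REAL, dec REAL, obs_date TEXT)""")
con.executemany("INSERT INTO plates VALUES (?, ?, ?, ?, ?)", [
    (1, "DWA", 83.6, 22.0, "1961-03-14"),
    (2, "DWA", 10.7, 41.3, "1975-09-02"),
    (3, "DAZ", 84.1, 21.8, "1980-01-20"),
])
# Search parameters are bound, not concatenated, so the same query
# template serves many search-form combinations.
rows = con.execute(
    """SELECT plate_id, collection FROM plates
       WHERE ra BETWEEN ? AND ? AND dec BETWEEN ? AND ?
       AND obs_date >= ? ORDER BY plate_id""",
    (80.0, 90.0, 20.0, 25.0, "1970-01-01")).fetchall()
print(rows)  # [(3, 'DAZ')]
```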

  9. An organic database system

    NARCIS (Netherlands)

    M.L. Kersten (Martin); A.P.J.M. Siebes (Arno)

    1999-01-01

    textabstractThe pervasive penetration of database technology may suggest that we have reached the end of the database research era. The contrary is true. Emerging technology, in hardware, software, and connectivity, brings a wealth of opportunities to push technology to a new level of maturity.

  10. Current Status of NASDA Terminology Database

    Science.gov (United States)

    Kato, Akira

    2002-01-01

    NASDA Terminology Database System provides the English and Japanese terms, abbreviations, definition and reference documents. Recent progress includes a service to provide abbreviation data from the NASDA Home Page, and publishing a revised NASDA bilingual dictionary. Our next efforts to improve the system are (1) to combine our data with the data of NASA THESAURUS, (2) to add terms from new academic and engineering fields that have begun to have relations with space activities, and (3) to revise the NASDA Definition List. To combine our data with the NASA THESAURUS database we must consider the difference between the database concepts. Further effort to select adequate terms is thus required. Terms must be added from other fields to deal with microgravity experiments, human factors and so on. Some examples of new terms to be added have been collected. To revise the NASDA terms definition list, NASA and ESA definition lists were surveyed and a general concept to revise the NASDA definition list was proposed. I expect these activities will contribute to the IAA dictionary.

  11. Cloud Database Management System (CDBMS)

    Directory of Open Access Journals (Sweden)

    Snehal B. Shende

    2015-10-01

Full Text Available A cloud database management system is a distributed database that delivers computing as a service. It shares web infrastructure for resources, software, and information over a network. The cloud is used as a storage location, and the database can be accessed and computed from anywhere. The large number of web applications makes use of distributed storage solutions in order to scale up. It enables users to outsource resources and services to third-party servers. This paper covers the recent trend in cloud services based on database management systems and the offering of the database as one of the services in the cloud. The advantages and disadvantages of database as a service will let you decide whether or not to use it. This paper will also highlight the architecture of cloud-based database management systems.

  12. Distributed Database Management Systems A Practical Approach

    CERN Document Server

    Rahimi, Saeed K

    2010-01-01

This book addresses issues related to managing data across a distributed database system. It is unique because it covers traditional database theory and current research, explaining the difficulties in providing a unified user interface and global data dictionary. The book gives implementers guidance on hiding discrepancies across systems and creating the illusion of a single repository for users. It also includes three sample frameworks, implemented using J2SE with JMS, J2EE, and Microsoft .Net, that readers can use to learn how to implement a distributed database management system. IT and

  13. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

    Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs).The book first takes a look at ANSI database standards and DBMS applications and components. Discussion focus on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders on logical database, interrogation, and phy

  14. Human Exposure Database System (HEDS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Human Exposure Database System (HEDS) provides public access to data sets, documents, and metadata from EPA on human exposure. It is primarily intended for...

  15. The magnet components database system

    Energy Technology Data Exchange (ETDEWEB)

    Baggett, M.J. (Brookhaven National Lab., Upton, NY (USA)); Leedy, R.; Saltmarsh, C.; Tompkins, J.C. (Superconducting Supercollider Lab., Dallas, TX (USA))

    1990-01-01

The philosophy, structure, and usage of MagCom, the SSC magnet components database, are described. The database has been implemented in Sybase (a powerful relational database management system) on a UNIX-based workstation at the Superconducting Super Collider Laboratory (SSCL); magnet project collaborators can access the database via network connections. The database was designed to contain the specifications and measured values of important properties for major materials, plus configuration information (specifying which individual items were used in each cable, coil, and magnet) and the test results on completed magnets. These data will facilitate the tracking and control of the production process as well as the correlation of magnet performance with the properties of its constituents. 3 refs., 10 figs.
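The configuration tracking described above, which component went into which magnet, joined against measured properties and test results, reduces to a few relational tables. The table and column names below are hypothetical, not MagCom's actual Sybase schema, and SQLite stands in for Sybase:

```python
# Simplified sketch (hypothetical tables; SQLite standing in for Sybase)
# of component tracking: join a magnet's test result with the measured
# properties of the components used in it, as MagCom's design enables.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE components (comp_id TEXT PRIMARY KEY, kind TEXT, rrr REAL);
CREATE TABLE usage (magnet_id TEXT, comp_id TEXT);      -- configuration info
CREATE TABLE magnet_tests (magnet_id TEXT, quench_current REAL);
""")
con.execute("INSERT INTO components VALUES ('C-001', 'cable', 112.0)")
con.execute("INSERT INTO usage VALUES ('DCA-207', 'C-001')")
con.execute("INSERT INTO magnet_tests VALUES ('DCA-207', 6550.0)")

# Correlate magnet performance with the properties of its constituents.
row = con.execute(
    """SELECT t.magnet_id, c.rrr, t.quench_current
       FROM magnet_tests t
       JOIN usage u ON u.magnet_id = t.magnet_id
       JOIN components c ON c.comp_id = u.comp_id""").fetchone()
print(row)  # ('DCA-207', 112.0, 6550.0)
```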

  16. Global Ocean Currents Database (GOCD) (NCEI Accession 0093183)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Global Ocean Currents Database (GOCD) is a collection of quality controlled ocean current measurements such as observed current direction and speed obtained from...

  17. The ATLAS Distributed Data Management System & Databases

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Barisits, M; Beermann, T; Vigne, R; Serfon, C

    2013-01-01

    The ATLAS Distributed Data Management (DDM) System is responsible for the global management of petabytes of high energy physics data. The current system, DQ2, has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle. RDBMS are well-suited to enforcing data integrity in online transaction processing applications, however, concerns have been raised about the scalability of its data warehouse-like workload. In particular, analysis of archived data or aggregation of transactional data for summary purposes is problematic. Therefore, we have evaluated new approaches to handle vast amounts of data. We have investigated a class of database technologies commonly referred to as NoSQL databases. This includes distributed filesystems, like HDFS, that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value stores, like HBase. In this talk we will describe our use cases in ATLAS, share our experiences with various databases used ...
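The tension the record describes, RDBMS transactional integrity versus warehouse-style aggregation, is what schema-less key-value designs address. The sketch below is illustrative only (not DQ2 or HBase code; all names are made up): raw events are appended under per-item keys while summary keys are maintained incrementally, so an aggregation query becomes a point read instead of a scan:

```python
# Illustrative sketch (not DQ2/HBase code; names are hypothetical) of the
# schema-less key-value approach: raw transfer events append under one
# key family, while per-day summary keys are updated incrementally so
# aggregation avoids warehouse-style scans over transactional rows.
store = {}  # stands in for a distributed key-value store such as HBase

def record_transfer(dataset, day, nbytes):
    # Raw event log, keyed by dataset.
    store.setdefault(("raw", dataset), []).append((day, nbytes))
    # Pre-aggregated per-day total, updated on write.
    key = ("summary", dataset, day)
    store[key] = store.get(key, 0) + nbytes

record_transfer("data12_8TeV", "2013-01-05", 4_000)
record_transfer("data12_8TeV", "2013-01-05", 6_000)
record_transfer("data12_8TeV", "2013-01-06", 1_000)

# The summary lookup is a single point read, not a scan over raw rows.
print(store[("summary", "data12_8TeV", "2013-01-05")])  # 10000
```

The trade-off, which the ATLAS evaluation weighs, is that the store no longer enforces relational integrity between the raw and summary families; consistency becomes the application's job.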

  18. Jelly Views : Extending Relational Database Systems Toward Deductive Database Systems

    Directory of Open Access Journals (Sweden)

    Igor Wojnicki

    2004-01-01

Full Text Available This paper describes the Jelly View technology, which provides a new, practical methodology for knowledge decomposition, storage, and retrieval within Relational Database Management Systems (RDBMS). Intensional Knowledge clauses (rules) are decomposed and stored in the RDBMS, forming reusable components. The results of the rule-based processing are visible as regular views, accessible through SQL. From the end-user point of view the processing capability becomes unlimited (arbitrarily complex queries can be constructed using Intensional Knowledge), while the most external queries are expressed in standard SQL. The RDBMS functionality becomes extended toward that of the Deductive Databases
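The general idea, deductive rules whose results surface as ordinary SQL views, can be demonstrated with a classic example. This is not the Jelly View mechanism itself, just the same pattern in miniature: an intensional rule (transitive reachability) stored as a recursive view, queried through standard SQL:

```python
# A small sketch of the pattern (not Jelly Views itself): an intensional
# rule, here transitive reachability over an edge relation, stored as a
# recursive SQL view so deductive results are queried with ordinary SQL.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE edge (src TEXT, dst TEXT)")   # extensional facts
con.executemany("INSERT INTO edge VALUES (?, ?)",
                [("a", "b"), ("b", "c"), ("c", "d")])
con.execute("""
CREATE VIEW reachable AS
WITH RECURSIVE r(src, dst) AS (
    SELECT src, dst FROM edge                       -- base case
    UNION
    SELECT r.src, e.dst FROM r JOIN edge e ON r.dst = e.src  -- rule step
) SELECT src, dst FROM r
""")
# The end user sees only a regular view, queryable with plain SQL.
rows = con.execute(
    "SELECT dst FROM reachable WHERE src = 'a' ORDER BY dst").fetchall()
print(rows)  # [('b',), ('c',), ('d',)]
```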

  19. Comparison of object and relational database systems

    OpenAIRE

    GEYER, Jakub

    2012-01-01

    This thesis focuses on the issue of a convenient choice of database platforms. The key features of the object database systems and the relational database systems are mutually compared and tested on concrete representative samples of each individual platform.

  20. Public Budget Database - Governmental receipts 1962-Current

    Data.gov (United States)

    Executive Office of the President — This file contains governmental receipts for 1962 through the current budget year, as well as four years of projections. It can be used to reproduce many of the...

  1. The CMS Condition Database system

    CERN Document Server

    Govi, Giacomo Maria; Ojeda-Sandonis, Miguel; Pfeiffer, Andreas; Sipos, Roland

    2015-01-01

The Condition Database plays a key role in the CMS computing infrastructure. The complexity of the detector and the variety of the sub-systems involved set tight requirements for handling the Conditions. In the last two years the collaboration has put effort into the re-design of the Condition Database system, with the aim of improving the scalability and the operability for the data taking starting in 2015. The re-design has focused on simplifying the architecture, using the lessons learned during the operation of the previous data-taking period. In the new system the relational features of the database schema are mainly exploited to handle the metadata (Tag and Interval of Validity), allowing for a limited and controlled set of queries. The bulk condition data (Payloads) are stored as unstructured binary data, allowing storage in a single table with a common layout for all of the condition data types. In this presentation, we describe the full architecture of the system, including the serv...
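The design described above, relational metadata (Tag, Interval of Validity) pointing at opaque payload blobs in one common table, can be condensed into a toy schema. Column names and values here are hypothetical, not the actual CMS schema, with SQLite standing in for Oracle:

```python
# Condensed sketch (hypothetical schema; SQLite standing in for Oracle)
# of the described design: IOV metadata is relational, while payloads of
# every condition type share one table of unstructured binary data.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE payload (hash TEXT PRIMARY KEY, data BLOB);
CREATE TABLE iov (tag TEXT, since INTEGER, payload_hash TEXT);
""")
con.execute("INSERT INTO payload VALUES ('h1', X'DEADBEEF')")
con.execute("INSERT INTO payload VALUES ('h2', X'CAFE')")
con.execute("INSERT INTO iov VALUES ('Alignment_v1', 1, 'h1')")
con.execute("INSERT INTO iov VALUES ('Alignment_v1', 100, 'h2')")

def payload_for(tag, run):
    # The valid payload is the one with the latest 'since' not exceeding
    # the run: a limited, controlled query over the relational metadata.
    return con.execute(
        """SELECT p.data FROM iov i JOIN payload p ON p.hash = i.payload_hash
           WHERE i.tag = ? AND i.since <= ?
           ORDER BY i.since DESC LIMIT 1""", (tag, run)).fetchone()[0]

print(payload_for("Alignment_v1", 42))   # the bytes stored under h1
```

Keeping payloads opaque is what allows the single common table: the database never needs to know a condition type's internal structure, only its validity metadata.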

  2. A Relational Database System for Student Use.

    Science.gov (United States)

    Fertuck, Len

    1982-01-01

    Describes an APL implementation of a relational database system suitable for use in a teaching environment in which database development and database administration are studied, and discusses the functions of the user and the database administrator. An appendix illustrating system operation and an eight-item reference list are attached. (Author/JL)

  3. Nuclear integrated database and design advancement system

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Jae Joo; Jeong, Kwang Sub; Kim, Seung Hwan; Choi, Sun Young

    1997-01-01

The objective of NuIDEAS is to computerize design processes through an integrated database, eliminating the current work style of delivering hardcopy documents and drawings. The major research contents of NuIDEAS are the advancement of design processes by computerization, the establishment of a design database, and 3-dimensional visualization of design data. KSNP (Korea Standard Nuclear Power Plant) is the target of the legacy database and 3-dimensional model, so that they can be utilized in the next plant design. In the first year, the blueprint of NuIDEAS was proposed, and its prototype was developed by applying rapidly evolving computer technology. The major results of the first-year research were to establish the architecture of the integrated database ensuring data consistency, and to build a design database of the reactor coolant system and heavy components. Various software tools were also developed to search, share, and utilize the data through networks; detailed 3-dimensional CAD models of nuclear fuel and heavy components were constructed, and walk-through simulation using the models was developed. This report contains the major additions and modifications to the object-oriented database and associated programs, using methods and JavaScript. (author). 36 refs., 1 tab., 32 figs.

  4. Embedded Systems Programming: Accessing Databases from Esterel

    Directory of Open Access Journals (Sweden)

    White David

    2008-01-01

Full Text Available A current limitation in embedded controller design and programming is the lack of database support in development tools such as Esterel Studio. This article proposes a way of integrating databases and Esterel by providing two application programming interfaces (APIs) which enable the use of relational databases inside Esterel programs. As databases and Esterel programs are often executed on different machines, result sets returned as responses to database queries may be processed either locally and according to Esterel's synchrony hypothesis, or remotely along several of Esterel's execution cycles. These different scenarios are reflected in the design and usage rules of the two APIs presented in this article, which rely on Esterel's facilities for extending the language by external data types, external functions, and procedures, as well as tasks. The APIs' utility is demonstrated by means of a case study modelling an automated warehouse storage system, which is constructed using Lego Mindstorms robotics kits. The robot's controller is programmed in Esterel in a way that takes dynamic ordering information and the warehouse's floor layout into account, both of which are stored in a MySQL database.

  6. Difficulties in diagnosing Marfan syndrome using current FBN1 databases.

    Science.gov (United States)

    Groth, Kristian A; Gaustadnes, Mette; Thorsen, Kasper; Østergaard, John R; Jensen, Uffe Birk; Gravholt, Claus H; Andersen, Niels H

    2016-01-01

The diagnostic criteria of Marfan syndrome (MFS) highlight the importance of a FBN1 mutation test in diagnosing MFS. As genetic sequencing becomes better, cheaper, and more accessible, the expected increase in the number of genetic tests will become evident, resulting in numerous genetic variants that need to be evaluated for disease-causing effects based on database information. The aim of this study was to evaluate genetic variants in four databases and review the relevant literature. We assessed background data on 23 common variants registered in ESP6500 and classified as causing MFS in the Human Gene Mutation Database (HGMD). We evaluated data in four variant databases (HGMD, UMD-FBN1, ClinVar, and UniProt) according to the diagnostic criteria for MFS and compared the results with the classification of each variant in the four databases. None of the 23 variants was clearly associated with MFS, even though all classifications in the databases stated otherwise. A genetic diagnosis of MFS cannot reliably be based on current variant databases because they contain incorrectly interpreted conclusions on variants. Variants must be evaluated by time-consuming review of the background material in the databases and by combining these data with expert knowledge on MFS. This is a major problem because we expect even more genetic test results in the near future as a result of the reduced cost and process time for next-generation sequencing. Genet Med 18(1), 98-102.

  7. Security Research on Engineering Database System

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

Engine engineering database system is a CAD-oriented applied database management system with the capability of managing distributed data. The paper discusses the security issues of the engine engineering database management system (EDBMS). Through study and analysis of database security, a series of security rules is drawn up that reach the B1-level security standard, which includes discretionary access control (DAC), mandatory access control (MAC), and audit. The EDBMS implements functions of DAC, ...

  8. Content And Multimedia Database Management Systems

    NARCIS (Netherlands)

    Vries, de Arjen Paul

    1999-01-01

A database management system is a general-purpose software system that facilitates the processes of defining, constructing, and manipulating databases for various applications. The main characteristic of the ‘database approach’ is that it increases the value of data by its emphasis on data independence

  9. 2010 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2010 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  10. 2014 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2014 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  11. 2009 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2009 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  12. 2011 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2011 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  13. 2012 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2012 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  14. Food traceability systems in China: The current status of and future perspectives on food supply chain databases, legal support, and technological research and support for food safety regulation.

    Science.gov (United States)

    Tang, Qi; Li, Jiajia; Sun, Mei; Lv, Jun; Gai, Ruoyan; Mei, Lin; Xu, Lingzhong

    2015-02-01

    Over the past few decades, the field of food security has witnessed numerous problems and incidents that have garnered public attention. Given this serious situation, the food traceability system (FTS) has become part of the expanding food safety continuum to reduce the risk of food safety problems. This article reviews a great deal of the related literature and results from previous studies of FTS to corroborate this contention. This article describes the development and benefits of FTS in developed countries like the United States of America (USA), Japan, and some European countries. Problems with existing FTS in China are noted, including a lack of a complete database, inadequate laws and regulations, and lagging technological research into FTS. This article puts forward several suggestions for the future, including improvement of information websites, clarification of regulatory responsibilities, and promotion of technological research.

  15. The Instrumentation of the Multibackend Database System

    Science.gov (United States)

    1993-06-10

Subject terms: Parallel Database, Multilingual ... Most database system designs and implementations are limited to a single language (monolingual) and a single model (mono-model) ... solution to the processing cost and data sharing problems of heterogeneous database systems. One solution is a multimodel and multilingual database

  16. Database systems for knowledge-based discovery.

    Science.gov (United States)

    Jagarlapudi, Sarma A R P; Kishan, K V Radha

    2009-01-01

Several database systems have been developed to provide valuable information, from the bench chemist to the biologist and from the medical practitioner to the pharmaceutical scientist, in a structured format. The advent of information technology and computational power enhanced the ability to access large volumes of data in the form of a database, where one can do compilation, searching, archiving, analysis, and finally knowledge derivation. Although data are of variable types, the tools used for database creation, searching, and retrieval are similar. GVK BIO has been developing databases from publicly available scientific literature in specific areas like medicinal chemistry, clinical research, and mechanism-based toxicity so that the structured databases containing vast data can be used in several areas of research. These databases are classified as reference-centric or compound-centric, depending on the way the database systems were designed. Integration of these databases with knowledge-derivation tools would enhance the value of these systems toward better drug design and discovery.

  17. The NCBI BioSystems database.

    Science.gov (United States)

    Geer, Lewis Y; Marchler-Bauer, Aron; Geer, Renata C; Han, Lianyi; He, Jane; He, Siqian; Liu, Chunlei; Shi, Wenyao; Bryant, Stephen H

    2010-01-01

    The NCBI BioSystems database, found at http://www.ncbi.nlm.nih.gov/biosystems/, centralizes and cross-links existing biological systems databases, increasing their utility and target audience by integrating their pathways and systems into NCBI resources. This integration allows users of NCBI's Entrez databases to quickly categorize proteins, genes and small molecules by metabolic pathway, disease state or other BioSystem type, without requiring time-consuming inference of biological relationships from the literature or multiple experimental datasets.

  18. Current Status of Atomic Spectroscopy Databases at NIST

    Science.gov (United States)

    Kramida, Alexander; Ralchenko, Yuri; Reader, Joseph

    2016-05-01

    NIST's Atomic Spectroscopy Data Center maintains several online databases on atomic spectroscopy. These databases can be accessed via the http://physics.nist.gov/PhysRefData web page. Our main database, Atomic Spectra Database (ASD), recently upgraded to v. 5.3, now contains critically evaluated data for about 250,000 spectral lines and 109,000 energy levels of almost all elements in the periodic table. This new version has added several thousand spectral lines and energy levels of Sn II, Mo V, W VIII, and Th I-III. Most of these additions contain critically evaluated transition probabilities important for astrophysics, technology, and fusion research. A new feature of ASD is providing line-ratio data for diagnostics of electron temperature and density in plasmas. Saha-Boltzmann plots have been modified by adding an experimental feature allowing the user to specify a multi-element mixture. We continue regularly updating our bibliography databases, ensuring comprehensive coverage of current literature on atomic spectra for energy levels, spectral lines, transition rates, hyperfine structure, isotope shifts, Zeeman and Stark effects. Our other popular databases, such as the Handbook of Basic Atomic Spectroscopy Data, searchable atlases of spectra of Pt-Ne and Th-Ne lamps, and non-LTE plasma-kinetics code comparisons, continue to be maintained.

  19. An automated system for terrain database construction

    Science.gov (United States)

    Johnson, L. F.; Fretz, R. K.; Logan, T. L.; Bryant, N. A.

    1987-01-01

    An automated Terrain Database Preparation System (TDPS) for the construction and editing of terrain databases used in computerized wargaming simulation exercises has been developed. The TDPS system operates under the TAE executive, and it integrates VICAR/IBIS image processing and Geographic Information System software with CAD/CAM data capture and editing capabilities. The terrain database includes such features as roads, rivers, vegetation, and terrain roughness.

  20. Security Issues in Distributed Database System Model

    OpenAIRE

    MD.TABREZ QUASIM

    2013-01-01

This paper reviews the most common as well as emerging security mechanisms used in distributed database systems. As distributed databases become more popular, the need for improvement in distributed database management systems becomes even more important. The most important issue is security, which may arise and possibly compromise the access control and the integrity of the system. In this paper, we propose solutions for some security aspects such as multi-level access control, ...

  1. Composite Materials Design Database and Data Retrieval System Requirements

    Science.gov (United States)

    1991-08-01

technology. Gaining such an understanding will facilitate the eventual development and operation of utilitarian composite materials databases (CMDB) designed... Significant Aspects of Materials Databases. While the components of a CMDB can be mapped to components of other types of databases, some differences... stand out and make it difficult to implement an effective CMDB on current Commercial, Off-The-Shelf (COTS) systems, or general DBMSs. These are summarized

  2. Airports and Navigation Aids Database System -

    Data.gov (United States)

    Department of Transportation — Airport and Navigation Aids Database System is the repository of aeronautical data related to airports, runways, lighting, NAVAID and their components, obstacles, no...

  3. Security Issues in Distributed Database System Model

    Directory of Open Access Journals (Sweden)

    MD.TABREZ QUASIM

    2013-12-01

Full Text Available This paper reviews the most common as well as emerging security mechanisms used in distributed database systems. As distributed databases become more popular, the need for improvement in distributed database management systems becomes even more important. The most important issue is security, which may arise and possibly compromise the access control and the integrity of the system. In this paper, we propose solutions for several security aspects, such as multi-level access control, confidentiality, reliability, integrity and recovery, that pertain to a distributed database system.
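    The abstract does not spell out the paper's multi-level access control scheme; as a hedged illustration of the general idea only, here is a minimal Bell-LaPadula-style check in Python (the levels and the "no read up" / "no write down" rules are textbook conventions, not taken from the paper):

```python
# Minimal sketch of multi-level access control in the Bell-LaPadula style.
# Security levels are illustrative; real systems add compartments and roles.
LEVELS = {"public": 0, "confidential": 1, "secret": 2}

def can_read(subject_level: str, object_level: str) -> bool:
    """'No read up': a subject may read only objects at or below its level."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """'No write down': a subject may write only objects at or above its level."""
    return LEVELS[subject_level] <= LEVELS[object_level]
```

    In a distributed setting each site would evaluate these checks locally against a replicated label catalogue, which is one reason consistency of the security metadata matters as much as consistency of the data.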

  4. An Architecture for Nested Transaction Support on Standard Database Systems

    NARCIS (Netherlands)

    Boertjes, E.M.; Grefen, P.W.P.J.; Vonk, J.; Apers, Peter M.G.

    Many applications dealing with complex processes require database support for nested transactions. Current commercial database systems lack this kind of support, offering flat, non-nested transactions only. This paper presents a three-layer architecture for implementing nested transaction support on
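    On databases offering only flat transactions, one common workaround (a generic technique, not necessarily the layered architecture of this paper) is to emulate one level of nesting with SQL savepoints; a minimal sketch using Python's built-in sqlite3, with an invented table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None          # manage transactions explicitly
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")

conn.execute("BEGIN")                             # outer (top-level) transaction
conn.execute("INSERT INTO accounts VALUES ('alice', 100)")

conn.execute("SAVEPOINT subtx")                   # start a nested "subtransaction"
conn.execute("INSERT INTO accounts VALUES ('bob', -50)")
conn.execute("ROLLBACK TO subtx")                 # abort only the nested work
conn.execute("RELEASE subtx")                     # discard the savepoint

conn.execute("COMMIT")                            # outer transaction commits

rows = conn.execute("SELECT name, balance FROM accounts").fetchall()
```

    The nested insert is undone while the outer transaction's work survives; full nested-transaction semantics (parent-relative commit, arbitrary depth, isolation between siblings) require the kind of middleware layer the paper describes.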

  5. Performance related issues in distributed database systems

    Science.gov (United States)

    Mukkamala, Ravi

    1991-01-01

    The key elements of research performed during the year long effort of this project are: Investigate the effects of heterogeneity in distributed real time systems; Study the requirements to TRAC towards building a heterogeneous database system; Study the effects of performance modeling on distributed database performance; and Experiment with an ORACLE based heterogeneous system.

  6. LHCb Conditions database operation assistance systems

    Science.gov (United States)

    Clemencic, M.; Shapoval, I.; Cattaneo, M.; Degaudenzi, H.; Santinelli, R.

    2012-12-01

The Conditions Database (CondDB) of the LHCb experiment provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger (HLT), reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues. The first system is a CondDB state tracking extension to the Oracle Streams replication technology, to trap cases when the CondDB replication was corrupted. Second, an automated distribution system for the SQLite-based CondDB, providing also smart backup and checkout mechanisms for the CondDB managers and LHCb users respectively. And, finally, a system to verify and monitor the internal (CondDB self-consistency) and external (LHCb physics software vs. CondDB) compatibility. The former two systems are used in production in the LHCb experiment and have achieved the desired goal of higher flexibility and robustness for the management and operation of the CondDB. The latter one has been fully designed and is currently moving to the implementation stage.

  7. Content and multimedia database management systems

    OpenAIRE

    de Vries

    1999-01-01

    A database management system is a general-purpose software system that facilitates the processes of defining, constructing, and manipulating databases for various applications. The main characteristic of the ‘database approach’ is that it increases the value of data by its emphasis on data independence. DBMSs, and in particular those based on the relational data model, have been very successful at the management of administrative data in the business domain. This thesis has investigated data ...

  8. Radiation damage of biomolecules (RADAM) database development: current status

    Science.gov (United States)

    Denifl, S.; Garcia, G.; Huber, B. A.; Marinković, B. P.; Mason, N.; Postler, J.; Rabus, H.; Rixon, G.; Solov'yov, A. V.; Suraud, E.; Yakubovich, A. V.

    2013-06-01

    Ion beam therapy offers the possibility of excellent dose localization for treatment of malignant tumours, minimizing radiation damage in normal tissue, while maximizing cell killing within the tumour. However, as the underlying dependent physical, chemical and biological processes are too complex to treat them on a purely analytical level, most of our current and future understanding will rely on computer simulations, based on mathematical equations, algorithms and last, but not least, on the available atomic and molecular data. The viability of the simulated output and the success of any computer simulation will be determined by these data, which are treated as the input variables in each computer simulation performed. The radiation research community lacks a complete database for the cross sections of all the different processes involved in ion beam induced damage: ionization and excitation cross sections for ions with liquid water and biological molecules, all the possible electron - medium interactions, dielectric response data, electron attachment to biomolecules etc. In this paper we discuss current progress in the creation of such a database, outline the roadmap of the project and review plans for the exploitation of such a database in future simulations.

  9. A seismogram digitization and database management system

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

This paper introduces a "Seismogram Digitization and Database Management System" (SDDMS), which is developed using Delphi, and presents the key technique of automatically extracting wave data from paper seismograms. The system has various functions, such as paper seismogram digitization, database management and data analysis. With this system it is possible to analyze historical paper seismograms using modern computers. Application of this system will contribute to progress in earthquake prediction and seismological research.

  10. Issues in Big-Data Database Systems

    Science.gov (United States)

    2014-06-01

that big data will not be manageable using conventional relational database technology, and it is true that alternative paradigms, such as NoSQL systems and search engines, have much to offer...scale well, and because integration with external data sources is so difficult. NoSQL systems are more open to this integration, and provide excellent

  11. [Musculoskeletal shock wave therapy--current database of clinical research].

    Science.gov (United States)

    Rompe, J D; Buch, M; Gerdesmeyer, L; Haake, M; Loew, M; Maier, M; Heine, J

    2002-01-01

During the past decade, application of extracorporeal shock waves became an established procedure for the treatment of various musculoskeletal diseases in Germany. Up to now the positive results of prospective randomised controlled trials have been published for the treatment of plantar fasciitis, lateral elbow epicondylitis (tennis elbow), and of calcifying tendinitis of the rotator cuff. Most recently, contradicting results of prospective randomised placebo-controlled trials with adequate sample size calculation have been reported. The goal of this review is to present information about the current clinical database on extracorporeal shock wave treatment (ESWT).

  12. Implementing database system for LHCb publications page

    CERN Document Server

    Abdullayev, Fakhriddin

    2017-01-01

LHCb is one of the main detectors at the Large Hadron Collider, where physicists and scientists work together on high-precision measurements of matter-antimatter asymmetries and searches for rare and forbidden decays, with the aim of discovering new and unexpected forces. The work consists not only of analyzing data collected from experiments but also of publishing the results of those analyses. The LHCb publications are gathered on the LHCb publications page to maximize their availability to both LHCb members and the high energy physics community. In this project a new database system was implemented for the LHCb publications page. This will help to improve access to research papers for scientists and provide better integration with the current CERN library website and others.

  13. Research on computer virus database management system

    Science.gov (United States)

    Qi, Guoquan

    2011-12-01

The growing proliferation of computer viruses has become a lethal threat to, and a research focus of, network information security. New viruses keep emerging, the total number of viruses keeps growing, and virus classification is increasingly complex. Virus naming cannot be unified because agencies capture samples at different times. Although each agency has its own virus database, communication between agencies is lacking, virus information is incomplete, or only a small number of samples are described. This paper introduces the current state of virus database construction at home and abroad, analyzes how to standardize and complete the description of virus characteristics, and then gives a computer virus database design scheme addressing information integrity, storage security and manageability.

  14. Concurrency control in distributed database systems

    CERN Document Server

    Cellary, W; Gelenbe, E

    1989-01-01

    Distributed Database Systems (DDBS) may be defined as integrated database systems composed of autonomous local databases, geographically distributed and interconnected by a computer network.The purpose of this monograph is to present DDBS concurrency control algorithms and their related performance issues. The most recent results have been taken into consideration. A detailed analysis and selection of these results has been made so as to include those which will promote applications and progress in the field. The application of the methods and algorithms presented is not limited to DDBSs but a
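    The monograph's algorithms are not reproduced in this record; as a generic illustration of the area, here is a toy strict two-phase-locking lock table in Python (all names invented; real DDBS lock managers add queuing, deadlock detection and distributed coordination):

```python
# Toy lock table for strict two-phase locking: shared (S) locks are
# compatible with each other; an exclusive (X) lock conflicts with all others.
class LockManager:
    def __init__(self):
        self.locks = {}   # item -> (mode, set of transaction ids)

    def acquire(self, txn, item, mode):
        """Return True if the lock is granted, False if txn must wait."""
        held = self.locks.get(item)
        if held is None:
            self.locks[item] = (mode, {txn})
            return True
        held_mode, holders = held
        if mode == "S" and held_mode == "S":
            holders.add(txn)
            return True
        if holders == {txn}:              # upgrade by the sole holder
            self.locks[item] = ("X" if mode == "X" else held_mode, holders)
            return True
        return False

    def release_all(self, txn):
        """Strict 2PL: release every lock only at commit/abort time."""
        for item in list(self.locks):
            mode, holders = self.locks[item]
            holders.discard(txn)
            if not holders:
                del self.locks[item]
```

    In the distributed case each site runs such a manager for its local data, and a coordination protocol (e.g. two-phase commit) ties the per-site release points together.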

  15. Database Performance Monitoring for the Photovoltaic Systems

    Energy Technology Data Exchange (ETDEWEB)

    Klise, Katherine A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

The Database Performance Monitoring (DPM) software (copyright in process) is being developed at Sandia National Laboratories to perform quality control analysis on time series data. The software loads time indexed databases (currently csv format), performs a series of quality control tests defined by the user, and creates reports which include summary statistics, tables, and graphics. DPM can be set up to run on an automated schedule defined by the user. For example, the software can be run once per day to analyze data collected on the previous day. HTML formatted reports can be sent via email or hosted on a website. To compare performance of several databases, summary statistics and graphics can be gathered in a dashboard view which links to detailed reporting information for each database. The software can be customized for specific applications.
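    DPM itself is not shown in this record; the following standard-library sketch illustrates the kind of user-defined quality-control test described, a range test over a time-indexed CSV, with invented column names and thresholds:

```python
import csv, io, statistics

# Hypothetical slice of PV monitoring data; the column names are illustrative.
raw = """timestamp,dc_power
2015-10-01 10:00,410.5
2015-10-01 10:05,-3.0
2015-10-01 10:10,425.1
2015-10-01 10:15,9999.0
"""

def range_test(rows, column, lo, hi):
    """Flag records whose value falls outside the user-defined [lo, hi] band,
    and return simple summary statistics for the reporting step."""
    flagged, values = [], []
    for row in rows:
        v = float(row[column])
        values.append(v)
        if not lo <= v <= hi:
            flagged.append(row["timestamp"])
    return flagged, {"mean": statistics.mean(values), "n": len(values)}

rows = list(csv.DictReader(io.StringIO(raw)))
flagged, summary = range_test(rows, "dc_power", 0.0, 1000.0)
```

    A scheduler (e.g. a daily cron job) would run such tests over the previous day's file and render the flagged timestamps and summary table into an HTML report.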

  16. Database Security System for Applying Sophisticated Access Control via Database Firewall Server

    OpenAIRE

    Eun-Ae Cho; Chang-Joo Moon; Dae-Ha Park; Kang-Bin Yim

    2014-01-01

Keywords: database security, privacy, access control, database firewall, data break masking. Recently, information leakage incidents have occurred due to database security vulnerabilities. The administrators in traditional database access control methods grant simple permissions to users for accessing database objects. Even though they tried to apply stricter permissions in recent database systems, it was difficult to properly adopt sophisticated access control policies to commercial databases...
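    The abstract does not detail the firewall's masking rules; a minimal sketch of result-set masking, the general idea behind such a proxy, with an invented masking policy (replace all but the trailing characters of sensitive columns before results leave the firewall):

```python
def mask_value(value: str, keep: int = 4) -> str:
    """Replace all but the last `keep` characters with '*'."""
    if len(value) <= keep:
        return "*" * len(value)
    return "*" * (len(value) - keep) + value[-keep:]

def mask_rows(rows, sensitive_columns):
    """Apply the masking rule to sensitive columns of a query result set."""
    return [
        {col: mask_value(val) if col in sensitive_columns else val
         for col, val in row.items()}
        for row in rows
    ]
```

    A real database firewall would additionally parse and classify the inbound SQL, enforce per-user policies, and log the session, none of which this sketch attempts.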

  17. An architecture for mobile database management system

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

In order to design a new kind of mobile database management system (DBMS) more suitable for mobile computing than existing DBMSs, the essence of database systems in mobile computing is analyzed. The view is introduced that the mobile database is a kind of dynamic distributed database, and the concept of virtual servers, which translate the clients' mobility into the servers' mobility, is proposed. Based on these ideas, a versatile architecture for a mobile DBMS is presented. The architecture is composed of a virtual server and a local DBMS; the virtual server is the kernel of the architecture and its functions are described. Finally, the server kernel of a mobile DBMS prototype is illustrated.

  18. Finding current evidence: search strategies and common databases.

    Science.gov (United States)

    Gillespie, Lesley Diane; Gillespie, William John

    2003-08-01

    With more than 100 orthopaedic, sports medicine, or hand surgery journals indexed in MEDLINE, it is no longer possible to keep abreast of developments in orthopaedic surgery by reading a few journals each month. Electronic resources are easier to search and more current than most print sources. We provide a practical approach to finding useful information to guide orthopaedic practice. We focus first on where to find the information by providing details about many useful databases and web links. Sources for identifying guidelines, systematic reviews, and randomized controlled trials are identified. The second section discusses how to find the information, from the first stage of formulating a question and identifying the concepts of interest, through to writing a simple strategy. Sources for additional self-directed learning are provided.

  19. Deductive databases and P systems

    Directory of Open Access Journals (Sweden)

    Miguel A. Gutierrez-Naranjo

    2004-06-01

Full Text Available In computational processes based on backwards chaining, a rule of a given type is seen as a procedure indicating that a problem can be split into subproblems. In classical devices, the subproblems are solved sequentially. In this paper we present some questions that circulated during the Second Brainstorming Week related to applying the parallelism of P systems to computation based on backwards chaining, using the example of the inferential deductive process.

  20. Object Identity in Database Systems

    Institute of Scientific and Technical Information of China (English)

    李天柱

    1995-01-01

The concept of object identity and its implementation in some systems have been explained in the literature. Based on an analysis of the data scheme idea in ANSI/X3/SPARC, this paper presents the concept of full identity, which includes entity identity, conceptual object identity, and internal object identity. In addition, the equality of objects, which is richer and more practical, is discussed based on the full identity of objects. Therefore, the semantics and construction of identity for complex objects are fully observed, and applications in object management, version management, and user interfaces are found. It could also support the combination of the O-O model with the V-O model.

  1. Distributed Access View Integrated Database (DAVID) system

    Science.gov (United States)

    Jacobs, Barry E.

    1991-01-01

The Distributed Access View Integrated Database (DAVID) System, which was adopted by the Astrophysics Division for their Astrophysics Data System, is a solution to the system heterogeneity problem. The heterogeneous components of the Astrophysics problem are outlined. The Library and Library Consortium levels of the DAVID approach are described. The 'books' and 'kits' level is discussed. The Universal Object Typer Management System level is described. The relation of the DAVID project with the Small Business Innovation Research (SBIR) program is explained.

  2. Alternative treatment technology information center computer database system

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, D. [Environmental Protection Agency, Edison, NJ (United States)

    1995-10-01

The Alternative Treatment Technology Information Center (ATTIC) computer database system was developed pursuant to the 1986 Superfund law amendments. It provides up-to-date information on innovative treatment technologies to clean up hazardous waste sites. ATTIC v2.0 provides access to several independent databases as well as a mechanism for retrieving full-text documents of key literature. It can be accessed with a personal computer and modem 24 hours a day, and there are no user fees. ATTIC provides "one-stop shopping" for information on alternative treatment options by accessing several databases: (1) treatment technology database; this contains abstracts from the literature on all types of treatment technologies, including biological, chemical, physical, and thermal methods. The best literature as viewed by experts is highlighted. (2) treatability study database; this provides performance information on technologies to remove contaminants from wastewaters and soils. It is derived from treatability studies. This database is available through ATTIC or separately as a disk that can be mailed to you. (3) underground storage tank database; this presents information on underground storage tank corrective actions, surface spills, emergency response, and remedial actions. (4) oil/chemical spill database; this provides abstracts on treatment and disposal of spilled oil and chemicals. In addition to these separate databases, ATTIC allows immediate access to other disk-based systems such as the Vendor Information System for Innovative Treatment Technologies (VISITT) and the Bioremediation in the Field Search System (BFSS). The user may download these programs to their own PC via a high-speed modem. Also via modem, users are able to download entire documents through the ATTIC system. Currently, about fifty publications are available, including Superfund Innovative Technology Evaluation (SITE) program documents.

  3. A web-based audiometry database system.

    Science.gov (United States)

    Yeh, Chung-Hui; Wei, Sung-Tai; Chen, Tsung-Wen; Wang, Ching-Yuang; Tsai, Ming-Hsui; Lin, Chia-Der

    2014-07-01

    To establish a real-time, web-based, customized audiometry database system, we worked in cooperation with the departments of medical records, information technology, and otorhinolaryngology at our hospital. This system includes an audiometry data entry system, retrieval and display system, patient information incorporation system, audiometry data transmission program, and audiometry data integration. Compared with commercial audiometry systems and traditional hand-drawn audiometry data, this web-based system saves time and money and is convenient for statistics research. Copyright © 2013. Published by Elsevier B.V.

  4. Genesis of an Electronic Database Expert System.

    Science.gov (United States)

    Ma, Wei; Cole, Timothy W.

    2000-01-01

    Reports on the creation of a prototype, Web-based expert system that helps users better navigate library databases at the University of Illinois at Urbana-Champaign. Discusses concerns that gave rise to the project. Summarizes previous work/research and common approaches in academic libraries today. Describes plans for testing the prototype,…

  5. LHCb Conditions Database Operation Assistance Systems

    CERN Multimedia

    Shapoval, Illya

    2012-01-01

    The Conditions Database of the LHCb experiment (CondDB) provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger, reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues: - an extension to the automatic content validation done by the “Oracle Streams” replication technology, to trap cases when the replication was unsuccessful; - an automated distribution process for the S...

  6. Emerging multidisciplinary research across database management systems

    CERN Document Server

    Nica, Anisoara; Varde, Aparna

    2011-01-01

    The database community is exploring more and more multidisciplinary avenues: Data semantics overlaps with ontology management; reasoning tasks venture into the domain of artificial intelligence; and data stream management and information retrieval shake hands, e.g., when processing Web click-streams. These new research avenues become evident, for example, in the topics that doctoral students choose for their dissertations. This paper surveys the emerging multidisciplinary research by doctoral students in database systems and related areas. It is based on the PIKM 2010, which is the 3rd Ph.D. workshop at the International Conference on Information and Knowledge Management (CIKM). The topics addressed include ontology development, data streams, natural language processing, medical databases, green energy, cloud computing, and exploratory search. In addition to core ideas from the workshop, we list some open research questions in these multidisciplinary areas.

  7. Database specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB)

    Energy Technology Data Exchange (ETDEWEB)

    Faby, E.Z.; Fluker, J.; Hancock, B.R.; Grubb, J.W.; Russell, D.L. [Univ. of Tennessee, Knoxville, TN (United States); Loftis, J.P.; Shipe, P.C.; Truett, L.F. [Oak Ridge National Lab., TN (United States)

    1994-03-01

    This Database Specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB) describes the database organization and storage allocation, provides the detailed data model of the logical and physical designs, and provides information for the construction of parts of the database such as tables, data elements, and associated dictionaries and diagrams.

  8. SPIRE Data-Base Management System

    Science.gov (United States)

    Fuechsel, C. F.

    1984-01-01

    Spacelab Payload Integration and Rocket Experiment (SPIRE) data-base management system (DBMS) based on relational model of data bases. Data bases typically used for engineering and mission analysis tasks and, unlike most commercially available systems, allow data items and data structures stored in forms suitable for direct analytical computation. SPIRE DBMS designed to support data requests from interactive users as well as applications programs.

  9. A New Integrated System of Logic Programming and Relational Database

    Institute of Scientific and Technical Information of China (English)

邓铁清; 吴泉源; et al.

    1993-01-01

Based on a study of the two current methods, interpretation and compilation, for the integration of logic programming and relational databases, a new precompilation-based interpretive approach is proposed. It inherits the advantages of both methods but overcomes their drawbacks. A new integrated system based on this approach is presented, which has been implemented on the MicroVAX II and applied in practice as the kernel of the GKBMS knowledge base management system. Also discussed are the key implementation techniques, including the coupling of logic and relational database systems, the combination of logic and relational database languages, partial evaluation and static optimization of users' programs, fact scheduling, and version management in problem solving.

  10. Portuguese food composition database quality management system.

    Science.gov (United States)

    Oliveira, L M; Castanheira, I P; Dantas, M A; Porto, A A; Calhau, M A

    2010-11-01

    The harmonisation of food composition databases (FCDB) has been a recognised need among users, producers and stakeholders of food composition data (FCD). To reach harmonisation of FCDBs among the national compiler partners, the European Food Information Resource (EuroFIR) Network of Excellence set up a series of guidelines and quality requirements, together with recommendations to implement quality management systems (QMS) in FCDBs. The Portuguese National Institute of Health (INSA) is the national FCDB compiler in Portugal and is also a EuroFIR partner. INSA's QMS complies with ISO/IEC (International Organization for Standardisation/International Electrotechnical Commission) 17025 requirements. The purpose of this work is to report on the strategy used and progress made for extending INSA's QMS to the Portuguese FCDB in alignment with EuroFIR guidelines. A stepwise approach was used to extend INSA's QMS to the Portuguese FCDB. The approach included selection of reference standards and guides and the collection of relevant quality documents directly or indirectly related to the compilation process; selection of the adequate quality requirements; assessment of adequacy and level of requirement implementation in the current INSA's QMS; implementation of the selected requirements; and EuroFIR's preassessment 'pilot' auditing. The strategy used to design and implement the extension of INSA's QMS to the Portuguese FCDB is reported in this paper. The QMS elements have been established by consensus. ISO/IEC 17025 management requirements (except 4.5) and 5.2 technical requirements, as well as all EuroFIR requirements (including technical guidelines, FCD compilation flowchart and standard operating procedures), have been selected for implementation. The results indicate that the quality management requirements of ISO/IEC 17025 in place in INSA fit the needs for document control, audits, contract review, non-conformity work and corrective actions, and users' (customers

  11. Current limiter circuit system

    Energy Technology Data Exchange (ETDEWEB)

    Witcher, Joseph Brandon; Bredemann, Michael V.

    2017-09-05

    An apparatus comprising a steady state sensing circuit, a switching circuit, and a detection circuit. The steady state sensing circuit is connected to a first, a second and a third node. The first node is connected to a first device, the second node is connected to a second device, and the steady state sensing circuit causes a scaled current to flow at the third node. The scaled current is proportional to a voltage difference between the first and second node. The switching circuit limits an amount of current that flows between the first and second device. The detection circuit is connected to the third node and the switching circuit. The detection circuit monitors the scaled current at the third node and controls the switching circuit to limit the amount of the current that flows between the first and second device when the scaled current is greater than a desired level.

  12. The Geophysical Database Management System in Taiwan

    Directory of Open Access Journals (Sweden)

    Tzay-Chyn Shin

    2013-01-01

Full Text Available The Geophysical Database Management System (GDMS) is an integrated and web-based open data service which has been developed by the Central Weather Bureau (CWB), Taiwan, ROC, since 2005. This service went online on August 1, 2008. The GDMS provides six types of geophysical data acquired from the Short-period Seismographic System, Broadband Seismographic System, Free-field Strong-motion Station, Strong-motion Building Array, Global Positioning System, and Groundwater Observation System. When utilizing the GDMS website, users can download seismic event data and continuous geophysical data. At present, many researchers have accessed this public platform to obtain geophysical data. Clearly, the establishment of GDMS is a significant improvement in data sorting for interested researchers.

  13. 8th Asian Conference on Intelligent Information and Database Systems

    CERN Document Server

    Madeyski, Lech; Nguyen, Ngoc

    2016-01-01

    The objective of this book is to contribute to the development of the intelligent information and database systems with the essentials of current knowledge, experience and know-how. The book contains a selection of 40 chapters based on original research presented as posters during the 8th Asian Conference on Intelligent Information and Database Systems (ACIIDS 2016) held on 14–16 March 2016 in Da Nang, Vietnam. The papers to some extent reflect the achievements of scientific teams from 17 countries in five continents. The volume is divided into six parts: (a) Computational Intelligence in Data Mining and Machine Learning, (b) Ontologies, Social Networks and Recommendation Systems, (c) Web Services, Cloud Computing, Security and Intelligent Internet Systems, (d) Knowledge Management and Language Processing, (e) Image, Video, Motion Analysis and Recognition, and (f) Advanced Computing Applications and Technologies. The book is an excellent resource for researchers, those working in artificial intelligence, mu...

  14. ASEAN Mineral Database and Information System (AMDIS)

    Science.gov (United States)

    Okubo, Y.; Ohno, T.; Bandibas, J. C.; Wakita, K.; Oki, Y.; Takahashi, Y.

    2014-12-01

    AMDIS has been officially launched since the Fourth ASEAN Ministerial Meeting on Minerals on 28 November 2013. In cooperation with the Geological Survey of Japan, the web-based GIS was developed using Free and Open Source Software (FOSS) and the Open Geospatial Consortium (OGC) standards. The system is composed of the local databases and the centralized GIS. The local databases, created and updated using the centralized GIS, are accessible from the portal site. The system introduces distinct advantages over traditional GIS: global reach, a large number of users, better cross-platform capability, no charge for users or providers, ease of use, and unified updates. By raising the transparency of mineral information for mining companies and the public, AMDIS shows that mineral resources are abundant throughout the ASEAN region; however, there are many data gaps. We understand that such problems occur because of insufficient governance of mineral resources. The mineral governance we refer to is a concept that enforces and maximizes the capacity and systems of the government institutions that manage the minerals sector. The elements of mineral governance include a) strengthening of the information infrastructure, b) the technological and legal capacity of state-owned mining companies to fully engage with mining sponsors, c) government-led management of mining projects by supporting the project implementation units, d) government capacity in mineral management, such as the control and monitoring of mining operations, and e) facilitation of regional and local development plans and their implementation with the private sector.

  15. Database system selection for marketing strategies support in information systems

    Directory of Open Access Journals (Sweden)

    František Dařena

    2007-01-01

    Full Text Available In today’s dynamically changing environment, marketing has a significant role. Creating successful marketing strategies requires a large amount of high-quality information of various kinds and data types. A powerful database management system is a necessary condition for supporting the creation of marketing strategies. The paper briefly describes the field of marketing strategies and specifies the features that should be provided by database systems in connection with supporting these strategies. Major commercial (Oracle, DB2, MS SQL, Sybase) and open-source (PostgreSQL, MySQL, Firebird) databases are then examined from the point of view of accordance with these characteristics, and a comparison is made. The results are useful for making a decision before acquiring a database system during the specification of an information system’s hardware architecture.

  16. Trends and current status of general thoracic surgery in Japan revealed by review of nationwide databases.

    Science.gov (United States)

    Okumura, Meinoshin

    2016-08-01

    Nationwide databases of cases treated for thoracic disease have been established by several academic associations in Japan, which contain information showing trends and current status in regard to surgical treatment. The Japanese Association of Thoracic Surgery (JATS), Japanese Association of Chest Surgery (JACS), Japan Lung Cancer Society (JLCS), Japanese Respiratory Society (JRS), and Japan Society for Respiratory Endoscopy (JSRE) have maintained databases of lung cancer cases treated in Japan. In 1986, the number of general thoracic surgery cases was 15,544, which increased to 75,306 in 2013. Furthermore, the number of lung cancer operations performed in 2013 was 37,008, accounting for 49.1% of all general thoracic operations. Also, the proportions of adenocarcinoma, female patients, aged patients, stage I disease, and limited resection procedures are increasing in lung cancer surgery cases. While the 5-year overall post-operative survival rate of lung cancer patients was 47.8% in those undergoing surgery in 1989, it was 69.6% in those operated on in 2004, an increase of about 22 percentage points over 15 years. JATS, JACS, and the Japanese Association for Research of the Thymus (JART) have maintained retrospective databases of thymic epithelial tumor cases. The number of mediastinal tumors surgically treated is also increasing and was 4,780 in 2013, among which thymoma was the most prevalent. The Japanese Association for Lung and Heart-Lung Transplantation has developed a prospective nationwide database of lung transplantation cases in Japan, which contains clinical data for 466 patients who received lung transplantation or heart-lung transplantation from 1998 to 2015. Nationwide databases are currently being utilized for clinical studies and will also contribute to international projects related to the Union for International Cancer Control (UICC) tumor, node, and metastasis (TNM) classification system.

  17. Dynamic graph system for a semantic database

    Science.gov (United States)

    Mizell, David

    2015-01-27

    A method and system in a computer system for dynamically providing a graphical representation of a data store of entries via a matrix interface is disclosed. A dynamic graph system provides a matrix interface that exposes to an application program a graphical representation of data stored in a data store such as a semantic database storing triples. To the application program, the matrix interface represents the graph as a sparse adjacency matrix that is stored in compressed form. Each entry of the data store is considered to represent a link between nodes of the graph. Each entry has a first field and a second field identifying the nodes connected by the link and a third field with a value for the link that connects the identified nodes. The first, second, and third fields represent the rows, columns, and elements of the adjacency matrix.
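The triples-as-adjacency-matrix idea described above can be sketched in a few lines. The class and method names below are illustrative assumptions, not taken from the patent, and a dictionary-of-rows stands in for the compressed sparse storage: each triple's first and second fields index a row and column, and the third field becomes the matrix element.

```python
# Sketch: a triple store viewed as a sparse adjacency matrix.
# Node labels are interned to integer row/column ids; only non-empty
# rows and cells are stored (a simple compressed representation).
class TripleMatrix:
    def __init__(self):
        self._index = {}   # node label -> row/column id
        self._rows = {}    # row id -> {column id: link value}

    def _intern(self, node):
        return self._index.setdefault(node, len(self._index))

    def add(self, subject, value, obj):
        """Store one triple: subject --value--> object."""
        r, c = self._intern(subject), self._intern(obj)
        self._rows.setdefault(r, {})[c] = value

    def neighbors(self, subject):
        """All (object, value) links leaving `subject` -- one matrix row."""
        r = self._index.get(subject)
        if r is None:
            return []
        back = {i: n for n, i in self._index.items()}
        return [(back[c], v) for c, v in self._rows.get(r, {}).items()]

g = TripleMatrix()
g.add("alice", "knows", "bob")
g.add("alice", "worksAt", "acme")
print(sorted(g.neighbors("alice")))  # [('acme', 'worksAt'), ('bob', 'knows')]
```

Because empty cells are never materialized, storage grows with the number of triples rather than with the square of the number of nodes.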

  18. Multilingual Database Management System: A Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Nurul H.M. Saad

    2011-01-01

    Full Text Available Problem statement: The use of English as well as Arabic is increasingly evident in international business and finance. This study explored the management of multilingual data in a multilingual system, to cater to Internet users who speak two or more different languages. Approach: The proposed method is divided into two ends: the front end, which consists of the Client and Translator components, and the back end, where the management module and the database are located. In this method, a single encoded table is required to store information, and corresponding dictionaries are needed to store the multilingual data. The proposed method is based on the framework presented in previous work, with some modifications to suit the characteristics of the chosen languages. Results: Experimental evaluation was performed on storage requirements, and mathematical analysis was used to show the time of each database operation for both the traditional and the proposed method. Conclusion/Recommendations: The proposed method was found to perform consistently well in the developed multilingual system.
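The single-encoded-table design can be sketched with SQLite standing in for the actual DBMS. All table and column names below are assumptions for illustration: the main table stores only a language-neutral code, which is resolved against a per-language dictionary table at query time.

```python
import sqlite3

# One encoded table (product) plus one dictionary per language: adding a
# language means adding a dictionary table, not altering the main schema.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE product (id INTEGER PRIMARY KEY, name_code TEXT);
    CREATE TABLE dict_en (code TEXT PRIMARY KEY, text TEXT);
    CREATE TABLE dict_ar (code TEXT PRIMARY KEY, text TEXT);
    INSERT INTO product VALUES (1, 'N001');
    INSERT INTO dict_en VALUES ('N001', 'Book');
    INSERT INTO dict_ar VALUES ('N001', 'كتاب');
""")

def localized_name(product_id, lang):
    # The front end selects the dictionary for the requested language.
    # (Interpolating the table name is fine in this closed sketch; real
    # code would whitelist `lang` against known dictionary tables.)
    row = con.execute(
        f"SELECT d.text FROM product p JOIN dict_{lang} d "
        "ON p.name_code = d.code WHERE p.id = ?", (product_id,)).fetchone()
    return row[0]

print(localized_name(1, "en"))  # Book
```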

  19. The fundamentals of object-oriented database management systems.

    Science.gov (United States)

    Plateau, D

    1993-01-01

    The purpose of this document is to characterize the two technologies (database and object-oriented technologies) which constitute the foundation of object-oriented database management systems. The O2 Object-Oriented DataBase Management System is then described as an example of this type of system.

  20. Spatial Database Modeling for Indoor Navigation Systems

    Science.gov (United States)

    Gotlib, Dariusz; Gnat, Miłosz

    2013-12-01

    For many years, cartographers have been involved in designing GIS and navigation systems. Most GIS applications use outdoor data. Increasingly, similar applications are used inside buildings, so it is important to find a proper model for an indoor spatial database. The development of indoor navigation systems should utilize advanced teleinformation, geoinformatics, geodetic and cartographic knowledge. The authors present the fundamental requirements for an indoor data model for navigation purposes. Presenting some of the solutions adopted around the world, they emphasize that navigation applications require specific data to present navigation routes in the right way. An original solution for an indoor data model, created by the authors on the basis of the BISDM model, is presented. Its purpose is to expand the opportunities for use in indoor navigation.

  1. Public Budget Database - Outlays and offsetting receipts 1962-Current

    Data.gov (United States)

    Executive Office of the President — This file contains historical outlays and offsetting receipts for 1962 through the current budget year, as well as four years of projections. It can be used to...

  2. Public Budget Database - Budget Authority and offsetting receipts 1976-Current

    Data.gov (United States)

    Executive Office of the President — This file contains historical budget authority and offsetting receipts for 1976 through the current budget year, as well as four years of projections. It can be used...

  3. Dive Data Management System and Database

    Energy Technology Data Exchange (ETDEWEB)

    Gardiner, J.

    1998-05-01

    In 1994 the International Marine Contractors Association (IMCA, formerly AODC), the Health and Safety Executive (HSE) and the United Kingdom Offshore Operators Association (UKOOA) entered into a tri-partite Agreement to create a Dive Data Recording and Management System for offshore dives in the air range on the United Kingdom Continental Shelf (UKCS). The two components of this system were: automatic Dive Data Recording Systems (DDRS) on dive support vessels, to log depth/time and other dive parameters; and a central Dive Data Management System (DDMS) to collate and analyse these data in an industry-wide database. This report summarises the progress of the project over the first two years of operation. It presents the data obtained in the period 1 January 1995 to 31 December 1996, in the form of industry-wide Standard Reports. It comments on the significance of the data, and it records the experience of the participants in implementing and maintaining the offshore Dive Data Recording Systems and the onshore central Dive Data Management System. A key success of the project has been to provide the air-range Diving Supervisor with an accurate, real-time display of the depth and time of every dive. This has enabled the dive and the associated decompression to be managed more effectively by the Supervisor. In the event of an incident, the recorded data are also available to the Dive/Safety Manager, who now has more complete information on which to assess the possible causes of the incident. (author)

  4. The Network Configuration of an Object Relational Database Management System

    Science.gov (United States)

    Diaz, Philip; Harris, W. C.

    2000-01-01

    The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.

  5. HIV-1, human interaction database: current status and new features.

    Science.gov (United States)

    Ako-Adjei, Danso; Fu, William; Wallin, Craig; Katz, Kenneth S; Song, Guangfeng; Darji, Dakshesh; Brister, J Rodney; Ptak, Roger G; Pruitt, Kim D

    2015-01-01

    The 'Human Immunodeficiency Virus Type 1 (HIV-1), Human Interaction Database', available through the National Library of Medicine at http://www.ncbi.nlm.nih.gov/genome/viruses/retroviruses/hiv-1/interactions, serves the scientific community exploring the discovery of novel HIV vaccine candidates and therapeutic targets. Each HIV-1 human protein interaction can be retrieved without restriction by web-based downloads and ftp protocols and includes: Reference Sequence (RefSeq) protein accession numbers, National Center for Biotechnology Information Gene identification numbers, brief descriptions of the interactions, searchable keywords for interactions and PubMed identification numbers (PMIDs) of journal articles describing the interactions. In addition to specific HIV-1 protein-human protein interactions, included are interaction effects upon HIV-1 replication resulting when individual human gene expression is blocked using siRNA. A total of 3142 human genes are described participating in 12,786 protein-protein interactions, along with 1316 replication interactions described for each of 1250 human genes identified using small interfering RNA (siRNA). Together the data identifies 4006 human genes involved in 14,102 interactions. With the inclusion of siRNA interactions we introduce a redesigned web interface to enhance viewing, filtering and downloading of the combined data set.

  6. Software Application for Supporting the Education of Database Systems

    Science.gov (United States)

    Vágner, Anikó

    2015-01-01

    The article introduces an application which supports the education of database systems, particularly the teaching of SQL and PL/SQL in Oracle Database Management System environment. The application has two parts, one is the database schema and its content, and the other is a C# application. The schema is to administrate and store the tasks and the…

  7. Robust and Blind Watermarking of Relational Database Systems

    Directory of Open Access Journals (Sweden)

    A. Al-Haj

    2008-01-01

    Full Text Available Problem statement: Digital multimedia watermarking technology was suggested in the last decade to embed copyright information in digital objects such as images, audio and video. However, the increasing use of relational database systems in many real-life applications created an ever-increasing need for watermarking database systems. As a result, watermarking relational database systems is now emerging as a research area that deals with the legal issue of copyright protection of database systems. Approach: In this study, we proposed an efficient database watermarking algorithm based on inserting binary image watermarks in non-numeric multi-word attributes of selected database tuples. Results: The algorithm is robust as it resists attempts to remove or degrade the embedded watermark, and it is blind as it does not require the original database in order to extract the embedded watermark. Conclusion: Experimental results demonstrated the blindness and the robustness of the algorithm against common database attacks.
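The blind-embedding idea can be illustrated with a much-simplified stand-in for the paper's scheme: here each watermark bit is encoded in the spacing after the first word of a multi-word attribute (double space = 1, single space = 0), and a keyed hash decides which tuples carry a bit. The helper names and the spacing trick are assumptions for illustration, not the authors' actual image-based algorithm.

```python
import hashlib

KEY = b"secret-key"   # known only to the database owner

def selected(pk, fraction=2):
    """Keyed pseudo-random tuple selection (about 1 in `fraction` tuples)."""
    h = hashlib.sha256(KEY + str(pk).encode()).hexdigest()
    return int(h, 16) % fraction == 0

def embed(value, bit):
    # Encode one bit in the separator after the first word.
    first, _, rest = value.partition(" ")
    return first + ("  " if bit else " ") + rest

def extract(value):
    """Blind extraction: only the marked value is needed, not the original."""
    return 1 if "  " in value else 0

title = "senior database engineer"
print(extract(embed(title, 1)), extract(embed(title, 0)))  # 1 0
```

Because extraction inspects only the stored value, no copy of the unmarked database is needed, which is what makes the scheme blind.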

  8. Active In-Database Processing to Support Ambient Assisted Living Systems

    Directory of Open Access Journals (Sweden)

    Wagner O. de Morais

    2014-08-01

    Full Text Available As an alternative to the existing software architectures that underpin the development of smart homes and ambient assisted living (AAL systems, this work presents a database-centric architecture that takes advantage of active databases and in-database processing. Current platforms supporting AAL systems use database management systems (DBMSs exclusively for data storage. Active databases employ database triggers to detect and react to events taking place inside or outside of the database. DBMSs can be extended with stored procedures and functions that enable in-database processing. This means that the data processing is integrated and performed within the DBMS. The feasibility and flexibility of the proposed approach were demonstrated with the implementation of three distinct AAL services. The active database was used to detect bed-exits and to discover common room transitions and deviations during the night. In-database machine learning methods were used to model early night behaviors. Consequently, active in-database processing avoids transferring sensitive data outside the database, and this improves performance, security and privacy. Furthermore, centralizing the computation into the DBMS facilitates code reuse, adaptation and maintenance. These are important system properties that take into account the evolving heterogeneity of users, their needs and the devices that are characteristic of smart homes and AAL systems. Therefore, DBMSs can provide capabilities to address requirements for scalability, security, privacy, dependability and personalization in applications of smart environments in healthcare.
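The trigger-based event detection described above can be sketched with SQLite's trigger support. The table, column, and event names are illustrative assumptions, not the paper's actual schema; the point is that the derived event is produced inside the DBMS, so raw sensor data never leaves the database.

```python
import sqlite3

# Minimal active-database sketch: a trigger reacts to a new sensor
# reading and records a derived event entirely inside the DBMS.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE bed_sensor (ts INTEGER, occupied INTEGER);
    CREATE TABLE events (ts INTEGER, kind TEXT);
    -- Fire when occupancy drops from 1 to 0: a bed-exit.
    CREATE TRIGGER bed_exit AFTER INSERT ON bed_sensor
    WHEN NEW.occupied = 0
     AND (SELECT occupied FROM bed_sensor
          WHERE ts < NEW.ts ORDER BY ts DESC LIMIT 1) = 1
    BEGIN
        INSERT INTO events VALUES (NEW.ts, 'bed-exit');
    END;
""")
for ts, occupied in [(1, 1), (2, 1), (3, 0)]:   # occupant leaves at ts=3
    con.execute("INSERT INTO bed_sensor VALUES (?, ?)", (ts, occupied))
print(con.execute("SELECT * FROM events").fetchall())  # [(3, 'bed-exit')]
```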

  9. DEVELOPING MULTITHREADED DATABASE APPLICATION USING JAVA TOOLS AND ORACLE DATABASE MANAGEMENT SYSTEM IN INTRANET ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Raied Salman

    2015-11-01

    Full Text Available In many business organizations, database applications are designed and implemented using various DBMSs and programming languages. These applications are used to maintain databases for the organizations. The organization's departments can be located at different places and connected by an intranet environment. In such an environment, maintenance of database records becomes a complex task that needs to be resolved. In this paper, an intranet application is designed and implemented using the object-oriented programming language Java and the object-relational database management system Oracle in a multithreaded operating system environment.
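The multithreaded-maintenance pattern can be sketched with Python's threading module and SQLite standing in for Java and Oracle. A real DBMS server such as Oracle handles concurrent sessions itself; in this embedded sketch a lock serializes access to the single connection, and all schema names are illustrative assumptions.

```python
import sqlite3
import threading

# Shared embedded database; check_same_thread=False lets worker threads
# reuse the connection, guarded by an explicit lock.
con = sqlite3.connect(":memory:", check_same_thread=False)
con.execute("CREATE TABLE staff (id INTEGER PRIMARY KEY, dept TEXT)")
con.executemany("INSERT INTO staff VALUES (?, ?)",
                [(1, "sales"), (2, "sales"), (3, "hr")])
db_lock = threading.Lock()

def move_department(emp_id, new_dept):
    with db_lock:                      # one writer at a time
        con.execute("UPDATE staff SET dept = ? WHERE id = ?",
                    (new_dept, emp_id))

threads = [threading.Thread(target=move_department, args=(i, "it"))
           for i in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(con.execute("SELECT dept FROM staff ORDER BY id").fetchall())
# [('it',), ('it',), ('hr',)]
```

With a client-server DBMS, each thread would instead open its own connection and rely on the server's transaction isolation rather than an application-side lock.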

  10. Audit Database and Information Tracking System

    Data.gov (United States)

    Social Security Administration — This database contains information about Social Security Administration audits regarding SSA agency performance and compliance. These audits can be requested by both...

  11. Minority Serving Institutions Reporting System Database

    Data.gov (United States)

    Social Security Administration — The database will be used to track SSA's contributions to Minority Serving Institutions such as Historically Black Colleges and Universities (HBCU), Tribal Colleges...

  12. Routing Protocols for Transmitting Large Databases or Multi-databases Systems

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Most knowledgeable people agree that networking and routing technologies have been around for about 25 years. Routing is simultaneously the most complicated function of a network and the most important. Moreover, more than 70% of computer application fields are MIS applications. So the challenge in building and using an MIS in the network is developing the means to find, access, and communicate with large databases or multi-database systems. Because general databases are not time-continuous and cannot be streamed, we cannot obtain reliable and secure quality of service by deleting unimportant datagrams during database transmission. In this article, we discuss which kind of routing protocol is best suited for the transmission of large databases or multi-database systems in networks.

  13. Databases

    Data.gov (United States)

    National Aeronautics and Space Administration — The databases of computational and experimental data from the first Aeroelastic Prediction Workshop are located here. The databases file names tell their contents by...

  14. HLS bunch current measurement system

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    Bunch current is an important parameter for studying the injection fill pattern in the storage ring and the instability threshold of the bunch, and the bunch current monitor is also an indispensable tool for top-up injection. A bunch current measurement (BCM) system has been developed to meet the needs of the upgrade project of the Hefei Light Source (HLS). This paper presents the layout of the BCM system. The system, based on a high-speed digital oscilloscope, can be used to measure the bunch current and the synchronous phase shift. To obtain the absolute value of the bunch-by-bunch current, the calibration coefficient is measured and analyzed. Error analysis shows that the RMS error of the bunch current is less than 0.01 mA when the bunch current is about 5 mA, which meets the project requirement.

  15. Selecting a Relational Database Management System for Library Automation Systems.

    Science.gov (United States)

    Shekhel, Alex; O'Brien, Mike

    1989-01-01

    Describes the evaluation of four relational database management systems (RDBMSs) (Informix Turbo, Oracle 6.0 TPS, Unify 2000 and Relational Technology's Ingres 5.0) to determine which is best suited for library automation. The evaluation criteria used to develop a benchmark specifically designed to test RDBMSs for libraries are discussed. (CLB)

  16. TWRS information locator database system administrator's manual

    Energy Technology Data Exchange (ETDEWEB)

    Knutson, B.J., Westinghouse Hanford

    1996-09-13

    This document is a guide for use by the Tank Waste Remediation System (TWRS) Information Locator Database (ILD) System Administrator. The TWRS ILD System is an inventory of information used in the TWRS Systems Engineering process to represent the TWRS Technical Baseline. The inventory is maintained in the form of a relational database developed in Paradox 4.5.

  17. The GEISA Spectroscopic Database System in its latest Edition

    Science.gov (United States)

    Jacquinet-Husson, N.; Crépeau, L.; Capelle, V.; Scott, N. A.; Armante, R.; Chédin, A.

    2009-04-01

    GEISA (Gestion et Etude des Informations Spectroscopiques Atmosphériques: Management and Study of Spectroscopic Information)[1] is a computer-accessible spectroscopic database system, designed to facilitate accurate forward planetary radiative transfer calculations using a line-by-line and layer-by-layer approach. It was initiated in 1976. Currently, GEISA is involved in activities related to the assessment of the capabilities of IASI (Infrared Atmospheric Sounding Interferometer on board the METOP European satellite - http://earth-sciences.cnes.fr/IASI/) through the GEISA/IASI database[2] derived from GEISA. Since the Metop (http://www.eumetsat.int) launch (October 19th 2006), GEISA/IASI has been the reference spectroscopic database for the validation of the level-1 IASI data, using the 4A radiative transfer model[3] (4A/LMD http://ara.lmd.polytechnique.fr; 4A/OP co-developed by LMD and Noveltis with the support of CNES). GEISA is also involved in planetary research, i.e. the modelling of Titan's atmosphere, in comparison with observations performed by Voyager (http://voyager.jpl.nasa.gov/) or by ground-based telescopes, and by the instruments on board the Cassini-Huygens mission (http://www.esa.int/SPECIALS/Cassini-Huygens/index.html). The updated 2008 edition of GEISA (GEISA-08), a system comprising three independent sub-databases devoted, respectively, to line transition parameters, infrared and ultraviolet/visible absorption cross-sections, and microphysical and optical properties of atmospheric aerosols, will be described. Spectroscopic parameter quality requirements will be discussed in the context of comparisons between observed or simulated spectra of the Earth's and other planetary atmospheres. GEISA is implemented on the CNES/CNRS Ether Products and Services Centre web site (http://ether.ipsl.jussieu.fr), where all archived spectroscopic data can be handled through general and user-friendly associated management software facilities. More than 350 researchers are

  18. Performance assessment of EMR systems based on post-relational database.

    Science.gov (United States)

    Yu, Hai-Yan; Li, Jing-Song; Zhang, Xiao-Guang; Tian, Yu; Suzuki, Muneou; Araki, Kenji

    2012-08-01

    Post-relational databases provide high performance and are currently widely used in American hospitals. As few hospital information systems (HIS) in either China or Japan are based on post-relational databases, here we introduce a new-generation electronic medical records (EMR) system called Hygeia, which was developed with the post-relational database Caché and the latest platform Ensemble. Utilizing the benefits of a post-relational database, Hygeia is equipped with an "integration" feature that allows all system users to access data with a fast response time, anywhere and at any time. Performance tests of databases in EMR systems were implemented in both China and Japan. First, a comparison test was conducted between a post-relational database, Caché, and a relational database, Oracle, embedded in the EMR systems of a medium-sized first-class hospital in China. Second, a user terminal test was done on the EMR system Izanami, which is based on the identical database Caché and operates efficiently at the Miyazaki University Hospital in Japan. The results proved that the post-relational database Caché works faster than the relational database Oracle and showed excellent performance in the real-time EMR system.

  19. European database on indoor air pollution sources in buildings: Current status of database structure and software

    NARCIS (Netherlands)

    Molina, J.L.; Clausen, G.H.; Saarela, K.; Plokker, W.; Bluyssen, P.M.; Bishop, W.; Oliveira Fernandes, E. de

    1996-01-01

    the European Joule II Project European Data Base for Indoor Air Pollution Sources in Buildings. The aim of the project is to produce a tool which would be used by designers to take into account the actual pollution of the air from the building elements and ventilation and air conditioning system com

  1. Performance comparison of non-relational database systems

    OpenAIRE

    Žlender, Rok

    2011-01-01

    Deciding on which data store to use is one of the most important aspects of every project. Besides the established relational database systems non-relational solutions are gaining in their popularity. Non-relational database systems provide an interesting alternative when we are storing large amount of data or when we are looking for greater flexibility with our data model. Purpose of this thesis is to measure and analyze how chosen non-relational database systems compare against each othe...

  2. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  3. A database/knowledge structure for a robotics vision system

    Science.gov (United States)

    Dearholt, D. W.; Gonzales, N. N.

    1987-01-01

    Desirable properties of robotics vision database systems are given, and structures which possess properties appropriate for some aspects of such database systems are examined. Included in the structures discussed is a family of networks in which link membership is determined by measures of proximity between pairs of the entities stored in the database. This type of network is shown to have properties which guarantee that the search for a matching feature vector is monotonic. That is, the database can be searched with no backtracking, if there is a feature vector in the database which matches the feature vector of the external entity which is to be identified. The construction of the database is discussed, and the search procedure is presented. A section on the support provided by the database for description of the decision-making processes and the search path is also included.
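The backtrack-free search property described above can be illustrated with a greedy walk over a small proximity network: from any node, move to the neighbor whose feature vector is closest to the query. The graph, feature vectors, and function names below are illustrative assumptions; in a network with the monotonic-search property this walk reaches a matching vector without ever revisiting a node.

```python
# Greedy, backtrack-free search over a proximity network of feature vectors.
def dist(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def greedy_search(graph, feats, start, query):
    node = start
    while feats[node] != query:
        # Step to the neighbor closest to the query feature vector.
        nxt = min(graph[node], key=lambda n: dist(feats[n], query))
        if dist(feats[nxt], query) >= dist(feats[node], query):
            return None            # no progress: query not in the database
        node = nxt
    return node

feats = {"a": (0, 0), "b": (2, 1), "c": (4, 4), "d": (5, 3)}
graph = {"a": ["b"], "b": ["a", "c", "d"], "c": ["b", "d"], "d": ["b", "c"]}
print(greedy_search(graph, feats, "a", (5, 3)))  # d
```

The `None` branch shows the other half of the guarantee: if no neighbor is strictly closer, the query vector cannot be in the database, so the search halts instead of backtracking.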

  4. Multilingual lexicon design tool and database management system for MT

    CERN Document Server

    Barisevičius, G

    2011-01-01

    The paper presents the design and development of an English-Lithuanian-English dictionary/lexicon tool and a lexicon database management system for MT. The system is oriented to support two main requirements: to be open to the user, and to describe many more attributes of the parts of speech than a regular dictionary does, as required for MT. The programming language Java and the database management system MySQL are used to implement the design tool and the lexicon database, respectively. This solution allows the system to be easily deployed on the Internet. The system is able to run on various operating systems such as Windows, Linux, Mac, and any other OS where a Java Virtual Machine is supported. Since a modern lexicon database management system is used, several users can access the same database without difficulty.

  5. Natural Language Interfaces to Database Systems

    Science.gov (United States)

    1988-10-01

    of Toronto, Philip A. Bernstein, Harvard University, and Harry K.T. Wong, IBM Research Laboratory, "A Language Facility for Designing Database... Colin Blakemore and Susan Greenfield (editors), Mindwaves - Thoughts on Intelligence, Identity, and Consciousness, Basil Blackwell, Inc. 1987. 1110

  6. Enabling Ontology Based Semantic Queries in Biomedical Database Systems.

    Science.gov (United States)

    Zheng, Shuai; Wang, Fusheng; Lu, James

    2014-03-01

    There is a lack of tools to ease the integration and ontology-based semantic querying of biomedical databases, which are often annotated with ontology concepts. We aim to provide a middle layer between ontology repositories and semantically annotated databases to support semantic queries directly in the databases with expressive standard database query languages. We have developed a semantic query engine that provides semantic reasoning and query processing, and translates the queries into ontology repository operations on NCBO BioPortal. Semantic operators are implemented in the database as user-defined functions added to the database engine, so semantic queries can be directly specified in standard database query languages such as SQL and XQuery. The system provides caching management to boost query performance. The system is highly adaptable to support different ontologies through easy customization. We have implemented the system, DBOntoLink, as open source software which supports major ontologies hosted at BioPortal. DBOntoLink supports a set of common ontology-based semantic operations and integrates them fully with the IBM DB2 database management system. The system has been deployed and evaluated with an existing biomedical database for managing and querying image annotations and markups (AIM). Our performance study demonstrates the high expressiveness of semantic queries and the high efficiency of the queries.
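The semantic-operator-as-UDF idea can be sketched with SQLite's `create_function`, which registers a Python function callable from SQL. The tiny hard-coded is-a hierarchy below is an assumption standing in for the ontology lookups that DBOntoLink delegates to BioPortal; the query itself stays plain SQL.

```python
import sqlite3

# A toy is-a hierarchy standing in for an ontology repository.
PARENT = {"adenocarcinoma": "carcinoma", "carcinoma": "neoplasm"}

def is_subclass_of(term, ancestor):
    """Walk the hierarchy upward; return 1 if `ancestor` is reached."""
    while term is not None:
        if term == ancestor:
            return 1
        term = PARENT.get(term)
    return 0

con = sqlite3.connect(":memory:")
# Expose the semantic operator as a user-defined SQL function.
con.create_function("IS_SUBCLASS_OF", 2, is_subclass_of)
con.executescript("""
    CREATE TABLE annotation (image_id INTEGER, concept TEXT);
    INSERT INTO annotation VALUES (1, 'adenocarcinoma'), (2, 'fibrosis');
""")
rows = con.execute(
    "SELECT image_id FROM annotation "
    "WHERE IS_SUBCLASS_OF(concept, 'neoplasm')").fetchall()
print(rows)  # [(1,)]
```

The query retrieves every image annotated with any subclass of `neoplasm`, without the application having to expand the concept hierarchy itself.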

  7. Deep Time Data Infrastructure: Integrating Our Current Geologic and Biologic Databases

    Science.gov (United States)

    Kolankowski, S. M.; Fox, P. A.; Ma, X.; Prabhu, A.

    2016-12-01

    As our knowledge of Earth's geologic and mineralogical history grows, we require more efficient methods of sharing immense amounts of data. Databases across numerous disciplines have been utilized to offer extensive information on specific epochs of Earth's history up to its current state, e.g. the fossil record, rock composition, and proteins. These databases could be a powerful force in identifying previously unseen correlations, such as relationships between minerals and proteins. Creating a unifying site that provides a portal to these databases will aid our ability as a collaborative scientific community to utilize our findings more effectively. The Deep-Time Data Infrastructure (DTDI) is currently being defined as part of a larger effort to accomplish this goal. DTDI will not be a new database, but an integration of existing resources. Current geologic and related databases were identified, their schemas were documented, and the work will be presented as a stage-by-stage progression. Through conceptual modeling focused on variables from their combined records, we will determine the best way to integrate these databases using common factors. The Deep-Time Data Infrastructure will allow geoscientists to bridge gaps in data and further our understanding of Earth's history.

  8. Distributed Database Control and Allocation. Volume 3. Distributed Database System Designer’s Handbook.

    Science.gov (United States)

    1983-10-01

    Multiversion Data 2-18 2.7.1 Multiversion Timestamping 2-20 2.7.2 Multiversion Locking 2-20 2.8 Combining the Techniques 2-22 3. Database Recovery Algorithms...See [THEM79, GIFF79] for details. 2.7 Multiversion Data Let us return to a database system model where each logical data item is stored at one DM...In a multiversion database each Write wi[x] produces a new copy (or version) of x, denoted xi. Thus, the value of x is a set of versions. For each

  9. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Document Server

    CERN. Geneva

    2012-01-01

    Non-relational "NoSQL" databases such as Cassandra and CouchDB are best known for their ability to scale to large numbers of clients spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects, is based on traditional SQL databases but also has the same high scalability and wide-area distributability for an important subset of applications. This paper compares the architectures, behavior, performance, and maintainability of the two different approaches and identifies the criteria for choosing which approach to prefer over the other.

  10. A survey of the current status of web-based databases indexing Iranian journals.

    Science.gov (United States)

    Merat, Shahin; Khatibzadeh, Shahab; Mesgarpour, Bita; Malekzadeh, Reza

    2009-05-01

    The scientific output of Iran has increased rapidly in recent years. Unfortunately, most papers are published in journals which are not indexed by popular indexing systems, and many of them are in Persian without English translation. This makes the results of Iranian scientific research unavailable to other researchers, including Iranians. The aim of this study was to evaluate the quality of current web-based databases indexing scientific articles published in Iran. We identified web-based databases which indexed scientific journals published in Iran using popular search engines. The sites were then subjected to a series of tests to evaluate their coverage, search capabilities, stability, accuracy of information, consistency, accessibility, ease of use, and other features. Results were compared with each other to identify the strengths and shortcomings of each site. Five web sites were identified. None had complete coverage of Iranian scientific journals. The search capabilities were less than optimal in most sites. English translations of research titles, author names, keywords, and abstracts of Persian-language articles did not follow standards. Some sites did not cover abstracts. Numerous typing errors made searches ineffective and citation indexing unreliable. None of the currently available indexing sites is capable of presenting Iranian research to the international scientific community. The government should intervene by enforcing policies designed to facilitate indexing through a systematic approach. The policies should address Iranian journals, authors, and indexing sites. Iranian journals should be required to provide their indexing data, including references, electronically; authors should provide correct indexing information to journals; and indexing sites should improve their software to meet standards set by the government.

  11. POTENTIAL: A Highly Adaptive Core of Parallel Database System

    Institute of Scientific and Technical Information of China (English)

    文继荣; 陈红; 王珊

    2000-01-01

    POTENTIAL is a virtual database machine based on general computing platforms, especially parallel computing platforms. It provides a complete solution to high-performance database systems by a 'virtual processor + virtual data bus + virtual memory' architecture. Virtual processors manage all CPU resources in the system, on which various operations are running. The virtual data bus is responsible for the management of data transmission between associated operations, which forms the hinges of the entire system. Virtual memory provides efficient data storage and buffering mechanisms that conform to data reference behaviors in database systems. The architecture of POTENTIAL is very clear and has many good features, including high efficiency, high scalability, high extensibility, high portability, etc.

  12. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Document Server

    Dykstra, David

    2012-01-01

    One of the main attractions of non-relational "NoSQL" databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also has high scalability and wide-area distributability for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  13. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Science.gov (United States)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  14. Potentials of Advanced Database Technology for Military Information Systems

    NARCIS (Netherlands)

    Choenni, Sunil; Bruggeman, Ben

    2001-01-01

    Research and development in database technology evolves in several directions, which are not necessarily divergent. A number of these directions might be promising for military information systems as well. In this paper, we discuss the potentials of multi-media databases and data mining. Both direct

  15. Performance analysis of different database in new internet mapping system

    Science.gov (United States)

    Yao, Xing; Su, Wei; Gao, Shuai

    2017-03-01

    In the Mapping System of New Internet, massive numbers of mapping entries between AID and RID need to be stored, added, updated, and deleted. In order to better deal with a large volume of mapping-entry update and query requests, the Mapping System of New Internet must use a high-performance database. In this paper, we focus on the performance of three typical databases, Redis, SQLite, and MySQL, and the results show that Mapping Systems based on different databases can be adapted to different needs according to the actual situation.
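A micro-benchmark in the spirit of this comparison can be sketched with SQLite's in-memory engine; the AID/RID schema and entry counts below are illustrative, not those of the paper, and benchmarking Redis or MySQL would additionally require their client libraries:

```python
import sqlite3
import time

def bench_sqlite(n=10_000):
    """Insert n hypothetical AID->RID mapping entries, then time point lookups."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE mapping (aid TEXT PRIMARY KEY, rid TEXT)")

    t0 = time.perf_counter()
    db.executemany("INSERT INTO mapping VALUES (?, ?)",
                   ((f"aid{i}", f"rid{i}") for i in range(n)))
    db.commit()
    t_insert = time.perf_counter() - t0

    t0 = time.perf_counter()
    for i in range(0, n, 100):  # sample every 100th entry
        row = db.execute("SELECT rid FROM mapping WHERE aid = ?",
                         (f"aid{i}",)).fetchone()
        assert row == (f"rid{i}",)
    t_query = time.perf_counter() - t0
    return t_insert, t_query

ins, qry = bench_sqlite()
print(f"insert: {ins:.4f}s  query: {qry:.4f}s")
```

Running the same harness against each candidate backend (with identical workloads) is what makes the per-database numbers comparable.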

  16. Spatial Database Management System of China Geological Survey Extent

    Institute of Scientific and Technical Information of China (English)

    Chen Jianguo; Chen Zhijun; Wang Quanming; Fang Yiping

    2003-01-01

    The spatial database management system of China geological survey extent is a social service system. Its aim is to help the government and the whole social public to expediently use the spatial database, such as querying, indexing, mapping and product outputting. The management system has been developed based on the MAPGIS 6.x SDK and Visual C++, considering the spatial database contents and structure and the requirements of users. This paper introduces the software structure, the data flow chart and some key techniques of software development.

  17. Benthic microalgal production in the Arctic: Applied methods and status of the current database

    DEFF Research Database (Denmark)

    Glud, Ronnie Nøhr; Woelfel, Jana; Karsten, Ulf

    2009-01-01

    The current database on benthic microalgal production in Arctic waters comprises 10 peer-reviewed and three unpublished studies. Here, we compile and discuss these datasets, along with the applied measurement approaches used. The latter is essential for robust comparative analysis and to clarify ...

  18. DOE technology information management system database study report

    Energy Technology Data Exchange (ETDEWEB)

    Widing, M.A.; Blodgett, D.W.; Braun, M.D.; Jusko, M.J.; Keisler, J.M.; Love, R.J.; Robinson, G.L. [Argonne National Lab., IL (United States). Decision and Information Sciences Div.

    1994-11-01

    To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System.

  19. Structured Query Translation in Peer to Peer Database Sharing Systems

    Directory of Open Access Journals (Sweden)

    Mehedi Masud

    2009-10-01

    Full Text Available This paper presents a query translation mechanism between heterogeneous peers in Peer to Peer Database Sharing Systems (PDSSs). A PDSS combines a database management system with P2P functionalities. The local databases on peers are called peer databases. In a PDSS, each peer chooses its own data model and schema and maintains data independently without any global coordinator. One of the problems in such a system is translating queries between peers, taking into account both schema and data heterogeneity. Query translation is the problem of rewriting a query posed in terms of one peer schema to a query in terms of another peer schema. This paper proposes a query translation mechanism between peers where peers are acquainted in data sharing systems through data-level mappings for sharing data.
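The mapping-based rewriting idea can be sketched over a toy structured query. The schemas, the mapping table, and the query shape below are hypothetical and far simpler than the SQL and data-level heterogeneity a real PDSS must handle:

```python
# Hypothetical schema mapping between two acquainted peers:
# peer A exposes patients(name, dob); peer B exposes persons(full_name, birth_date).
MAPPING = {
    "table": {"patients": "persons"},
    "column": {"patients.name": "persons.full_name",
               "patients.dob": "persons.birth_date"},
}

def translate(query):
    """Rewrite a query over peer A's schema into peer B's schema.

    `query` is a dict: 'select' (column list), 'from' (table), 'where' (col -> value).
    """
    tmap = MAPPING["table"]
    # Strip table prefixes for per-column lookup.
    cmap = {k.split(".")[1]: v.split(".")[1] for k, v in MAPPING["column"].items()}
    return {
        "select": [cmap.get(c, c) for c in query["select"]],
        "from": tmap.get(query["from"], query["from"]),
        "where": {cmap.get(c, c): v for c, v in query.get("where", {}).items()},
    }

q = {"select": ["name", "dob"], "from": "patients", "where": {"name": "Lee"}}
print(translate(q))
# {'select': ['full_name', 'birth_date'], 'from': 'persons', 'where': {'full_name': 'Lee'}}
```

Unmapped columns pass through unchanged, which mirrors the partial-mapping situation between loosely acquainted peers.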

  20. Development and trial of the drug interaction database system

    Directory of Open Access Journals (Sweden)

    Virasakdi Chongsuvivatwong

    2003-07-01

    Full Text Available The drug interaction database system was originally developed at Songklanagarind Hospital. Data sets of drugs available in Songklanagarind Hospital comprising standard drug names, trade names, group names, and drug interactions were set up using Microsoft® Access 2000. The computer used was a Pentium III processor running at 450 MHz with 128 MB SDRAM operated by Microsoft® Windows 98. A robust structured query language algorithm was chosen for detecting interactions. The functioning of this database system, including speed and accuracy of detection, was tested at Songklanagarind Hospital and Naratiwatrachanagarind Hospital using hypothetical prescriptions. Its use in determining the incidence of drug interactions was also evaluated using a retrospective prescription data file. This study has shown that the database system correctly detected drug interactions from prescriptions. Speed of detection was approximately 1 to 2 seconds depending on the size of the prescription. The database system was of benefit in determining the incidence rate of drug interactions in a hospital.
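The core detection step, joining a prescription against a table of interacting drug pairs, can be sketched in SQL via SQLite; the drug names and the single interaction record below are illustrative examples, not clinical data or the hospital's actual schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE interaction (drug_a TEXT, drug_b TEXT, severity TEXT)")
db.execute("CREATE TABLE prescription (rx_id INTEGER, drug TEXT)")
db.execute("INSERT INTO interaction VALUES ('warfarin', 'aspirin', 'major')")
db.executemany("INSERT INTO prescription VALUES (?, ?)",
               [(1, 'warfarin'), (1, 'aspirin'), (1, 'paracetamol')])

# Self-join the prescription to enumerate each drug pair once (p1.drug < p2.drug),
# then match against the interaction table; the symmetric OR catches pairs
# stored in either order.
hits = db.execute("""
    SELECT i.drug_a, i.drug_b, i.severity
    FROM prescription p1
    JOIN prescription p2 ON p1.rx_id = p2.rx_id AND p1.drug < p2.drug
    JOIN interaction i
      ON (i.drug_a = p1.drug AND i.drug_b = p2.drug)
      OR (i.drug_a = p2.drug AND i.drug_b = p1.drug)
    WHERE p1.rx_id = 1
""").fetchall()
print(hits)  # [('warfarin', 'aspirin', 'major')]
```

With an index on the interaction pair columns, the same query scales to full hospital formularies, which is consistent with the 1-to-2-second detection times reported.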

  1. Development of a Relational Database for Learning Management Systems

    Science.gov (United States)

    Deperlioglu, Omer; Sarpkaya, Yilmaz; Ergun, Ertugrul

    2011-01-01

    In today's world, Web-Based Distance Education Systems have a great importance. Web-based Distance Education Systems are usually known as Learning Management Systems (LMS). In this article, a database design, which was developed to create an educational institution as a Learning Management System, is described. In this sense, developed Learning…

  2. A Grid Architecture for Manufacturing Database System

    Directory of Open Access Journals (Sweden)

    Laurentiu CIOVICĂ

    2011-06-01

    Full Text Available Before the Enterprise Resource Planning concept, business functions within enterprises were supported by small and isolated applications, most of them developed internally. Yet today ERP platforms are not by themselves the answer to all of an organization's needs, especially in times of differentiated and diversified demands among end customers. ERP platforms have been integrated with specialized systems for the management of clients (Customer Relationship Management) and vendors (Supplier Relationship Management). They have been integrated with Manufacturing Execution Systems for better planning and control of production lines. In order to offer real-time, efficient answers at the management level, ERP systems have been integrated with Business Intelligence systems. This paper analyses the advantages of grid computing at this level of integration, communication, and interoperability between complex specialized informatics systems, with a focus on the system architecture and database systems.

  3. LOWER LEVEL INFERENCE CONTROL IN STATISTICAL DATABASE SYSTEMS

    Energy Technology Data Exchange (ETDEWEB)

    Lipton, D.L.; Wong, H.K.T.

    1984-02-01

    An inference is the process of transforming unclassified data values into confidential data values. Most previous research in inference control has studied the use of statistical aggregates to deduce individual records. However, several other types of inference are also possible. Unknown functional dependencies may be apparent to users who have 'expert' knowledge about the characteristics of a population. Some correlations between attributes may be concluded from 'commonly-known' facts about the world. To counter these threats, security managers should use random sampling of databases of similar populations, as well as expert systems. 'Expert' users of the database system may form inferences from the variable performance of the user interface. Users may observe on-line turn-around time, accounting statistics, the error messages received, and the point at which an interactive protocol sequence fails. One may obtain information about the frequency distributions of attribute values, and the validity of data object names, from this information. At the back end of a database system, improved software engineering practices will reduce opportunities to bypass functional units of the database system. The term 'data object' should be expanded to incorporate those data object types which generate new classes of threats. The security of databases and database systems must be recognized as separate but related problems. Thus, by increased awareness of lower level inferences, system security managers may effectively nullify the threat posed by lower level inferences.

  4. Combination of Maximin and Kriging Prediction Methods for Eddy-Current Testing Database Generation

    Energy Technology Data Exchange (ETDEWEB)

    Bilicz, Sandor; Lambert, Marc; Vazquez, Emmanuel; Gyimothy, Szabolcs, E-mail: sandor.bilicz@lss.supelec.fr

    2010-11-01

    Eddy-current testing (ECT) is a widely used nondestructive evaluation technique. The numerical simulation of ECT methods involves high complexity and computational load. However, one needs reliable solutions (within a reasonable CPU time) for these problems to be able to solve the related inverse problem. One possible approach is to build a configuration-specific database consisting of well-chosen samples (pairs of input data and corresponding output signals). Once the database has been constructed, the sought information can be retrieved practically in no time. However, the optimal choice of samples raises complex optimization problems. This paper presents a sampling method which aims to achieve databases that are optimal in a certain sense. The goal of our approach is to spread out the output samples over the whole conceivable output domain. The method is formalized as a maximin problem which is solved step-by-step using kriging prediction.
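The maximin ingredient of this approach can be sketched as a greedy selection that repeatedly picks the candidate farthest from the points already chosen. This omits the kriging surrogate the paper uses to predict the expensive ECT forward simulation; the candidate points and distance function below are illustrative:

```python
import math
import random

def maximin_select(candidates, k, dist):
    """Greedy maximin: choose k points so the minimum pairwise distance stays large.

    Each new point maximizes its distance to the nearest already-chosen point.
    """
    chosen = [candidates[0]]  # arbitrary seed point
    while len(chosen) < k:
        best = max(candidates,
                   key=lambda c: min(dist(c, s) for s in chosen))
        chosen.append(best)
    return chosen

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(200)]
euclid = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])

sample = maximin_select(pts, 10, euclid)
min_gap = min(euclid(a, b) for i, a in enumerate(sample) for b in sample[i + 1:])
print(f"min pairwise distance over 10 samples: {min_gap:.3f}")
```

In the paper's setting the distance is evaluated in the *output* (signal) space predicted by kriging, which is what spreads the database samples over the conceivable output domain.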

  5. A survey of commercial object-oriented database management systems

    Science.gov (United States)

    Atkins, John

    1992-01-01

    The object-oriented data model is the culmination of over thirty years of database research. Initially, database research focused on the need to provide information in a consistent and efficient manner to the business community. Early data models such as the hierarchical model and the network model met the goal of consistent and efficient access to data and were substantial improvements over simple file mechanisms for storing and accessing data. However, these models required highly skilled programmers to provide access to the data. Consequently, in the early 70's E.F. Codd, an IBM research computer scientist, proposed a new data model based on the simple mathematical notion of the relation. This model is known as the Relational Model. In the relational model, data is represented in flat tables (or relations) which have no physical or internal links between them. The simplicity of this model fostered the development of powerful but relatively simple query languages that made data directly accessible to the general database user. Except for large, multi-user database systems, a database professional was in general no longer necessary. Database professionals found that traditional data in the form of character data, dates, and numeric data were easily represented and managed via the relational model. Commercial relational database management systems proliferated and performance of relational databases improved dramatically. However, there was a growing community of potential database users whose needs were not met by the relational model. These users needed to store data with data types not available in the relational model and required a far richer modelling environment than that provided by the relational model. Indeed, the complexity of the objects to be represented in the model mandated a new approach to database technology. The Object-Oriented Model was the result.
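The relational model's central idea, flat tables with no physical links between them, related only declaratively at query time, can be sketched with two toy relations (the table and column names below are invented for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Two flat relations; nothing in their storage links them together.
db.execute("CREATE TABLE dept (dept_id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE emp (emp_id INTEGER, name TEXT, dept_id INTEGER)")
db.execute("INSERT INTO dept VALUES (1, 'Research')")
db.executemany("INSERT INTO emp VALUES (?, ?, ?)",
               [(10, 'Codd', 1), (11, 'Chen', 1)])

# The relationship is expressed in the query itself, not in the data layout.
rows = db.execute("""
    SELECT e.name
    FROM emp e JOIN dept d ON e.dept_id = d.dept_id
    WHERE d.name = 'Research'
    ORDER BY e.name
""").fetchall()
print(rows)  # [('Chen',), ('Codd',)]
```

This is exactly what freed users from the navigational pointer-chasing of the hierarchical and network models: the join condition replaces the hard-wired link.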

  6. Analysis of Cloud-Based Database Systems

    Science.gov (United States)

    2015-06-01

    University San Luis Obispo, 2009. Submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science. The average time for a query to complete on the production system was 136,746 microseconds; on our cloud-based system, the average was 198,875 microseconds.

  7. Database design for Physical Access Control System for nuclear facilities

    Energy Technology Data Exchange (ETDEWEB)

    Sathishkumar, T., E-mail: satishkumart@igcar.gov.in; Rao, G. Prabhakara, E-mail: prg@igcar.gov.in; Arumugam, P., E-mail: aarmu@igcar.gov.in

    2016-08-15

    Highlights: • Database design needs to be optimized and highly efficient for real-time operation. • It requires a many-to-many mapping between the Employee table and the Doors table. • This mapping typically contains thousands of records and redundant data. • The proposed novel database design reduces the redundancy and provides abstraction. • This design is incorporated with the access control system developed in-house. - Abstract: An RFID (Radio Frequency IDentification) cum biometric based two-level Access Control System (ACS) was designed and developed for providing access to vital areas of nuclear facilities. The system has both hardware [access controller] and software components [server application, the database, and the web client software]. The proposed database design enables grouping of the employees based on the hierarchy of the organization and grouping of the doors based on Access Zones (AZ). This design also illustrates the mapping between the Employee Groups (EG) and AZ. By following this approach in database design, a higher-level view can be presented to the system administrator, abstracting the inner details of the individual entities and doors. This paper describes the novel approach carried out in designing the database of the ACS.
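The grouping idea can be sketched as a schema in which the only many-to-many table links employee groups to access zones, so adding an employee or a door never touches the mapping table (all table and group names below are illustrative, not the paper's actual schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- Employees belong to a group; doors belong to an access zone.
    CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, grp TEXT);
    CREATE TABLE door (door_id INTEGER PRIMARY KEY, zone TEXT);
    -- The only many-to-many relation: group <-> zone.
    CREATE TABLE grp_zone (grp TEXT, zone TEXT);

    INSERT INTO employee VALUES (1, 'operators'), (2, 'visitors');
    INSERT INTO door VALUES (101, 'reactor'), (102, 'lobby');
    INSERT INTO grp_zone VALUES ('operators', 'reactor'),
                                ('operators', 'lobby'),
                                ('visitors', 'lobby');
""")

def has_access(emp_id, door_id):
    """Grant access iff the employee's group is mapped to the door's zone."""
    row = db.execute("""
        SELECT 1 FROM employee e
        JOIN grp_zone gz ON gz.grp = e.grp
        JOIN door d ON d.zone = gz.zone
        WHERE e.emp_id = ? AND d.door_id = ?
    """, (emp_id, door_id)).fetchone()
    return row is not None

print(has_access(1, 101), has_access(2, 101))  # True False
```

Compared with a direct employee-to-door table, the mapping table here grows with the number of group/zone pairs rather than with employees times doors, which is the redundancy reduction the design aims at.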

  8. The methodology of database design in organization management systems

    Science.gov (United States)

    Chudinov, I. L.; Osipova, V. V.; Bobrova, Y. V.

    2017-01-01

    The paper describes a unified methodology of database design for management information systems. Designing the conceptual information model for the domain area is the most important and labor-intensive stage in database design. Based on the proposed integrated approach to design, the conceptual information model and the main principles of developing relational databases are provided, and users' information needs are considered. According to the methodology, the process of designing the conceptual information model includes three basic stages, which are defined in detail. Finally, the article describes how the results of analyzing users' information needs are applied, and the rationale for the use of classifiers.

  9. VERDI: A Web Database System for Redshift Surveys

    Science.gov (United States)

    Wirth, G. D.; Patton, D. R.

    The Victoria Explorer for Redshift Databases on the Internet (VERDI) is a Web-based data retrieval system which allows users to access tabular data, images, and spectra of astronomical objects and to perform queries on the underlying database. We developed VERDI for use with the CNOC2 Field Galaxy Redshift Survey, but designed it to be generally applicable to deep galaxy redshift surveys. The software is freely available at http://astrowww.phys.uvic.ca/~cnoc, can easily be reconfigured and customized by the user, and performs well enough to support databases of many thousands of objects.

  10. The PIR integrated protein databases and data retrieval system

    Directory of Open Access Journals (Sweden)

    H Huang

    2006-01-01

    Full Text Available The Protein Information Resource (PIR provides many databases and tools to support genomic and proteomic research. PIR is a member of UniProt—Universal Protein Resource—the central repository of protein sequence and function, which maintains UniProt Knowledgebase with extensively curated annotation, UniProt Reference databases to speed sequence searches, and UniProt Archive to reflect sequence history. PIR also provides PIRSF family classification system based on evolutionary relationships of full-length proteins, and iProClass integrated database of protein family, function, and structure. These databases are easily accessible from PIR web site using a centralized data retrieval system for information retrieval and knowledge discovery.

  11. A Transactional Asynchronous Replication Scheme for Mobile Database Systems

    Institute of Scientific and Technical Information of China (English)

    丁治明; 孟小峰; 王珊

    2002-01-01

    In mobile database systems, mobility of users has a significant impact on data replication. As a result, the various replica control protocols that exist today in traditional distributed and multidatabase environments are no longer suitable. To solve this problem, a new mobile database replication scheme, the Transaction-Level Result-Set Propagation (TLRSP) model, is put forward in this paper. The conflict detection and resolution strategy based on TLRSP is discussed in detail, and the implementation algorithm is proposed. In order to compare the performance of the TLRSP model with that of other mobile replication schemes, we have developed a detailed simulation model. Experimental results show that the TLRSP model provides efficient support for replicated mobile database systems by reducing reprocessing overhead and maintaining database consistency.
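The conflict-detection step of a TLRSP-style scheme can be sketched with a per-item version counter: the mobile host propagates only its transaction's result set, and a conflict is flagged when the server copy changed since the client's read. The data layout and version field below are assumptions for illustration, not the paper's actual protocol:

```python
# Server-side replica: each logical item carries a version counter.
server = {"x": {"value": 10, "version": 3}}

def propagate(result_set):
    """Apply a mobile transaction's result set against the server copy.

    result_set: {key: (read_version, new_value)} produced on the mobile host.
    Returns the keys whose propagation conflicted.
    """
    conflicts = []
    for key, (read_version, new_value) in result_set.items():
        row = server[key]
        if row["version"] != read_version:
            conflicts.append(key)       # stale read -> hand off to resolution policy
        else:
            row["value"] = new_value    # clean propagation
            row["version"] += 1
    return conflicts

assert propagate({"x": (3, 42)}) == []            # applied cleanly
assert server["x"] == {"value": 42, "version": 4}
assert propagate({"x": (3, 99)}) == ["x"]         # concurrent update detected
```

Propagating at transaction level (the final result set) rather than shipping every intermediate write is what reduces the reprocessing overhead the abstract refers to.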

  12. Design and Implementation of a Heterogeneous Distributed Database System

    Institute of Scientific and Technical Information of China (English)

    金志权; 柳诚飞; 等

    1990-01-01

    This paper introduces a heterogeneous distributed database system called the LSZ system, where LSZ is an abbreviation of Li Shizhen, an ancient Chinese medical scientist. The LSZ system adopts the cluster as its distributed database node (or site). Each cluster consists of one or several microcomputers and one server. The paper describes its basic architecture and the prototype implementation, which includes query processing and optimization, the transaction manager, and data language translation. The system provides a uniform retrieval and update user interface through the global relational data language GRDL.

  13. Seismic Monitoring System Calibration Using Ground Truth Database

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Winston; Wagner, Robert

    2002-12-22

    Calibration of a seismic monitoring system remains a major issue due to the lack of ground truth information and uncertainties in the regional geological parameters. Rapid and accurate identification of seismic events is currently not feasible due to the absence of a fundamental framework allowing immediate access to ground truth information for many parts of the world. Precise location and high-confidence identification of regional seismic events are the primary objectives of monitoring research in seismology. In the Department of Energy Knowledge Base (KB), ground truth information addresses these objectives and will play a critical role for event relocation and identification using advanced seismic analysis tools. Maintaining the KB with systematic compilation and analysis of comprehensive sets of geophysical data from various parts of the world is vital. The goal of this project is to identify a comprehensive database for China using digital seismic waveform data that are currently unavailable. These data may be analyzed along with ground truth information that becomes available. To date, arrival times for all regional phases are determined on all events above Mb 4.5 that occurred in China in 2000 and 2001. Travel-time models are constructed to compare with existing models. Seismic attenuation models may be constructed to provide better understanding of regional wave propagation in China with spatial resolution that has not previously been obtained.

  14. The Database Driven ATLAS Trigger Configuration System

    CERN Document Server

    Martyniuk, Alex; The ATLAS collaboration

    2015-01-01

    This contribution describes the trigger selection configuration system of the ATLAS low- and high-level trigger (HLT) and the upgrades it received in preparation for LHC Run 2. The ATLAS trigger configuration system is responsible for applying the physics selection parameters for the online data taking at both trigger levels and the proper connection of the trigger lines across those levels. Here the low-level trigger consists of the already existing central trigger (CT) and the new Level-1 Topological trigger (L1Topo), which has been added for Run 2. In detail, the tasks of the configuration system during the online data taking are: application of the selection criteria (e.g. energy cuts, minimum multiplicities, trigger object correlation) at the three trigger components L1Topo, CT, and HLT; on-the-fly (e.g. rate-dependent) generation and application of prescale factors to the CT and HLT to adjust the trigger rates to the data-taking conditions, such as falling luminosity or rate spikes in the detector readout ...

  15. Reference sequence (RefSeq) database at NCBI: current status, taxonomic expansion, and functional annotation

    Science.gov (United States)

    O'Leary, Nuala A.; Wright, Mathew W.; Brister, J. Rodney; Ciufo, Stacy; Haddad, Diana; McVeigh, Rich; Rajput, Bhanu; Robbertse, Barbara; Smith-White, Brian; Ako-Adjei, Danso; Astashyn, Alexander; Badretdin, Azat; Bao, Yiming; Blinkova, Olga; Brover, Vyacheslav; Chetvernin, Vyacheslav; Choi, Jinna; Cox, Eric; Ermolaeva, Olga; Farrell, Catherine M.; Goldfarb, Tamara; Gupta, Tripti; Haft, Daniel; Hatcher, Eneida; Hlavina, Wratko; Joardar, Vinita S.; Kodali, Vamsi K.; Li, Wenjun; Maglott, Donna; Masterson, Patrick; McGarvey, Kelly M.; Murphy, Michael R.; O'Neill, Kathleen; Pujar, Shashikant; Rangwala, Sanjida H.; Rausch, Daniel; Riddick, Lillian D.; Schoch, Conrad; Shkeda, Andrei; Storz, Susan S.; Sun, Hanzhen; Thibaud-Nissen, Francoise; Tolstoy, Igor; Tully, Raymond E.; Vatsan, Anjana R.; Wallin, Craig; Webb, David; Wu, Wendy; Landrum, Melissa J.; Kimchi, Avi; Tatusova, Tatiana; DiCuccio, Michael; Kitts, Paul; Murphy, Terence D.; Pruitt, Kim D.

    2016-01-01

    The RefSeq project at the National Center for Biotechnology Information (NCBI) maintains and curates a publicly available database of annotated genomic, transcript, and protein sequence records (http://www.ncbi.nlm.nih.gov/refseq/). The RefSeq project leverages the data submitted to the International Nucleotide Sequence Database Collaboration (INSDC) against a combination of computation, manual curation, and collaboration to produce a standard set of stable, non-redundant reference sequences. The RefSeq project augments these reference sequences with current knowledge including publications, functional features and informative nomenclature. The database currently represents sequences from more than 55 000 organisms (>4800 viruses, >40 000 prokaryotes and >10 000 eukaryotes; RefSeq release 71), ranging from a single record to complete genomes. This paper summarizes the current status of the viral, prokaryotic, and eukaryotic branches of the RefSeq project, reports on improvements to data access and details efforts to further expand the taxonomic representation of the collection. We also highlight diverse functional curation initiatives that support multiple uses of RefSeq data including taxonomic validation, genome annotation, comparative genomics, and clinical testing. We summarize our approach to utilizing available RNA-Seq and other data types in our manual curation process for vertebrate, plant, and other species, and describe a new direction for prokaryotic genomes and protein name management. PMID:26553804

  16. Reference sequence (RefSeq) database at NCBI: current status, taxonomic expansion, and functional annotation.

    Science.gov (United States)

    O'Leary, Nuala A; Wright, Mathew W; Brister, J Rodney; Ciufo, Stacy; Haddad, Diana; McVeigh, Rich; Rajput, Bhanu; Robbertse, Barbara; Smith-White, Brian; Ako-Adjei, Danso; Astashyn, Alexander; Badretdin, Azat; Bao, Yiming; Blinkova, Olga; Brover, Vyacheslav; Chetvernin, Vyacheslav; Choi, Jinna; Cox, Eric; Ermolaeva, Olga; Farrell, Catherine M; Goldfarb, Tamara; Gupta, Tripti; Haft, Daniel; Hatcher, Eneida; Hlavina, Wratko; Joardar, Vinita S; Kodali, Vamsi K; Li, Wenjun; Maglott, Donna; Masterson, Patrick; McGarvey, Kelly M; Murphy, Michael R; O'Neill, Kathleen; Pujar, Shashikant; Rangwala, Sanjida H; Rausch, Daniel; Riddick, Lillian D; Schoch, Conrad; Shkeda, Andrei; Storz, Susan S; Sun, Hanzhen; Thibaud-Nissen, Francoise; Tolstoy, Igor; Tully, Raymond E; Vatsan, Anjana R; Wallin, Craig; Webb, David; Wu, Wendy; Landrum, Melissa J; Kimchi, Avi; Tatusova, Tatiana; DiCuccio, Michael; Kitts, Paul; Murphy, Terence D; Pruitt, Kim D

    2016-01-04

    The RefSeq project at the National Center for Biotechnology Information (NCBI) maintains and curates a publicly available database of annotated genomic, transcript, and protein sequence records (http://www.ncbi.nlm.nih.gov/refseq/). The RefSeq project leverages the data submitted to the International Nucleotide Sequence Database Collaboration (INSDC) against a combination of computation, manual curation, and collaboration to produce a standard set of stable, non-redundant reference sequences. The RefSeq project augments these reference sequences with current knowledge including publications, functional features and informative nomenclature. The database currently represents sequences from more than 55,000 organisms (>4800 viruses, >40,000 prokaryotes and >10,000 eukaryotes; RefSeq release 71), ranging from a single record to complete genomes. This paper summarizes the current status of the viral, prokaryotic, and eukaryotic branches of the RefSeq project, reports on improvements to data access and details efforts to further expand the taxonomic representation of the collection. We also highlight diverse functional curation initiatives that support multiple uses of RefSeq data including taxonomic validation, genome annotation, comparative genomics, and clinical testing. We summarize our approach to utilizing available RNA-Seq and other data types in our manual curation process for vertebrate, plant, and other species, and describe a new direction for prokaryotic genomes and protein name management.

  17. Intelligent high-speed cutting database system development

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    In this paper, the components of a high-speed cutting system are analyzed first. The component variables of the high-speed cutting system are classified into four types: uncontrolled variables, process variables, control variables, and output variables. The relationships and interactions among these variables are discussed. Then, by analyzing and comparing the intelligent reasoning methods in common use, hybrid reasoning is employed to build the high-speed cutting database system, and the data structures of the high-speed cutting case base and databases are determined. Finally, the component parts and working process of the hybrid-reasoning-based high-speed cutting database system are presented.
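One half of the hybrid reasoning described above is case-based retrieval: look up the stored cutting case most similar to the current job. A minimal sketch of that idea, with invented field names and parameter values (the paper does not publish its data structures):

```python
# Hypothetical cutting-case base; materials and parameters are illustrative only.
cases = [
    {"material": "Al7075",   "hardness": 150, "speed_m_min": 1200, "feed_mm_tooth": 0.10},
    {"material": "Ti6Al4V",  "hardness": 330, "speed_m_min": 150,  "feed_mm_tooth": 0.05},
    {"material": "AISI1045", "hardness": 210, "speed_m_min": 400,  "feed_mm_tooth": 0.08},
]

def nearest_case(query_hardness, case_base):
    """Retrieve the stored cutting case whose hardness is closest to the query."""
    return min(case_base, key=lambda c: abs(c["hardness"] - query_hardness))

best = nearest_case(200, cases)
print(best["material"])  # AISI1045 is the closest match by hardness
```

A real system would combine this retrieval step with rule-based reasoning to adapt the retrieved parameters, which is the "hybrid" part of the approach.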

  18. Intrusion-Tolerant Based Survivable Model of Database System

    Institute of Scientific and Technical Information of China (English)

    ZHU Jianming; WANG Chao; MA Jianfeng

    2005-01-01

    Survivability has become increasingly important as society's critical infrastructures grow more dependent on computers. Intrusion-tolerant systems extend traditional secure systems so that they can survive or operate through attacks, making intrusion tolerance an approach for achieving survivability. This paper proposes a survivable model of database systems based on intrusion-tolerant mechanisms. The model is built on a three-layer security architecture: intrusions are defended against at the outer layer, detected at the middle layer, and tolerated at the inner layer. We utilize both redundancy and diversity techniques and threshold secret sharing schemes to implement the survivability of the database and to protect confidential data from compromised servers in the presence of intrusions. Compared with existing schemes, our approach realizes security and robustness for the key functions of a database system through an integrated security strategy and multiple security measures.
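As a rough illustration of the threshold secret sharing the model relies on, here is a minimal (k, n) Shamir-style sketch: any k of the n shares reconstruct the secret, so up to n − k compromised servers reveal nothing. The paper does not specify its scheme or parameters; everything below is illustrative:

```python
import random

P = 2**127 - 1  # a Mersenne prime; all polynomial arithmetic is done mod P

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
print(reconstruct(shares[:3]))  # 123456789
```

Any three of the five shares suffice; two shares give no information about the secret.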

  19. Flybrain neuron database: a comprehensive database system of the Drosophila brain neurons.

    Science.gov (United States)

    Shinomiya, Kazunori; Matsuda, Keiji; Oishi, Takao; Otsuna, Hideo; Ito, Kei

    2011-04-01

    The long history of neuroscience has accumulated information about numerous types of neurons in the brains of various organisms. Because such neurons have been reported in diverse publications without a controlled format, it is not easy to keep track of all the known neurons in a particular nervous system. To address this issue we constructed an online database called the Flybrain Neuron Database (Flybrain NDB), which serves as a platform to collect and provide information about all the types of neurons published so far in the brain of Drosophila melanogaster. Projection patterns of the identified neurons in diverse areas of the brain were recorded in a unified format, with text-based descriptions as well as images and movies wherever possible. In some cases projection sites and the distribution of the post- and presynaptic sites were determined in greater detail than described in the original publication. Information about the labeling patterns of various antibodies and expression driver strains used to visualize identified neurons is provided as a separate sub-database. We also implemented a novel visualization tool with which users can interactively examine three-dimensional reconstructions of the confocal serial-section images with desired viewing angles and cross sections. Comprehensive collection and versatile search functions for the anatomical information reported in diverse publications make it possible to analyze possible connectivity between different brain regions. We analyzed the preferential connectivity among optic lobe layers and the plausible olfactory sensory map in the lateral horn to show the usefulness of such a database.

  20. The Eruption Forecasting Information System (EFIS) database project

    Science.gov (United States)

    Ogburn, Sarah; Harpel, Chris; Pesicek, Jeremy; Wellik, Jay; Pallister, John; Wright, Heather

    2016-04-01

    The Eruption Forecasting Information System (EFIS) project is a new initiative of the U.S. Geological Survey-USAID Volcano Disaster Assistance Program (VDAP) with the goal of enhancing VDAP's ability to forecast the outcome of volcanic unrest. The EFIS project seeks to: (1) move away from reliance on collective memory toward probability estimation using databases; (2) create databases useful for pattern recognition and for answering common VDAP questions, e.g., how commonly does unrest lead to eruption? how commonly do phreatic eruptions portend magmatic eruptions, and what is the range of antecedence times?; (3) create generic probabilistic event trees using global data for different volcano 'types'; (4) create background, volcano-specific, probabilistic event trees for frequently active or particularly hazardous volcanoes in advance of a crisis; and (5) quantify and communicate uncertainty in probabilities. A major component of the project is the global EFIS relational database, which contains multiple modules designed to aid in the construction of probabilistic event trees and to answer common questions that arise during volcanic crises. The primary module contains chronologies of volcanic unrest, including the timing of phreatic eruptions, column heights, eruptive products, etc., and will be initially populated using chronicles of eruptive activity from Alaskan volcanic eruptions in the GeoDIVA database (Cameron et al. 2013). This database module allows us to query across other global databases such as the WOVOdat database of monitoring data and the Smithsonian Institution's Global Volcanism Program (GVP) database of eruptive histories and volcano information. The EFIS database is in the early stages of development and population; thus, this contribution also serves as a request for feedback from the community.
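The first question the abstract poses ("how commonly does unrest lead to eruption?") reduces to a conditional probability over a chronology table. A toy sketch, with an invented schema and invented episodes rather than EFIS data:

```python
# Hypothetical unrest chronology; the real EFIS schema is not published here.
episodes = [
    {"volcano": "A", "unrest": True, "eruption": True},
    {"volcano": "B", "unrest": True, "eruption": False},
    {"volcano": "C", "unrest": True, "eruption": True},
    {"volcano": "D", "unrest": True, "eruption": False},
]

def p_eruption_given_unrest(rows):
    """Fraction of unrest episodes that culminated in an eruption."""
    unrest = [r for r in rows if r["unrest"]]
    return sum(r["eruption"] for r in unrest) / len(unrest)

print(p_eruption_given_unrest(episodes))  # 0.5 for this toy table
```

Estimates like this one become the branch probabilities in a probabilistic event tree.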

  1. Quality assurance database for the CBM silicon tracking system

    Energy Technology Data Exchange (ETDEWEB)

    Lymanets, Anton [Physikalisches Institut, Universitaet Tuebingen (Germany); Collaboration: CBM-Collaboration

    2015-07-01

    The Silicon Tracking System is the main tracking device of the CBM experiment at FAIR. Its construction includes the production, quality assurance, and assembly of a large number of components, e.g., 106 carbon-fiber support structures, 1300 silicon microstrip sensors, 16.6k readout chips, analog microcables, etc. Detector construction is distributed over several production and assembly sites and calls for a database that is extensible and allows tracing the components, integrating the test data, and monitoring component statuses and data flow. A possible implementation of the above-mentioned requirements is being developed at GSI (Darmstadt) based on the FAIR DB Virtual Database Library, which provides connectivity to common SQL database engines (PostgreSQL, Oracle, etc.). The data structure and database architecture, as well as the status of the implementation, are discussed.

  2. Current scenario of forensic DNA databases in or outside India and their relative risk

    Directory of Open Access Journals (Sweden)

    Sachil Kumar

    2016-03-01

    DNA technology has proved to be a worthy investigative tool, exonerating innocent citizens and identifying those responsible for serious crimes. In a populous country like India there is a clear requirement for these types of databases. The Union government is working on a new version of legislation that seeks to set up a national DNA database of 'offenders'. As expected with the great success of forensic DNA databases, new challenges are emerging. To rise to these challenges, different strategies have been proposed for increasing search capabilities, the implementation of which is ongoing. The Federal Bureau of Investigation (FBI) in the US has proposed adding more autosomal short tandem repeat (STR) loci to its current core set. The constant growth in the size of forensic DNA databases raises issues about the criteria for inclusion and retention, and doubts about the efficiency, commensurability, and privacy implications of such large personal data collections. These difficulties spill beyond the level of simple privacy and confidentiality issues.

  3. An Expert System Helps Students Learn Database Design

    Science.gov (United States)

    Post, Gerald V.; Whisenand, Thomas G.

    2005-01-01

    Teaching and learning database design is difficult for both instructors and students. Students need to solve many problems with feedback and corrections. A Web-based specialized expert system was created to enable students to create designs online and receive immediate feedback. An experiment testing the system shows that it significantly enhances…

  4. ADVICE--Educational System for Teaching Database Courses

    Science.gov (United States)

    Cvetanovic, M.; Radivojevic, Z.; Blagojevic, V.; Bojovic, M.

    2011-01-01

    This paper presents a Web-based educational system, ADVICE, that helps students to bridge the gap between database management system (DBMS) theory and practice. The usage of ADVICE is presented through a set of laboratory exercises developed to teach students conceptual and logical modeling, SQL, formal query languages, and normalization. While…

  5. Research of database-based modeling for mining management system

    Institute of Scientific and Technical Information of China (English)

    WU Hai-feng; JIN Zhi-xin; BAI Xi-jun

    2005-01-01

    A method is put forward to construct simulation models automatically with database-based automatic modeling (DBAM) for mining systems. A standard simulation model linked with an open-cut automobile dispatch system was designed. The relationships among the model components were analyzed, and a model maker was designed to realize the automatic generation of new model programs.

  7. Content-based image database system for epilepsy.

    Science.gov (United States)

    Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid; Fotouhi, Farshad; Elisevich, Kost

    2005-09-01

    We have designed and implemented a human brain multi-modality database system with content-based image management, navigation and retrieval support for epilepsy. The system consists of several modules including a database backbone, brain structure identification and localization, segmentation, registration, visual feature extraction, clustering/classification and query modules. Our newly developed anatomical landmark localization and brain structure identification method facilitates navigation through the image data and extracts useful information for the segmentation, registration and query modules. The database stores T1-, T2-weighted and FLAIR MRI and ictal/interictal SPECT modalities with associated clinical data. We confine the visual feature extractors within anatomical structures to support semantically rich content-based procedures. The proposed system serves as a research tool to evaluate a vast number of hypotheses regarding the condition, such as resection of the hippocampus with a relatively small volume and high average signal intensity on FLAIR. Once the database is populated, data mining tools can discover partially invisible correlations between different modalities of data modeled in the database schema. The design and implementation aspects of the proposed system are the main focus of this paper.

  8. Optics Toolbox: An Intelligent Relational Database System For Optical Designers

    Science.gov (United States)

    Weller, Scott W.; Hopkins, Robert E.

    1986-12-01

    Optical designers were among the first to use the computer as an engineering tool. Powerful programs have been written to do ray-trace analysis, third-order layout, and optimization. However, newer computing techniques such as database management and expert systems have not been adopted by the optical design community. For the purpose of this discussion we will define a relational database system as a database which allows the user to specify his requirements using logical relations. For example, to search for all lenses in a lens database with an F/number less than two and a half field of view near 28 degrees, you might enter the following: FNO < 2 AND HFOV ~ 28. An expert system, by contrast, is driven by rules which are written in an English-like language and which are easily modified by the user. An example rule is: IF require microscope objective in air and require NA > 0.9 THEN suggest the use of an oil immersion objective. The heart of the expert system is the rule interpreter, sometimes called an inference engine, which reads the rules and forms conclusions based on them. The use of a relational database system containing lens prototypes seems to be a viable prospect. However, it is not clear that expert systems have a place in optical design. In domains such as medical diagnosis and petrology, expert systems are flourishing. These domains are quite different from optical design, however, because optical design is a creative process, and the rules are difficult to write down. We do think that an expert system is feasible in the area of first-order layout, which is sufficiently diagnostic in nature to permit useful rules to be written. This first-order expert would emulate an expert designer as he interacted with a customer for the first time: asking the right questions, forming conclusions, and making suggestions. With these objectives in mind, we have developed the Optics Toolbox. Optics Toolbox is actually two programs in one: it is a powerful relational database system with twenty-one search parameters, four search modes, and multi-database
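The rule interpreter described above can be sketched in a few lines: each rule has a set of conditions, and the engine fires every rule whose conditions are all satisfied by the known facts. The rule contents below paraphrase the abstract's example; the encoding is invented:

```python
# Toy inference engine in the spirit of the first-order expert described above.
rules = [
    {"if": {"objective_in_air", "na_above_0.9"}, "then": "suggest oil immersion objective"},
    {"if": {"fno_below_2"},                      "then": "suggest fast lens prototypes"},
]

def infer(facts, rules):
    """Return the suggestion of every rule whose conditions are all in `facts`."""
    return [r["then"] for r in rules if r["if"] <= facts]

print(infer({"objective_in_air", "na_above_0.9"}, rules))
```

A production system would also feed conclusions back into the fact set (forward chaining), but the subset test above is the core of the matching step.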

  9. Generating Shifting Workloads to Benchmark Adaptability in Relational Database Systems

    Science.gov (United States)

    Rabl, Tilmann; Lang, Andreas; Hackl, Thomas; Sick, Bernhard; Kosch, Harald

    A large body of research concerns the adaptability of database systems. Many commercial systems already contain autonomic processes that adapt configurations as well as data structures and data organization. Yet there is virtually no way to fairly measure the quality of such optimizations. While standard benchmarks have been developed that simulate real-world database applications very precisely, none of them considers variations in workloads produced by human factors. Today's benchmarks test the performance of database systems by measuring peak performance on homogeneous request streams. Nevertheless, in systems with user interaction, access patterns are constantly shifting. We present a benchmark that simulates a web information system with the interaction of large user groups. It is based on the analysis of a real online eLearning management system with 15,000 users. The benchmark considers the temporal dependency of user interaction. The main focus is to measure the adaptability of a database management system under shifting workloads. We give details on our design approach, which uses sophisticated pattern analysis and data mining techniques.
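A shifting workload, as opposed to the homogeneous request streams the abstract criticizes, can be sketched as a query mix whose proportions drift over time. The drift schedule and read/write split below are invented for illustration:

```python
import random

def workload(n_requests, seed=0):
    """Generate a request stream whose read/write mix shifts over time."""
    rng = random.Random(seed)
    stream = []
    for t in range(n_requests):
        # Reads dominate early in the run; writes take over toward the end.
        read_share = 0.9 - 0.6 * (t / n_requests)
        stream.append("read" if rng.random() < read_share else "write")
    return stream

w = workload(10_000)
print(w[:5], w.count("write"))
```

Replaying such a stream against a DBMS lets one observe whether its autonomic tuning tracks the changing access pattern rather than just peak throughput.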

  10. The design and implementation of pedagogical software for multi-backend/multi-lingual database system.

    OpenAIRE

    Little, Craig W.

    1987-01-01

    Approved for public release; distribution is unlimited. Traditionally, courses in database systems do not use pedagogical software for the purpose of teaching those systems, despite the progress made in modern database architecture. In this thesis, we present a working document to assist in the instruction of a new database system, the Multi-Backend Database System (MBDS) and the Multi-Lingual Database System (MLDS). As the course of instruction describes the creatio...

  11. Design of BEPC Ⅱ bunch current monitor system

    Institute of Scientific and Technical Information of China (English)

    ZHANG Lei; MA Hui-Zhou; YUE Jun-Hui; LEI Ge; CAO Jian-She; MA Li

    2008-01-01

    BEPC Ⅱ is an electron-positron collider designed to run with multiple bunches and high beam current. The accelerator consists of an electron ring, a positron ring and a linear injector. In order to achieve the target luminosity and implement equal-bunch-charge injection, the Bunch Current Monitor (BCM) system was built at BEPC Ⅱ. The BCM system consists of three parts: the front-end circuit, the bunch current acquisition system and the bucket selection system. The control software of the BCM is based on VxWorks and EPICS. With the help of the BCM system, the bunch current in each bucket can be monitored in the Central Control Room. The BEPC Ⅱ timing system can also use the bunch current database to decide which bucket needs to be refilled to implement "top-off" injection.
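The bucket-selection idea above reduces to picking the bucket whose stored current is lowest, so that refills keep the bunch charges equal. A minimal sketch with invented current values (the real system reads these from the BCM database):

```python
def next_bucket(bunch_currents):
    """Return the index of the bucket most in need of refilling."""
    return min(range(len(bunch_currents)), key=lambda i: bunch_currents[i])

# Hypothetical per-bucket currents in mA; bucket 3 holds the least charge.
currents_mA = [1.2, 0.9, 1.1, 0.7, 1.0]
print(next_bucket(currents_mA))  # 3
```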

  12. (BARS) -- Bibliographic Retrieval System Sandia shock compression (SSC) database shock physics index (SPHINX) database. Volume 3, UNIX version Systems Guide

    Energy Technology Data Exchange (ETDEWEB)

    von Laven, G.M. [Advanced Software Engineering, Madison, AL (United States); Herrmann, W. [Sandia National Labs., Albuquerque, NM (United States)

    1993-09-01

    The Bibliographic Retrieval System (BARS) is a database management system specially designed to store and retrieve bibliographic references and track documents. The system uses INGRES to manage the database and user interface. It uses forms for journal articles, books, conference proceedings, theses, technical reports, letters, memos, and visual aids, as well as a miscellaneous form which can be used for data sets or any other material that can be assigned an access or file number. Sorted output resulting from flexible Boolean searches can be printed or saved in files, which can be inserted in reference lists for use with word processors.
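A Boolean search with sorted output, as described above, maps directly onto SQL. A small sketch using an in-memory SQLite table in place of INGRES; the table and column names are invented, not BARS's actual schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE refs_tbl (access_no INTEGER, kind TEXT, year INTEGER, title TEXT)")
con.executemany("INSERT INTO refs_tbl VALUES (?, ?, ?, ?)", [
    (1, "journal", 1989, "Shock compression of aluminum"),
    (2, "report",  1991, "Hugoniot data review"),
    (3, "journal", 1993, "Spall strength measurements"),
])

# Boolean search: journal articles from 1990 onward, sorted by year.
rows = con.execute(
    "SELECT access_no FROM refs_tbl WHERE kind = 'journal' AND year >= 1990 ORDER BY year"
).fetchall()
print(rows)  # [(3,)]
```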

  13. Cluster based parallel database management system for data intensive computing

    Institute of Scientific and Technical Information of China (English)

    Jianzhong LI; Wei ZHANG

    2009-01-01

    This paper describes a computer-cluster based parallel database management system (DBMS), InfiniteDB, developed by the authors. InfiniteDB aims to efficiently support data-intensive computing in response to the rapid growth in database size and the need for high-performance analysis of massive databases. It can be executed efficiently in computing systems composed of thousands of computers, such as cloud computing systems. It supports intra-query, inter-query, intra-operation, inter-operation and pipelined parallelism. It provides effective strategies for managing massive databases, including multiple data declustering methods, declustering-aware algorithms for relational and other database operations, and an adaptive query optimization method. It also provides parallel data warehousing and data mining functions, a coordinator-wrapper mechanism to support the integration of heterogeneous information resources on the Internet, and fault-tolerant and resilient infrastructures. It has been used in many applications and has proved quite effective for data-intensive computing.
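Declustering, one of the data-placement strategies mentioned above, spreads a table's rows across nodes so that scans and joins can run in parallel. A minimal hash-declustering sketch (InfiniteDB's actual methods are not detailed in the abstract; this is the generic technique):

```python
def decluster(rows, n_nodes, key):
    """Assign each row to a node by hashing its partitioning key."""
    nodes = [[] for _ in range(n_nodes)]
    for row in rows:
        nodes[hash(row[key]) % n_nodes].append(row)
    return nodes

rows = [{"id": i, "val": i * i} for i in range(10)]
nodes = decluster(rows, 3, "id")
print([len(n) for n in nodes])  # rows spread roughly evenly across 3 nodes
```

Declustering-aware operators then exploit the fact that rows with equal keys land on the same node, e.g. to perform local joins without shuffling.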

  14. CancerHSP: anticancer herbs database of systems pharmacology

    Science.gov (United States)

    Tao, Weiyang; Li, Bohui; Gao, Shuo; Bai, Yaofei; Shar, Piar Ali; Zhang, Wenjuan; Guo, Zihu; Sun, Ke; Fu, Yingxue; Huang, Chao; Zheng, Chunli; Mu, Jiexin; Pei, Tianli; Wang, Yuan; Li, Yan; Wang, Yonghua

    2015-06-01

    The numerous natural products and their bioactivity potentially afford an extraordinary resource for new drug discovery and have been employed in cancer treatment. However, the underlying pharmacological mechanisms of most natural anticancer compounds remain elusive, which has become one of the major obstacles in developing novel effective anticancer agents. Here, to address these unmet needs, we developed an anticancer herbs database of systems pharmacology (CancerHSP), which records anticancer-herb-related information through manual curation. Currently, CancerHSP contains 2439 anticancer herbal medicines with 3575 anticancer ingredients. For each ingredient, the molecular structure and nine key ADME parameters are provided. Moreover, we also provide the anticancer activities of these compounds based on 492 different cancer cell lines. Further, the protein targets of the compounds are predicted by state-of-the-art methods or collected from the literature. CancerHSP will help reveal the molecular mechanisms of natural anticancer products and accelerate anticancer drug development, especially facilitating future investigations on drug repositioning and drug discovery. CancerHSP is freely available on the web at http://lsp.nwsuaf.edu.cn/CancerHSP.php.

  15. CancerHSP: anticancer herbs database of systems pharmacology

    Science.gov (United States)

    Tao, Weiyang; Li, Bohui; Gao, Shuo; Bai, Yaofei; Shar, Piar Ali; Zhang, Wenjuan; Guo, Zihu; Sun, Ke; Fu, Yingxue; Huang, Chao; Zheng, Chunli; Mu, Jiexin; Pei, Tianli; Wang, Yuan; Li, Yan; Wang, Yonghua

    2015-01-01

    The numerous natural products and their bioactivity potentially afford an extraordinary resource for new drug discovery and have been employed in cancer treatment. However, the underlying pharmacological mechanisms of most natural anticancer compounds remain elusive, which has become one of the major obstacles in developing novel effective anticancer agents. Here, to address these unmet needs, we developed an anticancer herbs database of systems pharmacology (CancerHSP), which records anticancer-herb-related information through manual curation. Currently, CancerHSP contains 2439 anticancer herbal medicines with 3575 anticancer ingredients. For each ingredient, the molecular structure and nine key ADME parameters are provided. Moreover, we also provide the anticancer activities of these compounds based on 492 different cancer cell lines. Further, the protein targets of the compounds are predicted by state-of-the-art methods or collected from the literature. CancerHSP will help reveal the molecular mechanisms of natural anticancer products and accelerate anticancer drug development, especially facilitating future investigations on drug repositioning and drug discovery. CancerHSP is freely available on the web at http://lsp.nwsuaf.edu.cn/CancerHSP.php. PMID:26074488

  16. Towards Platform Independent Database Modelling in Enterprise Systems

    OpenAIRE

    Ellison, Martyn Holland; Calinescu, Radu; Paige, Richard F.

    2016-01-01

    Enterprise software systems are prevalent in many organisations; typically they are data-intensive and manage customer, sales, or other important data. When an enterprise system needs to be modernised or migrated (e.g., to the cloud), it is necessary to understand the structure of this data and how it is used. We have developed a tool-supported approach to model database structure, query patterns, and growth patterns. Compared to existing work, our tool offers increased system support and exten...

  17. Rhode Island Water Supply System Management Plan Database (WSSMP-Version 1.0)

    Science.gov (United States)

    Granato, Gregory E.

    2004-01-01

    In Rhode Island, the availability of water of sufficient quality and quantity to meet current and future environmental and economic needs is vital to life and the State's economy. Water suppliers, the Rhode Island Water Resources Board (RIWRB), and other State agencies responsible for water resources in Rhode Island need information about available resources, the water-supply infrastructure, and water use patterns. These decision makers need historical, current, and future water-resource information. In 1997, the State of Rhode Island formalized a system of Water Supply System Management Plans (WSSMPs) to characterize and document relevant water-supply information. All major water suppliers (those that obtain, transport, purchase, or sell more than 50 million gallons of water per year) are required to prepare, maintain, and carry out WSSMPs. An electronic database for this WSSMP information has been deemed necessary by the RIWRB for water suppliers and State agencies to consistently document, maintain, and interpret the information in these plans. Availability of WSSMP data in standard formats will allow water suppliers and State agencies to improve the understanding of water-supply systems and to plan for future needs or water-supply emergencies. In 2002, however, the Rhode Island General Assembly passed a law that classifies some of the WSSMP information as confidential to protect the water-supply infrastructure from potential terrorist threats. Therefore the WSSMP database was designed for an implementation method that will balance security concerns with the information needs of the RIWRB, suppliers, other State agencies, and the public. A WSSMP database was developed by the U.S. Geological Survey in cooperation with the RIWRB. The database was designed to catalog WSSMP information in a format that would accommodate synthesis of current and future information about Rhode Island's water-supply infrastructure. This report documents the design and implementation of

  18. YUCSA: A CLIPS expert database system to monitor academic performance

    Science.gov (United States)

    Toptsis, Anestis A.; Ho, Frankie; Leindekar, Milton; Foon, Debra Low; Carbonaro, Mike

    1991-01-01

    The York University CLIPS Student Administrator (YUCSA), an expert database system implemented in C Language Integrated Processing System (CLIPS), for monitoring the academic performance of undergraduate students at York University, is discussed. The expert system component in the system has already been implemented for two major departments, and it is under testing and enhancement for more departments. Also, more elaborate user interfaces are under development. We describe the design and implementation of the system, problems encountered, and immediate future plans. The system has excellent maintainability and it is very efficient, taking less than one minute to complete an assessment of one student.

  19. Integrity control in relational database systems - an overview

    NARCIS (Netherlands)

    Grefen, Paul W.P.J.; Apers, Peter M.G.

    1993-01-01

    This paper gives an overview of research regarding integrity control, or integrity constraint handling, in relational database management systems. The topic of constraint handling is discussed from two points of view. First, constraint handling is discussed by identifying a number of important research…
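The simplest form of the integrity control surveyed above is declarative: the DBMS itself rejects any update that would violate a stated constraint. A minimal illustration with SQLite and an invented schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# The CHECK clause is the integrity constraint; the DBMS enforces it on every write.
con.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER CHECK (balance >= 0))")
con.execute("INSERT INTO account VALUES (1, 100)")

try:
    con.execute("UPDATE account SET balance = -50 WHERE id = 1")
    violated = False
except sqlite3.IntegrityError:
    violated = True  # the constraint fired and the update was rejected

print(violated, con.execute("SELECT balance FROM account").fetchone()[0])
```

Research in this area concerns, among other things, when and how such constraints are checked (immediately, deferred, or via compiled-in tests) without sacrificing transaction throughput.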

  20. National Levee Database, series information for the current inventory of the Nation's levees.

    Data.gov (United States)

    Federal Geographic Data Committee — The National Levee Database is an authoritative database that describes the location and condition of the Nation's levees. The database contains 21 feature classes...

  1. 9th Asian Conference on Intelligent Information and Database Systems

    CERN Document Server

    Nguyen, Ngoc; Shirai, Kiyoaki

    2017-01-01

    This book presents recent research in intelligent information and database systems. The carefully selected contributions were initially accepted for presentation as posters at the 9th Asian Conference on Intelligent Information and Database Systems (ACIIDS 2017), held in April 2017 in Kanazawa, Japan. While the contributions are of an advanced scientific level, several are accessible to non-expert readers. The book brings together 47 chapters divided into six main parts: • Part I. From Machine Learning to Data Mining. • Part II. Big Data and Collaborative Decision Support Systems, • Part III. Computer Vision Analysis, Detection, Tracking and Recognition, • Part IV. Data-Intensive Text Processing, • Part V. Innovations in Web and Internet Technologies, and • Part VI. New Methods and Applications in Information and Software Engineering. The book is an excellent resource for researchers and those working in algorithmics, artificial and computational intelligence, collaborative systems, decisio...

  2. lexiDB: a scalable corpus database management system

    OpenAIRE

    Coole, Matt; Rayson, Paul Edward; Mariani, John Amedeo

    2016-01-01

    lexiDB is a scalable corpus database management system designed to fulfill corpus linguistics retrieval queries on multi-billion-word multiply-annotated corpora. It is based on a distributed architecture that allows the system to scale out to support ever larger text collections. This paper presents an overview of the architecture behind lexiDB as well as a demonstration of its functionality. We present lexiDB's performance metrics based on the AWS (Amazon Web Services) infrastructure with tw...

  3. Developing genomic knowledge bases and databases to support clinical management: current perspectives.

    Science.gov (United States)

    Huser, Vojtech; Sincan, Murat; Cimino, James J

    2014-01-01

    Personalized medicine, the ability to tailor diagnostic and treatment decisions for individual patients, is seen as the evolution of modern medicine. We characterize here the informatics resources available today or envisioned in the near future that can support clinical interpretation of genomic test results. We assume a clinical sequencing scenario (germline whole-exome sequencing) in which a clinical specialist, such as an endocrinologist, needs to tailor patient management decisions within his or her specialty (targeted findings) but relies on a genetic counselor to interpret off-target incidental findings. We characterize the genomic input data and list various types of knowledge bases that provide genomic knowledge for generating clinical decision support. We highlight the need for patient-level databases with detailed lifelong phenotype content in addition to genotype data and provide a list of recommendations for personalized medicine knowledge bases and databases. We conclude that no single knowledge base can currently support all aspects of personalized recommendations and that consolidation of several current resources into larger, more dynamic and collaborative knowledge bases may offer a future path forward.

  4. Dataspace: an automated visualization system for large databases

    Science.gov (United States)

    Petajan, Eric D.; Jean, Yves D.; Lieuwen, Dan; Anupam, Vinod

    1997-04-01

    DataSpace is a multi-platform software system for easily visualizing relational databases using a set of flexible 3D graphics tools. Typically, five attributes are selected for a given visualization session, where two of the attributes are used to generate 2D plots and the other three attributes are used to position the 2D plots in a regular 3D lattice. Mouse-based 3D navigation with constraints allows the user to see the 'forest and the trees' without getting 'lost in space'. DataSpace uses the Structured Query Language to allow connections to popular database systems. DataSpace also incorporates a variety of additional tools, e.g., aggregation, data 'drill down', multidimensional scaling, variable transparency, query by example, and display of graphics from external applications. Labeling of node contents is automatic. 3D stroke fonts are used to provide readable yet scalable text in a 3D environment. Since interactive 3D navigation is essential to DataSpace, we have incorporated several methods for adaptively reducing graphical detail without losing information when the host machine is overloaded. DataSpace has been used to visualize databases containing over 1 million records with interactive performance. In particular, large databases containing stock price information and telecommunications customer profiles have been analyzed using DataSpace.

  5. A HYBRID INTRUSION PREVENTION SYSTEM (HIPS FOR WEB DATABASE SECURITY

    Directory of Open Access Journals (Sweden)

    Eslam Mohsin Hassib

    2010-07-01

    Full Text Available Web database security is a challenging issue that should be taken into consideration when designing and building business-based web applications. Those applications usually include critical processes, such as electronic-commerce web applications that involve money transfer via Visa or MasterCard. Security is also a critical issue in other web-based applications, such as sites related to military weapons companies and national security. The main contribution of this paper is to introduce a new web database security model that combines a triple system: (i) the Host Identity Protocol (HIP) in a new authentication method called DSUC (Data Security Unique Code); (ii) strong filtering rules that detect intruders with high accuracy; and (iii) a real-time monitoring system that employs the Uncertainty Degree Model (UDM) using fuzzy set theory. It was shown that the combination of these three powerful security components results in a very strong security model. Accordingly, the proposed web database security model has the ability to detect and provide real-time prevention of intruder access with high precision. Experimental results have shown that the proposed model introduces satisfactory web database protection levels, which in some cases detect and prevent more than 93% of the intruders.
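    The fuzzy-set monitoring idea can be sketched with a toy "uncertainty degree" computation. The paper's actual UDM formulation is not given in the abstract, so the membership function and score scale below are illustrative assumptions only:

```python
# A minimal fuzzy-set sketch of an "uncertainty degree" for request scoring.
# The membership function and thresholds are invented for illustration.

def mu_intruder(anomaly_score):
    """Membership in the fuzzy set 'intruder': 0 below 0.3,
    1 above 0.7, linear in between."""
    if anomaly_score <= 0.3:
        return 0.0
    if anomaly_score >= 0.7:
        return 1.0
    return (anomaly_score - 0.3) / 0.4

def uncertainty_degree(anomaly_score):
    """Highest when membership is near 0.5, i.e. maximally ambiguous."""
    mu = mu_intruder(anomaly_score)
    return 1.0 - abs(2.0 * mu - 1.0)

print(uncertainty_degree(0.5))  # ambiguous request -> high uncertainty
print(uncertainty_degree(0.9))  # clearly an intruder -> low uncertainty
```

    A monitoring system along these lines would flag high-uncertainty requests for closer inspection rather than hard-blocking them.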

  6. Documentation for the U.S. Geological Survey Public-Supply Database (PSDB): a database of permitted public-supply wells, surface-water intakes, and systems in the United States

    Science.gov (United States)

    Price, Curtis V.; Maupin, Molly A.

    2014-01-01

    The U.S. Geological Survey (USGS) has developed a database containing information about wells, surface-water intakes, and distribution systems that are part of public water systems across the United States, its territories, and possessions. Programs of the USGS such as the National Water Census, the National Water Use Information Program, and the National Water-Quality Assessment Program all require a complete and current inventory of public water systems, the sources of water used by those systems, and the size of populations served by the systems across the Nation. Although the U.S. Environmental Protection Agency’s Safe Drinking Water Information System (SDWIS) database already exists as the primary national Federal database for information on public water systems, the Public-Supply Database (PSDB) was developed to add value to SDWIS data with enhanced location and ancillary information, and to provide links to other databases, including the USGS’s National Water Information System (NWIS) database.

  7. NMO-DBr: the Brazilian Neuromyelitis Optica Database System

    Directory of Open Access Journals (Sweden)

    Marco A. Lana-Peixoto

    2011-08-01

    Full Text Available OBJECTIVE: To present the Brazilian Neuromyelitis Optica Database System (NMO-DBr), a database system which collects, stores, retrieves, and analyzes information from patients with NMO and NMO-related disorders. METHOD: NMO-DBr uses Flux, a LIMS (Laboratory Information Management System), for data management. We used information from medical records of patients with NMO spectrum disorders and NMO variants, the latter defined by the presence of neurological symptoms associated with typical lesions on brain magnetic resonance imaging (MRI) or aquaporin-4 antibody seropositivity. RESULTS: NMO-DBr contains data related to patient identification, symptoms, associated conditions, index events, recurrences, family history, visual and spinal cord evaluation, disability, cerebrospinal fluid and blood tests, MRI, optical coherence tomography, diagnosis, and treatment. It guarantees confidentiality, performs cross-checking, and supports statistical analysis. CONCLUSION: NMO-DBr is a tool which guides professionals to take the history, record, and analyze information, making medical practice more consistent and improving research in the area.

  8. Coordinate Systems Integration for Craniofacial Database from Multimodal Devices

    Directory of Open Access Journals (Sweden)

    Deni Suwardhi

    2005-05-01

    Full Text Available This study presents a data registration method for craniofacial spatial data of different modalities. The data consists of three-dimensional (3D) vector and raster data models and is stored in an object-relational database. The data capture devices are a laser scanner, CT (Computed Tomography) scan, and CR (Close Range) Photogrammetry. The objective of the registration is to transform the data from various coordinate systems into a single 3D Cartesian coordinate system. The standard error of the registration obtained from the multimodal imaging devices using a 3D affine transformation is in the range of 1-2 mm. This study is a step forward for storing the craniofacial spatial data in one reference system in the database.

  9. Coordinate systems integration for development of malaysian craniofacial database.

    Science.gov (United States)

    Rajion, Zainul; Suwardhi, Deni; Setan, Halim; Chong, Albert; Majid, Zulkepli; Ahmad, Anuar; Rani Samsudin, Ab; Aziz, Izhar; Wan Harun, W A R

    2005-01-01

    This study presents a data registration method for craniofacial spatial data of different modalities. The data consists of three-dimensional (3D) vector and raster data models and is stored in an object-relational database. The data capture devices are a laser scanner, CT (Computed Tomography) scan, and CR (Close Range) Photogrammetry. The objective of the registration is to transform the data from various coordinate systems into a single 3D Cartesian coordinate system. The standard error of the registration obtained from the multimodal imaging devices using a 3D affine transformation is in the range of 1-2 mm. This study is a step forward for storing the spatial craniofacial data in one reference system in the database.
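    The 3D affine registration step used in both studies can be illustrated with a small worked example: given four non-coplanar point correspondences, the twelve affine parameters are solved exactly, one destination coordinate at a time. The point pairs and the "true" transform below are made up for illustration; real registration would use many correspondences and a least-squares fit:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_affine_3d(src, dst):
    """Fit x' = A x + t: each destination coordinate is an independent
    4-unknown linear system over the four source points."""
    rows = [[x, y, z, 1.0] for (x, y, z) in src]
    return [solve(rows, [p[k] for p in dst]) for k in range(3)]

def apply_affine(T, p):
    return tuple(sum(T[k][i] * v for i, v in enumerate(p)) + T[k][3]
                 for k in range(3))

# Hypothetical laser-scanner points and their CT-frame counterparts;
# the underlying transform is a scale of 2 plus translation (1, -2, 3).
src = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
dst = [(1, -2, 3), (3, -2, 3), (1, 0, 3), (1, -2, 5)]
T = fit_affine_3d(src, dst)
print(apply_affine(T, (2, 2, 2)))  # maps into the CT frame: (5.0, 2.0, 7.0)
```

    With noisy measurements from several devices, the same per-coordinate systems become overdetermined and are solved in the least-squares sense, which is where the reported 1-2 mm standard error comes from.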

  10. Towards a Component Based Model for Database Systems

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2004-02-01

    Full Text Available Due to their effectiveness in the design and development of software applications and their recognized advantages in terms of reusability, Component-Based Software Engineering (CBSE) concepts have been arousing a great deal of interest in recent years. This paper presents and extends a component-based approach to object-oriented database systems (OODB) introduced by us in [1] and [2]. Components are proposed as a new abstraction level for database systems: logical partitions of the schema. In this context, the scope is introduced as an escalated property for transactions. Components are studied from the integrity, consistency, and concurrency control perspectives. The main benefits of our proposed component model for OODB are the reusability of the database design, including the access statistics required for proper query optimization, and a smooth information exchange. The integration of crosscutting concerns into the component database model using aspect-oriented techniques is also discussed. One of the main goals is to define a method for the assessment of component composition capabilities. These capabilities are restricted by the component's interface and measured in terms of adaptability, degree of compose-ability, and acceptability level. The above-mentioned metrics are extended from database components to generic software components. This paper extends and consolidates into one common view the ideas previously presented by us in [1, 2, 3]. [1] Octavian Paul Rotaru, Marian Dobre, Component Aspects in Object Oriented Databases, Proceedings of the International Conference on Software Engineering Research and Practice (SERP'04), Volume II, ISBN 1-932415-29-7, pages 719-725, Las Vegas, NV, USA, June 2004. [2] Octavian Paul Rotaru, Marian Dobre, Mircea Petrescu, Integrity and Consistency Aspects in Component-Oriented Databases, Proceedings of the International Symposium on Innovation in Information and Communication Technology (ISIICT

  11. State analysis requirements database for engineering complex embedded systems

    Science.gov (United States)

    Bennett, Matthew B.; Rasmussen, Robert D.; Ingham, Michel D.

    2004-01-01

    It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering tool called the State Analysis Database, which provides a tool for capturing system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using the State Analysis Database.

  13. Design and Implementation of the Digital Engineering Laboratory Distributed Database Management System.

    Science.gov (United States)

    1984-12-01

    [Figure: two distributed DBMS architectures, (a) and (b), each showing DBMS 1 through n linked by communication modules over a shared communication channel.] ... problems. Dawson of Mitre Corporation (2) discusses using distributed databases for a field-deployable, tactical air control system. The Worldwide Military Command and Control System is heavily dependent on networking capabilities, and in an article Coles of Mitre Corporation (1) discusses current

  14. Defining and resolving current systems in geospace

    Science.gov (United States)

    Ganushkina, N. Y.; Liemohn, M. W.; Dubyagin, S.; Daglis, I. A.; Dandouras, I.; De Zeeuw, D. L.; Ebihara, Y.; Ilie, R.; Katus, R.; Kubyshkina, M.; Milan, S. E.; Ohtani, S.; Ostgaard, N.; Reistad, J. P.; Tenfjord, P.; Toffoletto, F.; Zaharia, S.; Amariutei, O.

    2015-11-01

    Electric currents flowing through near-Earth space (R ≤ 12 RE) can support a highly distorted magnetic field topology, changing particle drift paths and therefore having a nonlinear feedback on the currents themselves. A number of current systems exist in the magnetosphere, most commonly defined as (1) the dayside magnetopause Chapman-Ferraro currents, (2) the Birkeland field-aligned currents with high-latitude "region 1" and lower-latitude "region 2" currents connected to the partial ring current, (3) the magnetotail currents, and (4) the symmetric ring current. In the near-Earth nightside region, however, several of these current systems flow in close proximity to each other. Moreover, the existence of other temporal current systems, such as the substorm current wedge or "banana" current, has been reported. It is very difficult to identify a local measurement as belonging to a specific system. Such identification is important, however, because how the current closes and how these loops change in space and time governs the magnetic topology of the magnetosphere and therefore controls the physical processes of geospace. Furthermore, many methods exist for identifying the regions of near-Earth space carrying each type of current. This study presents a robust collection of these definitions of current systems in geospace, particularly in the near-Earth nightside magnetosphere, as viewed from a variety of observational and computational analysis techniques. The influence of definitional choice on the resulting interpretation of physical processes governing geospace dynamics is presented and discussed.

  15. CancerHSP: anticancer herbs database of systems pharmacology

    OpenAIRE

    Weiyang Tao; Bohui Li; Shuo Gao; Yaofei Bai; Piar Ali Shar; Wenjuan Zhang; Zihu Guo; Ke Sun; Yingxue Fu; Chao Huang; Chunli Zheng; Jiexin Mu; Tianli Pei; Yuan Wang; Yan Li

    2015-01-01

    The numerous natural products and their bioactivity potentially afford an extraordinary resource for new drug discovery and have been employed in cancer treatment. However, the underlying pharmacological mechanisms of most natural anticancer compounds remain elusive, which has become one of the major obstacles in developing novel effective anticancer agents. Here, to address these unmet needs, we developed an anticancer herbs database of systems pharmacology (CancerHSP), which records antican...

  16. Soft Biometrics Database: A Benchmark For Keystroke Dynamics Biometric Systems

    OpenAIRE

    Syed Idrus, Syed Zulkarnain; Cherrier, Estelle; Rosenberger, Christophe; Bours, Patrick

    2013-01-01

    International audience; Among all the existing biometric modalities, authentication systems based on keystroke dynamics are particularly interesting for usability reasons. In recent decades, many researchers have proposed algorithms to increase the efficiency of this biometric modality. Proposed in this paper: a benchmark testing suite composed of a database containing multiple data (keystroke dynamics templates, soft biometric traits . . . ), which will be made available for the research com...

  17. Computerized database management system for breast cancer patients.

    Science.gov (United States)

    Sim, Kok Swee; Chong, Sze Siang; Tso, Chih Ping; Nia, Mohsen Esmaeili; Chong, Aun Kee; Abbas, Siti Fathimah

    2014-01-01

    Data analysis based on breast cancer risk factors such as age, race, breastfeeding, hormone replacement therapy, family history, and obesity was conducted on breast cancer patients using a new enhanced computerized database management system. MySQL (My Structured Query Language) was selected as the database management system to store the patient data collected from hospitals in Malaysia. An automatic calculation tool is embedded in this system to assist the data analysis. The results are plotted automatically, and a user-friendly graphical user interface that can control the MySQL database was developed. Case studies show that the breast cancer incidence rate is highest among Malay women, followed by Chinese and Indian women. The peak age for breast cancer incidence is from 50 to 59 years old. Results suggest that the chance of developing breast cancer increases in older women and is reduced with breastfeeding practice. Weight status might affect breast cancer risk differently. Additional studies are needed to confirm these findings.
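    The automatic calculation described above (incidence counts by ethnicity and by age band) reduces to grouped aggregate queries. A minimal sketch, with SQLite standing in for MySQL; the schema and rows are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (ethnicity TEXT, age INTEGER)")
conn.executemany("INSERT INTO patients VALUES (?, ?)", [
    ("Malay", 55), ("Malay", 52), ("Chinese", 58),
    ("Chinese", 44), ("Indian", 61), ("Malay", 47),
])

# Incidence count per ethnic group, highest first.
by_ethnicity = conn.execute(
    "SELECT ethnicity, COUNT(*) AS n FROM patients "
    "GROUP BY ethnicity ORDER BY n DESC").fetchall()

# Count within the 50-59 age band highlighted in the abstract.
peak = conn.execute(
    "SELECT COUNT(*) FROM patients WHERE age BETWEEN 50 AND 59").fetchone()[0]

print(by_ethnicity)
print(peak)
```

    A GUI layer like the one the paper describes would run queries of exactly this shape and hand the result sets to a plotting routine.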

  18. Development of database and searching system for tool grinding

    Directory of Open Access Journals (Sweden)

    J.Y. Chen

    2008-02-01

    Full Text Available Purpose: To achieve the goal of saving time on tool grinding and design, an efficient method of developing a data management and searching system for standard cutting tools is proposed in this study. Design/methodology/approach: First, tool grinding software with an open architecture was employed to design and plan grinding processes for seven types of tools. According to the characteristics of the tools (e.g. type, diameter, radius, and so on), 4802 tool data were established in the relational database. Then, SQL syntax was utilized to write the searching algorithms, and the human-machine interfaces of the searching system for the tool database were developed in C++ Builder. Findings: For grinding a two-flute square end mill, half of the time spent on tool design and on changing the production line for grinding other types of tools can be saved by means of our system. More specifically, the efficiency in terms of the approach and retract time was improved by up to 40%, and an improvement of approximately 10.6% in the overall machining time can be achieved. Research limitations/implications: The tool database used in this study includes only some specific tools, such as the square end mill. Step drills, taper tools, and special tools can also be taken into account in the database for future research. Practical implications: Most commercial tool grinding software is modular in design and uses tool shapes to construct the CAM interface. Some limitations on tool design are undesirable for customers. On the contrary, by employing not only the grinding processes to construct the grinding path of tools but also the searching system combined with the grinding software, our approach gives more flexibility for designing new tools. Originality/value: A novel tool database and searching system is presented for tool grinding. Using this system can save time and provide more convenience in designing and grinding tools. In other words, the

  19. A neuroinformatics database system for disease-oriented neuroimaging research.

    Science.gov (United States)

    Wong, Stephen T C; Hoo, Kent Soo; Cao, Xinhua; Tjandra, Donny; Fu, J C; Dillon, William P

    2004-03-01

    Clinical databases are continually growing and accruing more patient information. One of the challenges for managing this wealth of data is efficient retrieval and analysis of a broad range of image and non-image patient data from diverse data sources. This article describes the design and implementation of a new class of research data warehouse, neuroinformatics database system (NIDS), which will alleviate these problems for clinicians and researchers studying and treating patients with intractable temporal lobe epilepsy. The NIDS is a secured, multi-tier system that enables the user to gather, proofread, analyze, and store data from multiple underlying sources. In addition to data management, the NIDS provides several key functions including image analysis and processing, free text search of patient reports, construction of general queries, and on-line statistical analysis. The establishment of this integrated research database will serve as a foundation for future hypothesis-driven experiments, which could uncover previously unsuspected correlations and perhaps help to identify new and accurate predictors for image diagnosis.

  20. Low Quality Image Retrieval System For Generic Databases

    Directory of Open Access Journals (Sweden)

    W.A.D.N. Wijesekera

    2015-08-01

    Full Text Available Abstract Content Based Image Retrieval (CBIR) systems have become the trend in image retrieval technologies, as index- or notation-based image retrieval algorithms give less efficient results under heavy usage of images. These CBIR systems are mostly developed assuming the availability of high or normal quality images. The high prevalence of low quality images in databases, owing to the varying quality of capture equipment and the varying environmental conditions under which photos are taken, has opened up a new path in image retrieval research. Only a few algorithms have been developed for low quality image based retrieval, and only for specific domains. A low quality image based retrieval algorithm for a generic database, with a considerable accuracy level across different industries, is a problem that remains unsolved. Through this study an algorithm has been developed to address the above-mentioned gaps. Using images with inappropriate brightness and compressed images as low quality images, the proposed algorithm was tested on a generic database which includes many categories of data instead of a specific domain. The new algorithm gives better precision and recall values when the images are clustered into the most appropriate number of clusters, which changes according to the quality level of the image. As the quality of the image decreases, the accuracy of the algorithm also tends to decrease, leaving space for further improvement.

  1. Adaptive Tuning Algorithm for Performance tuning of Database Management System

    CERN Document Server

    Rodd, S F

    2010-01-01

    Performance tuning of Database Management Systems (DBMS) is both complex and challenging, as it involves identifying and altering several key performance tuning parameters. The quality of tuning and the extent of performance enhancement achieved greatly depend on the skill and experience of the Database Administrator (DBA). The ability of neural networks to adapt to dynamically changing inputs, together with their ability to learn, makes them ideal candidates for the tuning task. In this paper, a novel tuning algorithm based on neural-network-estimated tuning parameters is presented. The key performance indicators are proactively monitored and fed as input to the neural network, and the trained network estimates suitable sizes for the buffer cache, shared pool, and redo log buffer. The tuner alters these tuning parameters to the estimated values using a rate-change computing algorithm. The preliminary results show that the proposed method is effective in improving the query response tim...

  2. The current knowledge on centipedes (Chilopoda) in Slovenia: faunistic and ecological records from a national database.

    Science.gov (United States)

    Ravnjak, Blanka; Kos, Ivan

    2015-01-01

    In spite of Slovenia's very high biodiversity, only a few of its animal groups have been significantly investigated and are well known; Slovenian researchers have studied only about half of the species known to be living in the country (Mršić 1997). Centipedes, however, are among the well investigated groups. All available data about centipedes in Slovenia collected from 1921 to 2014 have been consolidated into a general electronic database called "CHILOBIO", which was created to provide an easy overview of the Slovenian centipede fauna and to allow entry and interpretation of new data collected in future research. The level of investigation has been studied with this database in conjunction with a geographic information system (GIS). In the study period, 109 species were identified from 350 localities in 109 of the 236 UTM 10 × 10 km quadrants which cover the study area. The south-central part of the country has been investigated best, whereas data are absent from the south-eastern, eastern, and north-eastern regions. The highest number of species (52) has been recorded near the Iška valley (Central Slovenia, quadrant VL68). In 48% of the UTM quadrants investigated, fewer than 10 species were recorded, and just 5 species were found in one locality. Seventeen species were reported only in the Dinaric region, 4 in the Prealpine-subpannonian region, and 7 in the Primorska-submediterranean region.

  3. A Novel Database Design for Student Information System

    Directory of Open Access Journals (Sweden)

    Noraziah Ahmad

    2010-01-01

    Full Text Available Problem statement: A new system was designed; where necessary, alternative solutions were given to solve the different problems, and the most feasible solution was selected. Approach: This study presents the database design for a student information system. Computerization of a system means changing it from a manual to a computer-based system, to automate the work and to provide efficiency, accuracy, timeliness, security, and economy. Results: After undertaking an in-depth examination of Ayub Medical College's (AMC) existing manual student information system and analyzing its shortcomings, it was found necessary to remove its deficiencies and provide a suitable solution for the presently encountered problems. Conclusion: The proposed design can help the management to exercise effective and timely decision making.

  4. Development of a Comprehensive Database System for Safety Analyst.

    Science.gov (United States)

    Paz, Alexander; Veeramisti, Naveen; Khanal, Indira; Baker, Justin; de la Fuente-Mella, Hanns

    2015-01-01

    This study addressed barriers associated with the use of Safety Analyst, a state-of-the-art tool that has been developed to assist during the entire Traffic Safety Management process but that is not widely used due to a number of challenges as described in this paper. As part of this study, a comprehensive database system and tools to provide data to multiple traffic safety applications, with a focus on Safety Analyst, were developed. A number of data management tools were developed to extract, collect, transform, integrate, and load the data. The system includes consistency-checking capabilities to ensure the adequate insertion and update of data into the database. This system focused on data from roadways, ramps, intersections, and traffic characteristics for Safety Analyst. To test the proposed system and tools, data from Clark County, which is the largest county in Nevada and includes the cities of Las Vegas, Henderson, Boulder City, and North Las Vegas, was used. The database and Safety Analyst together help identify the sites with the potential for safety improvements. Specifically, this study examined the results from two case studies. The first case study, which identified sites having a potential for safety improvements with respect to fatal and all injury crashes, included all roadway elements and used default and calibrated Safety Performance Functions (SPFs). The second case study identified sites having a potential for safety improvements with respect to fatal and all injury crashes, specifically regarding intersections; it used default and calibrated SPFs as well. Conclusions were developed for the calibration of safety performance functions and the classification of site subtypes. Guidelines were provided about the selection of a particular network screening type or performance measure for network screening.

  5. Development of a Comprehensive Database System for Safety Analyst

    Directory of Open Access Journals (Sweden)

    Alexander Paz

    2015-01-01

    Full Text Available This study addressed barriers associated with the use of Safety Analyst, a state-of-the-art tool that has been developed to assist during the entire Traffic Safety Management process but that is not widely used due to a number of challenges as described in this paper. As part of this study, a comprehensive database system and tools to provide data to multiple traffic safety applications, with a focus on Safety Analyst, were developed. A number of data management tools were developed to extract, collect, transform, integrate, and load the data. The system includes consistency-checking capabilities to ensure the adequate insertion and update of data into the database. This system focused on data from roadways, ramps, intersections, and traffic characteristics for Safety Analyst. To test the proposed system and tools, data from Clark County, which is the largest county in Nevada and includes the cities of Las Vegas, Henderson, Boulder City, and North Las Vegas, was used. The database and Safety Analyst together help identify the sites with the potential for safety improvements. Specifically, this study examined the results from two case studies. The first case study, which identified sites having a potential for safety improvements with respect to fatal and all injury crashes, included all roadway elements and used default and calibrated Safety Performance Functions (SPFs). The second case study identified sites having a potential for safety improvements with respect to fatal and all injury crashes, specifically regarding intersections; it used default and calibrated SPFs as well. Conclusions were developed for the calibration of safety performance functions and the classification of site subtypes. Guidelines were provided about the selection of a particular network screening type or performance measure for network screening.

  6. Nephrogenic systemic fibrosis: Current concepts

    Directory of Open Access Journals (Sweden)

    Prasanta Basak

    2011-01-01

    Full Text Available Nephrogenic systemic fibrosis (NSF was first described in 2000 as a scleromyxedema-like illness in patients on chronic hemodialysis. The relationship between NSF and gadolinium contrast during magnetic resonance imaging was postulated in 2006, and subsequently, virtually all published cases of NSF have had documented prior exposure to gadolinium-containing contrast agents. NSF has been reported in patients from a variety of ethnic backgrounds from America, Europe, Asia and Australia. Skin lesions may evolve into poorly demarcated thickened plaques that range from erythematous to hyperpigmented. With time, the skin becomes markedly indurated and tethered to the underlying fascia. Extracutaneous manifestations also occur. The diagnosis of NSF is based on the presence of characteristic clinical features in the setting of chronic kidney disease, and substantiated by skin histology. Differential diagnosis is with scleroderma, scleredema, scleromyxedema, graft-versus-host disease, etc. NSF has a relentlessly progressive course. While there is no consistently successful treatment for NSF, improving renal function seems to slow or arrest the progression of this condition. Because essentially all cases of NSF have developed following exposure to a gadolinium-containing contrast agent, prevention of this devastating condition involves the careful avoidance of administering these agents to individuals at risk.

  7. Current state in tracking and robotic navigation systems for application in endovascular aortic aneurysm repair

    NARCIS (Netherlands)

    De Ruiter, Quirina M B; Moll, Frans L.; Van Herwaarden, Joost A.

    2015-01-01

    Objective This study reviewed the current developments in manual tracking and robotic navigation technologies for application in endovascular aortic aneurysm repair (EVAR). Methods EMBASE and MEDLINE databases were searched for studies reporting manual tracking or robotic navigation systems that are

  9. A Replication Protocol for Real Time database System

    Directory of Open Access Journals (Sweden)

    Ashish Srivastava

    2012-06-01

    Full Text Available Database replication protocols for real time systems based on a certification approach are usually the best ones for achieving good performance. The weak voting approach achieves a slightly longer transaction completion time, but with a lower abortion rate. So, both techniques can be considered the best ones for replication when performance is a must, and both of them take advantage of the properties provided by atomic broadcast. We propose a new database replication strategy that shares many characteristics with such previous strategies. It is also based on totally ordering the application of writesets, but it uses only an unordered reliable broadcast instead of an atomic broadcast. Additionally, in our strategy the writesets of transactions that are aborted in the final validation phase, together with the verification phase incorporated in the new system, are not broadcast, unlike in strategies with only a validation phase. Thus, this new approach certainly reduces the communication traffic and also achieves a good transaction response time (even shorter than those of previous strategies with only a validation phase in some system configurations).
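    The core ideas of the abstract above — applying writesets in a total order derived from sequence numbers rather than from an atomic broadcast, and never broadcasting writesets of transactions that fail certification — can be sketched as follows. This is a simplified illustration, not the paper's protocol; `Replica` and `certify` are hypothetical names.

```python
# Sketch: total-order application of writesets using sequence numbers over
# an otherwise unordered reliable broadcast. Writesets that fail the
# validation (certification) phase are simply never broadcast, saving traffic.

class Replica:
    def __init__(self):
        self.db = {}          # item -> value
        self.next_seq = 0     # next sequence number to apply
        self.pending = {}     # out-of-order writesets awaiting their turn

    def deliver(self, seq, writeset):
        """Messages may arrive in any order; apply them in sequence order."""
        self.pending[seq] = writeset
        while self.next_seq in self.pending:
            self.db.update(self.pending.pop(self.next_seq))
            self.next_seq += 1

def certify(writeset, committed_items):
    """Validation phase: abort on write-write conflict with committed items."""
    return not (set(writeset) & committed_items)

# Example: two replicas receive the same writesets in different orders,
# yet converge to the same state.
r1, r2 = Replica(), Replica()
ws = [(0, {"x": 1}), (1, {"y": 2}), (2, {"x": 3})]
for seq, w in ws:
    r1.deliver(seq, w)
for seq, w in reversed(ws):
    r2.deliver(seq, w)
assert r1.db == r2.db == {"x": 3, "y": 2}
```

    The buffering in `deliver` is what substitutes for atomic broadcast: ordering is reconstructed from sequence numbers, so only a reliable (unordered) broadcast is needed.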

  10. Record Linkage system in a complex relational database - MINPHIS example.

    Science.gov (United States)

    Achimugu, Philip; Soriyan, Abimbola; Oluwagbemi, Oluwatolani; Ajayi, Anu

    2010-01-01

    In the health sector, record linkage is of paramount importance as clinical data can be distributed across different data repositories, leading to duplication. Record linkage is the process of tracking duplicate records that actually refer to the same entity. This paper proposes a fast and efficient method for duplicate detection within the healthcare domain. The first step is to standardize the data in the database using SQL. The second is to match similar pair records, and the third step is to organize records into match and non-match status. The system was developed in Unified Modeling Language and Java. In a batch analysis of 31,177 "supposedly" distinct identities, our method isolates 25,117 true unique records and 6,060 suspected duplicates, using a healthcare system called MINPHIS (Made in Nigeria Primary Healthcare Information System) as the test bed.
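    The three-step method described above (standardize, match candidate pairs, classify into match and non-match) can be illustrated with a minimal sketch. The field names and the exact-match rule below are assumptions for illustration, not the MINPHIS implementation.

```python
import itertools

def standardize(rec):
    """Step 1: normalize fields (here: trim, collapse spaces, lowercase)."""
    return {k: " ".join(str(v).split()).lower() for k, v in rec.items()}

def is_match(a, b):
    """Step 2: pair comparison on standardized name and birth date."""
    return a["name"] == b["name"] and a["dob"] == b["dob"]

def link(records):
    """Step 3: classify each record as unique or a suspected duplicate."""
    recs = [standardize(r) for r in records]
    dup_idx = set()
    for (i, a), (j, b) in itertools.combinations(enumerate(recs), 2):
        if is_match(a, b):
            dup_idx.add(j)          # the later record is flagged as duplicate
    uniques = [r for i, r in enumerate(recs) if i not in dup_idx]
    return uniques, len(dup_idx)

records = [
    {"name": "Ada  Obi", "dob": "1990-01-01"},
    {"name": "ada obi", "dob": "1990-01-01"},   # duplicate after standardization
    {"name": "Ben Musa", "dob": "1985-06-30"},
]
uniques, n_dups = link(records)
assert len(uniques) == 2 and n_dups == 1
```

    A production system would replace the all-pairs loop with blocking and fuzzy string comparison, but the standardize/match/classify pipeline is the same.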

  11. Current and Future Flight Operating Systems

    Science.gov (United States)

    Cudmore, Alan

    2007-01-01

    This viewgraph presentation reviews the real-time operating system (RTOS) types in use in current flight systems. A new RTOS model, the process model, is described. Included is a review of the challenges of migrating from the classic RTOS to the process model.

  12. Development of materials database system for CAE system of heat treatment based on data mining technology

    Institute of Scientific and Technical Information of China (English)

    GU Qiang; ZHONG Rui; JU Dong-ying

    2006-01-01

    Computer simulation of materials processing needs a huge database containing a great deal of various physical properties of materials. In order to employ the large body of data on materials heat treatment accumulated over past years, it is important to develop an intelligent database system. Based on data mining technology for data analysis, an intelligent database web tool system of computer simulation for the heat treatment process, named IndBASEweb-HT, was built. The architecture and algorithms of this system, as well as its application, are introduced.

  13. Generic Natural Systems Evaluation - Thermodynamic Database Development and Data Management

    Energy Technology Data Exchange (ETDEWEB)

    Wolery, T W; Sutton, M

    2011-09-19

    , meaning that they use a large body of thermodynamic data, generally from a supporting database file, to sort out the various important reactions from a wide spectrum of possibilities, given specified inputs. Usually codes of this kind are used to construct models of initial aqueous solutions that represent initial conditions for some process, although sometimes these calculations also represent a desired end point. Such a calculation might be used to determine the major chemical species of a dissolved component, the solubility of a mineral or mineral-like solid, or to quantify deviation from equilibrium in the form of saturation indices. Reactive transport codes such as TOUGHREACT and NUFT generally require the user to determine which chemical species and reactions are important, and to provide the requisite set of information including thermodynamic data in an input file. Usually this information is abstracted from the output of a geochemical modeling code and its supporting thermodynamic data file. The Yucca Mountain Project (YMP) developed two qualified thermodynamic databases to model geochemical processes, including ones involving repository components such as spent fuel. The first of the two (BSC, 2007a) was for systems containing dilute aqueous solutions only, the other (BSC, 2007b) for systems involving concentrated aqueous solutions and incorporating a model for such based on Pitzer's (1991) equations. A 25 °C-only database with similarities to the latter was also developed for the Waste Isolation Pilot Plant (WIPP, cf. Xiong, 2005). The NAGRA/PSI database (Hummel et al., 2002) was developed to support repository studies in Europe. The YMP databases are often used in non-repository studies, including studies of geothermal systems (e.g., Wolery and Carroll, 2010) and CO2 sequestration (e.g., Aines et al., 2011).
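    As a small illustration of one quantity mentioned above, a saturation index expresses deviation from equilibrium as SI = log10(IAP/Ksp), where IAP is the ion activity product and Ksp the solubility product: SI = 0 at equilibrium, SI > 0 for oversaturation, SI < 0 for undersaturation. The numbers below are purely illustrative, not values from any qualified database.

```python
import math

def saturation_index(ion_activity_product, k_sp):
    """SI = log10(IAP / Ksp): 0 at equilibrium, >0 oversaturated, <0 undersaturated."""
    return math.log10(ion_activity_product / k_sp)

# Hypothetical numbers for illustration only:
si = saturation_index(1.0e-9, 1.0e-8)
assert abs(si - (-1.0)) < 1e-12   # tenfold undersaturation gives SI = -1
```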

  14. Response Current from Spin-Vortex-Induced Loop Current System to Feeding Current

    Science.gov (United States)

    Morisaki, Tsubasa; Wakaura, Hikaru; Abou Ghantous, Michel; Koizumi, Hiroyasu

    2017-07-01

    The spin-vortex-induced loop current (SVILC) is a loop current generated around a spin-vortex formed by itinerant electrons. It is generated by a U(1) instanton created by the single-valued requirement of wave functions with respect to the coordinate, and protected by the topological number, "winding number". In a system with SVILCs, a macroscopic persistent current is generated as a collection of SVILCs. In the present work, we consider the situation where external currents are fed in the SVILC system and response currents are measured as spontaneous currents that flow through leads attached to the SVILC system. The response currents from SVILC systems are markedly different from the feeding currents in their directions and magnitude, and depend on the original current pattern of the SVILC system; thus, they may be used in the readout process in the recently proposed SVILC quantum computer, a quantum computer that utilizes SVILCs as qubits. We also consider the use of the response current to detect SVILCs.

  15. CycADS: an annotation database system to ease the development and update of BioCyc databases.

    Science.gov (United States)

    Vellozo, Augusto F; Véron, Amélie S; Baa-Puyoulet, Patrice; Huerta-Cepas, Jaime; Cottret, Ludovic; Febvay, Gérard; Calevro, Federica; Rahbé, Yvan; Douglas, Angela E; Gabaldón, Toni; Sagot, Marie-France; Charles, Hubert; Colella, Stefano

    2011-01-01

    In recent years, genomes from an increasing number of organisms have been sequenced, but their annotation remains a time-consuming process. The BioCyc databases offer a framework for the integrated analysis of metabolic networks. The Pathway Tools software suite allows the automated construction of a database starting from an annotated genome, but it requires prior integration of all annotations into a specific summary file or into a GenBank file. To allow the easy creation and update of a BioCyc database starting from the multiple genome annotation resources available over time, we have developed an ad hoc data management system that we called Cyc Annotation Database System (CycADS). CycADS is centred on a specific database model and on a set of Java programs to import, filter and export relevant information. Data from GenBank and other annotation sources (including for example: KAAS, PRIAM, Blast2GO and PhylomeDB) are collected into a database to be subsequently filtered and extracted to generate a complete annotation file. This file is then used to build an enriched BioCyc database using the PathoLogic program of Pathway Tools. The CycADS pipeline for annotation management was used to build the AcypiCyc database for the pea aphid (Acyrthosiphon pisum) whose genome was recently sequenced. The AcypiCyc database webpage also includes, for comparative analyses, two other metabolic reconstruction BioCyc databases generated using CycADS: TricaCyc for Tribolium castaneum and DromeCyc for Drosophila melanogaster. Owing to its flexible design, CycADS offers a powerful software tool for the generation and regular updating of enriched BioCyc databases. The CycADS system is particularly suited for metabolic gene annotation and network reconstruction in newly sequenced genomes. Because of the uniform annotation used for metabolic network reconstruction, CycADS is particularly useful for comparative analysis of the metabolism of different organisms.
Database URL: http://www.cycadsys.org.

  16. Evaluation of Database Modeling Methods for Geographic Information Systems

    Directory of Open Access Journals (Sweden)

    Thanasis Hadzilacos

    1998-11-01

    Full Text Available We present a systematic evaluation of different modeling techniques for the design of Geographic Information Systems as we experienced them through theoretical research and real world applications. A set of exemplary problems for spatial systems on which the suitability of models can be tested is discussed. We analyse the use of a specific database design methodology including the phases of conceptual, logical and physical modeling. By employing, at each phase, representative models of classical and object-oriented approaches we assess their efficiency in spatial data handling. At the conceptual phase, we show how the Entity-Relationship, EFO and OMT models deal with the geographic needs; at the logical phase we argue why the relational model is good to serve as a basis to accommodate these requirements, but not good enough as a stand alone solution.

  17. Databases: Computerized Resource Retrieval Systems. Inservice Series No. 5.

    Science.gov (United States)

    Wilson, Mary Alice

    This document defines and describes electronic databases and provides guidance for organizing a useful database and for selecting hardware and software. Alternatives such as using larger machines are discussed, as are the computer skills necessary to use an electronic database and the use of the computer in the classroom. Files, records, and…

  18. Developing Visualization Support System for Teaching/Learning Database Normalization

    Science.gov (United States)

    Folorunso, Olusegun; Akinwale, AdioTaofeek

    2010-01-01

    Purpose: In tertiary institution, some students find it hard to learn database design theory, in particular, database normalization. The purpose of this paper is to develop a visualization tool to give students an interactive hands-on experience in database normalization process. Design/methodology/approach: The model-view-controller architecture…

  19. A Method of Rapid Generation of an Expert System Based on SQL Database

    Institute of Scientific and Technical Information of China (English)

    Shunxiang Wu; Wenting Huang; Xiaosheng Wang; Jiande Gu; Maoqing Li; Shifeng Liu

    2004-01-01

    This paper applies the relevant principles and methods of SQL databases and expert systems, investigating methods and techniques for combining them. It designs a template for a production expert system that is based on an SQL database and driven by it, so as to simplify the structure of the knowledge base carried by the database, generate the system more conveniently and operate it more effectively.
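    A minimal sketch of the database-driven idea: production rules live in a SQL table, and the inference loop reads them from the database, so the knowledge base is just data. The schema and the toy rules are hypothetical, not those of the paper.

```python
import sqlite3

# Hypothetical schema: each row is a production rule "premise -> conclusion".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rules (premise TEXT, conclusion TEXT)")
conn.executemany("INSERT INTO rules VALUES (?, ?)", [
    ("fever", "infection"),
    ("infection", "prescribe_antibiotics"),
])

def forward_chain(facts):
    """Repeatedly fire any rule whose premise is an established fact,
    until no new conclusions can be derived (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in conn.execute("SELECT * FROM rules"):
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

assert "prescribe_antibiotics" in forward_chain({"fever"})
```

    Because the rules are ordinary rows, the knowledge base can be edited with plain SQL, which is the simplification the abstract alludes to.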

  20. Fossil-Fuel CO2 Emissions Database and Exploration System

    Science.gov (United States)

    Krassovski, M.; Boden, T.; Andres, R. J.; Blasing, T. J.

    2012-12-01

    tabular, national, mass-emissions data and distribute them spatially on a one degree latitude by one degree longitude grid. The within-country spatial distribution is achieved through a fixed population distribution as reported in Andres et al. (1996). This presentation introduces the newly built database and web interface, reflecting the present state and functionality of the Fossil-Fuel CO2 Emissions Database and Exploration System, as well as future plans for expansion.

  1. Thermal currents in highly correlated systems

    OpenAIRE

    MORENO, J; Coleman, P.

    1996-01-01

    Conventional approaches to thermal conductivity in itinerant systems neglect the contribution to thermal current due to interactions. We derive this contribution to the thermal current and show how it produces important corrections to the thermal conductivity in anisotropic superconductors. We discuss the possible relevance of these corrections for the interpretation of the thermal conductivity of anisotropic superconductors.

  2. Multiple Currents in the Gulf Stream System

    OpenAIRE

    Fuglister, F. C.

    2011-01-01

    A new interpretation of the accumulated temperature and salinity data from the Gulf Stream Area indicates that the System is made up of a series of overlapping currents. These currents are separated by relatively weak countercurrents. Data from a recent survey are presented as supporting this hypothesis.DOI: 10.1111/j.2153-3490.1951.tb00804.x

  3. Current frontiers in systemic sclerosis pathogenesis

    NARCIS (Netherlands)

    Ciechomska, Marzena; van Laar, Jacob; O'Reilly, Steven

    2015-01-01

    Systemic sclerosis is an autoimmune disease characterised by vascular dysfunction, impaired angiogenesis, inflammation and fibrosis. There is no currently accepted disease-modifying treatment with only autologous stem cell transplant showing clinically meaningful benefit. The lack of treatment optio

  4. Visualizing Concurrency Control Algorithms for Real-Time Database Systems

    Directory of Open Access Journals (Sweden)

    Olusegun Folorunso

    2008-11-01

    Full Text Available This paper describes an approach to visualizing concurrency control (CC) algorithms for real-time database systems (RTDBs). This approach is based on the principle of software visualization, which has been applied in related fields. The Model-View-Controller (MVC) architecture is used to alleviate the black box syndrome associated with the study of algorithm behaviour for RTDB concurrency controls. We propose an "exploratory" visualization tool that assists the RTDBS designer in understanding the actual behaviour of the concurrency control algorithm of choice and also in evaluating the performance quality of the algorithm. We demonstrate the feasibility of our approach using an optimistic concurrency control model as our case study. The developed tool substantiates the earlier simulation-based performance studies by exposing spikes at some points when visualized dynamically that are not observed using usual static graphs. Eventually this tool helps solve the problem of contradictory assumptions of CC in RTDBs.
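    For readers unfamiliar with the case study, the validation step of an optimistic concurrency control scheme can be sketched as follows. This is a textbook-style backward-validation check, not necessarily the specific model visualized in the paper.

```python
def validate(read_set, committed_write_sets):
    """Backward validation in optimistic concurrency control: a transaction
    may commit only if no concurrently committed transaction wrote an item
    that this transaction read; otherwise it is aborted and restarted."""
    return all(not (read_set & ws) for ws in committed_write_sets)

# T read {x, y}; a concurrently committed transaction wrote {y} -> abort.
assert validate({"x", "y"}, [{"y"}]) is False
# No overlap with any committed writer -> T may commit.
assert validate({"x"}, [{"y"}, {"z"}]) is True
```

    A visualization tool of the kind described would animate exactly these read-set/write-set intersections over time, which is where abort spikes become visible.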

  5. Representing clinical communication knowledge through database management system integration.

    Science.gov (United States)

    Khairat, Saif; Craven, Catherine; Gong, Yang

    2012-01-01

    Clinical communication failures are considered the leading cause of medical errors [1]. The complexity of the clinical culture and the significant variance in training and education levels form a challenge to enhancing communication within the clinical team. In order to improve communication, a comprehensive understanding of the overall communication process in health care is required. In an attempt to further understand clinical communication, we conducted a thorough methodology literature review to identify strengths and limitations of previous approaches [2]. Our research proposes a new data collection method to study the clinical communication activities among Intensive Care Unit (ICU) clinical teams with a primary focus on the attending physician. In this paper, we present the first ICU communication instrument, and, we introduce the use of database management system to aid in discovering patterns and associations within our ICU communications data repository.

  6. Superpersistent Currents in Dirac Fermion Systems

    Science.gov (United States)

    2017-03-06

    Grant FA9550-15-1-0151. ...currents in 2D Dirac material systems and pertinent phenomena in the emerging field of relativistic quantum nonlinear dynamics and chaos. Systematic...anomalous optical transitions, and spin control in topological insulator quantum dots, (4) the discovery of nonlinear dynamics induced anomalous Hall

  7. Ground-target detection system for digital video database

    Science.gov (United States)

    Liang, Yiqing; Huang, Jeffrey R.; Wolf, Wayne H.; Liu, Bede

    1998-07-01

    As more and more visual information is available on video, information indexing and retrieval of digital video data is becoming important. A digital video database embedded with visual information processing using image analysis and image understanding techniques such as automated target detection, classification, and identification can provide query results of higher quality. We address in this paper a robust digital video database system within which a target detection module is implemented and applied onto the keyframe images extracted by our digital library system. The tasks and application scenarios under consideration involve indexing video with information about detection and verification of artificial objects that exist in video scenes. Based on the scenario that the video sequences are acquired by an onboard camera mounted on Predator unmanned aircraft, we demonstrate how an incoming video stream is structured into different levels -- video program level, scene level, shot level, and object level, based on the analysis of video contents using global imagery information. We then consider that the keyframe representation is most appropriate for video processing and it holds the property that can be used as the input for our detection module. As a result, video processing becomes feasible in terms of decreased computational resources spent and increased confidence in the (detection) decisions reached. The architecture we proposed can respond to the query of whether artificial structures and suspected combat vehicles are detected. The architecture for ground detection takes advantage of the image understanding paradigm and it involves different methods to locate and identify the artificial object rather than nature background such as tree, grass, and cloud. Edge detection, morphological transformation, line and parallel line detection using Hough transform applied on key frame images at video shot level are introduced in our detection module. 
This function can
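    The Hough-transform line detection mentioned in the abstract above can be sketched in miniature: each edge point votes for every candidate line through it, and accumulator peaks are reported as detected lines. This toy version over a handful of angles is illustrative only, not the paper's detection module.

```python
import math
from collections import Counter

def hough_lines(points, thetas_deg=(0, 45, 90, 135), min_votes=3):
    """Minimal Hough transform: each edge point (x, y) votes for the
    (theta, rho) of every candidate line through it, where
    rho = x*cos(theta) + y*sin(theta); peaks in the accumulator are lines."""
    acc = Counter()
    for x, y in points:
        for t in thetas_deg:
            th = math.radians(t)
            rho = round(x * math.cos(th) + y * math.sin(th))
            acc[(t, rho)] += 1
    return [line for line, votes in acc.items() if votes >= min_votes]

# A vertical line x = 2 (theta = 0 degrees, rho = 2) drawn as edge points:
edge_points = [(2, y) for y in range(5)]
lines = hough_lines(edge_points)
assert lines == [(0, 2)]
```

    Real detectors quantize theta much more finely and run on edge maps from an edge detector, but the vote-and-peak structure is the same.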

  8. Experimenting with recursive queries in database and logic programming systems

    CERN Document Server

    Terracina, Giorgio; Lio, Vincenzino; Panetta, Claudio

    2007-01-01

    This paper considers the problem of reasoning on massive amounts of (possibly distributed) data. Presently, existing proposals show some limitations: (i) the quantity of data that can be handled contemporarily is limited, due to the fact that reasoning is generally carried out in main-memory; (ii) the interaction with external (and independent) DBMSs is not trivial and, in several cases, not allowed at all; (iii) the efficiency of present implementations is still not sufficient for their utilization in complex reasoning tasks involving massive amounts of data. This paper provides a contribution in this setting; it presents a new system, called DLV^DB, which aims to solve these problems. Moreover, the paper reports the results of a thorough experimental analysis we have carried out for comparing our system with several state-of-the-art systems (both logic and databases) on some classical deductive problems; the other tested systems are: LDL++, XSB, Smodels and three top-level commercial D...

  9. Database Design Methodology and Database Management System for Computer-Aided Structural Design Optimization.

    Science.gov (United States)

    1984-12-01

    1983). Several researchers Lillehagen and Dokkar (1982), Grabowski, Eigener and Ranch (1978), and Eberlein and Wedekind (1982) have worked on database...Proceedings of International Federation of Information Processing. pp. 335-366. Eberlein, W. and Wedekind , H., 1982, "A Methodology for Embedding Design

  10. Current trends on knowledge-based systems

    CERN Document Server

    Valencia-García, Rafael

    2017-01-01

    This book presents innovative and high-quality research on the implementation of conceptual frameworks, strategies, techniques, methodologies, informatics platforms and models for developing advanced knowledge-based systems and their application in different fields, including Agriculture, Education, Automotive, Electrical Industry, Business Services, Food Manufacturing, Energy Services, Medicine and others. Knowledge-based technologies employ artificial intelligence methods to heuristically address problems that cannot be solved by means of formal techniques. These technologies draw on standard and novel approaches from various disciplines within Computer Science, including Knowledge Engineering, Natural Language Processing, Decision Support Systems, Artificial Intelligence, Databases, Software Engineering, etc. As a combination of different fields of Artificial Intelligence, the area of Knowledge-Based Systems applies knowledge representation, case-based reasoning, neural networks, Semantic Web and ICTs used...

  11. AtlasT4SS: A curated database for type IV secretion systems

    Directory of Open Access Journals (Sweden)

    Souza Rangel C

    2012-08-01

    Full Text Available Abstract Background The type IV secretion system (T4SS) can be classified as a large family of macromolecule transporter systems, divided into three recognized sub-families according to their well-known functions. The major sub-family is the conjugation system, which allows transfer of genetic material, such as a nucleoprotein, via cell contact among bacteria. Also, the conjugation system can transfer genetic material from bacteria to eukaryotic cells; such is the case with the T-DNA transfer of Agrobacterium tumefaciens to host plant cells. The system of effector protein transport constitutes the second sub-family, and the third one corresponds to the DNA uptake/release system. Genome analyses have revealed numerous T4SS in Bacteria and Archaea. The purpose of this work was to organize, classify, and integrate the T4SS data into a single database, called AtlasT4SS - the first public database devoted exclusively to this prokaryotic secretion system. Description The AtlasT4SS is a manually curated database that describes a large number of proteins related to the type IV secretion system reported so far in Gram-negative and Gram-positive bacteria, as well as in Archaea. The database was created using the RDBMS MySQL and the Catalyst framework, based on the Perl programming language and the Model-View-Controller (MVC) design pattern for the Web. The current version holds a comprehensive collection of 1,617 T4SS proteins from 58 Bacteria (49 Gram-negative and 9 Gram-positive), one Archaea and 11 plasmids. By applying the bi-directional best hit (BBH) relationship in pairwise genome comparison, it was possible to obtain a core set of 134 clusters of orthologous genes encoding T4SS proteins. Conclusions In our database we present one way of classifying orthologous groups of T4SSs in a hierarchical classification scheme with three levels.
The first level comprises four classes that are based on the organization of genetic determinants, shared homologies, and
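    The bi-directional best hit (BBH) relationship used above to derive orthologous clusters can be sketched as follows; the gene names and alignment scores are invented for illustration.

```python
def best_hits(scores):
    """For each query gene, the subject gene with the highest alignment score.
    `scores` maps (query, subject) pairs to a similarity score."""
    best = {}
    for (query, subject), s in scores.items():
        if query not in best or s > scores[(query, best[query])]:
            best[query] = subject
    return best

def bidirectional_best_hits(scores_ab, scores_ba):
    """A pair (a, b) is a BBH ortholog candidate when a's best hit in
    genome B is b AND b's best hit in genome A is a."""
    ab, ba = best_hits(scores_ab), best_hits(scores_ba)
    return {(a, b) for a, b in ab.items() if ba.get(b) == a}

# Toy scores (hypothetical): A1<->B1 and A2<->B2 are mutual best hits.
scores_ab = {("A1", "B1"): 90, ("A1", "B2"): 40, ("A2", "B2"): 70}
scores_ba = {("B1", "A1"): 88, ("B2", "A2"): 65, ("B2", "A1"): 40}
assert bidirectional_best_hits(scores_ab, scores_ba) == {("A1", "B1"), ("A2", "B2")}
```

    In practice the scores come from all-against-all sequence comparisons (e.g., BLAST) between two genomes; the mutual-best requirement is what filters out one-sided similarity.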

  12. Strong Ground Motion Database System for the Mexican Seismic Network

    Science.gov (United States)

    Perez-Yanez, C.; Ramirez-Guzman, L.; Ruiz, A. L.; Delgado, R.; Macías, M. A.; Sandoval, H.; Alcántara, L.; Quiroz, A.

    2014-12-01

    A web-based system for dissemination and archival of Mexican strong ground motion records is presented. More than 50 years of continuous strong ground motion instrumentation and monitoring in Mexico have provided a fundamental resource -several thousands of accelerograms- for better understanding earthquakes and their effects in the region. Led by the Institute of Engineering (IE) of the National Autonomous University of Mexico (UNAM), the engineering strong ground motion monitoring program at IE relies on a continuously growing network that at present includes more than 100 free-field stations and provides coverage to the seismic zones in the country. Among the stations, approximately 25% send the observed acceleration to a processing center in Mexico City in real-time, and the rest require manual access, remote or in situ, for later processing and cataloguing. As part of a collaboration agreement between UNAM and the National Center for Disaster Prevention, regarding the construction and operation of a unified seismic network, a web system was developed to allow access to UNAM's engineering strong motion archive and host data from other institutions. The system allows data searches under a relational database schema, following a general structure relying on four databases containing the: 1) free-field stations, 2) epicentral location associated with the strong motion records available, 3) strong motion catalogue, and 4) acceleration files -the core of the system. In order to locate and easily access one or several records of the data bank, the web system presents a variety of parameters that can be involved in a query (seismic event, region boundary, station name or ID, radial distance to source or peak acceleration). This homogeneous platform has been designed to facilitate dissemination and processing of the information worldwide.
Each file, in a standard format, contains information regarding the recording instrument, the station, the corresponding earthquake
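    The relational-schema search described above can be illustrated with a toy query joining station, event and catalogue tables. The schema and data below are assumptions for illustration, not UNAM's actual database.

```python
import sqlite3

# Hypothetical miniature of the described schema: stations, events, and a
# catalogue table linking them (the acceleration-file table is omitted).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stations (id TEXT PRIMARY KEY, name TEXT);
CREATE TABLE events (id INTEGER PRIMARY KEY, magnitude REAL);
CREATE TABLE catalogue (station_id TEXT, event_id INTEGER, peak_accel REAL);
""")
conn.execute("INSERT INTO stations VALUES ('CUP5', 'Ciudad Universitaria')")
conn.execute("INSERT INTO events VALUES (1, 8.1)")
conn.execute("INSERT INTO catalogue VALUES ('CUP5', 1, 0.03)")

# A search combining query parameters: records above a peak-acceleration threshold.
rows = conn.execute("""
    SELECT s.name, e.magnitude, c.peak_accel
    FROM catalogue c
    JOIN stations s ON s.id = c.station_id
    JOIN events e ON e.id = c.event_id
    WHERE c.peak_accel >= ?
""", (0.01,)).fetchall()
assert rows == [('Ciudad Universitaria', 8.1, 0.03)]
```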

  13. The Multi-level Recovery of Main-memory Real-time Database Systems with ECBH

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    Storing the whole database in main memory is a common way to process real-time transactions in real-time database systems. The recovery mechanism of Main-memory Real-time Database Systems (MMRTDBS) should reflect the characteristics of both main-memory and real-time databases, because their structures are quite different from those of other conventional database systems. In this paper, therefore, we propose a multi-level recovery mechanism for main-memory real-time database systems with Extendible Chained Bucket Hashing (ECBH). Since real-time data occur in real-time systems, the recovery mechanism must take them into account as well. According to our performance tests, this mechanism can improve transaction concurrency, reducing the transaction deadline miss rate.
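    The general shape of main-memory database recovery underlying such mechanisms — reload a checkpoint image, then replay a redo log of committed writes — can be sketched as follows. This is an assumed textbook design, not the paper's ECBH-based multi-level mechanism.

```python
# Sketch: a main-memory store that survives a crash via a checkpoint image
# plus a redo log of committed writes (fuzzy-checkpoint details omitted).
class MainMemoryDB:
    def __init__(self):
        self.data, self.redo_log, self.checkpoint = {}, [], {}

    def commit(self, txn_writes):
        self.redo_log.append(dict(txn_writes))  # log before applying
        self.data.update(txn_writes)

    def take_checkpoint(self):
        self.checkpoint = dict(self.data)       # persist a snapshot
        self.redo_log.clear()                   # truncate the log

    def recover(self):
        """After a crash: reload the checkpoint, replay the redo log in order."""
        self.data = dict(self.checkpoint)
        for writes in self.redo_log:
            self.data.update(writes)

db = MainMemoryDB()
db.commit({"a": 1})
db.take_checkpoint()
db.commit({"b": 2})
db.data = {}           # simulate a crash wiping main memory
db.recover()
assert db.data == {"a": 1, "b": 2}
```

    A multi-level variant would checkpoint and log at different granularities (e.g., per hash bucket) so that recovery of hot data can proceed before the whole database is rebuilt.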

  14. Medical Robots: Current Systems and Research Directions

    OpenAIRE

    Beasley, Ryan A.

    2012-01-01

    First used medically in 1985, robots now make an impact in laparoscopy, neurosurgery, orthopedic surgery, emergency response, and various other medical disciplines. This paper provides a review of medical robot history and surveys the capabilities of current medical robot systems, primarily focusing on commercially available systems while covering a few prominent research projects. By examining robotic systems across time and disciplines, trends are discernible that imply future capabilities ...

  15. The Current Status of Germplum Database: a Tool for Characterization of Plum Genetic Resources in Romania

    Directory of Open Access Journals (Sweden)

    Monica Harta

    2016-11-01

    Full Text Available In Romania, Prunus genetic resources are kept in collections of varieties, populations and biotypes, mainly located in research and development institutes or fruit growing stations and, in the last years, by some private enterprises. Creating the experimental model for the Germplum database based on phenotypic descriptors and SSR molecular markers analysis is an important and topical objective for the efficient characterization of genetic resources and also for establishing a public-private partnership for the effective management of plum germplasm resources in Romania. The technical development of the Germplum database was completed and data will be added continuously after characterizing each new accession.

  16. T4SP Database 2.0: An Improved Database for Type IV Secretion Systems in Bacterial Genomes with New Online Analysis Tools

    Directory of Open Access Journals (Sweden)

    Na Han

    2016-01-01

    Full Text Available Type IV secretion system (T4SS) can mediate the passage of macromolecules across cellular membranes and is essential for virulent and genetic material exchange among bacterial species. The Type IV Secretion Project 2.0 (T4SP 2.0) database is an improved and extended version of the platform released in 2013 aimed at assisting with the detection of Type IV secretion systems (T4SS) in bacterial genomes. This advanced version provides users with web server tools for detecting the existence and variations of T4SS genes online. The new interface for the genome browser provides a user-friendly access to the most complete and accurate resource of T4SS gene information (e.g., gene number, name, type, position, sequence, related articles, and quick links to other webs). Currently, this online database includes T4SS information of 5239 bacterial strains. Conclusions. T4SS is one of the most versatile secretion systems necessary for the virulence and survival of bacteria and the secretion of protein and/or DNA substrates from a donor to a recipient cell. This database on virB/D genes of the T4SS system will help scientists worldwide to improve their knowledge on secretion systems and also identify potential pathogenic mechanisms of various microbial species.

  17. T4SP Database 2.0: An Improved Database for Type IV Secretion Systems in Bacterial Genomes with New Online Analysis Tools

    Science.gov (United States)

    Han, Na; Yu, Weiwen; Qiang, Yujun

    2016-01-01

    Type IV secretion system (T4SS) can mediate the passage of macromolecules across cellular membranes and is essential for virulent and genetic material exchange among bacterial species. The Type IV Secretion Project 2.0 (T4SP 2.0) database is an improved and extended version of the platform released in 2013 aimed at assisting with the detection of Type IV secretion systems (T4SS) in bacterial genomes. This advanced version provides users with web server tools for detecting the existence and variations of T4SS genes online. The new interface for the genome browser provides a user-friendly access to the most complete and accurate resource of T4SS gene information (e.g., gene number, name, type, position, sequence, related articles, and quick links to other webs). Currently, this online database includes T4SS information of 5239 bacterial strains. Conclusions. T4SS is one of the most versatile secretion systems necessary for the virulence and survival of bacteria and the secretion of protein and/or DNA substrates from a donor to a recipient cell. This database on virB/D genes of the T4SS system will help scientists worldwide to improve their knowledge on secretion systems and also identify potential pathogenic mechanisms of various microbial species.

  18. Checkpointing and Recovery in Distributed and Database Systems

    Science.gov (United States)

    Wu, Jiang

    2011-01-01

    A transaction-consistent global checkpoint of a database records a state of the database which reflects the effect of only completed transactions and not the results of any partially executed transactions. This thesis establishes the necessary and sufficient conditions for a checkpoint of a data item (or the checkpoints of a set of data items) to…
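The definition above (a checkpoint that reflects only completed transactions) can be made concrete with a small executable sketch. This is not from the thesis; the transaction record layout (`start`, `commit`, `writes`) is invented for illustration.

```python
# Sketch: a checkpoint taken at time t is transaction-consistent only if it
# captures no write from a transaction that had not yet committed at t.
# The record layout ('start', 'commit', 'writes') is invented.

def is_transaction_consistent(checkpoint_time, transactions):
    for txn in transactions:
        wrote_before = any(t <= checkpoint_time for t, _item in txn["writes"])
        committed_before = (txn["commit"] is not None
                            and txn["commit"] <= checkpoint_time)
        # A captured write from a still-running transaction breaks consistency.
        if wrote_before and not committed_before:
            return False
    return True

txns = [
    {"start": 0, "commit": 5,  "writes": [(2, "x")]},  # done before t=10
    {"start": 7, "commit": 12, "writes": [(8, "y")]},  # in flight at t=10
]
print(is_transaction_consistent(10, txns))  # False: partial txn captured
print(is_transaction_consistent(12, txns))  # True: both txns committed
```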

  19. Design of Student Information Management Database Application System for Office and Departmental Target Responsibility System

    Science.gov (United States)

    Zhou, Hui

    Implementing an office and departmental target responsibility system is an inevitable outcome of higher education reform, and statistical processing of student information is an important part of student performance review under such a system. On the basis of an analysis of student evaluation, a student information management database application system is designed in this paper using relational database management system software. In order to implement the student information management function, the functional requirements, overall structure, data sheets and fields, data sheet associations and software code are designed in detail.

  20. An outline of compilation and processing of metadata in agricultural database management system WebAgris

    Directory of Open Access Journals (Sweden)

    Tomaž Bartol

    2008-01-01

    Full Text Available The paper tackles the international information system for agriculture, Agris, and the local processing of metadata with the database management software WebAgris. Operations are coordinated by the central repository at the FAO in Rome. Based on international standards and a unified methodology, national and regional centres collect and process local publications, and then send the records to the central unit, which makes the data globally accessible on the web. The earlier DOS-run application was based on the package Agrin CDS/ISIS. The current package, WebAgris, runs on web servers. Database construction tools and instructions are accessible on the FAO web pages. Data are entered through unified input masks. International consistency is achieved through authority control of certain elements, such as author or corporate affiliation. Central authority control is provided for subject headings, such as descriptors and subject categories. Subject indexing is based on the controlled multilingual thesaurus Agrovoc, also freely available on the Internet. This glossary has become an important tool in the area of international agricultural ontology. The data are exported to the central unit in XML format. The global database is currently accessible to everyone. This international cooperative information network combines elements of a document repository, electronic publishing, open archiving and full-text open access. Links with Google Scholar provide a good possibility for the international promotion of publishing.
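The XML export step mentioned above can be illustrated with a toy record builder. This is only a sketch: the element names below are simplified stand-ins, not the actual AGRIS AP schema that WebAgris exports.

```python
import xml.etree.ElementTree as ET

# Toy metadata record builder. Element names are invented stand-ins,
# not the real AGRIS AP schema exported by WebAgris.
def make_record(title, creator, descriptors):
    rec = ET.Element("record")
    ET.SubElement(rec, "title").text = title
    ET.SubElement(rec, "creator").text = creator  # subject to authority control
    subjects = ET.SubElement(rec, "subjects")
    for d in descriptors:                         # controlled Agrovoc terms
        ET.SubElement(subjects, "descriptor").text = d
    return ET.tostring(rec, encoding="unicode")

xml = make_record("Soil erosion in alpine pastures", "Novak, J.",
                  ["soil erosion", "pastures"])
print(xml)
```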

  1. 16th East-European Conference on Advances in Databases and Information Systems (ADBIS 2012)

    CERN Document Server

    Härder, Theo; Wrembel, Robert; Advances in Databases and Information Systems

    2013-01-01

    This volume is the second of two for the 16th East-European Conference on Advances in Databases and Information Systems (ADBIS 2012), held on September 18-21, 2012, in Poznań, Poland; the first was published in the LNCS series. This volume includes 27 research contributions, selected out of 90. The contributions cover a wide spectrum of topics in the database and information systems field, including: database foundations and theory, data modeling and database design, business process modeling, query optimization in relational and object databases, materialized view selection algorithms, index data structures, distributed systems, system and data integration, semi-structured data and databases, semantic data management, information retrieval, data mining techniques, data stream processing, trust and reputation in the Internet, and social networks. Thus, the content of this volume covers the research areas from fundamentals of databases, through still hot research problems (e.g., data mining, XML ...

  2. StreetTiVo: Using a P2P XML Database System to Manage Multimedia Data in Your Living Room

    NARCIS (Netherlands)

    Y. Zhang (Ying); A.P. de Vries (Arjen); P.A. Boncz (Peter); D. Hiemstra; R. Ordelman

    2009-01-01

    StreetTiVo is a project that aims at bringing research results into the living room; in particular, a mix of current results in the areas of Peer-to-Peer XML Database Management Systems (P2P XDBMS), advanced multimedia analysis techniques, and advanced information retrieval techniques.

  3. The relational database system of KM3NeT

    Science.gov (United States)

    Albert, Arnauld; Bozza, Cristiano

    2016-04-01

    The KM3NeT Collaboration is building a new generation of neutrino telescopes in the Mediterranean Sea. For these telescopes, a relational database is designed and implemented for several purposes, such as the centralised management of accounts, the storage of all documentation about components and the status of the detector and information about slow control and calibration data. It also contains information useful during the construction and the data acquisition phases. Highlights in the database schema, storage and management are discussed, along with design choices that have an impact on performance. In most cases, the database is not accessed directly by applications, but via a custom designed Web application server.

  4. Automatic system for ionization chamber current measurements.

    Science.gov (United States)

    Brancaccio, Franco; Dias, Mauro S; Koskinas, Marina F

    2004-12-01

    The present work describes an automatic system developed for current integration measurements at the Laboratório de Metrologia Nuclear of Instituto de Pesquisas Energéticas e Nucleares. This system includes software (graphic user interface and control) and a module connected to a microcomputer, by means of a commercial data acquisition card. Measurements were performed in order to check the performance and for validating the proposed design.

  5. Direct current power delivery system and method

    Science.gov (United States)

    Zhang, Di; Garces, Luis Jose; Dai, Jian; Lai, Rixin

    2016-09-06

    A power transmission system includes a first unit for carrying out the steps of receiving high voltage direct current (HVDC) power from an HVDC power line, generating an alternating current (AC) component indicative of a status of the first unit, and adding the AC component to the HVDC power line. Further, the power transmission system includes a second unit for carrying out the steps of generating a direct current (DC) voltage to transfer the HVDC power on the HVDC power line, wherein the HVDC power line is coupled between the first unit and the second unit, detecting a presence or an absence of the added AC component in the HVDC power line, and determining the status of the first unit based on the added AC component.
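The signaling scheme described in this abstract (adding a small AC component to the HVDC line and detecting its presence at the far end) can be sketched numerically with a single-bin DFT; the sampling rate, tone frequency, and amplitudes below are invented for illustration, not taken from the patent.

```python
import math

# Detect a small AC "status" tone riding on a DC link voltage using a
# single-bin DFT at the tone frequency. All parameters are invented;
# a real system would filter and threshold far more carefully.
def tone_magnitude(samples, sample_rate, freq):
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq * k / sample_rate)
             for k, s in enumerate(samples))
    im = sum(-s * math.sin(2 * math.pi * freq * k / sample_rate)
             for k, s in enumerate(samples))
    return 2 * math.hypot(re, im) / n   # amplitude estimate of the tone

rate, tone = 1000, 50                   # Hz (assumed values)
dc, amp = 500.0, 1.0                    # kV DC level, kV tone amplitude
with_tone = [dc + amp * math.sin(2 * math.pi * tone * k / rate)
             for k in range(1000)]
no_tone = [dc] * 1000
print(tone_magnitude(with_tone, rate, tone))  # close to 1.0: tone present
print(tone_magnitude(no_tone, rate, tone))    # close to 0.0: tone absent
```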

  6. The Nuclear Science References (NSR) Database and Web Retrieval System

    CERN Document Server

    Pritychenko, B; Kellett, M A; Singh, B; Totans, J

    2011-01-01

    The Nuclear Science References (NSR) database, and associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 200,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center http://www.nndc.bnl.gov/nsr and the International Atomic Energy Agency http://www-nds.iaea.org/nsr.

  7. CURRENT TRENDS IN PULSATILE DRUG DELIVERY SYSTEMS

    Directory of Open Access Journals (Sweden)

    S. R. Tajane et al.

    2012-01-01

    Full Text Available The purpose of this review of pulsatile drug delivery systems (PDDS) is to compile the recent literature, with special focus on the different types and approaches involved in the development of the formulation. The pulsatile drug delivery system is the most interesting time- and site-specific system. This system is designed for chronopharmacotherapy. Thus, to mimic the function of living systems and in view of emerging chronotherapeutic approaches, pulsatile delivery, which is meant to release a drug following a programmed lag phase, has attracted increasing interest in recent years. Diseases for which PDDS are promising include asthma, peptic ulcer, cardiovascular diseases, arthritis, attention deficit syndrome in children, cancer, diabetes, and hypercholesterolemia. Pulsatile drug delivery systems are divided into two types: preplanned systems and stimuli-induced systems. Preplanned systems are based on osmosis, rupturable layers, and erodible barrier coatings; stimuli-induced systems are based on electrical, temperature and chemically induced mechanisms. This review also summarizes some current PDDS already available on the market. These systems are useful for several problems encountered during the development of a pharmaceutical dosage form.

  8. Medical Robots: Current Systems and Research Directions

    Directory of Open Access Journals (Sweden)

    Ryan A. Beasley

    2012-01-01

    Full Text Available First used medically in 1985, robots now make an impact in laparoscopy, neurosurgery, orthopedic surgery, emergency response, and various other medical disciplines. This paper provides a review of medical robot history and surveys the capabilities of current medical robot systems, primarily focusing on commercially available systems while covering a few prominent research projects. By examining robotic systems across time and disciplines, trends are discernible that imply future capabilities of medical robots, for example, increased usage of intraoperative images, improved robot arm design, and haptic feedback to guide the surgeon.

  9. The relational clinical database: a possible solution to the star wars in registry systems.

    Science.gov (United States)

    Michels, D K; Zamieroski, M

    1990-12-01

    In summary, having data from other service areas available in a relational clinical database could resolve many of the problems existing in today's registry systems. Uniting sophisticated information systems into a centralized database system could definitely be a corporate asset in managing the bottom line.

  10. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  11. NADIR: A Flexible Archiving System Current Development

    Science.gov (United States)

    Knapic, C.; De Marco, M.; Smareglia, R.; Molinaro, M.

    2014-05-01

    The New Archiving Distributed InfrastructuRe (NADIR) is under development at the Italian center for Astronomical Archives (IA2) to increase the performance of the current archival software tools at the data center. Traditional software usually offers simple and robust solutions to perform data archiving and distribution, but is awkward to adapt and reuse in projects that have different purposes. Data evolution in terms of data model, format, publication policy, version, and meta-data content are the main threats to re-use. NADIR, using stable and mature framework features, addresses these very challenging issues. Its main characteristics are a configuration database, a multi-threading and multi-language environment (C++, Java, Python), special features to guarantee high scalability, modularity, robustness, error tracking, and tools to monitor with confidence the status of each project at each archiving site. In this contribution, the development of the core components is presented, commenting also on some performance and innovative features (multi-cast and publisher-subscriber paradigms). NADIR is planned to be developed as simply as possible, with default configurations for every project, first of all for LBT and other IA2 projects.

  12. ATLAS DAQ Configuration Databases

    Institute of Scientific and Technical Information of China (English)

    I. Alexandrov; A. Amorim; et al.

    2001-01-01

    The configuration databases are an important part of the Trigger/DAQ system of the future ATLAS experiment. This paper describes their current status, giving details of architecture, implementation, test results and plans for future work.

  13. The Bulgarian Odonata database – current status, organisation and a case study of new entries

    Directory of Open Access Journals (Sweden)

    Yordan Kutsarov

    2012-09-01

    Full Text Available The Bulgarian Odonata database is analysed for the period of the last 10 years. All new entries are summarised in individual species graphs representing the trends in data compilation. Special attention is paid to the role of communities in this process, with a single case study showing how a small contribution can elucidate important new information on underexplored areas. It is concluded that over the past 10 years mountain areas and the large Bulgarian rivers have been understudied; these should be the priority target areas of investigations undertaken in the near future.

  14. Review of Current Nuclear Vacuum System Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Carroll, M.; McCracken, J.; Shope, T.

    2003-02-25

    Nearly all industrial operations generate unwanted dust, particulate matter, and/or liquid wastes. Waste dust and particulates can be readily tracked to other work locations, and airborne particulates can be spread through ventilation systems to all locations within a building, and even vented outside the building - a serious concern for processes involving hazardous, radioactive, or nuclear materials. Several varieties of vacuum systems have been proposed and/or are commercially available for cleanup of both solid and liquid hazardous and nuclear materials. A review of current technologies highlights both the advantages and disadvantages of the various systems, and demonstrates the need for a system designed to address issues specific to hazardous and nuclear material cleanup. A review of previous and current hazardous/nuclear material cleanup technologies is presented. From simple conventional vacuums modified for use in industrial operations, to systems specifically engineered for such purposes, the advantages and disadvantages are examined in light of the following criteria: minimal worker exposure; minimal secondary waste generation; reduced equipment maintenance and consumable parts; simplicity of design, yet fully compatible with all waste types; and ease of use. The work effort reviews past, existing and proposed technologies in light of such considerations. Accomplishments of selected systems are presented, including identified areas where technological improvements could be suggested.

  15. Software design for a database driven system for accelerator magnet measurements

    Energy Technology Data Exchange (ETDEWEB)

    Brown, B.C.; Bleadon, M.E.; Glass, H.D.; Glosson, R.; Hanft, R.W.; Harding, D.J.; Mazur, P.O.; Pachnik, J.E.; Sim, J.W.; Trombly-Freytag, K.; Walbridge, D.G.

    1991-05-01

    Measurements of more than 1000 new magnets are needed for the Main Injector Project at Fermilab. In order to achieve efficiency and accuracy in measurements, we chose a database driven design for control of the measurement system. We will use a relational database to describe the measurement subjects and equipment. A logbook system defined in the database will provide for prescription of measurements to be carried out, description of measurements as they are carried out, and a comment database for less structured information. The operator interface will be built on X-windows. This paper will describe our system design. 2 refs.

  16. Drug development and nonclinical to clinical translational databases: past and current efforts.

    Science.gov (United States)

    Monticello, Thomas M

    2015-01-01

    The International Consortium for Innovation and Quality (IQ) in Pharmaceutical Development is a science-focused organization of pharmaceutical and biotechnology companies. The mission of the Preclinical Safety Leadership Group (DruSafe) of the IQ is to advance science-based standards for nonclinical development of pharmaceutical products and to promote high-quality and effective nonclinical safety testing that can enable human risk assessment. DruSafe is creating an industry-wide database to determine the accuracy with which the interpretation of nonclinical safety assessments in animal models correctly predicts human risk in the early clinical development of biopharmaceuticals. This initiative aligns with the 2011 Food and Drug Administration strategic plan to advance regulatory science and modernize toxicology to enhance product safety. Although similar in concept to the initial industry-wide concordance data set conducted by International Life Sciences Institute's Health and Environmental Sciences Institute (HESI/ILSI), the DruSafe database will proactively track concordance, include exposure data and large and small molecules, and will continue to expand with longer duration nonclinical and clinical study comparisons. The output from this work will help identify actual human and animal adverse event data to define both the reliability and the potential limitations of nonclinical data and testing paradigms in predicting human safety in phase 1 clinical trials. © 2014 by The Author(s).

  17. The Development of a Standard Database System for Republic of Korea Army’s Personnel Management.

    Science.gov (United States)

    1983-06-01

    ...for ROK Army personnel management? Which data items should be incorporated in a database? Which technique should be applied to design databases using a... (Master's thesis: The Development of a Standard Database System for Republic of Korea Army's Personnel Management, Naval Postgraduate School, Monterey, California.)

  18. Clustering, concurrency control, crash recovery, garbage collection, and security in object-oriented database management systems

    OpenAIRE

    1991-01-01

    This paper presents considerations about several topics that have a direct influence on data reliability and performance in object-oriented database management systems. These topics are: physical storage management (clustering), concurrency control, crash recovery, garbage collection, and database security. Each topic is illustrated by its application to the Tactical Database as designed for the Low Cost Combat Direction System. Naval Postgraduate School, Department of Computer Science, Cod...

  19. The relational database system of KM3NeT

    Directory of Open Access Journals (Sweden)

    Albert Arnauld

    2016-01-01

    Full Text Available The KM3NeT Collaboration is building a new generation of neutrino telescopes in the Mediterranean Sea. For these telescopes, a relational database is designed and implemented for several purposes, such as the centralised management of accounts, the storage of all documentation about components and the status of the detector and information about slow control and calibration data. It also contains information useful during the construction and the data acquisition phases. Highlights in the database schema, storage and management are discussed, along with design choices that have an impact on performance. In most cases, the database is not accessed directly by applications, but via a custom designed Web application server.

  20. Environmental Factor(tm) system: Superfund site information from five EPA databases (on cd-rom). Database

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-07-01

    Environmental Factor puts today's technology to work to provide a better, more cost-efficient and time-saving way to access EPA information on hazardous waste sites. Environmental consultants, insurers and reinsurers, corporate risk assessors, and companies actively involved in the generation, transport, storage or cleanup of hazardous waste materials can use its user-friendly information retrieval system to gain rapid access to vital information in immediately usable form. Search, retrieve, and export information in real time. No more waiting for the mail or overnight delivery services to deliver hard copies of voluminous listings and individual site reports. More than 200,000 pages of EPA hazardous waste site information are contained in 5 related databases: (1) Site data from the National Priority List (NPL) and CERCLIS databases, Potentially Responsible Parties (PRP) and Records of Decision (RODs) summaries; (2) Complete PRP information; (3) EPA Records of Decision (Full Text); (4) the entire Civil Enforcement Docket; and (5) a Glossary of EPA terms, abbreviations and acronyms. Environmental Factor's powerful database management engine gives even the most inexperienced computer user extensive search capabilities, including wildcard, phonetic and direct cross-reference searches across multiple databases. The first menu option delivers information from the NPL, CERCLIS site data, PRP and RODs summary information. Enter a set of search criteria and then immediately access displays containing information from all of these databases. Get full PRP information and full-text RODs by using their respective menu options. If your search turns up multiple items, a list of site names appears. To bring up the data, highlight the specific site you want and hit Enter. That's how easy it is to access the vast amount of data stored in the Environmental Factor CD-ROM.

  1. Current therapy of systemic sclerosis (scleroderma).

    Science.gov (United States)

    Müller-Ladner, U; Benning, K; Lang, B

    1993-04-01

    Treatment of systemic sclerosis (scleroderma) presents a challenge to both the patient and the physician. Established approaches include long-term physiotherapy, disease-modifying agents such as D-penicillamine, and treatment of organ involvement. These efforts are often unsatisfactory since the results are poor. However, recent advances include treatment of Raynaud's phenomenon (plasmapheresis, stanozolol, and prostacyclin analogues), scleroderma renal crisis (angiotensin-converting enzyme inhibitors), and gastric hypomotility (cisapride). This article covers the current approaches to the disease-modifying therapy including those related to the function of collagen-producing fibroblasts, vascular alterations, and the cellular and humoral immune system, as well as treatment of involved organs.

  2. 18th East European Conference on Advances in Databases and Information Systems and Associated Satellite Events

    CERN Document Server

    Ivanovic, Mirjana; Kon-Popovska, Margita; Manolopoulos, Yannis; Palpanas, Themis; Trajcevski, Goce; Vakali, Athena

    2015-01-01

    This volume contains the papers of three workshops and the doctoral consortium, organized in the framework of the 18th East-European Conference on Advances in Databases and Information Systems (ADBIS’2014). The 3rd International Workshop on GPUs in Databases (GID’2014) is devoted to the utilization of Graphics Processing Units in database environments, a topic that has not yet received enough attention from the database community; the workshop aims to popularize GPUs and provide a forum for discussing research ideas and their potential to achieve high speedups in many database applications. The 3rd International Workshop on Ontologies Meet Advanced Information Systems (OAIS’2014) has a twofold objective: to present new and challenging issues in the contribution of ontologies to designing high-quality information systems, and new research and technological developments which use ontologie...

  3. Network Management Temporal Database System Based on XTACACS Protocol

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper first analyzes the basic concepts of Cisco router user certification and the XTACACS certification protocol, and describes the data architecture of the user data protocol; it then defines a kind of doubly temporal database and log-in transaction processing based on network management properties, and finally introduces the implementation technology and method.

  4. An Updating System for the Gridded Population Database of China Based on Remote Sensing, GIS and Spatial Database Technologies

    Directory of Open Access Journals (Sweden)

    Xiaohuan Yang

    2009-02-01

    Full Text Available The spatial distribution of population is closely related to land use and land cover (LULC) patterns on both regional and global scales. Population can be redistributed onto geo-referenced square grids according to this relation. In the past decades, various approaches to monitoring LULC using remote sensing and Geographic Information Systems (GIS) have been developed, which makes it possible to efficiently update geo-referenced population data. A Spatial Population Updating System (SPUS) is developed for updating the gridded population database of China based on remote sensing, GIS and spatial database technologies, with a spatial resolution of 1 km by 1 km. The SPUS can process standard Moderate Resolution Imaging Spectroradiometer (MODIS) L1B data integrated with a Pattern Decomposition Method (PDM) and an LULC-Conversion Model to obtain patterns of land use and land cover, and provide input parameters for a Population Spatialization Model (PSM). The PSM embedded in the SPUS is used for generating 1 km by 1 km gridded population data in each population distribution region based on natural and socio-economic variables. Validation results from finer township-level census data of Yishui County suggest that the gridded population database produced by the SPUS is reliable.
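The dasymetric idea behind a population spatialization model, which redistributes a region's census total over its grid cells in proportion to land-use-derived weights, can be sketched as follows. The weight table is invented for illustration, not taken from the PSM.

```python
# Invented per-class weights; the real PSM derives them from natural and
# socio-economic variables estimated per population distribution region.
LULC_WEIGHT = {"urban": 10.0, "cropland": 2.0, "forest": 0.2, "water": 0.0}

def spatialize(total_population, cells):
    """cells: the LULC class of each 1 km x 1 km grid cell in a region."""
    weights = [LULC_WEIGHT[c] for c in cells]
    wsum = sum(weights)
    return [total_population * w / wsum for w in weights]

grid = spatialize(12000, ["urban", "urban", "cropland", "forest", "water"])
print(grid)       # most people land in the urban cells
print(sum(grid))  # the regional census total is preserved (~12000)
```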

  5. 75 FR 18255 - Passenger Facility Charge Database System for Air Carrier Reporting

    Science.gov (United States)

    2010-04-09

    ... Federal Aviation Administration Passenger Facility Charge Database System for Air Carrier Reporting AGENCY... interested parties of the availability of the Passenger Facility Charge (PFC) database system to report PFC..., 2010. FOR FURTHER INFORMATION CONTACT: Jane Johnson, Financial Analysis and Passenger Facility Charge...

  6. Translation of the Data Flow Query Language for the Multimodel, Multibackend Database System

    Science.gov (United States)

    1994-09-01

    the DBMS, but are not there. Or the organization can purchase each of the monolingual database systems it needs separately. Either alternative results... Implementation of a Functional Daplex Data Interface for the Multimodel and Multilingual Database System, Master's Thesis, Naval Postgraduate School

  7. EPAUS9R - An Energy Systems Database for use with the Market Allocation (MARKAL) Model

    Science.gov (United States)

    EPA’s MARKAL energy system databases estimate future-year technology dispersals and associated emissions. These databases are valuable tools for exploring a variety of future scenarios for the U.S. energy-production systems that can impact climate change c

  8. Dynamic Real Time Distributed Sensor Network Based Database Management System Using XML, JAVA and PHP Technologies

    Directory of Open Access Journals (Sweden)

    D. Sudharsan

    2012-03-01

    Full Text Available Wireless Sensor Networks (WSN) are well known as distributed real-time systems for various applications. In order to handle the increasing functionality and complexity of high-resolution spatio-temporal sensory databases, there is a strong need for a system/tool to analyse real-time data associated with distributed sensor network systems. The few packages/systems available to maintain near-real-time database systems/management are expensive and require expertise. Hence, there is a need for a cost-effective and easy-to-use dynamic real-time data repository system to provide real-time data (raw as well as in usable units) in a structured format. In the present study, a distributed sensor network system, with Agrisens (AS) and FieldServer (FS) as well as an FS-based Flux Tower and FieldTwitter, is used, which consists of a network of sensors and field images to observe/collect real-time weather, crop and environmental parameters for precision agriculture. The real-time FieldServer-based spatio-temporal high-resolution dynamic sensory data were converted into a Dynamic Real-Time Database Management System (DRTDBMS) in a structured format for both raw and converted (with usable units) data. A web interface has been developed to access the DRTDBMS, and an exclusive domain has been created with the help of open/free Information and Communication Technology (ICT) tools in Extensible Markup Language (XML), using PHP (Hypertext Preprocessor) algorithms and eXtensible HyperText Markup Language (XHTML) self-scripting. The proposed DRTDBMS prototype, called GeoSense DRTDBMS, which is part of the ongoing Indo-Japan initiative 'ICT and Sensor Network based Decision Support Systems in Agriculture and Environment Assessment', will be integrated with the GeoSense cloud server to provide database (dynamic real-time weather/soil/crop and environmental parameters) and modeling services (crop water requirement and simulated rice yield modeling). GeoSense-cloud server
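The raw-to-usable-units conversion step described above can be sketched as follows; the calibration formulas and field names are invented for illustration, not taken from the GeoSense system.

```python
# Invented calibration formulas mapping raw sensor counts to usable units.
CALIBRATIONS = {
    "temp":     lambda raw: raw * 0.1 - 40.0,  # counts -> deg C (assumed)
    "humidity": lambda raw: raw / 10.0,        # counts -> %RH (assumed)
}

def to_structured(timestamp, raw_readings):
    """Keep both the raw counts and the converted values, as the abstract
    describes for the DRTDBMS."""
    return {
        "timestamp": timestamp,
        "raw": dict(raw_readings),
        "converted": {k: CALIBRATIONS[k](v) for k, v in raw_readings.items()},
    }

rec = to_structured("2012-03-01T10:00:00Z", {"temp": 650, "humidity": 480})
print(rec["converted"])  # {'temp': 25.0, 'humidity': 48.0}
```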

  9. The INFN-CNAF Tier-1 GEMSS Mass Storage System and database facility activity

    Science.gov (United States)

    Ricci, Pier Paolo; Cavalli, Alessandro; Dell'Agnello, Luca; Favaro, Matteo; Gregori, Daniele; Prosperini, Andrea; Pezzi, Michele; Sapunenko, Vladimir; Zizzi, Giovanni; Vagnoni, Vincenzo

    2015-05-01

    The consolidation of Mass Storage services at the INFN-CNAF Tier1 Storage department that has occurred during the last 5 years, resulted in a reliable, high performance and moderately easy-to-manage facility that provides data access, archive, backup and database services to several different use cases. At present, the GEMSS Mass Storage System, developed and installed at CNAF and based upon an integration between the IBM GPFS parallel filesystem and the Tivoli Storage Manager (TSM) tape management software, is one of the largest hierarchical storage sites in Europe. It provides storage resources for about 12% of LHC data, as well as for data of other non-LHC experiments. Files are accessed using standard SRM Grid services provided by the Storage Resource Manager (StoRM), also developed at CNAF. Data access is also provided by XRootD and HTTP/WebDaV endpoints. Besides these services, an Oracle database facility is in production characterized by an effective level of parallelism, redundancy and availability. This facility is running databases for storing and accessing relational data objects and for providing database services to the currently active use cases. It takes advantage of several Oracle technologies, like Real Application Cluster (RAC), Automatic Storage Manager (ASM) and Enterprise Manager centralized management tools, together with other technologies for performance optimization, ease of management and downtime reduction. The aim of the present paper is to illustrate the state-of-the-art of the INFN-CNAF Tier1 Storage department infrastructures and software services, and to give a brief outlook to forthcoming projects. A description of the administrative, monitoring and problem-tracking tools that play a primary role in managing the whole storage framework is also given.

  10. [Brescia Local Health Authority Population Database: a method based on current data for monitoring chronic diseases and management].

    Science.gov (United States)

    Lonati, Fulvio; Scarcella, Carmelo; Indelicato, Annamaria; Brioschi, Alessia; Magoni, Michele; Medea, Gerardo; Saleri, Nada; Orizio, Grazia; Donato, Francesco

    2008-01-01

    The Local Health Authority (ASL) of Brescia has activated an innovative method of surveillance, based on the integration of current databases into a single database, the Population Database (BDA), for monitoring the prevalence of chronic diseases in the area. The BDA has been set up using automatic record linkage of databases regarding disease exemptions, drug treatments, hospital admissions and outpatient specialist visits. This enabled us to calculate the prevalence of various chronic diseases (single or grouped) and the gross average expenditure per person for each disease group. Out of the 1,092,201 people in the Brescia ASL, 275,601 had at least one chronic disease (prevalence 252.3/1,000). Diseases of the circulatory system were the most frequent (169.1/1,000), followed by diabetes mellitus (36.6/1,000). Having had an organ transplant was the condition with the highest per-person expenditure (Euro 16,170/year). The highest total expenditure was associated with circulatory diseases, because of their high prevalence (Euro 470,377,413). A single computerised database is capable of achieving epidemiological aims (assessing population health status) as well as managerial and health care aims (resources management, control of the appropriateness of services, adaptation of diagnostic-therapeutic methods to international guidelines and standards).
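    The record-linkage step the abstract describes can be sketched as a union of person identifiers drawn from each administrative source. All IDs, source names, and numbers below are invented for illustration; the real BDA linkage criteria are more elaborate.

```python
# Minimal sketch of the record-linkage idea behind a population database:
# each administrative source contributes the person IDs it flags for a
# chronic condition, and the union per person forms the prevalence
# numerator. All IDs and source contents are invented for illustration.

def link_chronic_cases(*sources):
    """Union the person IDs found in any source database extract."""
    cases = set()
    for source in sources:
        cases.update(source)
    return cases

exemptions = {"p01", "p02", "p05"}           # disease-specific exemptions
drug_treatments = {"p02", "p03"}             # chronic drug prescriptions
hospital_admissions = {"p03", "p04", "p05"}  # admissions with a chronic diagnosis

cases = link_chronic_cases(exemptions, drug_treatments, hospital_admissions)
population = 10
prevalence_per_1000 = 1000 * len(cases) / population
print(sorted(cases))        # ['p01', 'p02', 'p03', 'p04', 'p05']
print(prevalence_per_1000)  # 500.0
```

    In practice the union would be keyed on a stable citizen identifier and each source would also carry the disease code, so per-disease prevalences can be computed the same way.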

  11. Development of database systems for safety of repositories for disposal of radioactive wastes

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yeong Hun; Han, Jeong Sang; Shin, Hyeon Jun; Ham, Sang Won; Kim, Hye Seong [Yonsei Univ., Seoul (Korea, Republic of)]

    1999-03-15

    In this study, a GSIS is developed to maximize the effectiveness of the database system. For this purpose, the spatial relations of data from various fields are constructed in the database, which was developed for the site selection and management of a repository for radioactive waste disposal. By constructing an integration system that can link attribute and spatial data, it is possible to evaluate the safety of a repository effectively and economically. The suitability of integrating the database and GSIS is examined by constructing the database in a test district whose site characteristics are similar to those of a repository for radioactive waste disposal.

  12. Optimizing Parallel Access to the BaBar Database System Using CORBA Servers

    Institute of Scientific and Technical Information of China (English)

    Jacek Becla; Igor Gaponenko

    2001-01-01

    The BaBar experiment collected around 20 TB of data during its first 6 months of running. Now, after 18 months, the data size exceeds 300 TB, and according to prognosis this is a small fraction of the data expected in the next few months. In order to keep up with the data, significant effort was put into tuning the database system. It led to great performance improvements, as well as to inevitable system expansion: 450 simultaneous processing nodes are used for data reconstruction alone. It is believed that further growth beyond 600 nodes will happen soon. In such an environment, many complex operations are executed simultaneously on hundreds of machines, putting a huge load on data servers and increasing network traffic. Introducing two CORBA servers halved startup time and dramatically offloaded database servers: data servers as well as lock servers. The paper describes details of the design and implementation of the two servers recently introduced in the BaBar system: the conditions OID server and the Clustering Server. The first experience of using these servers is discussed. A discussion of a Collection Server for data analysis, currently being designed, is included.

  13. OrientX: An Integrated, Schema Based Native XML Database System

    Institute of Scientific and Technical Information of China (English)

    MENG Xiaofeng; WANG Xiaofeng; XIE Min; ZHANG Xin; ZHOU Junfeng

    2006-01-01

    The increasing number of XML repositories has stimulated the design of systems that can store and query XML data efficiently. OrientX, a native XML database system, is designed to meet this requirement. In this paper, we describe the system structure and design of OrientX, an integrated, schema-based native XML database. The main contributions of OrientX are: a) we have implemented an integrated native XML database system, which supports native storage of XML data, on top of which we can handle XPath and XQuery efficiently; b) in our OrientX system, schema information is fully explored to guide storage, optimization and query processing.
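    OrientX itself is not publicly runnable here, but the kind of path query it evaluates can be illustrated with the XPath subset in Python's standard-library ElementTree, on an invented document.

```python
# Sketch of XPath-style querying over XML, as a native XML database
# evaluates it. The document and element names are invented; ElementTree
# supports only a limited XPath subset, enough for the illustration.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<library>
  <book year="2006"><title>Native XML Storage</title></book>
  <book year="2001"><title>Query Processing</title></book>
</library>
""")

# Path step: select every book title.
titles = [b.findtext("title") for b in doc.findall("./book")]
print(titles)  # ['Native XML Storage', 'Query Processing']

# Attribute predicate, the kind of step schema information helps optimize.
recent = doc.findall("./book[@year='2006']/title")
print([t.text for t in recent])  # ['Native XML Storage']
```

    A schema-aware engine such as the one described would use the DTD/XML Schema to choose a storage layout and to prune which stored subtrees a path like `./book[@year='2006']/title` needs to touch.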

  14. The NCBI Taxonomy database.

    Science.gov (United States)

    Federhen, Scott

    2012-01-01

    The NCBI Taxonomy database (http://www.ncbi.nlm.nih.gov/taxonomy) is the standard nomenclature and classification repository for the International Nucleotide Sequence Database Collaboration (INSDC), comprising the GenBank, ENA (EMBL) and DDBJ databases. It includes organism names and taxonomic lineages for each of the sequences represented in the INSDC's nucleotide and protein sequence databases. The taxonomy database is manually curated by a small group of scientists at the NCBI who use the current taxonomic literature to maintain a phylogenetic taxonomy for the source organisms represented in the sequence databases. The taxonomy database is a central organizing hub for many of the resources at the NCBI, and provides a means for clustering elements within other domains of the NCBI web site, for internal linking between domains of the Entrez system, and for linking out to taxon-specific external resources on the web. Our primary purpose is to index the domain of sequences as conveniently as possible for our user community.
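    The lineage lookup such a database performs reduces to walking a parent-pointer table from a taxon up to the root. The toy table below uses real taxon names but is a hand-built miniature, not an extract of the actual NCBI taxonomy records.

```python
# Sketch of a taxonomic lineage walk over a parent-pointer table,
# the core lookup behind "organism name -> full lineage". The table
# is an invented miniature, not real NCBI data.
parent = {
    "Homo sapiens": "Homo",
    "Homo": "Hominidae",
    "Hominidae": "Primates",
    "Primates": "Mammalia",
}

def lineage(taxon):
    """Return the chain from a taxon up to the root of the toy tree."""
    chain = [taxon]
    while chain[-1] in parent:
        chain.append(parent[chain[-1]])
    return chain

print(" > ".join(reversed(lineage("Homo sapiens"))))
# Mammalia > Primates > Hominidae > Homo > Homo sapiens
```

    The real database stores numeric taxids with parent links and rank labels, so the same walk also yields the rank (genus, family, order, ...) at each step.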

  15. Exploring the Ligand-Protein Networks in Traditional Chinese Medicine: Current Databases, Methods, and Applications

    Directory of Open Access Journals (Sweden)

    Mingzhu Zhao

    2013-01-01

    Traditional Chinese medicine (TCM), which has thousands of years of clinical application in China and other Asian countries, is the pioneer of “multicomponent-multitarget” and network pharmacology. Although its efficacy is not in doubt, it is difficult to elucidate a convincing underlying mechanism of TCM due to its complex composition and unclear pharmacology. The use of ligand-protein networks has been gaining significant value in the history of drug discovery, while its application in TCM is still at an early stage. This paper first surveys TCM databases for virtual screening, which have been greatly expanded in size and data diversity in recent years. On that basis, different screening methods and strategies for identifying active ingredients and targets of TCM are outlined according to the amount of network information available, on both the ligand bioactivity and protein structure sides. Furthermore, successful in silico target identification attempts are discussed in detail, along with experiments exploring the ligand-protein networks of TCM. Finally, it is concluded that ligand-protein networks can prospectively be used not only to predict the protein targets of a small molecule, but also to explore the mode of action of TCM.

  16. Future Robotics Database Management System along with Cloud TPS

    CERN Document Server

    S, Vijaykumar

    2011-01-01

    This paper deals with memory management issues in robotics. In our proposal we address one of the major issues in creating a humanoid: the database. The database issue is the complicated part of robotics schema design, and we suggest the NoSQL database concept for effective data retrieval, so that humanoid robots gain massive thinking ability when searching for items using chained instructions. Query transactions in robotics need effective consistency, so we use a recent technology called CloudTPS, which guarantees full ACID properties, allowing the robot to make its queries using multi-item transactions; through this we obtain data consistency in data retrieval. In addition, we include MapReduce concepts: the job is split among the respective workers so that the data can be processed in parallel.
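    The MapReduce point can be sketched with a standard-library worker pool: split a workload into chunks handled by workers (map), then combine the partial results (reduce). The data and job are invented; a thread pool stands in for the paper's distributed workers, purely for portability of the example.

```python
# Map/reduce sketch: chunks are processed by pooled workers in parallel,
# then partial results are merged. Data and task are invented; threads
# stand in for distributed worker nodes.
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def count_tokens(chunk):
    """Map step: count tokens in one chunk of command/log text."""
    counts = {}
    for token in chunk.split():
        counts[token] = counts.get(token, 0) + 1
    return counts

def merge(a, b):
    """Reduce step: combine two partial count dictionaries."""
    for key, value in b.items():
        a[key] = a.get(key, 0) + value
    return a

chunks = ["grasp cup", "cup on table", "table grasp grasp"]
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(count_tokens, chunks))
totals = reduce(merge, partials, {})
print(totals["grasp"])  # 3
```

    In a real deployment the map step would run on separate machines over shards of a NoSQL store, and the reduce step would merge per-shard results.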

  17. Renewing the Dutch economics syllabus for higher secondary education: educational reforms from past to current databases

    NARCIS (Netherlands)

    Rol, Menno

    2015-01-01

    Dutch secondary school economics education was never at rest. It currently finds itself once more in an interesting phase of transition. New developments in behavioural economics have been incorporated into the exam subject matter while the deletion of Keynesian model making from the corpus

  18. Developing a Geological Management Information System: National Important Mining Zone Database

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Geo-data is a foundation for the prediction and assessment of ore resources, so managing and making full use of those data, including the geography, geology, mineral deposits, aeromagnetics, gravity, geochemistry and remote sensing databases, is very significant. We developed the national important mining zone database (NIMZDB) to manage 14 national important mining zone databases to support a new round of ore deposit prediction. We found that attention should be paid to the following issues: ① data accuracy: integrity, logic consistency, and attribute, spatial and time accuracy; ② management of both attribute and spatial data in the same system; ③ transforming data between MapGIS and ArcGIS; ④ data sharing and security; ⑤ data searches that can query both attribute and spatial data. The accuracy of input data is guaranteed, and the search, analysis and translation of data between MapGIS and ArcGIS has been made convenient, via the development of a data-checking module and a data-managing module based on MapGIS and ArcGIS. Using ArcSDE, we based data sharing on a client/server system, and attribute and spatial data are also managed in the same system.

  19. Design of special purpose database for credit cooperation bank business processing network system

    Science.gov (United States)

    Yu, Yongling; Zong, Sisheng; Shi, Jinfa

    2011-12-01

    With the popularization of e-finance in cities, the construction of e-finance is transferring to the vast rural market and quickly developing in depth. Developing a business processing network system suitable for rural credit cooperative banks can make business processing convenient, and it has a good application prospect. In this paper, we analyse the necessity of adopting a special purpose distributed database in a credit cooperation bank system, give the corresponding distributed database system structure, and design the special purpose database and interface technology. The application in Tongbai Rural Credit Cooperatives has shown that the system has better performance and higher efficiency.

  20. Security in the CernVM File System and the Frontier Distributed Database Caching System

    CERN Document Server

    Dykstra, David

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently both CVMFS and Frontier have added X509-based integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
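    Neither CVMFS nor Frontier internals are reproduced here; the sketch below only shows the secure-hash idea both systems rely on: a client re-hashes data received from an untrusted proxy cache and compares it against the hash published in a signed catalogue. Function and variable names are invented.

```python
# Content-integrity sketch: the server publishes a SHA-256 digest of each
# object in a (digitally signed) catalogue; the client recomputes the
# digest over what it actually received and rejects mismatches.
# Names and payloads are invented for illustration.
import hashlib

def publish(content: bytes) -> str:
    """Server side: record the content hash in the signed catalogue."""
    return hashlib.sha256(content).hexdigest()

def verify(content: bytes, catalogue_hash: str) -> bool:
    """Client side: accept data from an untrusted proxy cache only if
    it matches the catalogue hash."""
    return hashlib.sha256(content).hexdigest() == catalogue_hash

original = b"condition-data v42"
catalogue_hash = publish(original)

print(verify(original, catalogue_hash))                # True
print(verify(b"tampered by a proxy", catalogue_hash))  # False
```

    The catalogue itself is what the X.509/digital-signature layer protects; once its hashes are trusted, any number of caching proxies can serve the bulk data without being trusted themselves.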

  1. Security in the CernVM File System and the Frontier Distributed Database Caching System

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, D.; Blomer, J. [CERN]

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  2. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  4. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in the field and provides a balanced analysis of the state of the art. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer-aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  5. Tracking the violent criminal offender through DNA typing profiles--a national database system concept.

    Science.gov (United States)

    Baechtel, F S; Monson, K L; Forsen, G E; Budowle, B; Kearney, J J

    1991-01-01

    Implementation of standard methods for the conduct of restriction fragment length polymorphism analysis into the protocols of United States crime laboratories offers an unprecedented opportunity for the establishment of a national computer database system to enable interchange of DNA typing information. The FBI Laboratory, in concert with crime laboratory representatives, has taken the initiative in planning and implementing such a database system. The Combined DNA Index System (CODIS) will be composed of three sub-indices: a statistical database, which will contain frequencies of DNA fragment alleles in various population groups; an investigative database which will enable linkage of violent crimes through a common subject; and a convicted felon database that will serve to maintain DNA typing profiles for comparison to profiles developed from violent crimes where the suspect may be unknown.
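    The three sub-indices the abstract describes can be sketched as three relational tables, with the investigative use case expressed as a join between crime-scene profiles and offender profiles. SQLite stands in for the real system; all table and column names, profiles, and IDs are invented, not the actual CODIS schema.

```python
# Hedged sketch of the three CODIS sub-indices as relational tables.
# Schema, columns, and data are invented for illustration only.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE statistical_index (   -- allele frequencies per population
    locus TEXT, allele TEXT, population TEXT, frequency REAL);
CREATE TABLE forensic_index (      -- profiles from unsolved violent crimes
    case_id TEXT, profile TEXT);
CREATE TABLE offender_index (      -- profiles of convicted offenders
    offender_id TEXT, profile TEXT);
""")
db.execute("INSERT INTO forensic_index VALUES ('case-17', 'AB-12-XY')")
db.execute("INSERT INTO offender_index VALUES ('off-03', 'AB-12-XY')")

# Investigative query: link a crime-scene profile to a known offender.
hit = db.execute("""
    SELECT f.case_id, o.offender_id
    FROM forensic_index f JOIN offender_index o ON f.profile = o.profile
""").fetchone()
print(hit)  # ('case-17', 'off-03')
```

    A self-join on `forensic_index` would likewise link separate crimes through a common unknown subject, the second use the abstract names.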

  6. GaussDal: An open source database management system for quantum chemical computations

    Science.gov (United States)

    Alsberg, Bjørn K.; Bjerke, Håvard; Navestad, Gunn M.; Åstrand, Per-Olof

    2005-09-01

    An open source software system called GaussDal for management of results from quantum chemical computations is presented. Chemical data contained in output files from different quantum chemical programs are automatically extracted and incorporated into a relational database (PostgreSQL). The Structured Query Language (SQL) is used to extract combinations of chemical properties (e.g., molecules, orbitals, thermo-chemical properties, basis sets, etc.) into data tables for further data analysis, processing and visualization. This type of data management is particularly suited for projects involving a large number of molecules. In the current version of GaussDal, parsers for Gaussian and Dalton output files are supported; future versions may also include parsers for other quantum chemical programs. For visualization and analysis of generated data tables from GaussDal we have used the locally developed open source software SciCraft.

    Program summary
    Title of program: GaussDal
    Catalogue identifier: ADVT
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVT
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computers: Any
    Operating system under which the system has been tested: Linux
    Programming language used: Python
    Memory required to execute with typical data: 256 MB
    No. of bits in word: 32 or 64
    No. of processors used: 1
    Has the code been vectorized or parallelized?: No
    No. of lines in distributed program, including test data, etc.: 543 531
    No. of bytes in distribution program, including test data, etc.: 7 718 121
    Distribution format: tar.gzip file
    Nature of physical problem: Handling of large amounts of data from quantum chemistry computations.
    Method of solution: Use of an SQL-based database and quantum chemistry software-specific parsers.
    Restriction on the complexity of the problem: Program is currently limited to Gaussian and Dalton output, but expandable to other formats.
    Generates subsets of multiple data tables from
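    The parse-then-query pattern the abstract describes can be illustrated in miniature. GaussDal itself targets PostgreSQL and real Gaussian/Dalton output; the "output file" format, table, and values below are invented, and SQLite stands in for the database.

```python
# Sketch of the GaussDal pattern: a parser pulls properties out of
# program output text into a relational table, then SQL selects the
# property combination of interest. File format and data are invented.
import re
import sqlite3

fake_output = """
Molecule: H2O
SCF energy: -76.0267
Molecule: NH3
SCF energy: -56.2102
"""

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE scf (molecule TEXT, energy REAL)")

# Parser step: extract (molecule, energy) pairs from the text.
pairs = re.findall(r"Molecule: (\S+)\nSCF energy: (-?\d+\.\d+)", fake_output)
db.executemany("INSERT INTO scf VALUES (?, ?)", pairs)

# Query step: SQL filters and orders the extracted properties.
rows = db.execute(
    "SELECT molecule, energy FROM scf WHERE energy < -60 ORDER BY energy"
).fetchall()
print(rows)  # [('H2O', -76.0267)]
```

    Supporting a new quantum chemistry package then reduces to writing one more parser that fills the same tables, which is exactly how the abstract frames future extensions.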

  7. Transport and Environment Database System (TRENDS): Maritime Air Pollutant Emission Modelling

    DEFF Research Database (Denmark)

    Georgakaki, Aliki; Coffey, Robert; Lock, Grahm

    2005-01-01

    This paper reports the development of the maritime module within the framework of the Transport and Environment Database System (TRENDS) project. A detailed database has been constructed for the calculation of energy consumption and air pollutant emissions. Based on an in-house database... changes from findings reported in Methodologies for Estimating air pollutant Emissions from Transport (MEET). The database operates on statistical data provided by Eurostat, which describe vessel and freight movements from and towards EU 15 major ports. Data are at port to Maritime Coastal Area (MCA... Examples of the results obtained by the database are presented. These include detailed air pollutant emission calculations for bulk carriers entering the port of Helsinki, as an example of the database operation, and aggregate results for different types... ...with a view to this purpose, are mentioned.

  8. PACSY, a relational database management system for protein structure and chemical shift analysis.

    Science.gov (United States)

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L

    2012-10-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.

  9. Applying Cognitive Load Theory to the Redesign of a Conventional Database Systems Course

    Science.gov (United States)

    Mason, Raina; Seton, Carolyn; Cooper, Graham

    2016-01-01

    Cognitive load theory (CLT) was used to redesign a Database Systems course for Information Technology students. The redesign was intended to address poor student performance and low satisfaction, and to provide a more relevant foundation in database design and use for subsequent studies and industry. The original course followed the conventional…

  10. Field Validation of Food Service Listings: A Comparison of Commercial and Online Geographic Information System Databases

    Science.gov (United States)

    Seliske, Laura; Pickett, William; Bates, Rebecca; Janssen, Ian

    2012-01-01

    Many studies examining the food retail environment rely on geographic information system (GIS) databases for location information. The purpose of this study was to validate information provided by two GIS databases, comparing the positional accuracy of food service places within a 1 km circular buffer surrounding 34 schools in Ontario, Canada. A commercial database (InfoCanada) and an online database (Yellow Pages) provided the addresses of food service places. Actual locations were measured using a global positioning system (GPS) device. The InfoCanada and Yellow Pages GIS databases provided the locations for 973 and 675 food service places, respectively. Overall, 749 (77.1%) and 595 (88.2%) of these were located in the field. The online database had a higher proportion of food service places found in the field. The GIS locations of 25% of the food service places were located within approximately 15 m of their actual location, 50% were within 25 m, and 75% were within 50 m. This validation study provided a detailed assessment of errors in the measurement of the location of food service places in the two databases. The location information was more accurate for the online database, however, when matching criteria were more conservative, there were no observed differences in error between the databases. PMID:23066385
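    The validation step behind these error figures amounts to computing the great-circle distance between each database location and its GPS-measured location, then summarizing the distribution. The coordinate pairs below are invented, not the study's data.

```python
# Sketch of positional-accuracy validation: haversine distance between a
# GIS database point and the GPS ground-truth point, summarized over all
# matched places. Coordinates are invented for illustration.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# (database location, GPS-measured location) pairs for three food outlets
pairs = [
    ((44.2312, -76.4860), (44.2313, -76.4861)),
    ((44.2400, -76.5000), (44.2402, -76.5003)),
    ((44.2500, -76.5100), (44.2500, -76.5104)),
]
errors = sorted(haversine_m(*db_pt, *gps_pt) for db_pt, gps_pt in pairs)
median_error_m = errors[len(errors) // 2]
print(round(median_error_m, 1))
```

    Reading off the 25th, 50th, and 75th percentiles of such an error list gives exactly the ~15 m / 25 m / 50 m style of summary reported in the abstract.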

  11. Field validation of food service listings: a comparison of commercial and online geographic information system databases.

    Science.gov (United States)

    Seliske, Laura; Pickett, William; Bates, Rebecca; Janssen, Ian

    2012-08-01

    Many studies examining the food retail environment rely on geographic information system (GIS) databases for location information. The purpose of this study was to validate information provided by two GIS databases, comparing the positional accuracy of food service places within a 1 km circular buffer surrounding 34 schools in Ontario, Canada. A commercial database (InfoCanada) and an online database (Yellow Pages) provided the addresses of food service places. Actual locations were measured using a global positioning system (GPS) device. The InfoCanada and Yellow Pages GIS databases provided the locations for 973 and 675 food service places, respectively. Overall, 749 (77.1%) and 595 (88.2%) of these were located in the field. The online database had a higher proportion of food service places found in the field. The GIS locations of 25% of the food service places were located within approximately 15 m of their actual location, 50% were within 25 m, and 75% were within 50 m. This validation study provided a detailed assessment of errors in the measurement of the location of food service places in the two databases. The location information was more accurate for the online database, however, when matching criteria were more conservative, there were no observed differences in error between the databases.

  12. Gastric Antral Vascular Ectasia in Systemic Sclerosis: Current Concepts

    Directory of Open Access Journals (Sweden)

    Raphael Hernando Parrado

    2015-01-01

    Introduction. Gastric antral vascular ectasia (GAVE) is a rare entity with unique endoscopic appearance described as “watermelon stomach.” It has been associated with systemic sclerosis but the pathophysiological changes leading to GAVE have not been explained and still remain uncertain. Methods. The databases Medline, Scopus, Embase, PubMed, and Cochrane were searched for relevant papers. The main search words were “Gastric antral vascular ectasia,” “Watermelon Stomach,” “GAVE,” “Scleroderma,” and “Systemic Sclerosis.” Fifty-four papers were considered for this review. Results. GAVE is a rare entity in the spectrum of manifestations of systemic sclerosis with unknown pathogenesis. Most patients with systemic sclerosis and GAVE present with asymptomatic anemia, iron deficiency anemia, or heavy acute gastrointestinal bleeding. Symptomatic therapy and endoscopic ablation are the first line of treatment. A surgical approach may be recommended for patients who do not respond to medical or endoscopic therapies. Conclusion. GAVE can be properly diagnosed and treated. Early diagnosis is key in the management of GAVE because it makes symptomatic therapies and endoscopic approaches feasible. A high index of suspicion is critical. Future studies and a critical review of the current findings about GAVE are needed to understand the role of this condition in systemic sclerosis.

  13. Gastric Antral Vascular Ectasia in Systemic Sclerosis: Current Concepts.

    Science.gov (United States)

    Parrado, Raphael Hernando; Lemus, Hernan Nicolas; Coral-Alvarado, Paola Ximena; Quintana López, Gerardo

    2015-01-01

    Introduction. Gastric antral vascular ectasia (GAVE) is a rare entity with unique endoscopic appearance described as "watermelon stomach." It has been associated with systemic sclerosis but the pathophysiological changes leading to GAVE have not been explained and still remain uncertain. Methods. Databases Medline, Scopus, Embase, PubMed, and Cochrane were searched for relevant papers. The main search words were "Gastric antral vascular ectasia," "Watermelon Stomach," "GAVE," "Scleroderma," and "Systemic Sclerosis." Fifty-four papers were considered for this review. Results. GAVE is a rare entity in the spectrum of manifestations of systemic sclerosis with unknown pathogenesis. Most patients with systemic sclerosis and GAVE present with asymptomatic anemia, iron deficiency anemia, or heavy acute gastrointestinal bleeding. Symptomatic therapy and endoscopic ablation are the first-line of treatment. Surgical approach may be recommended for patients who do not respond to medical or endoscopic therapies. Conclusion. GAVE can be properly diagnosed and treated. Early diagnosis is key in the management of GAVE because it makes symptomatic therapies and endoscopic approaches feasible. A high index of suspicion is critical. Future studies and a critical review of the current findings about GAVE are needed to understand the role of this condition in systemic sclerosis.

  14. Content Based Retrieval Database Management System with Support for Similarity Searching and Query Refinement

    Science.gov (United States)

    2002-01-01

  15. An Approach for Integrating Data Mining with Saudi Universities Database Systems: Case Study

    OpenAIRE

    Mohamed Osman Hegazi; Mohammad Alhawarat; Anwer Hilal

    2016-01-01

    This paper presents an approach for integrating data mining algorithms within a Saudi university's database system, viz., Prince Sattam Bin Abdulaziz University (PSAU), as a case study. The approach is based on a bottom-up methodology; it starts by providing a data mining application that represents a solution to one of the problems that face Saudi universities' systems. After that, it integrates and implements the solution inside the university's database system. This process is then repeated to e...

  16. Catalytic currents in dithiophosphate-iodide systems

    Energy Technology Data Exchange (ETDEWEB)

    Gabdullin, M.G.; Garifzyanov, A.R.; Toropova, V.F.

    1986-01-01

    Catalytic currents of oxidizing agents are used to determine rate constants of simultaneous chemical reactions. In the present paper, the authors investigated the electrochemical oxidation of iodide ions in the presence of a series of dithiophosphates (RO)₂PSS⁻ (R = CH₃, C₂H₅, n-C₃H₇, n-C₄H₉, iso-C₄H₉, and sec-C₄H₉) at a glassy carbon electrode. It is known that dithiophosphates (DTP) are strong reducing agents and are oxidized by iodine. At the same time, as shown previously, the electrochemical oxidation of DTP occurs at more positive potentials in comparison with the oxidation potential of iodide ions. This suggested that a catalytic effect may be manifested in DTP-I⁻ systems. Current-voltage curves are shown for solutions of I⁻ in the absence and in the presence of DTP. All data indicate a catalytic nature of the electrode process. The obtained data show that the rates of reactions of DTP with iodine decrease with increasing volume and branching of the substituents at the phosphorus atom.

  17. 7th Asian Conference on Intelligent Information and Database Systems (ACIIDS 2015)

    CERN Document Server

    Nguyen, Ngoc; Batubara, John; New Trends in Intelligent Information and Database Systems

    2015-01-01

    Intelligent information and database systems are two closely related subfields of modern computer science which have been known for over thirty years. They focus on the integration of artificial intelligence and classic database technologies to create the class of next generation information systems. The book focuses on new trends in intelligent information and database systems and discusses topics addressed to the foundations and principles of data, information, and knowledge models, methodologies for intelligent information and database systems analysis, design, and implementation, their validation, maintenance and evolution. They cover a broad spectrum of research topics discussed both from the practical and theoretical points of view such as: intelligent information retrieval, natural language processing, semantic web, social networks, machine learning, knowledge discovery, data mining, uncertainty management and reasoning under uncertainty, intelligent optimization techniques in information systems, secu...

  18. CRITICAL ASSESSMENT OF AUDITING CONTRIBUTIONS TO EFFECTIVE AND EFFICIENT SECURITY IN DATABASE SYSTEMS

    Directory of Open Access Journals (Sweden)

    Olumuyiwa O. Matthew

    2015-03-01

    Database auditing has become a very crucial aspect of security as organisations increase their adoption of database management systems (DBMS) as the major asset that keeps, maintains and monitors sensitive information. Database auditing is the group of activities involved in observing a set of stored data in order to be aware of the actions of users. The work presented here outlines the main auditing techniques and methods. Some architecture-based auditing systems are also considered, to assess the contribution of auditing to database security. A framework of several stages to be used in the instigation of auditing is proposed, and some issues relating to the handling of audit trails are discussed. The paper also itemizes some of the key impacts of the concept on security and how compliance with government policies and regulations is enforced through auditing. Once the framework is adopted, it will provide support to database auditors and DBAs.
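    One concrete technique such auditing frameworks build on is a trigger-based audit trail: every change to a sensitive table is copied, with a timestamp, into a log table the user cannot silently bypass. The sketch below uses SQLite; all table and column names are invented.

```python
# Minimal trigger-based audit-trail sketch: an AFTER UPDATE trigger
# records old and new values of a sensitive column. Schema and data
# are invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL);
CREATE TABLE audit_trail (
    ts DEFAULT CURRENT_TIMESTAMP,
    account_id INTEGER, old_balance REAL, new_balance REAL);
CREATE TRIGGER audit_balance AFTER UPDATE OF balance ON accounts
BEGIN
    INSERT INTO audit_trail (account_id, old_balance, new_balance)
    VALUES (OLD.id, OLD.balance, NEW.balance);
END;
""")
db.execute("INSERT INTO accounts VALUES (1, 100.0)")
db.execute("UPDATE accounts SET balance = 40.0 WHERE id = 1")

row = db.execute(
    "SELECT account_id, old_balance, new_balance FROM audit_trail"
).fetchone()
print(row)  # (1, 100.0, 40.0)
```

    Production DBMSs offer richer mechanisms (statement auditing, fine-grained audit policies, log shipping to a separate server), but the before/after-image idea is the same.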

  19. The BRENDA enzyme information system-From a database to an expert system.

    Science.gov (United States)

    Schomburg, I; Jeske, L; Ulbrich, M; Placzek, S; Chang, A; Schomburg, D

    2017-04-21

    Enzymes, representing the largest and by far most complex group of proteins, play an essential role in all processes of life, including metabolism, gene expression, cell division, the immune system, and others. Their function, also connected to most diseases or stress control, makes them interesting targets for research and applications in biotechnology, medical treatments, or diagnosis. Their functional parameters and other properties are collected, integrated, and made available to the scientific community in the BRaunschweig ENzyme DAtabase (BRENDA). In the last 30 years BRENDA has developed into one of the most highly used biological databases worldwide. The data contents, the process of data acquisition, data integration and control, the ways to access the data, and visualizations provided by the website are described and discussed. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  20. EcoCyc: fusing model organism databases with systems biology.

    Science.gov (United States)

    Keseler, Ingrid M; Mackie, Amanda; Peralta-Gil, Martin; Santos-Zavaleta, Alberto; Gama-Castro, Socorro; Bonavides-Martínez, César; Fulcher, Carol; Huerta, Araceli M; Kothari, Anamika; Krummenacker, Markus; Latendresse, Mario; Muñiz-Rascado, Luis; Ong, Quang; Paley, Suzanne; Schröder, Imke; Shearer, Alexander G; Subhraveti, Pallavi; Travers, Mike; Weerasinghe, Deepika; Weiss, Verena; Collado-Vides, Julio; Gunsalus, Robert P; Paulsen, Ian; Karp, Peter D

    2013-01-01

    EcoCyc (http://EcoCyc.org) is a model organism database built on the genome sequence of Escherichia coli K-12 MG1655. Expert manual curation of the functions of individual E. coli gene products in EcoCyc has been based on information found in the experimental literature for E. coli K-12-derived strains. Updates to EcoCyc content continue to improve the comprehensive picture of E. coli biology. The utility of EcoCyc is enhanced by new tools available on the EcoCyc web site, and the development of EcoCyc as a teaching tool is increasing the impact of the knowledge collected in EcoCyc.

  1. Version based spatial record management techniques for spatial database management system

    Institute of Scientific and Technical Information of China (English)

    KIM Ho-seok; KIM Hee-taek; KIM Myung-keun; BAE Hae-young

    2004-01-01

    The search operation was the principal operation in existing spatial database management systems, but update operations on spatial data, such as tracking, now occur frequently, so the need to improve concurrency among transactions is increasing. In general database management systems, many techniques have been studied to solve the concurrency problems of transactions. Among them, multi-version algorithms minimize interference among transactions. However, applying an existing multi-version algorithm to a spatial database management system to improve transaction concurrency wastes storage, because an entire version of a spatial record must be stored even if only the aspatial data of the record has changed. This paper proposes record management techniques that maintain separate versions of the aspatial data and the spatial data, to decrease the storage wasted on record versions and to improve concurrency among transactions.
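The storage saving can be illustrated with a toy version store. This is our own sketch of the idea, not the authors' algorithm, and all names and data are invented:

```python
# Toy sketch: separate version chains for the spatial and aspatial parts of
# a record, so an update touching only aspatial data shares (rather than
# copies) the unchanged geometry.
class SpatialRecord:
    def __init__(self, geometry, attributes):
        self.geom_versions = [geometry]         # spatial part
        self.attr_versions = [(attributes, 0)]  # aspatial part -> geometry index

    def update(self, attributes, geometry=None):
        if geometry is not None:                # spatial change: new geometry version
            self.geom_versions.append(geometry)
        # each aspatial version points at the current geometry version
        self.attr_versions.append((attributes, len(self.geom_versions) - 1))

    def version(self, n):
        attrs, gi = self.attr_versions[n]
        return self.geom_versions[gi], attrs

rec = SpatialRecord(geometry=[(0, 0), (1, 1)], attributes={"name": "truck-7"})
rec.update({"name": "truck-7", "speed": 40})    # aspatial-only: geometry shared
rec.update({"name": "truck-7", "speed": 42}, geometry=[(1, 1), (2, 2)])
print(rec.version(1))  # ([(0, 0), (1, 1)], {'name': 'truck-7', 'speed': 40})
```

An aspatial-only update, the common case in tracking workloads, adds a small attribute version that references the existing geometry instead of duplicating it.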

  2. A user's guide to particle physics computer-searchable databases on the SLAC-SPIRES system

    Energy Technology Data Exchange (ETDEWEB)

    Rittenberg, A.; Armstrong, F.E.; Levine, B.S.; Trippe, T.G.; Wohl, C.G.; Yost, G.P.; Whalley, M.R.; Addis, L.

    1986-09-01

    This report discusses five computer-searchable databases located at SLAC which are of interest to particle physicists. These databases assist the user in literature-searching, provide numerical data extracted from papers, and contain information about experiments. We describe the databases briefly, tell how to use the SPIRES database management system to access them interactively, and give several examples of their use.

  3. Development of the severe accident risk information database management system SARD

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Kwang Il; Kim, Dong Ha

    2003-01-01

    The main purpose of this report is to introduce the essential features and functions of a severe accident risk information management system, SARD (Severe Accident Risk Database Management System) version 1.0, which has been developed at the Korea Atomic Energy Research Institute, and the database management and data retrieval procedures of the system. The system can automatically store and systematically manage plant-specific severe accident analysis results for core damage sequences leading to severe accidents, and can intelligently search the related severe accident risk information. For that purpose, the database mainly takes into account the plant-specific severe accident sequences obtained from Level 2 Probabilistic Safety Assessments (PSAs), base-case analysis results for various severe accident sequences (such as code responses and summaries of key event timings), and related sensitivity analysis results for key input parameters and models employed in the severe accident codes. Accordingly, the system can be applied effectively in supporting the Level 2 PSA of similar plants, in fast prediction and intelligent retrieval of the severe accident risk information required for a specific plant whose information was previously stored in the database, and in the development of plant-specific severe accident management strategies.

  4. Integrated Controlling System and Unified Database for High Throughput Protein Crystallography Experiments

    Science.gov (United States)

    Gaponov, Yu. A.; Igarashi, N.; Hiraki, M.; Sasajima, K.; Matsugaki, N.; Suzuki, M.; Kosuge, T.; Wakatsuki, S.

    2004-05-01

    An integrated controlling system and a unified database for high throughput protein crystallography experiments have been developed. Main features of protein crystallography experiments (purification, crystallization, crystal harvesting, data collection, data processing) were integrated into the software under development. All information necessary to perform protein crystallography experiments is stored (except raw X-ray data that are stored in a central data server) in a MySQL relational database. The database contains four mutually linked hierarchical trees describing protein crystals, data collection of protein crystal and experimental data processing. A database editor was designed and developed. The editor supports basic database functions to view, create, modify and delete user records in the database. Two search engines were realized: direct search of necessary information in the database and object oriented search. The system is based on TCP/IP secure UNIX sockets with four predefined sending and receiving behaviors, which support communications between all connected servers and clients with remote control functions (creating and modifying data for experimental conditions, data acquisition, viewing experimental data, and performing data processing). Two secure login schemes were designed and developed: a direct method (using the developed Linux clients with secure connection) and an indirect method (using the secure SSL connection using secure X11 support from any operating system with X-terminal and SSH support). A part of the system has been implemented on a new MAD beam line, NW12, at the Photon Factory Advanced Ring for general user experiments.

  5. A brief history of cancer: age-old milestones underlying our current knowledge database.

    Science.gov (United States)

    Faguet, Guy B

    2015-05-01

    This mini-review chronicles the history of cancer ranging from cancerous growths discovered in dinosaur fossils, suggestions of cancer in Ancient Egyptian papyri written in 1500-1600 BC, and the first documented case of human cancer 2,700 years ago, to contributions by pioneers beginning with Hippocrates and ending with the originators of radiation and medical oncology. Fanciful notions that soon fell into oblivion are mentioned, such as Paracelsus and van Helmont substituting mysterious 'ens' or 'archeus' systems for Galen's black bile. Likewise, unfortunate episodes, such as Virchow claiming Remak's hypotheses as his own, remind us that human shortcomings can affect otherwise excellent scientists. However, age-old benchmark observations, hypotheses, and practices of historic and scientific interest are underscored, excerpts included, as precursors of recent discoveries that shaped modern medicine. Examples include: Petit's total mastectomy with excision of the axillary glands for breast cancer, now a routine practice; Peyrilhe's 'ichorous matter', a cancer-causing factor he tested for transmissibility one century before Rous confirmed the virus-cancer link; Hill's warning of the dangers of tobacco snuff, heralding today's cancer pandemic caused by smoking; Pott's report of scrotum cancer in chimney sweeps, the first proven occupational cancer; and Velpeau's remarkable foresight that a yet unknown subcellular element would have to be discovered in order to define the nature of cancer, a view confirmed by cancer genetics two centuries later; ending with Röntgen and the Curies, and Gilman et al., who ushered in radiation (1896, 1919) and medical oncology (1942), respectively.

  6. Dynamics of the southern California current system

    Science.gov (United States)

    di Lorenzo, Emanuele

    The dynamics of seasonal to long-term variability of the Southern California Current System (SCCS) is studied using a four-dimensional space-time analysis of the 52-year (1949-2000) California Cooperative Oceanic Fisheries Investigations (CalCOFI) hydrography combined with a sensitivity analysis of an eddy-permitting primitive equation ocean model under various forcing scenarios. The dynamics of the seasonal cycle in the SCCS can be summarized as follows. In spring upwelling favorable winds force an upward tilt of the isopycnals along the coast (equatorward flow). Quasi-linear Rossby waves are excited by the ocean adjustment to the isopycnal displacement. In summer as these waves propagate offshore poleward flow develops at the coast and the Southern California Eddy (SCE) reaches its seasonal maximum. Positive wind stress curl in the Southern California Bight is important in maintaining poleward flow and locally reinforcing the SCE with an additional upward displacement of isopycnals through Ekman pumping. At the end of summer and throughout the fall instability processes within the SCE are a generating mechanism for mesoscale eddies, which fully develop in the offshore waters during winter. On decadal timescales a warming trend in temperature (1 °C) and a deepening trend in the depth of the mean thermocline (20 m) between 1950 and 1998 are found to be primarily forced by large-scale decadal fluctuations in surface heat fluxes combined with horizontal advection by the mean currents. After 1998 the surface heat fluxes suggest the beginning of a period of cooling, which is consistent with colder observed ocean temperatures. The temporal and spatial distribution of the warming is coherent over the entire northeast Pacific Ocean. Salinity changes are decoupled from temperature and uncorrelated with indices of large-scale oceanic variability. Temporal modulation of southward horizontal advection by the California Current is the primary mechanism controlling local

  7. A Multilevel Transaction Problem for Multilevel Secure Database Systems and its Solution for the Replicated Architecture

    Science.gov (United States)

    1992-01-01

    interesting a research issue. An algorithm for this case, using a multiversion technique, will be the subject of future work. In addition, there is a... "Multiversion Concurrency Control for Multilevel Secure Database Systems" in Proceedings of the IEEE Symposium on Security and Privacy, pp. 369-383, Oakland, CA, May 1990. 7. William T. Maimone and Ira B. Greenberg, "Single-Level Multiversion Schedulers for Multilevel Secure Database Systems" in

  8. Smart travel guide: from internet image database to intelligent system

    Science.gov (United States)

    Chareyron, Gaël; Da Rugna, Jérôme; Cousin, Saskia

    2011-02-01

    To help tourists discover a city, a region or a park, many options are provided by public tourism travel centers, by free online guides, or by dedicated guidebooks. Nonetheless, these guides provide only mainstream information which does not conform to a particular tourist's behavior. On the other hand, several online image databases allow users to upload their images and to localize each image on a map. These websites are representative of tourism practices and constitute a proxy for analyzing tourism flows. This work therefore intends to answer the question: knowing what I have visited and what other people have visited, where should I go now? This process needs to profile users, sites and photos. Our paper presents the acquired data and the relationships between photographers, sites and photos, and introduces the model designed to correctly estimate the interest of each tourism site. The third part shows an application of our schema: a smart travel guide on geolocated mobile devices. This Android application is a travel guide truly matching the user's wishes.

  9. Application of the British Food Standards Agency nutrient profiling system in a French food composition database.

    Science.gov (United States)

    Julia, Chantal; Kesse-Guyot, Emmanuelle; Touvier, Mathilde; Méjean, Caroline; Fezeu, Léopold; Hercberg, Serge

    2014-11-28

    Nutrient profiling systems are powerful tools for public health initiatives, as they aim at categorising foods according to their nutritional quality. The British Food Standards Agency (FSA) nutrient profiling system (FSA score) has been validated in a British food database, but the application of the model in other contexts has not yet been evaluated. The objective of the present study was to assess the application of the British FSA score in a French food composition database. Foods from the French NutriNet-Santé study food composition table were categorised according to their FSA score using the Office of Communication (OfCom) cut-off value ('healthier' ≤ 4 for foods and ≤ 1 for beverages; 'less healthy' >4 for foods and >1 for beverages) and distribution cut-offs (quintiles for foods, quartiles for beverages). Foods were also categorised according to the food groups used for the French Programme National Nutrition Santé (PNNS) recommendations. Foods were weighted according to their relative consumption in a sample drawn from the NutriNet-Santé study (n = 4225), representative of the French population. Classification of foods according to the OfCom cut-offs was consistent with food groups described in the PNNS: 97·8 % of fruit and vegetables, 90·4 % of cereals and potatoes and only 3·8 % of sugary snacks were considered as 'healthier'. Moreover, variability in the FSA score allowed for a discrimination between subcategories in the same food group, confirming the possibility of using the FSA score as a multiple category system, for example as a basis for front-of-pack nutrition labelling. Application of the FSA score in the French context would adequately complement current public health recommendations.
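As a minimal sketch, the OfCom cut-off categorisation described above reduces to a threshold test; the scores fed in here are illustrative inputs, not values computed from real nutrient data:

```python
# OfCom binary categorisation on the FSA nutrient profile score:
# 'healthier' if score <= 4 for foods (<= 1 for beverages), else 'less healthy'.
def ofcom_category(fsa_score, is_beverage=False):
    cutoff = 1 if is_beverage else 4
    return "healthier" if fsa_score <= cutoff else "less healthy"

print(ofcom_category(-2))                   # healthier
print(ofcom_category(12))                   # less healthy
print(ofcom_category(2, is_beverage=True))  # less healthy: beverage cutoff is stricter
```

The distribution cut-offs mentioned above (quintiles for foods, quartiles for beverages) would instead bin the same score into multiple categories, which is what makes the score usable for front-of-pack labelling.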

  10. System Thinking Tutorial and Reef Link Database Fact Sheets

    Science.gov (United States)

    The sustainable well-being of communities is inextricably linked to both the health of the earth’s ecosystems and the health of humans living in the community. Currently, many policy and management decisions are made without considering the goods and services humans derive from ...

  11. A new scoring system in Cystic Fibrosis: statistical tools for database analysis - a preliminary report.

    Science.gov (United States)

    Hafen, G M; Hurst, C; Yearwood, J; Smith, J; Dzalilov, Z; Robinson, P J

    2008-10-05

    Cystic fibrosis is the most common fatal genetic disorder in the Caucasian population. Scoring systems for assessment of Cystic fibrosis disease severity have been used for almost 50 years, without being adapted to the milder phenotype of the disease in the 21st century. The aim of this current project is to develop a new scoring system using a database and employing various statistical tools. This study protocol reports the development of the statistical tools in order to create such a scoring system. The evaluation is based on the Cystic Fibrosis database from the cohort at the Royal Children's Hospital in Melbourne. Initially, unsupervised clustering of all data records was performed using a range of clustering algorithms. In particular, incremental clustering algorithms were used. The clusters obtained were characterised using rules from decision trees and the results examined by clinicians. In order to obtain a clearer definition of classes, expert opinion of each individual's clinical severity was sought. After data preparation including expert opinion of an individual's clinical severity on a 3-point scale (mild, moderate and severe disease), two multivariate techniques were used throughout the analysis to establish a method that would have a better success in feature selection and model derivation: 'Canonical Analysis of Principal Coordinates' and 'Linear Discriminant Analysis'. A 3-step procedure was performed with (1) selection of features, (2) extracting 5 severity classes out of a 3 severity class as defined per expert opinion and (3) establishment of calibration datasets. (1) Feature selection: CAP has a more effective "modelling" focus than DA. (2) Extraction of 5 severity classes: after variables were identified as important in discriminating contiguous CF severity groups on the 3-point scale as mild/moderate and moderate/severe, Discriminant Function (DF) was used to determine the new groups mild, intermediate moderate, moderate, intermediate

  12. A new scoring system in Cystic Fibrosis: statistical tools for database analysis – a preliminary report

    Science.gov (United States)

    Hafen, GM; Hurst, C; Yearwood, J; Smith, J; Dzalilov, Z; Robinson, PJ

    2008-01-01

    Background Cystic fibrosis is the most common fatal genetic disorder in the Caucasian population. Scoring systems for assessment of Cystic fibrosis disease severity have been used for almost 50 years, without being adapted to the milder phenotype of the disease in the 21st century. The aim of this current project is to develop a new scoring system using a database and employing various statistical tools. This study protocol reports the development of the statistical tools in order to create such a scoring system. Methods The evaluation is based on the Cystic Fibrosis database from the cohort at the Royal Children's Hospital in Melbourne. Initially, unsupervised clustering of all data records was performed using a range of clustering algorithms. In particular, incremental clustering algorithms were used. The clusters obtained were characterised using rules from decision trees and the results examined by clinicians. In order to obtain a clearer definition of classes, expert opinion of each individual's clinical severity was sought. After data preparation including expert opinion of an individual's clinical severity on a 3-point scale (mild, moderate and severe disease), two multivariate techniques were used throughout the analysis to establish a method that would have a better success in feature selection and model derivation: 'Canonical Analysis of Principal Coordinates' and 'Linear Discriminant Analysis'. A 3-step procedure was performed with (1) selection of features, (2) extracting 5 severity classes out of a 3 severity class as defined per expert opinion and (3) establishment of calibration datasets. Results (1) Feature selection: CAP has a more effective "modelling" focus than DA.
(2) Extraction of 5 severity classes: after variables were identified as important in discriminating contiguous CF severity groups on the 3-point scale as mild/moderate and moderate/severe, Discriminant Function (DF) was used to determine the new groups mild, intermediate moderate

  13. A new scoring system in Cystic Fibrosis: statistical tools for database analysis – a preliminary report

    Directory of Open Access Journals (Sweden)

    Yearwood J

    2008-10-01

    Full Text Available Abstract Background Cystic fibrosis is the most common fatal genetic disorder in the Caucasian population. Scoring systems for assessment of Cystic fibrosis disease severity have been used for almost 50 years, without being adapted to the milder phenotype of the disease in the 21st century. The aim of this current project is to develop a new scoring system using a database and employing various statistical tools. This study protocol reports the development of the statistical tools in order to create such a scoring system. Methods The evaluation is based on the Cystic Fibrosis database from the cohort at the Royal Children's Hospital in Melbourne. Initially, unsupervised clustering of all data records was performed using a range of clustering algorithms. In particular, incremental clustering algorithms were used. The clusters obtained were characterised using rules from decision trees and the results examined by clinicians. In order to obtain a clearer definition of classes, expert opinion of each individual's clinical severity was sought. After data preparation including expert opinion of an individual's clinical severity on a 3-point scale (mild, moderate and severe disease), two multivariate techniques were used throughout the analysis to establish a method that would have a better success in feature selection and model derivation: 'Canonical Analysis of Principal Coordinates' and 'Linear Discriminant Analysis'. A 3-step procedure was performed with (1) selection of features, (2) extracting 5 severity classes out of a 3 severity class as defined per expert opinion and (3) establishment of calibration datasets. Results (1) Feature selection: CAP has a more effective "modelling" focus than DA. (2) Extraction of 5 severity classes: after variables were identified as important in discriminating contiguous CF severity groups on the 3-point scale as mild/moderate and moderate/severe, Discriminant Function (DF) was used to determine the new groups mild

  14. BGMUT: NCBI dbRBC database of allelic variations of genes encoding antigens of blood group systems.

    Science.gov (United States)

    Patnaik, Santosh Kumar; Helmberg, Wolfgang; Blumenfeld, Olga O

    2012-01-01

    Analogous to human leukocyte antigens, blood group antigens are surface markers on the erythrocyte cell membrane whose structures differ among individuals and which can be serologically identified. The Blood Group Antigen Gene Mutation Database (BGMUT) is an online repository of allelic variations in genes that determine the antigens of various human blood group systems. The database is manually curated with allelic information collated from scientific literature and from direct submissions from research laboratories. Currently, the database documents sequence variations of a total of 1251 alleles of all 40 gene loci that together are known to affect antigens of 30 human blood group systems. When available, information on the geographic or ethnic prevalence of an allele is also provided. The BGMUT website also has general information on the human blood group systems and the genes responsible for them. BGMUT is a part of the dbRBC resource of the National Center for Biotechnology Information, USA, and is available online at http://www.ncbi.nlm.nih.gov/projects/gv/rbc/xslcgi.fcgi?cmd=bgmut. The database should be of use to members of the transfusion medicine community, those interested in studies of genetic variation and related topics such as human migrations, and students as well as members of the general public.

  15. Current-potential characteristics of electrochemical systems

    Energy Technology Data Exchange (ETDEWEB)

    Battaglia, V.S.

    1993-07-01

    This dissertation contains investigations in three distinct areas. Chapters 1 and 2 provide an analysis of the effects of electromagnetic phenomena during the initial stages of cell discharge. Chapter 1 includes the solution to Maxwell's equations for the penetration of the axial component of an electric field into an infinitely long cylindrical conductor. Chapter 2 contains the analysis of the conductor included in a radial circuit. Chapter 3 provides a complete description of the equations that describe the growth of an oxide film. A finite difference program was written to solve the equations. The system investigated is the iron/iron oxide in a basic, aqueous solution. Chapters 4 and 5 include the experimental attempts for replacing formaldehyde with an innocuous reducing agent for electroless deposition. In chapter 4, current-versus-voltage curves are provided for a sodium thiosulfate bath in the presence of a copper disk electrode. Also provided are the cathodic polarization curves of a copper/EDTA bath in the presence of a copper electrode. Chapter 5 contains the experimental results of work done with sodium hypophosphite as a reducing agent. Mixed-potential-versus-time curves for solutions containing various combinations of copper sulfate, nickel chloride, and hypophosphite in the presence of a palladium disk electrode provide an indication of the reducing power of the solutions.

  16. H2O: An Autonomic, Resource-Aware Distributed Database System

    CERN Document Server

    Macdonald, Angus; Kirby, Graham

    2010-01-01

    This paper presents the design of an autonomic, resource-aware distributed database which enables data to be backed up and shared without complex manual administration. The database, H2O, is designed to make use of unused resources on workstation machines. Creating and maintaining highly-available, replicated database systems can be difficult for untrained users, and costly for IT departments. H2O reduces the need for manual administration by autonomically replicating data and load-balancing across machines in an enterprise. Provisioning hardware to run a database system can be unnecessarily costly as most organizations already possess large quantities of idle resources in workstation machines. H2O is designed to utilize this unused capacity by using resource availability information to place data and plan queries over workstation machines that are already being used for other tasks. This paper discusses the requirements for such a system and presents the design and implementation of H2O.

  17. CardioTF, a database of deconstructing transcriptional circuits in the heart system

    Directory of Open Access Journals (Sweden)

    Yisong Zhen

    2016-08-01

    Full Text Available Background: Information on cardiovascular gene transcription is fragmented and far behind the present requirements of the systems biology field. To create a comprehensive source of data for cardiovascular gene regulation and to facilitate a deeper understanding of genomic data, the CardioTF database was constructed. The purpose of this database is to collate information on cardiovascular transcription factors (TFs), position weight matrices (PWMs), and enhancer sequences discovered using the ChIP-seq method. Methods: The Naïve-Bayes algorithm was used to classify literature and identify all PubMed abstracts on cardiovascular development. The natural language learning tool GNAT was then used to identify corresponding gene names embedded within these abstracts. Local Perl scripts were used to integrate and dump data from public databases into the MariaDB management system (MySQL). In-house R scripts were written to analyze and visualize the results. Results: Known cardiovascular TFs from humans and human homologs from fly, Ciona, zebrafish, frog, chicken, and mouse were identified and deposited in the database. PWMs from Jaspar, hPDI, and UniPROBE databases were deposited in the database and can be retrieved using their corresponding TF names. Gene enhancer regions from various sources of ChIP-seq data were deposited into the database and were able to be visualized by graphical output. Besides biocuration, mouse homologs of the 81 core cardiac TFs were selected using a Naïve-Bayes approach and then by intersecting four independent data sources: RNA profiling, expert annotation, PubMed abstracts and phenotype. Discussion: The CardioTF database can be used as a portal to construct the transcriptional network of cardiac development. Availability and Implementation: Database URL: http://www.cardiosignal.org/database/cardiotf.html.
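The abstract-triage step can be pictured with a hand-rolled Naive-Bayes text classifier. The snippet below is a toy reconstruction with invented training snippets and labels, not the authors' pipeline:

```python
import math
from collections import Counter

# Toy Naive-Bayes classifier for triaging abstracts by topic.
# Training snippets and the "cardio"/"other" labels are invented.
train = [
    ("cardiac transcription factor controls heart development", "cardio"),
    ("heart enhancer drives ventricular gene expression", "cardio"),
    ("soil bacteria metabolize nitrogen compounds", "other"),
    ("kidney tubule transport of sodium ions", "other"),
]

def fit(docs):
    counts, totals, prior, vocab = {}, Counter(), Counter(), set()
    for text, label in docs:
        prior[label] += 1
        counts.setdefault(label, Counter())
        for w in text.split():
            counts[label][w] += 1
            totals[label] += 1
            vocab.add(w)
    return counts, totals, prior, len(vocab)

def predict(model, text):
    counts, totals, prior, v = model
    def log_post(label):
        lp = math.log(prior[label] / sum(prior.values()))
        for w in text.split():  # Laplace smoothing for unseen words
            lp += math.log((counts[label][w] + 1) / (totals[label] + v))
        return lp
    return max(prior, key=log_post)

model = fit(train)
print(predict(model, "enhancer of cardiac gene expression"))  # cardio
```

With Laplace smoothing, a word never seen for a class contributes a small uniform penalty instead of zeroing out that class, which keeps the classifier usable on short, noisy abstract text.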

  18. Customizable neuroinformatics database system: XooNIps and its application to the pupil platform.

    Science.gov (United States)

    Yamaji, Kazutsuna; Sakai, Hiroyuki; Okumura, Yoshihiro; Usui, Shiro

    2007-07-01

    The developing field of neuroinformatics includes technologies for the collection and sharing of neuro-related digital resources. These resources will be of increasing value for understanding the brain. Developing a database system to integrate these disparate resources is necessary to make full use of them. This study proposes a base database system termed XooNIps that utilizes the content management system called XOOPS. XooNIps is designed for developing databases in different research fields through customization of the option menu. In a XooNIps-based database, digital resources are stored according to their respective categories, e.g., research articles, experimental data, mathematical models, stimulations, each associated with their related metadata. Several types of user authorization are supported for secure operations. In addition to the directory and keyword searches within a certain database, XooNIps searches simultaneously across other XooNIps-based databases on the Internet. Reviewing systems for user registration and for data submission are incorporated to impose quality control. Furthermore, XOOPS modules containing news, forums, schedules, blogs and other information can be combined to enhance XooNIps functionality. These features provide better scalability, extensibility, and customizability to the general neuroinformatics community. The application of this system to data, models, and other information related to human pupils is described here.

  19. A high performance, ad-hoc, fuzzy query processing system for relational databases

    Science.gov (United States)

    Mansfield, William H., Jr.; Fleischman, Robert M.

    1992-01-01

    Database queries involving imprecise or fuzzy predicates are currently an evolving area of academic and industrial research. Such queries place severe stress on the indexing and I/O subsystems of conventional database environments since they involve the search of large numbers of records. The Datacycle architecture and research prototype is a database environment that uses filtering technology to perform an efficient, exhaustive search of an entire database. It has recently been modified to include fuzzy predicates in its query processing. The approach obviates the need for complex index structures, provides unlimited query throughput, permits the use of ad-hoc fuzzy membership functions, and provides a deterministic response time largely independent of query complexity and load. This paper describes the Datacycle prototype implementation of fuzzy queries and some recent performance results.
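A fuzzy predicate of the kind described can be sketched as a membership function evaluated over an exhaustive scan. The function shape, names, and data below are hypothetical:

```python
# Fuzzy predicate "salary near 50k" evaluated by exhaustive scan:
# every record gets a membership degree in [0, 1] instead of a
# true/false match. Records and parameters are invented.
def near(value, target, spread):
    """Triangular membership: 1.0 at target, falling to 0 at +/- spread."""
    return max(0.0, 1.0 - abs(value - target) / spread)

salaries = {"ann": 48000, "bob": 61000, "carol": 90000}

# Scan all records, rank by membership degree, keep only mu > 0.
results = sorted(
    ((name, near(s, 50000, 20000)) for name, s in salaries.items()),
    key=lambda kv: kv[1], reverse=True)
print([(n, round(m, 2)) for n, m in results if m > 0])
# [('ann', 0.9), ('bob', 0.45)]
```

Because every record is scored, no index structure is required and the cost depends only on database size, mirroring the deterministic response time the exhaustive-search filtering approach provides.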

  20. Thermodynamic database development-modeling and phase diagram calculations in oxide systems

    Institute of Scientific and Technical Information of China (English)

    Arthur D. Pelton

    2006-01-01

    The databases of the FactSage thermodynamic computer system have been under development for 30 years. These databases contain critically evaluated and optimized data for thousands of compounds and hundreds of multicomponent solutions of solid and liquid metals, oxides, salts, sulfides, etc. The databases are automatically accessed by user-friendly software that calculates complex multiphase equilibria in large multicomponent systems for a wide variety of possible input/output constraints. The databases for solutions have been developed by critical evaluation/optimization of all available phase equilibrium and thermodynamic data. The databases contain parameters of models specifically developed for different types of solutions involving sublattices, ordering, etc. Through the optimization process, model parameters are found which reproduce all thermodynamic and phase equilibrium data within experimental error limits and permit extrapolation into regions of temperature and composition where data are unavailable. The present article focuses on the databases for solid and liquid oxide phases involving 25 elements. A short review of the available databases is presented along with the models used for the molten slag and the solid solutions such as spinel, pyroxene, olivine, monoxide, corundum, etc. The critical evaluation/optimization procedure is outlined using examples from the Al2O3-SiO2-CaO-FeO-Fe2O3 system. Sample calculations are presented in which the oxide databases are used in conjunction with the FactSage databases for metallic and other phases. In particular, the use of the FactSage module for the calculation of multicomponent phase diagrams is illustrated.
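The kind of solution model whose parameters such an optimization fits can be illustrated with the simplest case, a regular solution (illustrative values, not fitted FactSage data):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def g_mix(x, T, omega):
    """Molar Gibbs energy of mixing for a binary regular solution A-B:
    ideal configurational entropy plus one interaction parameter omega."""
    if x in (0.0, 1.0):
        return 0.0
    return (R * T * (x * math.log(x) + (1 - x) * math.log(1 - x))
            + omega * x * (1 - x))

# With omega > 2RT the curve develops two minima, i.e. a miscibility gap --
# the kind of phase-diagram feature an optimized database must reproduce.
T, omega = 800.0, 20_000.0   # illustrative, not optimized, values
xs = [i / 100 for i in range(1, 100)]
gs = [g_mix(x, T, omega) for x in xs]
print(min(gs) < g_mix(0.5, T, omega))  # True: x = 0.5 is a local maximum
```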

  1. Data-base driven graphics animation and simulation system

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, H.D.; Curtis, J.N.

    1985-01-01

    Most attempts at the graphics animation of data involve rather large and expensive development of problem-specific systems. This paper discusses a general graphics animation system created to be a tool for the design of a wide variety of animated simulations. By using relational data base storage of graphics and control information, considerable flexibility in the design and development of animated displays is achieved.

  2. Abstraction Without Regret in Database Systems Building: a Manifesto

    OpenAIRE

    Christoph Koch

    2014-01-01

    It has been said that all problems in computer science can be solved by adding another level of indirection, except for performance problems, which are solved by removing levels of indirection. Compilers are our tools for removing levels of indirection automatically. However, we do not trust them when it comes to systems building. Most performance-critical systems are built in low-level programming languages such as C. Some of the downsides of this compared to using modern high-level programm...

  3. Nonmaterialized Relations and the Support of Information Retrieval Applications by Relational Database Systems.

    Science.gov (United States)

    Lynch, Clifford A.

    1991-01-01

    Describes several aspects of the problem of supporting information retrieval system query requirements in the relational database management system (RDBMS) environment and proposes an extension to query processing called nonmaterialized relations. User interactions with information retrieval systems are discussed, and nonmaterialized relations are…

  4. The method of design for seismic data database system based on tree structure

    Institute of Scientific and Technical Information of China (English)

    王喜珍; 滕云田; 高孟潭; 陈步云; 姜慧

    2005-01-01

    With the massive growth of seismic data, a new method of managing them is required. This paper reports a design method for a relational database based on a tree structure. Compared with other designs, it is not only simpler and easier for organizing data, but can also simplify the design process of the database. This method has been used to design the database of the earthquake monitoring center station of the earthquake monitoring system for the Yangtze River Three Gorges Project and has shown good results.
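One common way to hold a tree in a relational table is a self-referencing parent column traversed with a recursive query; a minimal sketch (hypothetical station/channel names, not the paper's actual schema):

```python
import sqlite3

# Each row stores its parent's id, so an arbitrary hierarchy of
# networks, stations, and channels fits one self-referencing table.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE node (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES node(id),
    name TEXT)""")
con.executemany("INSERT INTO node VALUES (?, ?, ?)", [
    (1, None, "network"),
    (2, 1, "station_A"),
    (3, 1, "station_B"),
    (4, 2, "channel_BHZ"),
])

# A recursive common table expression walks the subtree under station_A.
rows = con.execute("""
    WITH RECURSIVE subtree(id, name) AS (
        SELECT id, name FROM node WHERE id = 2
        UNION ALL
        SELECT n.id, n.name FROM node n JOIN subtree s ON n.parent_id = s.id)
    SELECT name FROM subtree""").fetchall()
print([r[0] for r in rows])  # ['station_A', 'channel_BHZ']
```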

  5. Role of Database Management Systems in Selected Engineering Institutions of Andhra Pradesh: An Analytical Survey

    Directory of Open Access Journals (Sweden)

    Kutty Kumar

    2016-06-01

    Full Text Available This paper aims to analyze the function of database management systems from the perspective of librarians working in engineering institutions in Andhra Pradesh. Ninety-eight librarians from one hundred thirty engineering institutions participated in the study. The paper reveals that training by computer suppliers and software packages is the significant mode of acquiring DBMS skills by librarians; three-fourths of the librarians are postgraduate degree holders. Most colleges use database applications for automation purposes and content value. Electrical problems and untrained staff seem to be the major constraints faced by respondents in managing library databases.

  6. Rapid storage and retrieval of genomic intervals from a relational database system using nested containment lists.

    Science.gov (United States)

    Wiley, Laura K; Sivley, R Michael; Bush, William S

    2013-01-01

    Efficient storage and retrieval of genomic annotations based on range intervals is necessary, given the amount of data produced by next-generation sequencing studies. The indexing strategies of relational database systems (such as MySQL) greatly inhibit their use in genomic annotation tasks. This has led to the development of stand-alone applications that are dependent on flat-file libraries. In this work, we introduce MyNCList, an implementation of the NCList data structure within a MySQL database. MyNCList enables the storage, update and rapid retrieval of genomic annotations from the convenience of a relational database system. Range-based annotations of 1 million variants are retrieved in under a minute, making this approach feasible for whole-genome annotation tasks. Database URL: https://github.com/bushlab/mynclist.
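The nested containment list idea can be sketched compactly: intervals are sorted by (start, descending end), any interval contained in its predecessor is pushed into that predecessor's sublist, and queries binary-search each level (a toy in-memory version, not the MySQL-backed MyNCList implementation):

```python
from bisect import bisect_right

def build(intervals):
    """Build a toy NCList: each list level holds no containment, so
    within a level intervals sorted by start are also sorted by end."""
    ivs = sorted(intervals, key=lambda iv: (iv[0], -iv[1]))
    top, stack = [], []
    for iv in ivs:
        node = (iv, [])           # (interval, sublist of contained nodes)
        while stack and stack[-1][0][1] < iv[1]:
            stack.pop()           # predecessor does not contain iv
        (stack[-1][1] if stack else top).append(node)
        stack.append(node)
    return top

def query(nodes, lo, hi, out):
    """Collect intervals overlapping [lo, hi]; binary search cuts off
    starts beyond hi, and sublists are visited only if the parent hits."""
    starts = [n[0][0] for n in nodes]
    for n in nodes[:bisect_right(starts, hi)]:
        if n[0][1] >= lo:
            out.append(n[0])
            query(n[1], lo, hi, out)
    return out

nc = build([(0, 100), (10, 30), (15, 20), (40, 60), (200, 250)])
print(query(nc, 18, 45, []))  # [(0, 100), (10, 30), (15, 20), (40, 60)]
```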

  7. Comparison of scientific and administrative database management systems

    Science.gov (United States)

    Stoltzfus, J. C.

    1983-01-01

    Some characteristics found to be different for scientific and administrative data bases are identified and some of the corresponding generic requirements for data base management systems (DBMS) are discussed. The requirements discussed are especially stringent for either the scientific or administrative data bases. For some, no commercial DBMS is fully satisfactory, and the data base designer must invent a suitable approach. For others, commercial systems are available with elegant solutions, and a wrong choice would mean an expensive work-around to provide the missing features. It is concluded that selection of a DBMS must be based on the requirements for the information system. There is no unique distinction between scientific and administrative data bases or DBMS. The distinction comes from the logical structure of the data, and understanding the data and their relationships is the key to defining the requirements and selecting an appropriate DBMS for a given set of applications.

  9. Studying on the Fuzzy-QFD System Based on Database Class Encapsulation Technology

    Institute of Scientific and Technical Information of China (English)

    FANG Xifeng; ZHANG Shengwen; LU Yuping; WU Hongtao

    2006-01-01

    Complicated product QFD system design information, including design and manufacturing, operation and maintenance, and related supply information, is tightly bound to product life-cycle cooperative design and to the process of establishing the QFD system. In the early stage of product design, only fuzzy and unreliable information is available; as the design proceeds, the information becomes progressively less fuzzy and more reliable. A defect of traditional QFD is that it does not handle such fuzzy content well. This paper adopts database class encapsulation and fuzzy inference technology and discusses the realization of a QFD system based on the VFP database. The structure of the fuzzy QFD system based on database class encapsulation is built, and the workflow of the fuzzy algorithm based on the VFP software is presented. In the analysis of the fuzzy QFD process, fuzzy inference is adopted. A developed prototype system and an example have verified the presented techniques, and the research results form the basis of future development.

  10. Targeted Therapy Database (TTD): a model to match patient's molecular profile with current knowledge on cancer biology.

    Directory of Open Access Journals (Sweden)

    Simone Mocellin

    Full Text Available BACKGROUND: The efficacy of current anticancer treatments is far from satisfactory and many patients still die of their disease. A general agreement exists on the urgency of developing molecularly targeted therapies, although their implementation in the clinical setting is in its infancy. In fact, despite the wealth of preclinical studies addressing these issues, the difficulty of testing each targeted therapy hypothesis in the clinical arena represents an intrinsic obstacle. As a consequence, we are witnessing a paradoxical situation where most hypotheses about the molecular and cellular biology of cancer remain clinically untested and therefore do not translate into a therapeutic benefit for patients. OBJECTIVE: To present a computational method aimed to comprehensively exploit the scientific knowledge in order to foster the development of personalized cancer treatment by matching the patient's molecular profile with the available evidence on targeted therapy. METHODS: To this aim we focused on melanoma, an increasingly diagnosed malignancy for which the need for novel therapeutic approaches is paradigmatic since no effective treatment is available in the advanced setting. Relevant data were manually extracted from peer-reviewed full-text original articles describing any type of anti-melanoma targeted therapy tested in any type of experimental or clinical model. To this purpose, Medline, Embase, Cancerlit and the Cochrane databases were searched. RESULTS AND CONCLUSIONS: We created a manually annotated database (Targeted Therapy Database, TTD) where the relevant data are gathered in a formal representation that can be computationally analyzed. Dedicated algorithms were set up for the identification of the prevalent therapeutic hypotheses based on the available evidence and for ranking treatments based on the molecular profile of individual patients. In this essay we describe the principles and computational algorithms of an original method

  11. Targeted Therapy Database (TTD): A Model to Match Patient's Molecular Profile with Current Knowledge on Cancer Biology

    Science.gov (United States)

    Mocellin, Simone; Shrager, Jeff; Scolyer, Richard; Pasquali, Sandro; Verdi, Daunia; Marincola, Francesco M.; Briarava, Marta; Gobbel, Randy; Rossi, Carlo; Nitti, Donato

    2010-01-01

    Background The efficacy of current anticancer treatments is far from satisfactory and many patients still die of their disease. A general agreement exists on the urgency of developing molecularly targeted therapies, although their implementation in the clinical setting is in its infancy. In fact, despite the wealth of preclinical studies addressing these issues, the difficulty of testing each targeted therapy hypothesis in the clinical arena represents an intrinsic obstacle. As a consequence, we are witnessing a paradoxical situation where most hypotheses about the molecular and cellular biology of cancer remain clinically untested and therefore do not translate into a therapeutic benefit for patients. Objective To present a computational method aimed to comprehensively exploit the scientific knowledge in order to foster the development of personalized cancer treatment by matching the patient's molecular profile with the available evidence on targeted therapy. Methods To this aim we focused on melanoma, an increasingly diagnosed malignancy for which the need for novel therapeutic approaches is paradigmatic since no effective treatment is available in the advanced setting. Relevant data were manually extracted from peer-reviewed full-text original articles describing any type of anti-melanoma targeted therapy tested in any type of experimental or clinical model. To this purpose, Medline, Embase, Cancerlit and the Cochrane databases were searched. Results and Conclusions We created a manually annotated database (Targeted Therapy Database, TTD) where the relevant data are gathered in a formal representation that can be computationally analyzed. Dedicated algorithms were set up for the identification of the prevalent therapeutic hypotheses based on the available evidence and for ranking treatments based on the molecular profile of individual patients. 
In this essay we describe the principles and computational algorithms of an original method developed to fully exploit
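The matching-and-ranking step can be sketched as follows (invented evidence records, drugs, and weights for illustration only, not TTD contents or its actual ranking algorithm):

```python
# Therapy-evidence records link a molecular target to a drug with a crude
# evidence weight; a patient's molecular profile selects and ranks the
# applicable therapies.
evidence = [
    {"target": "BRAF V600E", "drug": "drug_A", "weight": 3},
    {"target": "KIT",        "drug": "drug_B", "weight": 2},
    {"target": "BRAF V600E", "drug": "drug_C", "weight": 1},
    {"target": "NRAS",       "drug": "drug_D", "weight": 2},
]

def rank_treatments(patient_profile):
    """Sum evidence weights over the targets present in the profile
    and return drugs ordered by total supporting evidence."""
    scores = {}
    for rec in evidence:
        if rec["target"] in patient_profile:
            scores[rec["drug"]] = scores.get(rec["drug"], 0) + rec["weight"]
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(rank_treatments({"BRAF V600E", "KIT"}))
# [('drug_A', 3), ('drug_B', 2), ('drug_C', 1)]
```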

  12. ReefBase - a global database of coral reef systems and their resources

    OpenAIRE

    McManus, J.

    1994-01-01

    ReefBase, a global database of coral reef systems and their resources, was initiated at the International Center for Living Aquatic Resources Management (ICLARM), Philippines, in November 1993. The CEC has provided funding for the first two years, and the database was developed in collaboration with the World Conservation Monitoring Centre in Cambridge, UK, as well as other national, regional, and international institutions. The ReefBase project activities and what ICLARM will do to accomplish the p...

  13. Signal Detection of Adverse Drug Reaction of Amoxicillin Using the Korea Adverse Event Reporting System Database

    OpenAIRE

    Soukavong, Mick; Kim, Jungmee; Park, Kyounghoon; Yang, Bo Ram; Lee, Joongyub; Jin, Xue-Mei; Park, Byung-Joo

    2016-01-01

    We conducted pharmacovigilance data mining for the β-lactam antibiotic amoxicillin and compared the adverse events (AEs) with the drug labels of nine countries: Korea, the USA, the UK, Japan, Germany, Switzerland, Italy, France, and Laos. We used the Korea Adverse Event Reporting System (KAERS) database, a nationwide database of AE reports, between December 1988 and June 2014. Frequentist and Bayesian methods were used to calculate the disproportionality distribution of drug-AE pairs. The AE which was de...
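A standard frequentist disproportionality measure of the kind referred to above is the proportional reporting ratio (PRR), computed from a 2x2 table of report counts; a minimal sketch with illustrative numbers (not KAERS data):

```python
def prr(a, b, c, d):
    """Proportional reporting ratio from a 2x2 report-count table.
    a: target drug & target event    b: target drug & other events
    c: other drugs & target event    d: other drugs & other events"""
    return (a / (a + b)) / (c / (c + d))

# Illustrative counts: the event appears in 3% of the drug's reports
# but only ~0.2% of all other reports, so the signal stands out.
a, b, c, d = 30, 970, 200, 98_800
value = prr(a, b, c, d)
print(round(value, 2))
```

A value well above 1 (commonly combined with thresholds on the chi-square statistic and a minimum case count) flags a drug-event pair for clinical review.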

  14. TheSNPpit—A High Performance Database System for Managing Large Scale SNP Data

    OpenAIRE

    Groeneveld, Eildert; Lichtenberg, Helmut

    2016-01-01

    The fast development of high throughput genotyping has opened up new possibilities in genetics while at the same time producing considerable data handling issues. TheSNPpit is a database system for managing large amounts of multi-panel SNP genotype data from any genotyping platform. With an increasing rate of genotyping in areas like animal and plant breeding as well as human genetics, hundreds of thousands of individuals already need to be managed. While the common database design with on...

  15. Development of a Student Database Management System for a University

    Directory of Open Access Journals (Sweden)

    Dr. K. Venkata Subbiah

    2016-08-01

    Full Text Available This scholarly thesis describes the setup of an automated student performance record management system that enables university users, such as students and faculty, to access important information easily through a user-friendly web application. The proposed system aims to eliminate, at the most basic level, the time-consuming and error-prone tradition of maintaining student information manually on paper. A university has many departments, and most of them must maintain records about students. A computerized student record management system therefore enables users to access data at any time and from any place. The student web portal allows storage of large amounts of data and easy retrieval. Although a college has many departments, introducing a student web portal centralizes administration so that the entire system works as a single entity. Paperwork is reduced, and the number of staff needed in each department shrinks, as a single operator can run the web application.

  16. Superconducting Current Leads for Cryogenic Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Space flight cryocoolers will be able to handle limited heat loads at their expected operating temperatures and the current leads may be the dominant contributor to...

  17. Received Signal Strength Database Interpolation by Kriging for a Wi-Fi Indoor Positioning System.

    Science.gov (United States)

    Jan, Shau-Shiun; Yeh, Shuo-Ju; Liu, Ya-Wen

    2015-08-28

    The main approach for a Wi-Fi indoor positioning system is based on received signal strength (RSS) measurements, and the fingerprinting method is utilized to determine the user position by matching the RSS values with the pre-surveyed RSS database. Building an RSS fingerprint database is essential for an RSS-based indoor positioning system, but it requires a great deal of time and effort, and the labor grows as the indoor environment becomes larger. To provide better indoor positioning services while reducing the labor required to establish the positioning system, an indoor positioning system with an appropriate spatial interpolation method is needed. In addition, an advantage of the RSS approach is that the signal strength decays as the transmission distance increases, and this signal propagation characteristic is applied to an interpolated database with the Kriging algorithm in this paper. Using the distribution of reference points (RPs) at measured points, the signal propagation model of the Wi-Fi access point (AP) in the building can be built and expressed as a function. The function, as the spatial structure of the environment, can create the RSS database quickly in different indoor environments. Thus, in this paper, a Wi-Fi indoor positioning system based on the Kriging fingerprinting method is developed. As shown in the experiment results, with a 72.2% probability, the error of the extended RSS database with Kriging is less than 3 dBm compared to the surveyed RSS database. Importantly, the positioning error of the developed Wi-Fi indoor positioning system with Kriging is reduced on average by 17.9% compared to that without Kriging.
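The signal propagation model mentioned above is commonly a log-distance path-loss model, RSS(d) = p0 - 10*n*log10(d), fitted to surveyed points by least squares; a minimal sketch with made-up measurements (the Kriging interpolation step itself is not reproduced here):

```python
import math

# Surveyed (distance in metres, RSS in dBm) reference points -- invented
# values roughly following a path-loss exponent of 2.
surveyed = [(1.0, -40.0), (2.0, -46.5), (4.0, -52.0), (8.0, -58.5)]

# RSS is linear in log10(d), so an ordinary least-squares line fits it.
xs = [math.log10(d) for d, _ in surveyed]
ys = [rss for _, rss in surveyed]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
p0 = mean_y - slope * mean_x        # RSS at 1 m, since log10(1) = 0
path_loss_exp = -slope / 10.0       # environment-dependent exponent

def predict(d):
    """Predicted RSS (dBm) at an unsurveyed distance d (metres)."""
    return p0 + slope * math.log10(d)

print(round(path_loss_exp, 2), round(predict(6.0), 1))
```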

  18. Received Signal Strength Database Interpolation by Kriging for a Wi-Fi Indoor Positioning System

    Directory of Open Access Journals (Sweden)

    Shau-Shiun Jan

    2015-08-01

    Full Text Available The main approach for a Wi-Fi indoor positioning system is based on received signal strength (RSS) measurements, and the fingerprinting method is utilized to determine the user position by matching the RSS values with the pre-surveyed RSS database. Building an RSS fingerprint database is essential for an RSS-based indoor positioning system, but it requires a great deal of time and effort, and the labor grows as the indoor environment becomes larger. To provide better indoor positioning services while reducing the labor required to establish the positioning system, an indoor positioning system with an appropriate spatial interpolation method is needed. In addition, an advantage of the RSS approach is that the signal strength decays as the transmission distance increases, and this signal propagation characteristic is applied to an interpolated database with the Kriging algorithm in this paper. Using the distribution of reference points (RPs) at measured points, the signal propagation model of the Wi-Fi access point (AP) in the building can be built and expressed as a function. The function, as the spatial structure of the environment, can create the RSS database quickly in different indoor environments. Thus, in this paper, a Wi-Fi indoor positioning system based on the Kriging fingerprinting method is developed. As shown in the experiment results, with a 72.2% probability, the error of the extended RSS database with Kriging is less than 3 dBm compared to the surveyed RSS database. Importantly, the positioning error of the developed Wi-Fi indoor positioning system with Kriging is reduced on average by 17.9% compared to that without Kriging.

  19. Weak Serializable Concurrency Control in Distributed Real-Time Database Systems

    Institute of Scientific and Technical Information of China (English)

    党德鹏; 刘云生; et al.

    2002-01-01

    Most of the proposed concurrency control protocols for real-time database systems are based on the serializability theorem. Owing to the unique characteristics of real-time database applications and the importance of satisfying the timing constraints of transactions, serializability is too strong as a correctness criterion and is not suitable for real-time databases in most cases. On the other hand, relaxed serializability, including epsilon-serializability and similarity-serializability, can allow more real-time transactions to satisfy their timing constraints, but database consistency may be sacrificed to some extent. We thus propose the use of weak serializability (WSR), which is more relaxed than conflict serializability while database consistency is maintained. In this paper, we first formally define the new notion of correctness called weak serializability. After the necessary and sufficient conditions for weak serializability are shown, the corresponding concurrency control protocol WDHP (weak serializable distributed high priority protocol) is outlined for distributed real-time databases, where a new lock mode called the mask lock mode is proposed to simplify the condition of global consistency. Finally, through a series of simulation studies, it is shown that the new concurrency control protocol greatly improves the performance of distributed real-time databases.
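The baseline criterion the paper relaxes, conflict serializability, is classically tested by building a precedence graph over a schedule and checking it for cycles; a minimal sketch (toy schedule, not the WDHP protocol itself):

```python
# A schedule is a list of (transaction, operation, data item),
# with operation "r" (read) or "w" (write).
schedule = [(1, "r", "x"), (2, "w", "x"), (1, "w", "x")]

def precedence_edges(sched):
    """Edge t1 -> t2 when t1's op conflicts with a later op of t2:
    same item, different transactions, at least one write."""
    edges = set()
    for i, (t1, op1, it1) in enumerate(sched):
        for t2, op2, it2 in sched[i + 1:]:
            if t1 != t2 and it1 == it2 and "w" in (op1, op2):
                edges.add((t1, t2))
    return edges

def has_cycle(edges):
    """A schedule is conflict serializable iff this graph is acyclic."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    def reachable(src, dst, seen):
        if src == dst:
            return True
        return any(n not in seen and reachable(n, dst, seen | {src})
                   for n in graph.get(src, ()))
    return any(reachable(b, a, set()) for a, b in edges)

print(has_cycle(precedence_edges(schedule)))  # True: T1 -> T2 -> T1
```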

  20. Brasilia’s Database Administrators

    Directory of Open Access Journals (Sweden)

    Jane Adriana

    2016-06-01

    Full Text Available Database administration has gained an essential role in the management of new database technologies. Different data models are being created to support the enormous data volume, going beyond the traditional relational database. These new models are called NoSQL (Not only SQL) databases. The adoption of best practices and procedures has become essential for the operation of database management systems. Thus, this paper investigates some of the techniques and tools used by database administrators. The study highlights features and particularities of databases within the area of Brasilia, the capital of Brazil. The results point to which new technologies regarding database management are currently the most relevant, as well as the central issues in this area.

  1. TaxCollector: Modifying Current 16S rRNA Databases for the Rapid Classification at Six Taxonomic Levels

    Directory of Open Access Journals (Sweden)

    Eric W. Triplett

    2010-07-01

    Full Text Available The high level of conservation of the 16S ribosomal RNA gene (16S rRNA) in all prokaryotes makes this gene an ideal tool for the rapid identification and classification of these microorganisms. Databases such as the Ribosomal Database Project II (RDP-II) and the Greengenes Project offer access to sets of ribosomal RNA sequence databases useful in identification of microbes in a culture-independent analysis of microbial communities. However, these databases do not contain all of the taxonomic levels attached to the published names of the bacterial and archaeal sequences. TaxCollector is a set of scripts developed in the Python language that attaches taxonomic information to all 16S rRNA sequences in the RDP-II and Greengenes databases. These modified databases are referred to as TaxCollector databases, which when used in conjunction with BLAST allow for rapid classification of sequences from any environmental or clinical source at six different taxonomic levels, from domain to species. The TaxCollector database prepared from the RDP-II database is an important component of a new 16S rRNA pipeline called PANGEA. The usefulness of TaxCollector databases is demonstrated with two very different datasets obtained from a clinical setting and an agricultural soil. The six TaxCollector scripts are freely available at http://taxcollector.sourceforge.net and http://www.microgator.org.
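The core idea, attaching the full lineage to each sequence header so a plain BLAST hit becomes self-describing at all six ranks, can be sketched as follows (toy lineage table and a hypothetical rank-numbered header format; the real scripts parse RDP-II/Greengenes taxonomy files):

```python
# Toy accession -> lineage table (domain .. species), invented entry.
lineages = {
    "AB12345": ["Bacteria", "Proteobacteria", "Gammaproteobacteria",
                "Enterobacterales", "Enterobacteriaceae", "Escherichia coli"],
}

def relabel(header):
    """Rewrite a FASTA header so it carries all six taxonomic levels,
    each prefixed with its rank index (hypothetical format)."""
    acc = header.lstrip(">").split()[0]
    tax = ";".join(f"[{i}]{name}" for i, name in enumerate(lineages[acc]))
    return f">{acc} {tax}"

print(relabel(">AB12345 some 16S rRNA sequence"))
```

With headers like this, the taxonomy of a best BLAST hit can be read off at any desired rank without a second database lookup.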

  2. The iterative and incremental development on real-time database systems

    Science.gov (United States)

    Guo, Shuang; Li, Haiying; Ding, Chunfang; Ren, Honghong

    2011-12-01

    Our new idea makes classifying system technology requirements against value-oriented requirements easier and less ambiguous. The new concept is a platform for refining the value orientation of requirements during iterative and incremental development of a real-time database system. Over the system's lifetime, a single platform keeps the real-time database's user needs updated and fully available, which guarantees the reliability of the real-time database system. The value-orientation method is shown to better support evolving requirements and specifications. The new model defines system structure better than the existing iterative and incremental models, and it relates attributes and traceability to value requirements so that requirement changes are easier to accommodate.

  3. Spent fuel composition database system on WWW. SFCOMPO on WWW Ver.2

    Energy Technology Data Exchange (ETDEWEB)

    Mochizuki, Hiroki [Japan Research Institute, Ltd., Tokyo (Japan); Suyama, Kenya; Nomura, Yasushi; Okuno, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-08-01

    'SFCOMPO on WWW Ver.2' is an advanced version of 'SFCOMPO on WWW' ('Spent Fuel Composition Database System on WWW') released in 1997. This new version gains database management through the introduction of the relational database software 'PostgreSQL' and offers various searching methods. All of the data required for the calculation of isotopic composition are available from the web site of this system. This report describes the outline of this system and the searching methods using the Internet. In addition, the isotopic composition data and the reactor data of the 14 LWRs (7 PWR and 7 BWR) registered in this system are described. (author)

  4. A Database Design for a Unit Status Reporting System.

    Science.gov (United States)

    1987-03-01


  5. Air Force Integrated Readiness Measurement System (AFIRMS). Wing Database Specification.

    Science.gov (United States)

    1985-09-30


  6. Multi-dimensional database design and implementation of dam safety monitoring system

    Directory of Open Access Journals (Sweden)

    Er-feng ZHAO

    2008-09-01

    Full Text Available To improve the effectiveness of dam safety monitoring database systems, the development process of a multi-dimensional conceptual data model was analyzed and a logic design was achieved in multi-dimensional database mode. The optimal data model was confirmed by identifying data objects, defining relations and reviewing entities. The conversion of relations among entities to external keys, and of entities and physical attributes to tables and fields, was interpreted completely. On this basis, a multi-dimensional database that supports the management and analysis of dam safety monitoring data has been established, for which fact tables and dimension tables have been designed. Finally, based on service design and user interface design, the dam safety monitoring system has been developed with Delphi as the development tool. This development project shows that the multi-dimensional database can simplify the development process and minimize hidden dangers in the database structure design. It is superior to other dam safety monitoring system development models and can provide a new research direction for system developers.
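The fact/dimension split described above can be sketched as a minimal star schema (hypothetical table and column names, not the paper's actual design):

```python
import sqlite3

# Dimension tables describe monitoring points and sensor kinds;
# the fact table holds one row per reading and references both.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_point  (point_id INTEGER PRIMARY KEY, name TEXT, dam_section TEXT);
CREATE TABLE dim_sensor (sensor_id INTEGER PRIMARY KEY, kind TEXT);
CREATE TABLE fact_reading (
    point_id  INTEGER REFERENCES dim_point(point_id),
    sensor_id INTEGER REFERENCES dim_sensor(sensor_id),
    taken_at  TEXT,
    value     REAL);
INSERT INTO dim_point  VALUES (1, 'P-01', 'left abutment'), (2, 'P-02', 'crest');
INSERT INTO dim_sensor VALUES (1, 'seepage'), (2, 'displacement');
INSERT INTO fact_reading VALUES
    (1, 1, '2008-01-01', 3.2), (1, 1, '2008-02-01', 3.8), (2, 2, '2008-01-01', 1.1);
""")

# Dimensions slice the facts: mean seepage reading per dam section.
rows = con.execute("""
    SELECT p.dam_section, AVG(f.value)
    FROM fact_reading f
    JOIN dim_point p  USING (point_id)
    JOIN dim_sensor s USING (sensor_id)
    WHERE s.kind = 'seepage'
    GROUP BY p.dam_section""").fetchall()
print(rows)
```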

  7. Multi-dimensional database design and implementation of dam safety monitoring system

    Institute of Scientific and Technical Information of China (English)

    Zhao Erfeng; Wang Yachao; Jiang Yufeng; Zhang Lei; Yu Hong

    2008-01-01

    To improve the effectiveness of dam safety monitoring database systems, the development process of a multi-dimensional conceptual data model was analyzed and a logic design was achieved in multi-dimensional database mode. The optimal data model was confirmed by identifying data objects, defining relations and reviewing entities. The conversion of relations among entities to external keys, and of entities and physical attributes to tables and fields, was interpreted completely. On this basis, a multi-dimensional database that supports the management and analysis of dam safety monitoring data has been established, for which fact tables and dimension tables have been designed. Finally, based on service design and user interface design, the dam safety monitoring system has been developed with Delphi as the development tool. This development project shows that the multi-dimensional database can simplify the development process and minimize hidden dangers in the database structure design. It is superior to other dam safety monitoring system development models and can provide a new research direction for system developers.

  8. ARACHNID: A prototype object-oriented database tool for distributed systems

    Science.gov (United States)

    Younger, Herbert; Oreilly, John; Frogner, Bjorn

    1994-01-01

    This paper discusses the results of a Phase 2 SBIR project sponsored by NASA and performed by MIMD Systems, Inc. A major objective of this project was to develop specific concepts for improved performance in accessing large databases. An object-oriented and distributed approach was used for the general design, while a geographical decomposition was used as a specific solution. The resulting software framework is called ARACHNID. The Faint Source Catalog developed by NASA was the initial database testbed. This is a database of many gigabytes, for which an order-of-magnitude improvement in query speed is being sought. This database contains faint infrared point sources obtained from telescope measurements of the sky. A geographical decomposition of this database is an attractive approach to dividing it into pieces. Each piece can then be searched on individual processors, with only a weak data linkage required between the processors. As a further demonstration of the concepts implemented in ARACHNID, a tourist information system is discussed. This version of ARACHNID is the commercial result of the project. It is a distributed, networked database application where speed, maintenance, and reliability are important considerations. This paper focuses on the design concepts and technologies that form the basis for ARACHNID.
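    The geographical decomposition idea can be illustrated with a toy partitioner: sources are bucketed by a sky coordinate so each bucket can be searched independently (e.g. on its own processor). The bucketing function, grid size, and source names below are assumptions for illustration, not ARACHNID's actual scheme:

```python
from collections import defaultdict

# Bucket sources by right ascension band; band width is an arbitrary choice.
def bucket(ra_deg, band_width=30.0):
    return int(ra_deg // band_width)

sources = [(10.2, "src-a"), (11.9, "src-b"), (200.5, "src-c")]

partitions = defaultdict(list)
for ra, name in sources:
    partitions[bucket(ra)].append((ra, name))

# A narrow search around RA=11 deg only touches the one relevant partition,
# leaving the others untouched (and searchable in parallel elsewhere).
hits = [n for ra, n in partitions[bucket(11.0)] if abs(ra - 11.0) < 2.0]
print(hits)  # ['src-a', 'src-b']
```

    A real decomposition must also handle queries that straddle partition boundaries, which is where the "weak data linkage" between processors comes in.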

  9. Functional description for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB)

    Energy Technology Data Exchange (ETDEWEB)

    Truett, L.F.; Rollow, J.P.; Shipe, P.C. [Oak Ridge National Lab., TN (United States); Faby, E.Z.; Fluker, J.; Hancock, W.R.; Grubb, J.W.; Russell, D.L. [Univ. of Tennessee, Knoxville, TN (United States); Ferguson, R.A. [SAIC, Oak Ridge, TN (United States)

    1995-12-15

    This Functional Description for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB) documents the purpose of and requirements for the ICDB in order to ensure a mutual understanding between the development group and the user group of the system. This Functional Description defines ICDB and provides a clear statement of the initial operational capability to be developed.

  10. Current Mode Data Converters for Sensor Systems

    DEFF Research Database (Denmark)

    Jørgensen, Ivan Herald Holger

    This thesis is mainly concerned with data conversion, especially data conversion using current mode signal processing. A tutorial chapter introducing D/A conversion is presented. In this chapter the effects that cause static and dynamic nonlinearities are discussed along with methods to...

  11. DAQ System of Current Based on MNSR

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The flux or power should be acquired using the detector during the operation of MNSR. As usual, the signal of the detector is a current, and it spans a very wide range, from 10⁻¹¹ to 10⁻⁶ A. It is hard to preserve linearity when amplifying this signal using a fixed gain

  12. Design of a Hospital-Based Database System (A Case Study of BIRDEM

    Directory of Open Access Journals (Sweden)

    Rosina Surovi Khan,

    2010-11-01

    As technology advances, information in different organizations of Bangladesh can no longer be maintained manually. There is a growing need for the information to be computerized so that it can be suitably stored. This is where databases come into the picture. Databases are convenient storage systems which can store large amounts of data, and together with application programs such as interfaces they can aid in faster retrieval of data. An initiative was taken to design a complete database system for a hospital such as the Bangladesh Institute of Research and Rehabilitation in Diabetes, Endocrine and Metabolic Disorders (BIRDEM) in Dhaka so that its information can be stored, maintained, updated and retrieved conveniently and efficiently. The existing information in BIRDEM is only partly computerized via databases, in the patients' admissions, doctors' appointments, and medical tests and reports sections. A slow and tedious manual system still exists in BIRDEM, for example in the record of ambulances in service, the assignment of ward boys and nurses to rooms, the billing process, and the record of doctors' prescriptions. This paper outlines one complete database design for the entire BIRDEM hospital in which data maintenance and retrieval are harmonious and speedy. Sample SQL-based queries executed on the designed system are also demonstrated.

  13. Structure design and establishment of database application system for alien species in Shandong Province, China

    Institute of Scientific and Technical Information of China (English)

    GUO Wei-hua; LIU Heng; DU Ning; ZHANG Xin-shi; WANG Ren-qing

    2007-01-01

    This paper presents a case study on the structure design and establishment of a database application system for alien species in Shandong Province, integrating Geographic Information System (GIS), computer network, and database technology into the research of alien species. The modules of the alien species database, including classified data input, statistics and analysis, species pictures and distribution maps, and data output, were developed with Visual Studio .NET 2003 and Microsoft SQL Server 2000. The alien species information covers classification, species distinguishing characteristics, biological characteristics, area of origin, distribution area, mode and route of entry, invasion time, invasion reason, interaction with endemic species, growth state, danger state, and spatial information, i.e. distribution maps. On this basis, several modules, including application, checking, modifying, printing, adding, and returning modules, were developed. Furthermore, through the establishment of index tables and index maps, data such as pictures, text, and GIS map data can also be queried spatially. This research established a technological platform for sharing scientific information on alien species in Shandong Province, offering the basis for dynamic inquiry into alien species, early-warning technology for prevention, and a fast reaction system. The database application system possesses good practicability, a friendly user interface, and convenient usage. It can supply full and accurate information inquiry services on alien species for users and provides functions for the administrator to dynamically manage the database.

  14. 17th East European Conference on Advances in Databases and Information Systems and Associated Satellite Events

    CERN Document Server

    Cerquitelli, Tania; Chiusano, Silvia; Guerrini, Giovanna; Kämpf, Mirko; Kemper, Alfons; Novikov, Boris; Palpanas, Themis; Pokorný, Jaroslav; Vakali, Athena

    2014-01-01

    This book reports on state-of-art research and applications in the field of databases and information systems. It includes both fourteen selected short contributions, presented at the East-European Conference on Advances in Databases and Information Systems (ADBIS 2013, September 1-4, Genova, Italy), and twenty-six papers from ADBIS 2013 satellite events. The short contributions from the main conference are collected in the first part of the book, which covers a wide range of topics, like data management, similarity searches, spatio-temporal and social network data, data mining, data warehousing, and data management on novel architectures, such as graphics processing units, parallel database management systems, cloud and MapReduce environments. In contrast, the contributions from the satellite events are organized in five different parts, according to their respective ADBIS satellite event: BiDaTA 2013 - Special Session on Big Data: New Trends and Applications); GID 2013 – The Second International Workshop ...

  15. Preference of computer technology for analytical support of large databases of medical information systems

    Directory of Open Access Journals (Sweden)

    Biryukov А.P.

    2013-12-01

    Aim: to study the use of intelligent technologies for analytical support of large databases of medical information systems. Material and methods. We used techniques of object-oriented software design and database design. Results. Based on an expert review of models and algorithms for the analysis of clinical and epidemiological data, and of principles of knowledge representation in large-scale health information systems, data mining schemas were implemented in the software package of the registry of the Research Center n.a. A. I. Burnazyan of Russia. Areas were identified for effective implementation of the abstract entity-attribute-value (EAV) data model and data mining procedures in the design of databases for biomedical registers. Conclusions. An intelligent software platform that supports different sets of APIs and object models for different operations in different software environments allows one to build and maintain an information system through biomedical data processing procedures.
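    The entity-attribute-value (EAV) model mentioned in this abstract stores sparse, heterogeneous observations in one narrow table rather than one wide column per attribute. A minimal sketch, using SQLite and invented field names (not the registry's actual schema):

```python
import sqlite3

# EAV: every observation is a (entity, attribute, value) triple.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE eav (
    entity_id INTEGER,   -- e.g. one patient record
    attribute TEXT,      -- observation name
    value TEXT)""")

# Heterogeneous attributes fit without schema changes.
cur.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    (1, "diagnosis", "I10"),
    (1, "dose_mSv", "1.2"),
    (2, "diagnosis", "E11"),
])

# Reading one attribute back out is a simple filter;
# pivoting several attributes requires per-attribute self-joins.
rows = cur.execute(
    "SELECT entity_id, value FROM eav WHERE attribute = 'diagnosis' "
    "ORDER BY entity_id").fetchall()
print(rows)  # [(1, 'I10'), (2, 'E11')]
```

    The trade-off is typical of biomedical registers: new observation types cost nothing to add, but analytical queries pay for the pivot.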

  16. Outside Mainstream Electronic Databases: Review of Studies Conducted in the USSR and Post-Soviet Countries on Electric Current-Assisted Consolidation of Powder Materials

    Directory of Open Access Journals (Sweden)

    Eugene G. Grigoryev

    2013-09-01

    This paper reviews research articles published in the former USSR and post-Soviet countries on the consolidation of powder materials using electric current that passes through the powder sample and/or a conductive die-punch set-up. Having been published in Russian, many of the reviewed papers are not included in the mainstream electronic databases of scientific articles and thus are not known to the scientific community. The present review is aimed at filling this information gap. In the paper, the electric current-assisted sintering techniques based on high- and low-voltage approaches are presented. The main results of the theoretical modeling of the processes of electromagnetic field-assisted consolidation of powder materials are discussed. Sintering experiments and related equipment are described and the major experimental results are analyzed. Sintering conditions required to achieve the desired properties of the sintered materials are provided for selected material systems. Tooling materials used in the electric current-assisted consolidation set-ups are also described.

  17. Outside Mainstream Electronic Databases: Review of Studies Conducted in the USSR and Post-Soviet Countries on Electric Current-Assisted Consolidation of Powder Materials

    Science.gov (United States)

    Olevsky, Eugene A.; Aleksandrova, Elena V.; Ilyina, Alexandra M.; Dudina, Dina V.; Novoselov, Alexander N.; Pelve, Kirill Y.; Grigoryev, Eugene G.

    2013-01-01

    This paper reviews research articles published in the former USSR and post-Soviet countries on the consolidation of powder materials using electric current that passes through the powder sample and/or a conductive die-punch set-up. Having been published in Russian, many of the reviewed papers are not included in the mainstream electronic databases of scientific articles and thus are not known to the scientific community. The present review is aimed at filling this information gap. In the paper, the electric current-assisted sintering techniques based on high- and low-voltage approaches are presented. The main results of the theoretical modeling of the processes of electromagnetic field-assisted consolidation of powder materials are discussed. Sintering experiments and related equipment are described and the major experimental results are analyzed. Sintering conditions required to achieve the desired properties of the sintered materials are provided for selected material systems. Tooling materials used in the electric current-assisted consolidation set-ups are also described. PMID:28788337

  18. A High Speed Mobile Courier Data Access System That Processes Database Queries in Real-Time

    Science.gov (United States)

    Gatsheni, Barnabas Ndlovu; Mabizela, Zwelakhe

    A secure high-speed query-processing mobile courier data access (MCDA) system for a courier company has been developed. This system uses wireless networks in combination with wired networks to let an offsite worker (the courier) update a live database at the courier centre in real time. The system is protected by a VPN based on IPsec. To date, there is no system that we know of that performs the task for the courier as proposed in this paper.

  19. Information Systems: Current Developments and Future Expansion.

    Science.gov (United States)

    1970

    On May 20, 1970, a one-day seminar was held for Congressional members and staff. The papers given at this seminar and included in the proceedings are: (1) "Understanding Information Systems" by J. D. Aron, (2) "Computer Applications in Political Science" by Kenneth Janda, (3) "Who's the Master of Your Information System?" by Marvin Kornbluh, (4)…

  20. Large Scale Landslide Database System Established for the Reservoirs in Southern Taiwan

    Science.gov (United States)

    Tsai, Tsai-Tsung; Tsai, Kuang-Jung; Shieh, Chjeng-Lun

    2017-04-01

    Typhoon Morakot's severe attack on southern Taiwan awakened public awareness of large-scale landslide disasters. Large-scale landslides produce large quantities of sediment, which negatively affect the operating functions of reservoirs. In order to reduce the risk of these disasters within the study area, the establishment of a database for hazard mitigation and disaster prevention is necessary. Real-time data and numerous archives of engineering data, environmental information, photos, and video will not only help people make appropriate decisions but are also valuable for further processing. The study defined basic data formats and standards for the various types of data collected about these reservoirs and then provided a management platform based on those formats and standards. Meanwhile, to ensure practicality and convenience, the large-scale landslide disaster database system was built with the ability both to provide and to receive information, so that users can work with it on different types of devices. IT technology progresses extremely quickly, and even the most modern system may become outdated at any time; in order to provide long-term service, the system therefore reserves the possibility of user-defined data formats/standards and a user-defined system structure. The system established by this study is based on the HTML5 standard and uses responsive web design, which makes it easy for users to operate and further develop this large-scale landslide disaster database system.

  1. Design document for the Surface Currents Data Base (SCDB) Management System (SCDBMS), version 1.0

    Science.gov (United States)

    Krisnnamagaru, Ramesh; Cesario, Cheryl; Foster, M. S.; Das, Vishnumohan

    1994-01-01

    The Surface Currents Database Management System (SCDBMS) provides access to the Surface Currents Data Base (SCDB) which is maintained by the Naval Oceanographic Office (NAVOCEANO). The SCDBMS incorporates database technology in providing seamless access to surface current data. The SCDBMS is an interactive software application with a graphical user interface (GUI) that supports user control of SCDBMS functional capabilities. The purpose of this document is to define and describe the structural framework and logistical design of the software components/units which are integrated into the major computer software configuration item (CSCI) identified as the SCDBMS, Version 1.0. The preliminary design is based on functional specifications and requirements identified in the governing Statement of Work prepared by the Naval Oceanographic Office (NAVOCEANO) and distributed as a request for proposal by the National Aeronautics and Space Administration (NASA).

  2. Using AMDD method for Database Design in Mobile Cloud Computing Systems

    Directory of Open Access Journals (Sweden)

    Silviu Claudiu POPA

    2013-01-01

    The development of wireless telecommunications technologies gave birth to new kinds of e-commerce, the so-called Mobile e-Commerce or m-Commerce. Mobile Cloud Computing (MCC) represents a new IT research area that combines mobile computing and cloud computing techniques. Behind a cloud mobile commerce system there is a database containing all the information necessary for transactions. By means of the Agile Model Driven Development (AMDD) method, we are able to achieve many benefits that smooth the progress of designing the databases used for cloud m-Commerce systems.

  3. Database System Schema Structure (数据库系统的模式结构)

    Institute of Scientific and Technical Information of China (English)

    林立云

    2011-01-01

    The three-level schema architecture of a database system means that the system is composed of three levels: the conceptual schema, external schemas, and the internal schema. This article analyzes these three levels and the two levels of mappings between them.
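    The three-level idea maps naturally onto concrete SQL objects, sketched here with SQLite: the base table plays the role of the conceptual schema, a view is one external schema, and the engine's storage layer stands in for the internal schema. Table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Conceptual schema: the logical description shared by all users.
cur.execute("CREATE TABLE employee (id INTEGER, name TEXT, salary REAL)")
cur.execute("INSERT INTO employee VALUES (1, 'Lin', 9000.0)")

# External schema: a user-facing view that hides the salary column.
# The external/conceptual mapping is the view definition itself.
cur.execute("CREATE VIEW employee_public AS SELECT id, name FROM employee")

rows = cur.execute("SELECT * FROM employee_public").fetchall()
print(rows)  # [(1, 'Lin')]
```

    The internal schema (pages, indexes, file layout) stays invisible at both upper levels, which is exactly the data independence the three-level architecture is meant to provide.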

  4. The Design and Implementation of the Ariel Active Database Rule System

    Science.gov (United States)

    1991-10-01

    the database query processor. These include DIPS [SLR89, RSL89] and RPL [DE88a, DE88b]. Another, HiPAC [DBB+88, Cha89, MD89], has been implemented...implemented in a tightly-coupled fashion with their respective database systems. However, neither the PRS, SRS, DIPS, RPL, nor HiPAC have a rule condition...placed on an arbitrary attribute (e.g., one without an index) (POSTGRES rule system [SHP88, SHP89, SRH90], HiPAC [C+89], DIPS [SLR89], Alert [SPAM91]

  5. The Integration Of The LHC Cryogenics Control System Data Into The CERN Layout Database

    CERN Document Server

    Fortescue-Beck, E; Gomes, P

    2011-01-01

    The Large Hadron Collider’s Cryogenic Control System makes extensive use of several databases to manage data appertaining to over 34,000 cryogenic instrumentation channels. This data is essential for populating the software of the PLCs which are responsible for maintaining the LHC at the appropriate temperature. In order to reduce the number of data sources and the overall complexity of the system, the databases have been rationalised and the automatic tool, that extracts data for the control software, has been simplified. This paper describes the main improvements that have been made and considers the success of the project.

  6. Analysis of Sqp current systems by using corrected geomagnetic coordinates

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The Spq equivalent current system of the quiet-day geomagnetic variation in the polar region is very complicated. It is composed of several currents, such as the ionospheric dynamo current and the auroral electrojet caused by the field-aligned current. Spq is asymmetrical between the two polar regions. In this paper, the Spq current systems are analyzed in corrected geomagnetic (CGM) coordinates instead of the conventional geomagnetic (GM) coordinates, and the symmetries of the Spq current in the different coordinate systems are compared. The causes of the Spq asymmetry in GM coordinates are then discussed, and the effects of each component of Spq are determined.

  7. Transport and Environment Database System (TRENDS): Maritime air pollutant emission modelling

    Science.gov (United States)

    Georgakaki, Aliki; Coffey, Robert A.; Lock, Graham; Sorenson, Spencer C.

    This paper reports the development of the maritime module within the framework of the Transport and Environment Database System (TRENDS) project. A detailed database has been constructed for the calculation of energy consumption and air pollutant emissions. Based on an in-house database of commercial vessels kept at the Technical University of Denmark, relationships between the fuel consumption and size of different vessels have been developed, taking into account the fleet's age and service speed. The technical assumptions and factors incorporated in the database are presented, including changes from findings reported in Methodologies for Estimating air pollutant Emissions from Transport (MEET). The database operates on statistical data provided by Eurostat, which describe vessel and freight movements from and towards EU 15 major ports. Data are at port to Maritime Coastal Area (MCA) level, so a bottom-up approach is used. A port-to-MCA distance database has also been constructed for the purpose of the study. This was the first attempt to use Eurostat maritime statistics for emission modelling, and the problems encountered, since the statistical data collection was not undertaken with this purpose in view, are mentioned. Examples of the results obtained from the database are presented. These include detailed air pollutant emission calculations for bulk carriers entering the port of Helsinki, as an example of the database operation, and aggregate results for different types of movements for France. Overall estimates of SOx and NOx emissions caused by shipping traffic between the EU 15 countries are in the area of 1 and 1.5 million tonnes, respectively.
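    A bottom-up emission calculation of the kind described here multiplies the fuel burned on a voyage (from consumption rate, speed, and distance) by a pollutant emission factor. The sketch below follows that spirit only; the function name and every number are made-up placeholders, not TRENDS values:

```python
# Toy bottom-up voyage emission estimate.
def voyage_emissions_kg(distance_nm, speed_knots, fuel_t_per_day, ef_kg_per_t):
    hours = distance_nm / speed_knots          # time under way
    fuel_t = fuel_t_per_day * hours / 24.0     # tonnes of fuel burned
    return fuel_t * ef_kg_per_t                # kg of pollutant emitted

# Illustrative bulk carrier: 500 nm at 14 kn, 30 t fuel/day,
# 54 kg SOx per tonne of fuel (all assumed figures).
e = voyage_emissions_kg(500, 14, 30, 54)
print(round(e, 1))  # 2410.7
```

    Summing such per-voyage estimates over all port-to-MCA movements in the statistics yields the aggregate national and EU-level totals the abstract reports.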

  8. Current dental adhesives systems. A narrative review.

    Science.gov (United States)

    Milia, Egle; Cumbo, Enzo; Cardoso, Rielson Jose A; Gallina, Giuseppe

    2012-01-01

    Adhesive dentistry is based on the development of materials which establish an effective bond with the tooth tissues. In this context, adhesive systems have attracted considerable research interest in recent years. Successful adhesive bonding depends on the chemistry of the adhesive, on appropriate clinical handling of the material, and on knowledge of the morphological changes caused in dental tissue by different bonding procedures. This paper outlines the status of contemporary adhesive systems, with particular emphasis on chemical characteristics and the mode of interaction of the adhesives with enamel and dentinal tissues. Dental adhesives are used for several clinical applications, and based on the clinical regimen they can be classified into "etch-and-rinse adhesives" and "self-etch adhesives". Other important considerations concern the different anatomical characteristics of enamel and dentine involved in the bonding procedures, which have implications for the technique used as well as for the quality of the bond. Etch-and-rinse adhesive systems generally perform better on enamel than self-etching systems, which may be more suitable for bonding to dentine. In order to avoid possible loss of the restoration, secondary caries, or pulp damage due to bacterial penetration or to cytotoxic effects of eluted adhesive components, careful consideration of several factors is essential in selecting the suitable bonding procedure and adhesive system for the individual patient situation.

  9. Development of a database system for operational use in the selection of titanium alloys

    Science.gov (United States)

    Han, Yuan-Fei; Zeng, Wei-Dong; Sun, Yu; Zhao, Yong-Qing

    2011-08-01

    The selection of titanium alloys has become a complex decision-making task due to the growing number of titanium alloys being created and utilized, each having its own characteristics, advantages, and limitations. In choosing the most appropriate titanium alloy, it is essential to offer a reasonable and intelligent service to technical engineers. One possible solution to this problem is to develop a database system (DS) to help retrieve rational proposals from different databases and information sources and analyze them to provide useful and explicit information. For this purpose, a design strategy based on fuzzy set theory is proposed, and a distributed database system is developed. Through ranking of the candidate titanium alloys, the most suitable material is determined. It is found that the selection results are in good agreement with the practical situation.
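    One common way fuzzy set theory is applied to material selection is to give each candidate a membership grade per criterion (0 to 1) and rank by a weighted sum. The sketch below shows that generic approach only; the alloy names, grades, and weights are invented, and the paper's actual fuzzy method may differ:

```python
# Rank candidates by weighted sum of fuzzy membership grades.
def rank(candidates, weights):
    scored = {name: sum(w * g for w, g in zip(weights, grades))
              for name, grades in candidates.items()}
    return sorted(scored, key=scored.get, reverse=True)

candidates = {                 # grades: (strength, corrosion resistance, cost)
    "Ti-6Al-4V":   (0.9, 0.7, 0.5),
    "CP titanium": (0.4, 0.9, 0.9),
}
weights = (0.5, 0.3, 0.2)      # criterion importance, summing to 1

order = rank(candidates, weights)
print(order)  # ['Ti-6Al-4V', 'CP titanium']
```

    With strength weighted most heavily, the high-strength alloy ranks first; shifting weight toward cost would reverse the order, which is the point of making the criteria weights explicit.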

  10. Dietary Supplement Ingredient Database

    Science.gov (United States)

    ... and US Department of Agriculture Dietary Supplement Ingredient Database ... values can be saved to build a small database or add to an existing database for national, ...

  11. Current status of the TSensor systems roadmap

    NARCIS (Netherlands)

    Walsh, Steven Thomas; Bryzek, Janusz; Pisano, Albert P.

    2014-01-01

    We apply our work from the contemporary pharmaceutical industry to generate a third generation-style technology roadmap for TSensor Systems. First we identify drivers and consortia. We then identify relevant technology components, namely multiple root technologies, multiple unit cells, multiple crit

  14. Avibase – a database system for managing and organizing taxonomic concepts

    Directory of Open Access Journals (Sweden)

    Denis Lepage

    2014-06-01

    Scientific names of biological entities offer an imperfect resolution of the concepts that they are intended to represent. Often they are labels applied to entities ranging from entire populations to individual specimens representing those populations, even though such names only unambiguously identify the type specimen to which they were originally attached. Thus the real-life referents of names are constantly changing as biological circumscriptions are redefined and thereby alter the sets of individuals bearing those names. This problem is compounded by other characteristics of names that make them ambiguous identifiers of biological concepts, including emendations, homonymy and synonymy. Taxonomic concepts have been proposed as a way to address issues related to scientific names, but they have yet to receive broad recognition or implementation. Some efforts have been made towards building systems that address these issues by cataloguing and organizing taxonomic concepts, but most are still in the conceptual or proof-of-concept stage. We present the on-line database Avibase as one possible approach to organizing taxonomic concepts. Avibase has been successfully used to describe and organize 844,000 species-level and 705,000 subspecies-level taxonomic concepts across every major bird taxonomic checklist of the last 125 years. The use of taxonomic concepts in place of scientific names, coupled with efficient resolution services, is a major step toward addressing some of the main deficiencies in the current practices of scientific name dissemination and use.
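    The core data-modeling idea can be shown in a few lines: a taxonomic concept is keyed by a (name, circumscription source) pair, so the same scientific name can resolve to different concepts under different checklists. The names, checklist labels, and concept identifiers below are invented for illustration and are not Avibase data:

```python
# Concepts keyed by (scientific name, checklist), not by name alone.
concepts = {
    ("Larus argentatus", "Checklist-1990"): "concept-A",  # broad circumscription
    ("Larus argentatus", "Checklist-2010"): "concept-B",  # after a taxon split
}

def resolve(name, checklist):
    """Resolve a name as used in a particular checklist to a concept ID."""
    return concepts.get((name, checklist))

a = resolve("Larus argentatus", "Checklist-1990")
b = resolve("Larus argentatus", "Checklist-2010")
print(a, b)  # concept-A concept-B
```

    A resolution service of this shape is what lets observations recorded under older checklists be compared meaningfully with ones recorded under newer, narrower circumscriptions.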

  15. Transparent image access in a distributed picture archiving and communications system: The master database broker

    OpenAIRE

    Cox, R D; Henri, C. J.; Rubin, R. K.

    1999-01-01

    A distributed design is the most cost-effective system for small- to medium-scale picture archiving and communications systems (PACS) implementations. However, the design presents an interesting challenge to developers and implementers: to make stored image data, distributed throughout the PACS network, appear to be centralized with a single access point for users. A key component for the distributed system is a central or master database, containing all the studies that have been scanned int...

  16. A Methodology for Implementing Clinical Algorithms Using Expert-System and Database Tools

    OpenAIRE

    Rucker, Donald W.; Shortliffe, Edward H.

    1989-01-01

    The HyperLipid Advisory System is a combination of an expert system and a database that uses an augmented transition network methodology for implementing clinical algorithms. These algorithms exist as tables from which the separate expert-system rule base sequentially extracts the steps in the algorithm. The rule base assumes that the algorithm has a binary branching structure and models episodes of clinical care, but otherwise makes no assumption regarding the specific clinical domain. Hyper...

  17. A Database as a Service for the Healthcare System to Store Physiological Signal Data.

    Science.gov (United States)

    Chang, Hsien-Tsung; Lin, Tsai-Huei

    2016-01-01

    Wearable devices that measure physiological signals to help develop self-health management habits have become increasingly popular in recent years. These records are conducive to follow-up health and medical care. In this study, based on the characteristics of the observed physiological signal records, namely (1) a large number of users, (2) a large amount of data, (3) low information variability, (4) data privacy authorization, and (5) data access by designated users, we wish to resolve physiological signal record-relevant issues utilizing the advantages of the Database as a Service (DaaS) model. Storing a large amount of data using file patterns can reduce database load, allowing users to access data efficiently; the privacy control settings allow users to store data securely. The results of the experiment show that the proposed system has better database access performance than a traditional relational database, with a small difference in database volume, thus proving that the proposed system can improve data storage performance.
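    The storage split described here (bulk samples in flat files, metadata and access rights in the database) can be sketched as follows. The schema, field names, and authorization check are assumptions for illustration, not the paper's actual design:

```python
import os
import sqlite3
import tempfile

# Database holds only per-file metadata and access authorization.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE signal_file (
    user_id INTEGER, path TEXT, signal_type TEXT, authorized_for TEXT)""")

# Raw signal samples live outside the database, in a flat file.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "ecg_0001.csv")
with open(path, "w") as f:
    f.write("0.12,0.15,0.11\n")

cur.execute("INSERT INTO signal_file VALUES (?, ?, ?, ?)",
            (1, path, "ECG", "physician-7"))

# The access check is a cheap database lookup; the heavy read hits the file.
row = cur.execute("SELECT path FROM signal_file WHERE user_id = 1 "
                  "AND authorized_for = 'physician-7'").fetchone()
data = open(row[0]).read().strip()
print(data)  # 0.12,0.15,0.11
```

    Keeping the bulk samples out of the database is what reduces database load: the table grows with the number of files, not the number of samples.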

  18. RESEARCH ON HLR MOBILITY DATABASE FAILURE RECOVERY AND PERFORMANCE ANALYSIS FOR CDMA2000 SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Liu Caixia; Yu Dingjiu; Cheng Dongnian; Tang Hongbo; Wu Jiangxing

    2004-01-01

    In this paper, a novel Home Location Register (HLR) mobility database recovery scheme is proposed. With database backing-up and signal sending as its key processes, the presented scheme is designed to both decrease system costs and reduce the number of lost calls. In our scheme, an algorithm is developed for an HLR to identify the VLRs into which new MSs have roamed since the latest HLR database backup. The HLR uses this identification to send Unreliable Roaming Data Directive messages to each of those VLRs to obtain the correct location information of the new MSs. Additionally, two relationships, one between the number of lost calls and the database backing-up period and the other between the backing-up cost and the period, are analyzed in detail. Both analytical and numerical results indicate that there is an optimal HLR database backing-up period for given system parameters, at which the total cost is minimized.
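    The trade-off the abstract analyzes, where a longer backing-up period lowers backup cost but raises the expected loss after a failure, can be illustrated with a toy cost model. The linear loss term and all constants below are invented placeholders, not the paper's actual cost functions:

```python
# Toy combined cost: per-backup cost amortized over the period,
# plus an expected-lost-calls term that grows with the period.
def total_cost(period_h, backup_cost=10.0, loss_rate=0.5):
    return backup_cost / period_h + loss_rate * period_h

# Search candidate periods (in hours) for the minimum combined cost.
best = min(range(1, 25), key=total_cost)
print(best)  # 4
```

    Even this crude model reproduces the qualitative result: the two opposing terms produce an interior minimum, i.e. an optimal backing-up period for given parameters.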

  19. The Israel DNA database--the establishment of a rapid, semi-automated analysis system.

    Science.gov (United States)

    Zamir, Ashira; Dell'Ariccia-Carmon, Aviva; Zaken, Neomi; Oz, Carla

    2012-03-01

    The Israel Police DNA database, also known as IPDIS (Israel Police DNA Index System), has been operating since February 2007. During that time more than 135,000 reference samples have been uploaded and more than 2000 hits reported. We have developed an effective semi-automated system that includes two automated punchers, three liquid-handler robots, and four genetic analyzers. An in-house LIMS program enables full tracking of every sample through the entire process of registration, pre-PCR handling, analysis of profiles, uploading to the database, hit reports, and ultimately storage. The LIMS is also responsible for the future tracking of samples and of their profiles to be expunged from the database in accordance with the Israeli DNA legislation. The database is administered by an in-house developed software program, with which reference and evidentiary profiles are uploaded, stored, searched, and matched. The DNA database has proven to be an effective investigative tool which has gained the confidence of the Israeli public and on which the Israel National Police force has grown to rely.

  20. NVST Data Archiving System Based On FastBit NoSQL Database

    Science.gov (United States)

    Liu, Ying-bo; Wang, Feng; Ji, Kai-fan; Deng, Hui; Dai, Wei; Liang, Bo

    2014-06-01

    The New Vacuum Solar Telescope (NVST) is a 1-meter vacuum solar telescope that aims to observe the fine structures of active regions on the Sun. The main tasks of the NVST are high-resolution imaging and spectral observations, including measurements of the solar magnetic field. The NVST has collected more than 20 million FITS files since it began routine observations in 2012 and produces up to 120 thousand observational records per day. Given this volume of files, effective archiving and retrieval becomes a critical and urgent problem. In this study, we implement a new data archiving system for the NVST based on the FastBit Not Only Structured Query Language (NoSQL) database. Compared to a relational database (i.e., MySQL; My Structured Query Language), the FastBit database shows distinctive advantages in indexing and querying performance. In a large-scale database of 40 million records, the multi-field combined query response time of the FastBit database is about 15 times faster, which fully meets the requirements of the NVST. Our study offers a new approach to massive astronomical data archiving and could contribute to the design of data management systems for other astronomical telescopes.
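FastBit's query speed comes from bitmap indexing: each (field, value) condition gets a bit-vector with one bit per record, and a multi-field combined query reduces to bitwise ANDs. A minimal sketch using Python integers as uncompressed bit-vectors; the record fields and values are illustrative, not the NVST's actual FITS-header schema, and real FastBit additionally compresses its bitmaps (word-aligned hybrid encoding):

```python
# Illustrative records; not the NVST's actual header fields.
records = [
    {"instrument": "HA",  "exposure": 0.02, "quality": "good"},
    {"instrument": "TiO", "exposure": 0.80, "quality": "good"},
    {"instrument": "HA",  "exposure": 1.50, "quality": "bad"},
    {"instrument": "HA",  "exposure": 2.00, "quality": "good"},
]

def build_bitmap(recs, predicate):
    """One bit per record: bit i is set iff predicate(recs[i]) holds."""
    bits = 0
    for i, rec in enumerate(recs):
        if predicate(rec):
            bits |= 1 << i
    return bits

# One bit-vector per condition, as a bitmap index would keep.
bm_ha   = build_bitmap(records, lambda r: r["instrument"] == "HA")
bm_good = build_bitmap(records, lambda r: r["quality"] == "good")
bm_long = build_bitmap(records, lambda r: r["exposure"] > 1.0)

# A multi-field combined query is a single bitwise AND of the bitmaps.
hits = bm_ha & bm_good & bm_long
matching = [i for i in range(len(records)) if (hits >> i) & 1]
# matching -> [3]: only record 3 is HA, good quality, and exposure > 1.0
```

Because the AND runs over machine words rather than individual rows, the cost of combining additional query conditions stays low, which is the behaviour the 15x speedup reflects.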

  1. Moving Observer Support for Databases

    DEFF Research Database (Denmark)

    Bukauskas, Linas

    Interactive visual data explorations impose rigid requirements on database and visualization systems. Systems that visualize huge amounts of data tend to request large amounts of memory resources and heavily use the CPU to process and visualize data. Current systems employ a loosely coupled...... architecture to exchange data between database and visualization. Thus, the interaction of the visualizer and the database is kept to a minimum, which most often leads to superfluous data being passed from database to visualizer. This Ph.D. thesis presents a novel tight coupling of database and visualizer...... together with the VR-tree enables the fast extraction of appearing and disappearing objects from the observer's view as the observer navigates through the data space. Use of the VAST structure significantly reduces the number of objects to be extracted from the VR-tree, and VAST enables a fast interaction of database...

  2. Current status of dentin adhesive systems.

    Science.gov (United States)

    Leinfelder, K F

    1998-12-01

    Undoubtedly, dentin bonding agents have undergone a major evolution during the last several years. Under well-controlled conditions, the shear bond strength of composite resin to the surface of dentin is actually greater than the inherent strength of the dentin itself. No longer must the clinician depend upon bonding to enamel as the sole bonding mechanism. Bonding to both types of dental structure permits even better reinforcement of the tooth itself. Perhaps even more important than the high level of bonding exhibited by the current dentin adhesives is their ability to seal the dentin. So effective is this sealing capability that it is now possible to protect the pulpal tissue from microbial invasion through the dentinal tubules. Further, by enclosing the odontoblastic processes and preventing fluid flow, the potential for postoperative sensitivity is diminished considerably. In fact, so far-reaching is the concept of bonding that the procedures associated with the restoration of teeth have changed dramatically. Undoubtedly, far greater improvements can be anticipated in the future.

  3. Development of BSCCO persistent current system

    Energy Technology Data Exchange (ETDEWEB)

    Joo, Jin Ho; Nah, Wan Soo; Kang, Hyung Koo; Yoo, Jung Hoon [Sungkyunkwan University, Seoul (Korea)

    1998-05-01

    We have developed a temperature-variable critical-current measurement device for high-Tc superconducting wires. To this end, a vacuum shroud was designed and fabricated, and both signal and power lines into the shroud were installed on it. Secondly, design procedures for the PCS were established for high-Tc superconducting wires based on electrical circuit analyses during energization. We have also evaluated mechanical properties such as hardness, strength and elongation of sheath alloys made by adding Cu, Mg, Ti, Zr and Ni to an Ag matrix using an induction melting furnace. Hardness and strength were observed to improve as the additive content was increased from 0.05 to 0.2 at.%. Specifically, the increase in strength was larger for alloys made with Mg, Cu and Zr additions than for those made with Ni and Ti. On the other hand, elongation was significantly reduced for the former sheath alloy materials. (author). 12 refs., 13 figs., 4 tabs.

  4. The database of the PREDICTS (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems) project

    NARCIS (Netherlands)

    Hudson, Lawrence N; Newbold, Tim; Contu, Sara; Hill, Samantha L L; Lysenko, Igor; De Palma, Adriana; Phillips, Helen R P; Alhusseini, Tamera I; Bedford, Felicity E; Bennett, Dominic J; Booth, Hollie; Burton, Victoria J; Chng, Charlotte W T; Choimes, Argyrios; Correia, David L P; Day, Julie; Echeverría-Londoño, Susy; Emerson, Susan R; Gao, Di; Garon, Morgan; Harrison, Michelle L K; Ingram, Daniel J; Jung, Martin; Kemp, Victoria; Kirkpatrick, Lucinda; Martin, Callum D; Pan, Yuan; Pask-Hale, Gwilym D; Pynegar, Edwin L; Robinson, Alexandra N; Sanchez-Ortiz, Katia; Senior, Rebecca A; Simmons, Benno I; White, Hannah J; Zhang, Hanbin; Aben, Job; Abrahamczyk, Stefan; Adum, Gilbert B; Aguilar-Barquero, Virginia; Aizen, Marcelo A; Albertos, Belén; Alcala, E L; Del Mar Alguacil, Maria; Alignier, Audrey; Ancrenaz, Marc; Andersen, Alan N; Arbeláez-Cortés, Enrique; Armbrecht, Inge; Arroyo-Rodríguez, Víctor; Aumann, Tom; Axmacher, Jan C; Azhar, Badrul; Azpiroz, Adrián B; Baeten, Lander; Bakayoko, Adama; Báldi, András; Banks, John E; Baral, Sharad K; Barlow, Jos; Barratt, Barbara I P; Barrico, Lurdes; Bartolommei, Paola; Barton, Diane M; Basset, Yves; Batáry, Péter; Bates, Adam J; Baur, Bruno; Bayne, Erin M; Beja, Pedro; Benedick, Suzan; Berg, Åke; Bernard, Henry; Berry, Nicholas J; Bhatt, Dinesh; Bicknell, Jake E; Bihn, Jochen H; Blake, Robin J; Bobo, Kadiri S; Bóçon, Roberto; Boekhout, Teun; Böhning-Gaese, Katrin; Bonham, Kevin J; Borges, Paulo A V; Borges, Sérgio H; Boutin, Céline; Bouyer, Jérémy; Bragagnolo, Cibele; Brandt, Jodi S; Brearley, Francis Q; Brito, Isabel; Bros, Vicenç; Brunet, Jörg; Buczkowski, Grzegorz; Buddle, Christopher M; Bugter, Rob; Buscardo, Erika; Buse, Jörn; Cabra-García, Jimmy; Cáceres, Nilton C; Cagle, Nicolette L; Calviño-Cancela, María; Cameron, Sydney A; Cancello, Eliana M; Caparrós, Rut; Cardoso, Pedro; Carpenter, Dan; Carrijo, Tiago F; Carvalho, Anelena L; Cassano, Camila R; Castro, Helena; Castro-Luna, Alejandro A; Rolando, Cerda B; Cerezo, 
Alexis; Chapman, Kim Alan; Chauvat, Matthieu; Christensen, Morten; Clarke, Francis M; Cleary, Daniel F R; Colombo, Giorgio; Connop, Stuart P; Craig, Michael D; Cruz-López, Leopoldo; Cunningham, Saul A; D'Aniello, Biagio; D'Cruze, Neil; da Silva, Pedro Giovâni; Dallimer, Martin; Danquah, Emmanuel; Darvill, Ben; Dauber, Jens; Davis, Adrian L V; Dawson, Jeff; de Sassi, Claudio; de Thoisy, Benoit; Deheuvels, Olivier; Dejean, Alain; Devineau, Jean-Louis; Diekötter, Tim; Dolia, Jignasu V; Domínguez, Erwin; Dominguez-Haydar, Yamileth; Dorn, Silvia; Draper, Isabel; Dreber, Niels; Dumont, Bertrand; Dures, Simon G; Dynesius, Mats; Edenius, Lars; Eggleton, Paul; Eigenbrod, Felix; Elek, Zoltán; Entling, Martin H; Esler, Karen J; de Lima, Ricardo F; Faruk, Aisyah; Farwig, Nina; Fayle, Tom M; Felicioli, Antonio; Felton, Annika M; Fensham, Roderick J; Fernandez, Ignacio C; Ferreira, Catarina C; Ficetola, Gentile F; Fiera, Cristina; Filgueiras, Bruno K C; Fırıncıoğlu, Hüseyin K; Flaspohler, David; Floren, Andreas; Fonte, Steven J; Fournier, Anne; Fowler, Robert E; Franzén, Markus; Fraser, Lauchlan H; Fredriksson, Gabriella M; Freire, Geraldo B; Frizzo, Tiago L M; Fukuda, Daisuke; Furlani, Dario; Gaigher, René; Ganzhorn, Jörg U; García, Karla P; Garcia-R, Juan C; Garden, Jenni G; Garilleti, Ricardo; Ge, Bao-Ming; Gendreau-Berthiaume, Benoit; Gerard, Philippa J; Gheler-Costa, Carla; Gilbert, Benjamin; Giordani, Paolo; Giordano, Simonetta; Golodets, Carly; Gomes, Laurens G L; Gould, Rachelle K; Goulson, Dave; Gove, Aaron D; Granjon, Laurent; Grass, Ingo; Gray, Claudia L; Grogan, James; Gu, Weibin; Guardiola, Moisès; Gunawardene, Nihara R; Gutierrez, Alvaro G; Gutiérrez-Lamus, Doris L; Haarmeyer, Daniela H; Hanley, Mick E; Hanson, Thor; Hashim, Nor R; Hassan, Shombe N; Hatfield, Richard G; Hawes, Joseph E; Hayward, Matt W; Hébert, Christian; Helden, Alvin J; Henden, John-André; Henschel, Philipp; Hernández, Lionel; Herrera, James P; Herrmann, Farina; Herzog, Felix; Higuera-Diaz, 
Diego; Hilje, Branko; Höfer, Hubert; Hoffmann, Anke; Horgan, Finbarr G; Hornung, Elisabeth; Horváth, Roland; Hylander, Kristoffer; Isaacs-Cubides, Paola; Ishida, Hiroaki; Ishitani, Masahiro; Jacobs, Carmen T; Jaramillo, Víctor J; Jauker, Birgit; Hernández, F Jiménez; Johnson, McKenzie F; Jolli, Virat; Jonsell, Mats; Juliani, S Nur; Jung, Thomas S; Kapoor, Vena; Kappes, Heike; Kati, Vassiliki; Katovai, Eric; Kellner, Klaus; Kessler, Michael; Kirby, Kathryn R; Kittle, Andrew M; Knight, Mairi E; Knop, Eva; Kohler, Florian; Koivula, Matti; Kolb, Annette; Kone, Mouhamadou; Kőrösi, Ádám; Krauss, Jochen; Kumar, Ajith; Kumar, Raman; Kurz, David J; Kutt, Alex S; Lachat, Thibault; Lantschner, Victoria; Lara, Francisco; Lasky, Jesse R; Latta, Steven C; Laurance, William F; Lavelle, Patrick; Le Féon, Violette; LeBuhn, Gretchen; Légaré, Jean-Philippe; Lehouck, Valérie; Lencinas, María V; Lentini, Pia E; Letcher, Susan G; Li, Qi; Litchwark, Simon A; Littlewood, Nick A; Liu, Yunhui; Lo-Man-Hung, Nancy; López-Quintero, Carlos A; Louhaichi, Mounir; Lövei, Gabor L; Lucas-Borja, Manuel Esteban; Luja, Victor H; Luskin, Matthew S; MacSwiney G, M Cristina; Maeto, Kaoru; Magura, Tibor; Mallari, Neil Aldrin; Malone, Louise A; Malonza, Patrick K; Malumbres-Olarte, Jagoba; Mandujano, Salvador; Måren, Inger E; Marin-Spiotta, Erika; Marsh, Charles J; Marshall, E J P; Martínez, Eliana; Martínez Pastur, Guillermo; Moreno Mateos, David; Mayfield, Margaret M; Mazimpaka, Vicente; McCarthy, Jennifer L; McCarthy, Kyle P; McFrederick, Quinn S; McNamara, Sean; Medina, Nagore G; Medina, Rafael; Mena, Jose L; Mico, Estefania; Mikusinski, Grzegorz; Milder, Jeffrey C; Miller, James R; Miranda-Esquivel, Daniel R; Moir, Melinda L; Morales, Carolina L; Muchane, Mary N; Muchane, Muchai; Mudri-Stojnic, Sonja; Munira, A Nur; Muoñz-Alonso, Antonio; Munyekenye, B F; Naidoo, Robin; Naithani, A; Nakagawa, Michiko; Nakamura, Akihiro; Nakashima, Yoshihiro; Naoe, Shoji; Nates-Parra, Guiomar; Navarrete Gutierrez, Dario 
A; Navarro-Iriarte, Luis; Ndang'ang'a, Paul K; Neuschulz, Eike L; Ngai, Jacqueline T; Nicolas, Violaine; Nilsson, Sven G; Noreika, Norbertas; Norfolk, Olivia; Noriega, Jorge Ari; Norton, David A; Nöske, Nicole M; Nowakowski, A Justin; Numa, Catherine; O'Dea, Niall; O'Farrell, Patrick J; Oduro, William; Oertli, Sabine; Ofori-Boateng, Caleb; Oke, Christopher Omamoke; Oostra, Vicencio; Osgathorpe, Lynne M; Otavo, Samuel Eduardo; Page, Navendu V; Paritsis, Juan; Parra-H, Alejandro; Parry, Luke; Pe'er, Guy; Pearman, Peter B; Pelegrin, Nicolás; Pélissier, Raphaël; Peres, Carlos A; Peri, Pablo L; Persson, Anna S; Petanidou, Theodora; Peters, Marcell K; Pethiyagoda, Rohan S; Phalan, Ben; Philips, T Keith; Pillsbury, Finn C; Pincheira-Ulbrich, Jimmy; Pineda, Eduardo; Pino, Joan; Pizarro-Araya, Jaime; Plumptre, A J; Poggio, Santiago L; Politi, Natalia; Pons, Pere; Poveda, Katja; Power, Eileen F; Presley, Steven J; Proença, Vânia; Quaranta, Marino; Quintero, Carolina; Rader, Romina; Ramesh, B R; Ramirez-Pinilla, Martha P; Ranganathan, Jai; Rasmussen, Claus; Redpath-Downing, Nicola A; Reid, J Leighton; Reis, Yana T; Rey Benayas, José M; Rey-Velasco, Juan Carlos; Reynolds, Chevonne; Ribeiro, Danilo Bandini; Richards, Miriam H; Richardson, Barbara A; Richardson, Michael J; Ríos, Rodrigo Macip; Robinson, Richard; Robles, Carolina A; Römbke, Jörg; Romero-Duque, Luz Piedad; Rös, Matthias; Rosselli, Loreta; Rossiter, Stephen J; Roth, Dana S; Roulston, T'ai H; Rousseau, Laurent; Rubio, André V; Ruel, Jean-Claude; Sadler, Jonathan P; Sáfián, Szabolcs; Saldaña-Vázquez, Romeo A; Sam, Katerina; Samnegård, Ulrika; Santana, Joana; Santos, Xavier; Savage, Jade; Schellhorn, Nancy A; Schilthuizen, Menno; Schmiedel, Ute; Schmitt, Christine B; Schon, Nicole L; Schüepp, Christof; Schumann, Katharina; Schweiger, Oliver; Scott, Dawn M; Scott, Kenneth A; Sedlock, Jodi L; Seefeldt, Steven S; Shahabuddin, Ghazala; Shannon, Graeme; Sheil, Douglas; Sheldon, Frederick H; Shochat, Eyal; Siebert, Stefan 
J; Silva, Fernando A B; Simonetti, Javier A; Slade, Eleanor M; Smith, Jo; Smith-Pardo, Allan H; Sodhi, Navjot S; Somarriba, Eduardo J; Sosa, Ramón A; Soto Quiroga, Grimaldo; St-Laurent, Martin-Hugues; Starzomski, Brian M; Stefanescu, Constanti; Steffan-Dewenter, Ingolf; Stouffer, Philip C; Stout, Jane C; Strauch, Ayron M; Struebig, Matthew J; Su, Zhimin; Suarez-Rubio, Marcela; Sugiura, Shinji; Summerville, Keith S; Sung, Yik-Hei; Sutrisno, Hari; Svenning, Jens-Christian; Teder, Tiit; Threlfall, Caragh G; Tiitsaar, Anu; Todd, Jacqui H; Tonietto, Rebecca K; Torre, Ignasi; Tóthmérész, Béla; Tscharntke, Teja; Turner, Edgar C; Tylianakis, Jason M; Uehara-Prado, Marcio; Urbina-Cardona, Nicolas; Vallan, Denis; Vanbergen, Adam J; Vasconcelos, Heraldo L; Vassilev, Kiril; Verboven, Hans A F; Verdasca, Maria João; Verdú, José R; Vergara, Carlos H; Vergara, Pablo M; Verhulst, Jort; Virgilio, Massimiliano; Vu, Lien Van; Waite, Edward M; Walker, Tony R; Wang, Hua-Feng; Wang, Yanping; Watling, James I; Weller, Britta; Wells, Konstans; Westphal, Catrin; Wiafe, Edward D; Williams, Christopher D; Willig, Michael R; Woinarski, John C Z; Wolf, Jan H D; Wolters, Volkmar; Woodcock, Ben A; Wu, Jihua; Wunderle, Joseph M; Yamaura, Yuichi; Yoshikura, Satoko; Yu, Douglas W; Zaitsev, Andrey S; Zeidler, Juliane; Zou, Fasheng; Collen, Ben; Ewers, Rob M; Mace, Georgina M; Purves, Drew W; Scharlemann, Jörn P W; Purvis, Andy

    2017-01-01

    The PREDICTS project-Projecting Responses of Ecological Diversity In Changing Terrestrial Systems (www.predicts.org.uk)-has collated from published studies a large, reasonably representative database of comparable samples of biodiversity from multiple sites that differ in the nature or intensity of

  5. A database system for the management of severe accident risk information, SARD

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, K. I.; Kim, D. H. [KAERI, Taejon (Korea, Republic of)

    2003-10-01

    The purpose of this paper is to introduce the main features and functions of a PC Windows-based database management system, SARD, which has been developed at the Korea Atomic Energy Research Institute for the automatic management and search of severe accident risk information. The main functions of the present database system are implemented by three closely related but distinct modules: (1) fixing an initial environment for data storage and retrieval, (2) automatic loading and management of accident information, and (3) automatic search and retrieval of accident information. For this, the present database system manipulates various forms of plant-specific severe accident risk information, such as dominant severe accident sequences identified from the plant-specific Level 2 Probabilistic Safety Assessment (PSA) and accident sequence-specific information obtained from the representative severe accident codes (e.g., base case and sensitivity analysis results, and summaries of key plant responses). The present database system makes possible fast prediction and intelligent retrieval of the required severe accident risk information for various accident sequences, and in turn it can be used to support the Level 2 PSA of similar plants and to develop plant-specific severe accident management strategies.

  6. Overlap articles of respiratory system in databases Scopus and Web of Science: brief report

    Directory of Open Access Journals (Sweden)

    Seyed Javad Ghazimirsaeed

    2015-03-01

    Conclusion: Because the contents of the two databases, Scopus and Web of Science, overlap, Scopus is the better source for retrieving articles on the respiratory system, since it contains unique papers. This overlap should nevertheless be taken into account when purchasing and sharing these resources.

  7. The database of the PREDICTS (Projecting Responses of Ecological Diversity in Changing Terrestrial Systems) project

    DEFF Research Database (Denmark)

    Hudson, Lawrence N; Newbold, Tim; Contu, Sara

    2017-01-01

    The PREDICTS project-Projecting Responses of Ecological Diversity In Changing Terrestrial Systems (www.predicts.org.uk)-has collated from published studies a large, reasonably representative database of comparable samples of biodiversity from multiple sites that differ in the nature or intensity ...

  8. The database of the PREDICTS (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems) project

    NARCIS (Netherlands)

    Hudson, Lawrence N; Newbold, Tim; Contu, Sara; Hill, Samantha L L; Lysenko, Igor; De Palma, Adriana; Phillips, Helen R P; Alhusseini, Tamera I; Bedford, Felicity E; Bennett, Dominic J; Booth, Hollie; Burton, Victoria J; Chng, Charlotte W T; Choimes, Argyrios; Correia, David L P; Day, Julie; Echeverría-Londoño, Susy; Emerson, Susan R; Gao, Di; Garon, Morgan; Harrison, Michelle L K; Ingram, Daniel J; Jung, Martin; Kemp, Victoria; Kirkpatrick, Lucinda; Martin, Callum D; Pan, Yuan; Pask-Hale, Gwilym D; Pynegar, Edwin L; Robinson, Alexandra N; Sanchez-Ortiz, Katia; Senior, Rebecca A; Simmons, Benno I; White, Hannah J; Zhang, Hanbin; Aben, Job; Abrahamczyk, Stefan; Adum, Gilbert B; Aguilar-Barquero, Virginia; Aizen, Marcelo A; Albertos, Belén; Alcala, E L; Del Mar Alguacil, Maria; Alignier, Audrey; Ancrenaz, Marc; Andersen, Alan N; Arbeláez-Cortés, Enrique; Armbrecht, Inge; Arroyo-Rodríguez, Víctor; Aumann, Tom; Axmacher, Jan C; Azhar, Badrul; Azpiroz, Adrián B; Baeten, Lander; Bakayoko, Adama; Báldi, András; Banks, John E; Baral, Sharad K; Barlow, Jos; Barratt, Barbara I P; Barrico, Lurdes; Bartolommei, Paola; Barton, Diane M; Basset, Yves; Batáry, Péter; Bates, Adam J; Baur, Bruno; Bayne, Erin M; Beja, Pedro; Benedick, Suzan; Berg, Åke; Bernard, Henry; Berry, Nicholas J; Bhatt, Dinesh; Bicknell, Jake E; Bihn, Jochen H; Blake, Robin J; Bobo, Kadiri S; Bóçon, Roberto; Boekhout, Teun; Böhning-Gaese, Katrin; Bonham, Kevin J; Borges, Paulo A V; Borges, Sérgio H; Boutin, Céline; Bouyer, Jérémy; Bragagnolo, Cibele; Brandt, Jodi S; Brearley, Francis Q; Brito, Isabel; Bros, Vicenç; Brunet, Jörg; Buczkowski, Grzegorz; Buddle, Christopher M; Bugter, Rob; Buscardo, Erika; Buse, Jörn; Cabra-García, Jimmy; Cáceres, Nilton C; Cagle, Nicolette L; Calviño-Cancela, María; Cameron, Sydney A; Cancello, Eliana M; Caparrós, Rut; Cardoso, Pedro; Carpenter, Dan; Carrijo, Tiago F; Carvalho, Anelena L; Cassano, Camila R; Castro, Helena; Castro-Luna, Alejandro A; Rolando, Cerda B; Cerezo, 
Alexis; Chapman, Kim Alan; Chauvat, Matthieu; Christensen, Morten; Clarke, Francis M; Cleary, Daniel F R; Colombo, Giorgio; Connop, Stuart P; Craig, Michael D; Cruz-López, Leopoldo; Cunningham, Saul A; D'Aniello, Biagio; D'Cruze, Neil; da Silva, Pedro Giovâni; Dallimer, Martin; Danquah, Emmanuel; Darvill, Ben; Dauber, Jens; Davis, Adrian L V; Dawson, Jeff; de Sassi, Claudio; de Thoisy, Benoit; Deheuvels, Olivier; Dejean, Alain; Devineau, Jean-Louis; Diekötter, Tim; Dolia, Jignasu V; Domínguez, Erwin; Dominguez-Haydar, Yamileth; Dorn, Silvia; Draper, Isabel; Dreber, Niels; Dumont, Bertrand; Dures, Simon G; Dynesius, Mats; Edenius, Lars; Eggleton, Paul; Eigenbrod, Felix; Elek, Zoltán; Entling, Martin H; Esler, Karen J; de Lima, Ricardo F; Faruk, Aisyah; Farwig, Nina; Fayle, Tom M; Felicioli, Antonio; Felton, Annika M; Fensham, Roderick J; Fernandez, Ignacio C; Ferreira, Catarina C; Ficetola, Gentile F; Fiera, Cristina; Filgueiras, Bruno K C; Fırıncıoğlu, Hüseyin K; Flaspohler, David; Floren, Andreas; Fonte, Steven J; Fournier, Anne; Fowler, Robert E; Franzén, Markus; Fraser, Lauchlan H; Fredriksson, Gabriella M; Freire, Geraldo B; Frizzo, Tiago L M; Fukuda, Daisuke; Furlani, Dario; Gaigher, René; Ganzhorn, Jörg U; García, Karla P; Garcia-R, Juan C; Garden, Jenni G; Garilleti, Ricardo; Ge, Bao-Ming; Gendreau-Berthiaume, Benoit; Gerard, Philippa J; Gheler-Costa, Carla; Gilbert, Benjamin; Giordani, Paolo; Giordano, Simonetta; Golodets, Carly; Gomes, Laurens G L; Gould, Rachelle K; Goulson, Dave; Gove, Aaron D; Granjon, Laurent; Grass, Ingo; Gray, Claudia L; Grogan, James; Gu, Weibin; Guardiola, Moisès; Gunawardene, Nihara R; Gutierrez, Alvaro G; Gutiérrez-Lamus, Doris L; Haarmeyer, Daniela H; Hanley, Mick E; Hanson, Thor; Hashim, Nor R; Hassan, Shombe N; Hatfield, Richard G; Hawes, Joseph E; Hayward, Matt W; Hébert, Christian; Helden, Alvin J; Henden, John-André; Henschel, Philipp; Hernández, Lionel; Herrera, James P; Herrmann, Farina; Herzog, Felix; Higuera-Diaz, 
Diego; Hilje, Branko; Höfer, Hubert; Hoffmann, Anke; Horgan, Finbarr G; Hornung, Elisabeth; Horváth, Roland; Hylander, Kristoffer; Isaacs-Cubides, Paola; Ishida, Hiroaki; Ishitani, Masahiro; Jacobs, Carmen T; Jaramillo, Víctor J; Jauker, Birgit; Hernández, F Jiménez; Johnson, McKenzie F; Jolli, Virat; Jonsell, Mats; Juliani, S Nur; Jung, Thomas S; Kapoor, Vena; Kappes, Heike; Kati, Vassiliki; Katovai, Eric; Kellner, Klaus; Kessler, Michael; Kirby, Kathryn R; Kittle, Andrew M; Knight, Mairi E; Knop, Eva; Kohler, Florian; Koivula, Matti; Kolb, Annette

    The PREDICTS project-Projecting Responses of Ecological Diversity In Changing Terrestrial Systems (www.predicts.org.uk)-has collated from published studies a large, reasonably representative database of comparable samples of biodiversity from multiple sites that differ in the nature or intensity of

  9. Development of a database system for the calculation of indicators of environmental pressure caused by transport

    DEFF Research Database (Denmark)

    Giannouli, Myrsini; Samaras, Zissis; Keller, Mario

    2006-01-01

    The scope of this paper is to summarise the methodology developed for TRENDS (TRansport and ENvironment Database System). The main objective of TRENDS was the calculation of environmental pressure indicators caused by transport. The environmental pressures considered are associated with air...

  10. Another Look at Taurus Littrow: An Interactive Geographic Information System DataBase

    Science.gov (United States)

    Coombs, Cassandra R.; Meisburger, J. L.; Nettles, J. W.

    1998-01-01

    A variety of data has been amassed for the Apollo 17 landing site, including topography, sample locations, and imagery. These data were compiled into a Geographic Information System (GIS) to analyze their interrelationships more easily. The database will allow the evaluation of the resource potential of the Taurus Littrow region pyroclastic deposits. The database also serves as a catalog for the returned lunar samples. This catalog includes rock type, size, and location. While this project specifically targets the Taurus Littrow region, it is applicable to other regions as well.

  11. CURRENT VIEWS OF THE GLEASON GRADING SYSTEM

    Directory of Open Access Journals (Sweden)

    N. A. Gorban

    2014-07-01

    The authors present the proceedings of the 2005 First International Society of Urological Pathology Consensus Conference and the basic provisions that distinguish the modified Gleason grading system from its original interpretation. In particular, Gleason grade 1 (or score 1 + 1 = 2) should no longer be assigned when assessing needle biopsy specimens. Contrary to the recommendations of Gleason himself, the conference decided to apply stringent criteria to the use of Gleason grades 3 and 4. This is because these grades have special prognostic value, so it is important to have clear criteria defining each Gleason grade. Notions such as secondary and tertiary Gleason patterns are considered, and detailed recommendations are given on the lesion extent sufficient to diagnose these components.

  12. Ultra-Structure database design methodology for managing systems biology data and analyses

    Directory of Open Access Journals (Sweden)

    Hemminger Bradley M

    2009-08-01

    Background: Modern, high-throughput biological experiments generate copious, heterogeneous, interconnected data sets. Research is dynamic, with frequently changing protocols, techniques, instruments, and file formats. Because of these factors, systems designed to manage and integrate modern biological data sets often end up as large, unwieldy databases that become difficult to maintain or evolve. The novel rule-based approach of the Ultra-Structure design methodology presents a potential solution to this problem. By representing both data and processes as formal rules within a database, an Ultra-Structure system constitutes a flexible framework that enables users to explicitly store domain knowledge in both a machine- and human-readable form. End users themselves can change the system's capabilities without programmer intervention, simply by altering database contents; no computer code or schemas need be modified. This provides flexibility in adapting to change, and allows integration of disparate, heterogeneous data sets within a small core set of database tables, facilitating joint analysis and visualization without becoming unwieldy. Here, we examine the application of Ultra-Structure to our ongoing research program for the integration of large proteomic and genomic data sets (proteogenomic mapping). Results: We transitioned our proteogenomic mapping information system from a traditional entity-relationship design to one based on Ultra-Structure. Our system integrates tandem mass spectrum data, genomic annotation sets, and spectrum/peptide mappings, all within a small, general framework implemented within a standard relational database system. General software procedures driven by user-modifiable rules can perform tasks such as logical deduction and location-based computations. The system is not tied specifically to proteogenomic research, but is rather designed to accommodate virtually any kind of biological research. Conclusion: We find
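The core Ultra-Structure move, storing rules as table rows that a small generic interpreter reads so that behaviour changes require only data edits, can be sketched as follows. The table, column names, and rule contents are illustrative, not the paper's actual schema:

```python
import sqlite3

# Domain rules live as *rows* in a table, not as code, so end users can
# change system behaviour by editing data rather than programs.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE ruleset (
        factor   TEXT,   -- condition the rule matches on
        value    TEXT,   -- value the condition must take
        consider TEXT    -- action/classification the rule yields
    )
""")
con.executemany(
    "INSERT INTO ruleset VALUES (?, ?, ?)",
    [
        ("evidence_type", "tandem_ms_spectrum", "map_to_peptide"),
        ("evidence_type", "genomic_annotation", "map_to_locus"),
        ("peptide_hits",  ">=2",                "accept_protein"),
    ],
)

def apply_rules(conn, factor, value):
    """Generic interpreter: look up the action for a (factor, value) pair."""
    row = conn.execute(
        "SELECT consider FROM ruleset WHERE factor = ? AND value = ?",
        (factor, value),
    ).fetchone()
    return row[0] if row else None

action = apply_rules(con, "evidence_type", "tandem_ms_spectrum")
# Changing behaviour needs no code change: just UPDATE/INSERT on ruleset.
```

The interpreter stays fixed while the rule rows carry the domain knowledge, which is what lets non-programmers alter the system's capabilities.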

  13. Computer-aided diagnosis system for bone scintigrams from Japanese patients: importance of training database

    DEFF Research Database (Denmark)

    Horikoshi, Hiroyuki; Kikuchi, Akihiro; Onoguchi, Masahisa

    2012-01-01

    Computer-aided diagnosis (CAD) software for bone scintigrams have recently been introduced as a clinical quality assurance tool. The purpose of this study was to compare the diagnostic accuracy of two CAD systems, one based on a European and one on a Japanese training database, in a group of bone...... scans from Japanese patients.The two CAD software are trained to interpret bone scans using training databases consisting of bone scans with the desired interpretation, metastatic disease or not. One software was trained using 795 bone scans from European patients and the other with 904 bone scans from...... a higher specificity and accuracy compared to the European CAD software [81 vs. 57 % (p database showed significantly...

  14. Database development and management

    CERN Document Server

    Chao, Lee

    2006-01-01

    Introduction to Database Systems: Functions of a Database; Database Management System; Database Components; Database Development Process. Conceptual Design and Data Modeling: Introduction to Database Design Process; Understanding Business Process; Entity-Relationship Data Model; Representing Business Process with Entity-Relationship Model. Table Structure and Normalization: Introduction to Tables; Table Normalization. Transforming Data Models to Relational Databases: DBMS Selection; Transforming Data Models to Relational Databases; Enforcing Constraints; Creating Database for Business Process. Physical Design and Database

  15. A superconducting transformer system for high current cable testing.

    Science.gov (United States)

    Godeke, A; Dietderich, D R; Joseph, J M; Lizarazo, J; Prestemon, S O; Miller, G; Weijers, H W

    2010-03-01

    This article describes the development of a direct-current (dc) superconducting transformer system for the high-current testing of superconducting cables. The transformer consists of a core-free 10,464-turn primary solenoid which is enclosed by a 6.5-turn secondary. The transformer is designed to deliver a 50 kA dc secondary current at a dc primary current of about 50 A. The secondary current is measured inductively using two toroidally wound Rogowski coils. The Rogowski coil signal is digitally integrated, resulting in a voltage signal that is proportional to the secondary current. This voltage signal is used to control the secondary current through a feedback loop which automatically compensates for resistive losses in the splices to the superconducting cable samples that are connected to the secondary. The system has been commissioned up to 28 kA secondary current. The reproducibility of the secondary current measurement is better than 0.05% over the relevant current range up to 25 kA. The drift in the secondary current, which results from drift in the digital integrator, is estimated to be below 0.5 A/min. The system's performance is further demonstrated through a voltage-current measurement on a superconducting cable sample in an 11 T background magnetic field. The superconducting transformer system enables fast, high-resolution, economical, and safe tests of the critical current of superconducting cable samples.
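The measurement-and-control chain described above (Rogowski voltage proportional to dI/dt, digital integration to recover the current, feedback to compensate resistive splice losses) can be sketched as a toy simulation. The plant model, gains, and all constants are illustrative, not the actual system's values:

```python
# A Rogowski coil outputs v(t) = M * dI/dt, so numerically integrating the
# sampled voltage recovers the secondary current; a PI feedback loop then
# trims the primary drive to hold a secondary-current setpoint.

M = 1e-6              # assumed Rogowski mutual inductance [V*s/A]
dt = 1e-3             # sample period [s]
setpoint = 25_000.0   # desired secondary current [A]

def rogowski_voltage(i_prev, i_now):
    """Coil output over one sample: proportional to dI/dt."""
    return M * (i_now - i_prev) / dt

i_secondary = 0.0   # actual secondary current (the "plant" state)
measured = 0.0      # digitally integrated Rogowski signal
integral = 0.0      # integrator state of the PI controller
drive = 0.0         # control output feeding the primary

for _ in range(20_000):   # simulate 20 s
    i_prev = i_secondary
    # Toy plant: first-order response plus a small resistive decay
    # standing in for splice losses on the secondary.
    i_secondary += dt * (drive - 0.05 * i_secondary)
    # Digital integration of the coil voltage reconstructs the current.
    measured += rogowski_voltage(i_prev, i_secondary) * dt / M
    # PI feedback automatically compensates the resistive droop.
    err = setpoint - measured
    integral += err * dt
    drive = 5.0 * err + 6.0 * integral
```

The integral term is what lets the loop hold the setpoint despite a constant resistive loss, mirroring the automatic splice-loss compensation the abstract describes.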

  16. Reliability of piping system components. Volume 4: The pipe failure event database

    Energy Technology Data Exchange (ETDEWEB)

    Nyman, R.; Erixon, S. [Swedish Nuclear Power Inspectorate, Stockholm (Sweden); Tomic, B. [ENCONET Consulting GmbH, Vienna (Austria); Lydell, B. [RSA Technologies, Vista, CA (United States)

    1996-07-01

    Available public and proprietary databases on piping system failures were searched for relevant information. Using a relational database to identify groupings of piping failure modes and failure mechanisms, together with insights from published PSAs, the project team determined why, how and where piping systems fail. This report is a compendium of technical issues important to the analysis of pipe failure events and to the statistical estimation of failure rates. Inadequacies of traditional PSA methodology are addressed, with directions for PSA methodology enhancements. A 'data-driven and systems-oriented' analysis approach is proposed to enable the assignment of unique identities to risk-significant piping system component failures. Sufficient operating experience does exist to generate quality data on piping failures. Passive component failures should be addressed by today's PSAs to allow for aging analysis and effective, on-line risk management. 42 refs, 25 figs.

  17. Tailored patient information using a database system: Increasing patient compliance in a day surgery setting

    DEFF Research Database (Denmark)

    Grode, Jesper Nicolai Riis; Grode, Louise; Steinsøe, Ulla

    2013-01-01

    rehabilitation. The hospital is responsible for providing the patients with accurate information enabling the patient to prepare for surgery. Often patients are overloaded with uncoordinated information, letters and leaflets. The contribution of this project is a database system enabling health professionals...... was established to support these requirements. A relational database system holds all information pieces in a granular, structured form. Each individual piece of information can be joined with other pieces, thus supporting the tailoring of information. A web service layer caters for integration with output systems....../media (word processing engines, web, mobile apps, and information kiosks). To lower the adoption bar of the system, an MS Word user interface was integrated with the web service layer, and information can now quickly be categorised and grouped according to purpose of use; users can quickly set up information
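The tailoring mechanism, joining granular information pieces into a patient-specific document, can be sketched as follows. The field names, categories, and texts are hypothetical, not the hospital's actual schema:

```python
# Granular information pieces, each tagged with a category and the
# procedures it applies to (all examples hypothetical).
pieces = [
    {"category": "fasting",    "procedures": {"knee", "hip"},
     "text": "Do not eat after midnight."},
    {"category": "medication", "procedures": {"knee"},
     "text": "Pause anticoagulants three days before surgery."},
    {"category": "arrival",    "procedures": {"knee", "hip"},
     "text": "Arrive at the day-surgery ward at 7:00."},
]

def tailor(procedure, categories):
    """Join the granular pieces relevant to one patient into one leaflet."""
    return [p["text"] for p in pieces
            if procedure in p["procedures"] and p["category"] in categories]

leaflet = tailor("hip", {"fasting", "arrival"})
# leaflet -> ["Do not eat after midnight.",
#             "Arrive at the day-surgery ward at 7:00."]
```

Because each piece is stored once and joined on demand, correcting a single piece updates every document that uses it, which is the point of the granular design.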

  18. Data-based fault-tolerant control for affine nonlinear systems with actuator faults.

    Science.gov (United States)

    Xie, Chun-Hua; Yang, Guang-Hong

    2016-09-01

    This paper investigates the fault-tolerant control (FTC) problem for unknown nonlinear systems with actuator faults including stuck, outage, bias and loss-of-effectiveness faults. The upper bounds of the stuck faults, bias faults and loss-of-effectiveness faults are unknown. A new data-based FTC scheme is proposed. It consists of online estimations of the bounds and a state-dependent function. The estimations are adjusted online to automatically compensate for the actuator faults. The state-dependent function, solved by using real system data, helps to stabilize the system. Furthermore, all signals in the resulting closed-loop system are uniformly bounded and the states converge asymptotically to zero. Compared with the existing results, the proposed approach is data-based. Finally, two simulation examples are provided to show the effectiveness of the proposed approach.
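    The four fault types listed (stuck, outage, bias, loss of effectiveness) are often captured in the FTC literature by a single parameterization. The sketch below is that common form, not necessarily the exact model used in this paper:

```latex
% Common unified actuator fault model (illustrative, not paper-specific):
% u_i^F is the applied input, u_i the commanded input.
u_i^F(t) = \rho_i\, u_i(t) + \sigma_i\, \bar{u}_i(t),
\qquad \rho_i \in [0,1],\quad \sigma_i \in \{0,1\}
% \rho_i = 1,\ \sigma_i = 0:                       healthy actuator
% 0 < \rho_i < 1,\ \sigma_i = 0:                   loss of effectiveness
% \rho_i = 0,\ \sigma_i = 0:                       outage
% \rho_i = 0,\ \sigma_i = 1,\ \bar{u}_i constant:  stuck
% \rho_i = 1,\ \sigma_i = 1,\ \bar{u}_i bounded:   bias
```

    The "unknown upper bounds" in the abstract then correspond to bounds on the unknown terms \(\bar{u}_i\), estimated online.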

  19. Database Foundation For The Configuration Management Of The CERN Accelerator Controls Systems

    CERN Document Server

    Zaharieva, Z; Peryt, M

    2011-01-01

    The Controls Configuration Database (CCDB) and its interfaces have been developed over the last 25 years in order to become nowadays the basis for the Configuration Management of the Controls System for all accelerators at CERN. The CCDB contains data for all configuration items and their relationships, required for the correct functioning of the Controls System. The configuration items are quite heterogeneous, depicting different areas of the Controls System – ranging from 3000 Front-End Computers, 75 000 software devices allowing remote control of the accelerators, to valid states of the Accelerators Timing System. The article will describe the different areas of the CCDB, their interdependencies and the challenges to establish the data model for such a diverse configuration management database, serving a multitude of clients. The CCDB tracks the life of the configuration items by allowing their clear identification, triggering of change management processes as well as providing status accounting and aud...

  20. Parallel recovery method in shared-nothing spatial database cluster system

    Institute of Scientific and Technical Information of China (English)

    YOU Byeong-seob; KIM Myung-keun; ZOU Yong-gui; BAE Hae-young

    2004-01-01

    Shared-nothing spatial database cluster systems provide high availability, since a replicated node can continue service even if a node in the cluster crashes. However, if the failed node is not recovered quickly, overall system performance decreases, since the other nodes must also process the queries that the failed node would have processed. Recovery of the failed node is therefore very important for providing stable service. In most previously proposed techniques, external logs are recorded on all nodes even when no node has failed, so update transactions are processed slowly. Recovery time of the failed node also increases, since each node records its external logs in a single storage area for the whole database. We therefore propose a parallel recovery method for recovering the failed node quickly.
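    The contrast drawn above (a single log storage versus logs that can be replayed concurrently) can be illustrated with a hypothetical sketch in which the external log is partitioned per database segment and a recovering node replays the partitions in parallel. Names and data are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical external log, partitioned per database segment. Keys are
# disjoint across partitions, so each partition can be replayed independently.
logs = {
    "p0": [("a", 1), ("b", 2)],
    "p1": [("c", 3)],
    "p2": [("d", 4), ("e", 5)],
}

def replay(partition):
    state = {}
    for key, value in logs[partition]:
        state[key] = value  # apply each logged update in order
    return state

# Replay all partitions concurrently, then merge the recovered segments.
with ThreadPoolExecutor() as pool:
    recovered = {}
    for state in pool.map(replay, logs):
        recovered.update(state)

print(sorted(recovered.items()))
```

    With a single log storage per node, the same replay would be forced to run sequentially; the partitioning is what makes the parallelism safe.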

  1. 16th East-European Conference on Advances in Databases and Information Systems (ADBIS 2012)

    CERN Document Server

    Wojciechowski, Marek; New Trends in Databases and Information Systems

    2013-01-01

    Database and information systems technologies have been rapidly evolving in several directions over the past years. New types and kinds of data, and new types of applications and information systems to support them, raise diverse challenges to be addressed. The so-called big data challenge, streaming data management and processing, social network and other complex data analysis, and the incorporation of semantic reasoning into information systems supporting, for instance, trading, negotiation, and bidding mechanisms are just some of the emerging research topics. This volume contains papers contributed by six workshops: ADBIS Workshop on GPUs in Databases (GID 2012), Mining Complex and Stream Data (MCSD'12), International Workshop on Ontologies meet Advanced Information Systems (OAIS'2012), Second Workshop on Modeling Multi-commodity Trade: Data models and processing (MMT'12), 1st ADBIS Workshop on Social Data Processing (SDP'12), 1st ADBIS Workshop on Social and Algorithmic Issues in Business Support (SAIBS), and the Ph.D. Conso...

  2. Comparative radiopacity of six current adhesive systems.

    Science.gov (United States)

    de Moraes Porto, Isabel Cristina Celerino; Honório, Naira Cândido; Amorim, Dayse Annie Nicácio; de Melo Franco, Aurea Valéria; Penteado, Luiz Alexandre Moura; Parolia, Abhishek

    2014-01-01

    The radiopacity of contemporary adhesive systems has been mentioned as the indication for replacement of restorations due to misinterpretation of radiographic images. This study aimed to evaluate the radiopacity of contemporary bonding agents and to compare their radiodensities with those of enamel and dentin. To measure the radiopacity, eight specimens were fabricated from Clearfil SE Bond (CF), Xeno V (XE), Adper SE Bond (ASE), Magic Bond (MB), Single Bond 2 (SB), Scotchbond Multipurpose (SM), and gutta-percha (positive control). The optical densities of enamel, dentin, the bonding agents, gutta-percha, and an aluminium (Al) step wedge were obtained from radiographic images using image analysis software. The radiographic density data were analyzed statistically by analysis of variance and Tukey's test (α =0.05). Significant differences were found between ASE and all other groups tested and between XE and CF. No statistical difference was observed between the radiodensity of 1 mm of Al and 1 mm of dentin, between 2 mm of Al and enamel, and between 5 mm of Al and gutta-percha. Five of the six adhesive resins had radiopacity values that fell below the value for dentin, whereas the radiopacity of ASE adhesive was greater than that of dentin but below that of enamel. This investigation demonstrates that only ASE presented a radiopacity within the values of dentin and enamel. CF, XE, MB, SB, and SM adhesives are all radiolucent and require alterations to their composition to facilitate their detection by means of radiographic images.

  3. Virtual smile design systems: a current review.

    Science.gov (United States)

    Zimmermann, Moritz; Mehl, Albert

    2015-01-01

    In the age of digital dentistry, virtual treatment planning is becoming an increasingly important element of dental practice. Thanks to new technological advances in the computer-assisted design and computer-assisted manufacturing (CAD/CAM) of dental restorations, predictable interdisciplinary treatment using the backward planning approach appears useful and feasible. Today, a virtual smile design can be used as the basis for creating an esthetic virtual setup of the desired final result. The virtual setup, in turn, is used to plan further treatment steps in an interdisciplinary team approach, and communicate the results to the patient. The smile design concept and the esthetic analyses required for it are described in this article. We include not only a step-by-step description of the virtual smile design workflow, but also describe and compare the several available smile design options and systems. Subsequently, a brief discussion of the advantages and limitations of virtual smile design is followed by a section on different ways to integrate a two-dimensional (2D) smile design into the digital three-dimensional (3D) workflow. New technological developments are also described, such as the integration of smile designs in digital face scans, and 3D diagnostic follow-up using intraoral scanners.

  4. Thermodynamic database of the phase diagrams in the Mg-Al-Zn-Y-Ce system

    Institute of Scientific and Technical Information of China (English)

    LIU Xingjun; WANG Cuiping; WEN Mingzhong; CHEN Xing; PAN Fusheng

    2006-01-01

    The Mg-Al-Zn-Y-Ce system is one of the key systems for designing high-strength Mg alloys. The purpose of the present article is to develop a thermodynamic database for the Mg-Al-Zn-Y-Ce multicomponent system to design Mg alloys using the calculation of phase diagrams (CALPHAD) method, where the Gibbs energies of solution phases such as the liquid, fcc, bcc, and hcp phases were described by the subregular solution model, whereas those of all the compounds were described by the sublattice model. The thermodynamic parameters describing the Gibbs energies of the different phases in this database were evaluated by fitting the experimental data for phase equilibria and thermodynamic properties. On the basis of this database, much information concerning stable and metastable phase equilibria of isothermal and vertical sections, molar fractions of constituent phases, the liquidus projection, etc., can be predicted. This database is expected to play an important role in the design of Mg alloys.
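    The subregular solution model mentioned above follows the standard CALPHAD form. As a general illustration (not the specific parameters of this database), the molar Gibbs energy of a solution phase φ is typically written with a Redlich-Kister excess term:

```latex
% Molar Gibbs energy of a solution phase \varphi in the CALPHAD method:
% reference term + ideal mixing entropy + Redlich-Kister excess term.
G_m^{\varphi} = \sum_i x_i\, {}^{\circ}G_i^{\varphi}
              + RT \sum_i x_i \ln x_i
              + \sum_i \sum_{j>i} x_i x_j \sum_{\nu} {}^{\nu}L_{ij}^{\varphi}\,(x_i - x_j)^{\nu}
```

    The subregular case truncates the interaction polynomial at ν = 1; the \({}^{\nu}L_{ij}^{\varphi}\) parameters are the quantities fitted to the experimental phase-equilibrium and thermodynamic data.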

  5. Development of the Lymphoma Enterprise Architecture Database: a caBIG Silver level compliant system.

    Science.gov (United States)

    Huang, Taoying; Shenoy, Pareen J; Sinha, Rajni; Graiser, Michael; Bumpers, Kevin W; Flowers, Christopher R

    2009-04-03

    Lymphomas are the fifth most common cancer in United States with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid (caBIG) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system (LEAD), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by National Cancer Institute's Center for Bioinformatics to establish the LEAD platform for data management. The caCORE SDK generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG to the management of clinical and biological data.

  6. High-performance Negative Database for Massive Data Management System of The Mingantu Spectral Radioheliograph

    Science.gov (United States)

    Shi, Congming; Wang, Feng; Deng, Hui; Liu, Yingbo; Liu, Cuiyin; Wei, Shoulin

    2017-08-01

    As a dedicated synthetic aperture radio interferometer in China, the MingantU SpEctral Radioheliograph (MUSER), initially known as the Chinese Spectral RadioHeliograph (CSRH), has entered the stage of routine observation. More than 23 million data records per day need to be effectively managed to provide high-performance data query and retrieval for scientific data reduction. In light of these massive amounts of data generated by the MUSER, in this paper, a novel data management technique called the negative database (ND) is proposed and used to implement a data management system for the MUSER. Based on a key-value database, the ND technique makes full use of the complement set of observational data to derive the requisite information. Experimental results showed that the proposed ND can significantly reduce storage volume in comparison with a relational database management system (RDBMS). Even when considering the time needed to derive records that were absent, its overall performance, including querying and deriving the data of the ND, is comparable with that of an RDBMS. The ND technique effectively solves the problem of massive data storage for the MUSER and is a valuable reference for the massive data management required in next-generation telescopes.
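    The complement-set idea can be made concrete with a toy sketch. It assumes, purely for illustration, that the keyspace is known in advance (e.g. sequential record numbers for one observation day), so the ND only stores which records are absent:

```python
# Toy negative-database (ND) sketch: store only the complement (absent keys)
# of a known keyspace instead of every present record.
class NegativeDatabase:
    def __init__(self, keyspace_size, absent_keys):
        self.keyspace_size = keyspace_size
        self.absent = set(absent_keys)  # the only data actually stored

    def exists(self, key):
        # A record is present iff it lies in the keyspace and is NOT absent.
        return 0 <= key < self.keyspace_size and key not in self.absent

    def derive_present(self):
        # Deriving the present records costs a scan over the keyspace --
        # the time/storage trade-off the abstract's experiments measure.
        return [k for k in range(self.keyspace_size) if k not in self.absent]

nd = NegativeDatabase(10, absent_keys={3, 7})
print(nd.exists(3))         # False: record 3 was never observed
print(nd.derive_present())  # [0, 1, 2, 4, 5, 6, 8, 9]
```

    When most records are present, as with routine observations, the complement set is far smaller than the data itself, which is the source of the storage saving reported above.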

  7. Database Research: Achievements and Challenges

    Institute of Scientific and Technical Information of China (English)

    Shan Wang; Xiao-Yong Du; Xiao-Feng Meng; Hong Chen

    2006-01-01

    Database system is the infrastructure of the modern information system. The R&D in the database system moves along by giant steps. This report presents the achievements Renmin University of China (RUC) has made in the past 25 years and at the same time addresses some of the research projects we, RUC, are currently working on. The National Natural Science Foundation of China supports and initiates most of our research projects and these successfully conducted projects have produced fruitful results.

  8. Data-mining analysis of the global distribution of soil carbon in observational databases and Earth system models

    Science.gov (United States)

    Hashimoto, Shoji; Nanko, Kazuki; Ťupek, Boris; Lehtonen, Aleksi

    2017-03-01

    Future climate change will dramatically change the carbon balance in the soil, and this change will affect the terrestrial carbon stock and the climate itself. Earth system models (ESMs) are used to understand the current climate and to project future climate conditions, but the soil organic carbon (SOC) stocks simulated by ESMs and those of observational databases are not well correlated when the two are compared at fine grid scales. However, the specific key processes and factors, as well as the relationships among these factors that govern the SOC stock, remain unclear; the inclusion of such missing information would improve the agreement between modeled and observational data. In this study, we sought to identify the influential factors that govern global SOC distribution in observational databases, as well as those simulated by ESMs. We used a data-mining (machine-learning) scheme, boosted regression trees (BRT), to identify the factors affecting the SOC stock. We applied the BRT scheme to three observational databases and 15 ESM outputs from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and examined the effects of 13 variables/factors categorized into five groups (climate, soil property, topography, vegetation, and land-use history). Globally, the contributions of mean annual temperature, clay content, carbon-to-nitrogen (CN) ratio, wetland ratio, and land cover were high in observational databases, whereas the contributions of mean annual temperature, land cover, and net primary productivity (NPP) were predominant in the SOC distribution in ESMs. A comparison of the influential factors at a global scale revealed that the most distinct differences between the SOCs from the observational databases and ESMs were the low clay content and CN ratio contributions, and the high NPP contribution in the ESMs. The results of this study will aid in identifying the causes of the current mismatches between observational SOC databases and ESM outputs.
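    The BRT scheme above ranks factors by how much predictive power each contributes. As a minimal, library-free illustration, the sketch below boosts decision stumps on residuals and credits each feature with the squared-error reduction of the splits it wins; this is a toy sketch, not the authors' BRT configuration:

```python
import random

def fit_stump(X, r):
    # Find the (feature, threshold) split minimizing squared error on residuals r.
    best = None
    n, d = len(X), len(X[0])
    for j in range(d):
        for t in sorted({row[j] for row in X}):
            left = [r[i] for i in range(n) if X[i][j] <= t]
            right = [r[i] for i in range(n) if X[i][j] > t]
            if not left or not right:
                continue
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((v - ml) ** 2 for v in left)
                   + sum((v - mr) ** 2 for v in right))
            if best is None or sse < best[0]:
                best = (sse, j, t, ml, mr)
    return best

def boost(X, y, rounds=20, lr=0.3):
    # Gradient boosting with stumps; accumulate per-feature SSE reduction.
    n = len(y)
    pred = [sum(y) / n] * n
    importance = {}
    for _ in range(rounds):
        r = [y[i] - pred[i] for i in range(n)]
        base = sum(v * v for v in r)
        sse, j, t, ml, mr = fit_stump(X, r)
        importance[j] = importance.get(j, 0.0) + (base - sse)
        for i in range(n):
            pred[i] += lr * (ml if X[i][j] <= t else mr)
    return importance

random.seed(0)
X = [[random.random(), random.random(), random.random()] for _ in range(200)]
y = [5 * row[0] + 0.1 * random.random() for row in X]  # only feature 0 matters
imp = boost(X, y)
top = max(imp, key=imp.get)
print(top)  # feature 0 dominates the importance ranking
```

    Real BRT implementations use deeper trees, stochastic subsampling, and relative influence scores, but the ranking principle is the same.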

  9. Beam Current Measurement and Adjustment System on AMS

    Institute of Scientific and Technical Information of China (English)

    WU Shao-yong; HE Ming; SU Sheng-yong; WANG Zhen-jun; JIANG Shan

    2003-01-01

    The beam current measurement and adjustment system of the HI-13 tandem accelerator mass spectrometry detector system consists of a Faraday cup, a fluorescent target and a series of adjustable vertical slits (Fig. 1). The old system's operation was very complicated and its transmission was low, so a new system was installed as an improvement. We put the adjustable vertical slit, Faraday cup.

  10. Database improvements of national student enrollment registry. Link external systems to NSER DB

    Directory of Open Access Journals (Sweden)

    Ph. D. Cosmin Catalin Olteanu

    2012-05-01

    Full Text Available The general idea is to improve the NSER database into a strong, unified information system in which data are collected from all universities. Employers and academic institutions can easily check someone's background through a national portal simply by logging in. As a result of this paper, the author found that the system has its flaws but can be improved.

  11. SYSTOMONAS — an integrated database for systems biology analysis of Pseudomonas

    OpenAIRE

    Choi, Claudia; Münch, Richard; Leupold, Stefan; Klein, Johannes; Siegel, Inga; Thielen, Bernhard; Benkert, Beatrice; Kucklick, Martin; Schobert, Max; Barthelmes, Jens; Ebeling, Christian; Haddad, Isam; Scheer, Maurice; Grote, Andreas; Hiller, Karsten

    2007-01-01

    To provide an integrated bioinformatics platform for a systems biology approach to the biology of pseudomonads in infection and biotechnology the database SYSTOMONAS (SYSTems biology of pseudOMONAS) was established. Besides our own experimental metabolome, proteome and transcriptome data, various additional predictions of cellular processes, such as gene-regulatory networks were stored. Reconstruction of metabolic networks in SYSTOMONAS was achieved via comparative genomics. Broad data integr...

  12. Modified Delphi study to determine optimal data elements for inclusion in an emergency management database system

    Directory of Open Access Journals (Sweden)

    A. Jabar

    2012-03-01

    Conclusion: The use of a modified Expert Delphi study achieved consensus on aspects of hospital institutional capacity that can be translated into practical recommendations for implementation by the local emergency management database system. Additionally, areas of non-consensus have been identified where further work is required. The purpose of this study is to contribute to and aid in the development of this new system.

  13. Developing a Comprehensive Database Management System for Organization and Evaluation of Mammography Datasets

    OpenAIRE

    Wu, Yirong; Rubin, Daniel L.; WOODS, RYAN W.; Elezaby, Mai; Burnside, Elizabeth S.

    2014-01-01

    We aimed to design and develop a comprehensive mammography database system (CMDB) to collect clinical datasets for outcome assessment and development of decision support tools. A Health Insurance Portability and Accountability Act (HIPAA) compliant CMDB was created to store multi-relational datasets of demographic risk factors and mammogram results using the Breast Imaging Reporting and Data System (BI-RADS) lexicon. The CMDB collected both biopsy pathology outcomes, in a breast pathology lex...

  14. Demonstration of SLUMIS: a clinical database and management information system for a multi organ transplant program.

    OpenAIRE

    Kurtz, M.; Bennett, T; Garvin, P.; Manuel, F; Williams, M.; Langreder, S.

    1991-01-01

    Because of the rapid evolution of the heart, heart/lung, liver, kidney and kidney/pancreas transplant programs at our institution, and because of a lack of an existing comprehensive database, we were required to develop a computerized management information system capable of supporting both clinical and research requirements of a multifaceted transplant program. SLUMIS (ST. LOUIS UNIVERSITY MULTI-ORGAN INFORMATION SYSTEM) was developed for the following reasons: 1) to comply with the reportin...

  15. The Implementation of an Entity-Relationship Interface for the Multi-Lingual Database System.

    Science.gov (United States)

    1985-12-01

    relatively easy for us to learn. The main advantage of C is the programming environment in which it resides, the UNIX operating system. This environment...available in UNIX for use with C, but we chose to use conditional computation and diagnostic print statements to aid in the debugging process. To...the Multi-Lingual Database System, M.S. Thesis, Naval Postgraduate School, Monterey, California, June 1985. 14. Kernighan, B.W. and Ritchie, D.M., The C

  16. Long Duration Exposure Facility (LDEF) optical systems SIG summary and database

    Science.gov (United States)

    Bohnhoff-Hlavacek, Gail

    1992-01-01

    The main objectives of the Long Duration Exposure Facility (LDEF) Optical Systems Special Investigative Group (SIG) Discipline are to develop a database of experimental findings on LDEF optical systems and elements hardware, and provide an optical system overview. Unlike the electrical and mechanical disciplines, the optics effort relies primarily on the testing of hardware at the various principal investigator's laboratories, since minimal testing of optical hardware was done at Boeing. This is because all space-exposed optics hardware are part of other individual experiments. At this time, all optical systems and elements testing by experiment investigator teams is not complete, and in some cases has hardly begun. Most experiment results to date, document observations and measurements that 'show what happened'. Still to come from many principal investigators is a critical analysis to explain 'why it happened' and future design implications. The original optical system related concerns and the lessons learned at a preliminary stage in the Optical Systems Investigations are summarized. The design of the Optical Experiments Database and how to acquire and use the database to review the LDEF results are described.

  17. Earth History databases and visualization - the TimeScale Creator system

    Science.gov (United States)

    Ogg, James; Lugowski, Adam; Gradstein, Felix

    2010-05-01

    The "TimeScale Creator" team (www.tscreator.org) and the Subcommission on Stratigraphic Information (stratigraphy.science.purdue.edu) of the International Commission on Stratigraphy (www.stratigraphy.org) has worked with numerous geoscientists and geological surveys to prepare reference datasets for global and regional stratigraphy. All events are currently calibrated to Geologic Time Scale 2004 (Gradstein et al., 2004, Cambridge Univ. Press) and Concise Geologic Time Scale (Ogg et al., 2008, Cambridge Univ. Press); but the array of intercalibrations enable dynamic adjustment to future numerical age scales and interpolation methods. The main "global" database contains over 25,000 events/zones from paleontology, geomagnetics, sea-level and sequence stratigraphy, igneous provinces, bolide impacts, plus several stable isotope curves and image sets. Several regional datasets are provided in conjunction with geological surveys, with numerical ages interpolated using a similar flexible inter-calibration procedure. For example, a joint program with Geoscience Australia has compiled an extensive Australian regional biostratigraphy and a full array of basin lithologic columns with each formation linked to public lexicons of all Proterozoic through Phanerozoic basins - nearly 500 columns of over 9,000 data lines plus hot-curser links to oil-gas reference wells. Other datapacks include New Zealand biostratigraphy and basin transects (ca. 200 columns), Russian biostratigraphy, British Isles regional stratigraphy, Gulf of Mexico biostratigraphy and lithostratigraphy, high-resolution Neogene stable isotope curves and ice-core data, human cultural episodes, and Circum-Arctic stratigraphy sets. The growing library of datasets is designed for viewing and chart-making in the free "TimeScale Creator" JAVA package. This visualization system produces a screen display of the user-selected time-span and the selected columns of geologic time scale information. The user can change the

  18. Geroprotectors.org: a new, structured and curated database of current therapeutic interventions in aging and age-related disease

    Science.gov (United States)

    Moskalev, Alexey; Chernyagina, Elizaveta; de Magalhães, João Pedro; Barardo, Diogo; Thoppil, Harikrishnan; Shaposhnikov, Mikhail; Budovsky, Arie; Fraifeld, Vadim E.; Garazha, Andrew; Tsvetkov, Vasily; Bronovitsky, Evgeny; Bogomolov, Vladislav; Scerbacov, Alexei; Kuryan, Oleg; Gurinovich, Roman; Jellen, Leslie C.; Kennedy, Brian; Mamoshina, Polina; Dobrovolskaya, Evgeniya; Aliper, Alex; Kaminsky, Dmitry; Zhavoronkov, Alex

    2015-01-01

    As the level of interest in aging research increases, there is a growing number of geroprotectors, or therapeutic interventions that aim to extend the healthy lifespan and repair or reduce aging-related damage in model organisms and, eventually, in humans. There is a clear need for a manually-curated database of geroprotectors to compile and index their effects on aging and age-related diseases and link these effects to relevant studies and multiple biochemical and drug databases. Here, we introduce the first such resource, Geroprotectors (http://geroprotectors.org). Geroprotectors is a public, rapidly explorable database that catalogs over 250 experiments involving over 200 known or candidate geroprotectors that extend lifespan in model organisms. Each compound has a comprehensive profile complete with biochemistry, mechanisms, and lifespan effects in various model organisms, along with information ranging from chemical structure, side effects, and toxicity to FDA drug status. These are presented in a visually intuitive, efficient framework fit for casual browsing or in-depth research alike. Data are linked to the source studies or databases, providing quick and convenient access to original data. The Geroprotectors database facilitates cross-study, cross-organism, and cross-discipline analysis and saves countless hours of inefficient literature and web searching. Geroprotectors is a one-stop, knowledge-sharing, time-saving resource for researchers seeking healthy aging solutions. PMID:26342919

  19. Current Scientific Impact of Ss Cyril and Methodius University of Skopje, Republic of Macedonia in the Scopus Database (1960-2014)

    OpenAIRE

    2015-01-01

    Aim: The aim of this study was to analyze current scientific impact of Ss Cyril and Methodius University of Skopje, Republic of Macedonia in the Scopus Database (1960-2014). Material and Methods: Affiliation search of the Scopus database was performed on November 23, 2014 in order to identify published papers from the Ss Cyril and Methodius University of Skopje (UC&M), Republic of Macedonia. A total number of 3960 articles (3055 articles from UC&M, 861 articles from Faculty of Medicine, U...

  20. MetaMetaDB: a database and analytic system for investigating microbial habitability.

    Directory of Open Access Journals (Sweden)

    Ching-chia Yang

    Full Text Available MetaMetaDB (http://mmdb.aori.u-tokyo.ac.jp/) is a database and analytic system for investigating microbial habitability, i.e., how a prokaryotic group can inhabit different environments. The interaction between prokaryotes and the environment is a key issue in microbiology because distinct prokaryotic communities maintain distinct ecosystems. Because 16S ribosomal RNA (rRNA) sequences play pivotal roles in identifying prokaryotic species, a system that comprehensively links diverse environments to 16S rRNA sequences of the inhabitant prokaryotes is necessary for the systematic understanding of microbial habitability. However, existing databases are biased to culturable prokaryotes and exhibit limitations in the comprehensiveness of the data because most prokaryotes are unculturable. Recently, metagenomic and 16S rRNA amplicon sequencing approaches have generated abundant 16S rRNA sequence data that encompass unculturable prokaryotes across diverse environments; however, these data are usually buried in large databases and are difficult to access. In this study, we developed MetaMetaDB (Meta-Metagenomic DataBase), which comprehensively and compactly covers 16S rRNA sequences retrieved from public datasets. Using MetaMetaDB, users can quickly generate hypotheses regarding the types of environments a prokaryotic group may be adapted to. We anticipate that MetaMetaDB will improve our understanding of the diversity and evolution of prokaryotes.

  1. Development of the Lymphoma Enterprise Architecture Database: A caBIG™ Silver Level Compliant System

    Directory of Open Access Journals (Sweden)

    Taoying Huang

    2009-04-01

    Full Text Available Lymphomas are the fifth most common cancer in United States with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid™ (caBIG™) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system™ (LEAD™), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by National Cancer Institute’s Center for Bioinformatics to establish the LEAD™ platform for data management. The caCORE SDK generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD™ could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG™ can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG™ to the management of clinical and biological data.

  2. Development of the Lymphoma Enterprise Architecture Database: A caBIG™ Silver Level Compliant System

    Directory of Open Access Journals (Sweden)

    Taoying Huang

    2009-01-01

    Full Text Available Lymphomas are the fifth most common cancer in United States with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid™ (caBIG™) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system™ (LEAD™), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by National Cancer Institute’s Center for Bioinformatics to establish the LEAD™ platform for data management. The caCORE SDK generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD™ could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG™ can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG™ to the management of clinical and biological data.

  3. Development of the Lymphoma Enterprise Architecture Database: A caBIG™ Silver level compliant System

    Science.gov (United States)

    Huang, Taoying; Shenoy, Pareen J.; Sinha, Rajni; Graiser, Michael; Bumpers, Kevin W.; Flowers, Christopher R.

    2009-01-01

    Lymphomas are the fifth most common cancer in the United States, with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid™ (caBIG™) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system™ (LEAD™), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by the National Cancer Institute’s Center for Bioinformatics to establish the LEAD™ platform for data management. The caCORE SDK-generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD™ could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG™ can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG™ to the management of clinical and biological data. PMID:19492074

  4. Development of Vision Based Multiview Gait Recognition System with MMUGait Database

    Directory of Open Access Journals (Sweden)

    Hu Ng

    2014-01-01

    Full Text Available This paper describes the acquisition setup and development of a new gait database, MMUGait. This database consists of 82 subjects walking under normal conditions and 19 subjects walking with 11 covariate factors, captured under two views. This paper also proposes a multiview model-based gait recognition system with a joint detection approach that performs well under different walking trajectories and covariate factors, including self-occluded or externally occluded silhouettes. In the proposed system, the process begins by enhancing the human silhouette to remove artifacts. Next, the width and height of the body are obtained. Subsequently, the joint angular trajectories are determined once the body joints are automatically detected. Lastly, the crotch height and step size of the walking subject are determined. The extracted features are smoothed by a Gaussian filter to eliminate the effect of outliers, then normalized with linear scaling, followed by feature selection prior to the classification process. The classification experiments carried out on the MMUGait database were benchmarked against the SOTON Small DB from the University of Southampton. Results showed correct classification rates above 90% for all the databases. The proposed approach is found to outperform other approaches on SOTON Small DB in most cases.
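    The smoothing and normalization steps described above can be sketched in a few lines; the signal values, kernel width and scaling range below are invented for illustration, not taken from the paper:

```python
import numpy as np

def gaussian_smooth(signal, sigma=1.0):
    """Smooth a 1-D feature trajectory with a Gaussian kernel (suppresses outliers)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-(x ** 2) / (2 * sigma ** 2))
    kernel /= kernel.sum()
    # same-length convolution with edge padding
    padded = np.pad(signal, radius, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

def linear_scale(features, lo=0.0, hi=1.0):
    """Min-max normalize features into [lo, hi] (linear scaling)."""
    fmin, fmax = features.min(), features.max()
    return lo + (features - fmin) * (hi - lo) / (fmax - fmin)

# Example: a noisy joint-angle trajectory with one outlier at index 2
angles = np.array([10.0, 12.0, 50.0, 14.0, 15.0, 16.0])
smoothed = gaussian_smooth(angles, sigma=1.0)
scaled = linear_scale(smoothed)
```

    After this preprocessing, each feature lies in [0, 1], which keeps large-valued features (e.g. step size in pixels) from dominating small-valued ones (e.g. angles) in the classifier.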

  5. Conceptual Level Design of Semi-structured Database System: Graph-semantic Based Approach

    Directory of Open Access Journals (Sweden)

    Anirban Sarkar

    2011-10-01

    Full Text Available This paper proposes a graph-semantic based conceptual model for semi-structured database systems, called GOOSSDM, to conceptualize the different facets of such systems in the object-oriented paradigm. The model defines a set of graph-based formal constructs, a variety of relationship types with participation constraints, and a rich set of graphical notations to specify the conceptual level design of a semi-structured database system. The proposed design approach facilitates modeling of irregular, heterogeneous, hierarchical and non-hierarchical semi-structured data at the conceptual level. Moreover, the proposed GOOSSDM is capable of modeling XML documents at the conceptual level with the facility of document-centric design, ordering and disjunction characteristics. A rule-based transformation mechanism from a GOOSSDM schema into the equivalent XML Schema Definition (XSD) is also proposed in this paper. The concepts of the proposed conceptual model have been implemented using the Generic Modeling Environment (GME).
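    As a rough illustration of such a rule-based schema transformation (not GOOSSDM's actual rules), a conceptual entity with child attributes can be mechanically rewritten as an XSD complexType; the entity and element names below are invented:

```python
import xml.etree.ElementTree as ET

XS = "http://www.w3.org/2001/XMLSchema"

def to_xsd(entity, children):
    """One toy rule: entity -> xs:element containing a complexType/sequence,
    each child attribute -> a string-typed xs:element inside the sequence."""
    schema = ET.Element(f"{{{XS}}}schema")
    elem = ET.SubElement(schema, f"{{{XS}}}element", name=entity)
    ctype = ET.SubElement(elem, f"{{{XS}}}complexType")
    seq = ET.SubElement(ctype, f"{{{XS}}}sequence")
    for child in children:
        # type string is a placeholder; a real rule set would map attribute domains
        ET.SubElement(seq, f"{{{XS}}}element", name=child, type="xs:string")
    return ET.tostring(schema, encoding="unicode")

xsd = to_xsd("book", ["title", "isbn"])
```

    A real transformation would also carry ordering, disjunction and participation constraints into the XSD, which is what makes the rule set in the paper non-trivial.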

  6. Ontological Enrichment of the Genes-to-Systems Breast Cancer Database

    Science.gov (United States)

    Viti, Federica; Mosca, Ettore; Merelli, Ivan; Calabria, Andrea; Alfieri, Roberta; Milanesi, Luciano

    Breast cancer research needs the development of specific and suitable tools to appropriately manage biomolecular knowledge. The presented work deals with the integrative storage of breast cancer related biological data, in order to promote a systems biology approach to this network disease. To increase data standardization and resource integration, annotations maintained in the Genes-to-Systems Breast Cancer (G2SBC) database are associated with ontological terms, which provide a hierarchical structure to organize data, enabling more effective queries, statistical analysis and semantic web searching. The exploited ontologies, which cover all levels of the molecular environment, from genes to systems, are among the best known and most widely used bioinformatics resources. In the G2SBC database, ontology terms both provide a semantic layer to improve data storage, accessibility and analysis, and represent a user-friendly instrument to identify relations among biological components.

  7. Analysis of army-wide hearing conservation database for hearing profiles related to crew-served and individual weapon systems

    Directory of Open Access Journals (Sweden)

    William A Ahroon

    2011-01-01

    Full Text Available Damage-risk criteria (DRC) for noise exposures are designed to protect 95% of the exposed populations from hearing injuries caused by those noise exposures. The current DRC used by the US military follows OSHA guidelines for continuous noise. The current military DRC for impulse exposures follows the recommendations of the National Academy of Sciences - National Research Council Committee on Hearing, Bioacoustics, and Biomechanics (CHABA) and is contained in the current military standard, MIL-STD-1474D "Noise Limits." Suggesting that the MIL-STD for impulse exposure is too stringent, various individuals have proposed that the DRC for exposure to high-level impulses be relaxed. The purpose of this study is to evaluate the current hearing status of US Army Soldiers, some of whom can, by their military occupational specialties (MOS), reasonably be expected to be routinely exposed to high-level impulses from weapon systems. The Defense Occupational and Environmental Health Readiness System - Hearing Conservation (DOEHRS-HC) was queried for the hearing status of enlisted Soldiers of 32 different MOSs. The results indicated that less than 95% of the Soldiers in the DOEHRS-HC database were classified as having normal hearing. In other words, the goal of the DRC used for limiting noise injuries (from continuous and impulse exposures) was not stringent enough to prevent hearing injuries in all but the most susceptible Soldiers. These results suggest that the current military noise DRC should not be relaxed.

  8. An Efficient Middleware for Storing and Querying XML Data in Relational Database Management System

    Directory of Open Access Journals (Sweden)

    Mohammed A.I. Fakheraldien

    2011-01-01

    Full Text Available Problem statement: In this study, we propose a middleware that provides a transformation utility for storing and querying XML data in relational databases using a model mapping method. Approach: To store XML documents in an RDBMS, several mapping approaches can be used; we chose a structure-independent approach. In this middleware, the model mapping method XParent and the freely available technologies MySQL, phpMyAdmin and PHPClasses are used as examples. Results: This middleware stores XML tables and does not require a direct extension of SQL; thus it can be used with any relational database management system with little change to the middleware interface. The middleware offers two alternative methods, namely XParent and XReal, for storing XML in the database. Conclusion: The key to the proposed middleware is to store XML documents in a relational database through a user interface and with an XPath query processor. We present a comparative experimental study on the performance of insertion and retrieval of two types of XML documents with a set of XPath queries executed through the XPath processor. XML and relational databases cannot be kept separate, because XML is becoming the universal standard data format for representing and exchanging information, whereas most existing data lies in RDBMSs, whose data-management capabilities cannot be discarded; the solution to this problem is a middleware prototype. The proposed schema-dependent solutions have the drawback that even a small change in the logical structure of XML documents affects the database schemas, and several problems occur during the updating process. A new efficient data middleware is proposed in this study to address these issues.
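    A structure-independent (model mapping) store can be sketched as a single generic node table plus self-joins for path queries; the schema below is a simplified stand-in, not the actual XParent table layout:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Every XML node becomes one row in a generic table, so any document can be
# stored without a document-specific schema (the essence of model mapping).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE node (
    id INTEGER PRIMARY KEY, parent INTEGER, tag TEXT, text TEXT)""")

def shred(elem, parent=None, counter=[0]):
    """Depth-first shredding of an ElementTree into the node table."""
    counter[0] += 1
    nid = counter[0]
    con.execute("INSERT INTO node VALUES (?,?,?,?)",
                (nid, parent, elem.tag, (elem.text or "").strip()))
    for child in elem:
        shred(child, nid, counter)
    return nid

doc = ET.fromstring(
    "<library><book><title>XML in RDBMS</title></book>"
    "<book><title>Query Processing</title></book></library>")
shred(doc)

# The XPath /library/book/title expressed as a chain of self-joins:
titles = [r[0] for r in con.execute("""
    SELECT t.text FROM node l
    JOIN node b ON b.parent = l.id AND b.tag = 'book'
    JOIN node t ON t.parent = b.id AND t.tag = 'title'
    WHERE l.tag = 'library' ORDER BY t.id""")]
```

    The cost of this generality is that each XPath step becomes a join, which is why the paper benchmarks insertion and retrieval performance across query sets.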

  9. Integrating a modern knowledge-based system architecture with a legacy VA database: the ATHENA and EON projects at Stanford.

    Science.gov (United States)

    Advani, A; Tu, S; O'Connor, M; Coleman, R; Goldstein, M K; Musen, M

    1999-01-01

    We present a methodology and database mediator tool for integrating modern knowledge-based systems, such as the Stanford EON architecture for automated guideline-based decision-support, with legacy databases, such as the Veterans Health Information Systems & Technology Architecture (VISTA) systems, which are used nationwide. Specifically, we discuss designs for database integration in ATHENA, a system for hypertension care based on EON, at the VA Palo Alto Health Care System. We describe a new database mediator that affords the EON system both physical and logical data independence from the legacy VA database. We found that to achieve our design goals, the mediator requires two separate mapping levels and must itself involve a knowledge-based component.

  10. Logical database design principles

    CERN Document Server

    Garmany, John; Clark, Terry

    2005-01-01

    INTRODUCTION TO LOGICAL DATABASE DESIGN: Understanding a Database; Database Architectures; Relational Databases; Creating the Database; System Development Life Cycle (SDLC); Systems Planning: Assessment and Feasibility; System Analysis: Requirements; System Analysis: Requirements Checklist; Models Tracking and Schedules; Design Modeling; Functional Decomposition Diagram; Data Flow Diagrams; Data Dictionary; Logical Structures and Decision Trees; System Design: Logical. SYSTEM DESIGN AND IMPLEMENTATION: The ER Approach; Entities and Entity Types; Attribute Domains; Attributes; Set-Valued Attributes; Weak Entities; Constraint

  11. The RDB: A parallel, spatial database for the IES/BTI system

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, K.; Winter, L.

    1995-04-01

    The manipulation and representation of spatial data on computers is an important issue in many computer applications. Spatial data, which consists of points, lines, and regions in 2-dimensions, can be difficult to manage efficiently because it is often quite voluminous. For instance, the number of picture elements in even a small digital image is on the order of a million, while the number of locations stored in a terrain database can easily include billions of points. Furthermore, the kinds of operations performed on spatial data -- set operations, insertion, deletion, searches such as "near" -- are compute intensive, and hence slow unless the data is structured to reflect its underlying topology. Hence a conventional database which is organized on search keys is often not adequate for handling spatial data. The Image Exploitation System, which is an automated image analysis system, has a great need for efficient storage and manipulation of spatial data. The Image Exploitation System is part of the Advanced Research Project Agency's Balanced Technology Initiative and is abbreviated IES/BTI. IES/BTI must process tens to hundreds of megabytes of imagery in a few minutes, and is composed of many independent components which need to access and share spatial data. The system needed an efficient parallel spatial database, hence the motivation for our work on the Region Database, or RDB. The RDB is our attempt to meet the needs of the IES/BTI Cycle 2 system. The RDB provides for storage and retrieval of both raster and vector based spatial data as well as attribute-based retrievals. It also provides facilities for conversion between the two representations of spatial data (raster and vector) and for efficient, parallel boolean operations on vector data. In this paper we discuss the research and development performed to design and implement the RDB.
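    The point about structuring data to reflect its topology can be illustrated with a uniform grid index, which turns a "near" search from a full scan into a few cell lookups. This is only a sketch of the principle; the RDB's own raster/vector structures and parallel operations are far more elaborate:

```python
import math
from collections import defaultdict

class GridIndex:
    """Bucket 2-D points into fixed-size cells; 'near' inspects only the
    cells that can intersect the query circle instead of every point."""
    def __init__(self, cell=1.0):
        self.cell = cell
        self.cells = defaultdict(list)

    def _key(self, x, y):
        return (int(math.floor(x / self.cell)), int(math.floor(y / self.cell)))

    def insert(self, x, y):
        self.cells[self._key(x, y)].append((x, y))

    def near(self, x, y, radius):
        cx, cy = self._key(x, y)
        reach = int(math.ceil(radius / self.cell))
        hits = []
        for i in range(cx - reach, cx + reach + 1):
            for j in range(cy - reach, cy + reach + 1):
                for (px, py) in self.cells.get((i, j), []):
                    if (px - x) ** 2 + (py - y) ** 2 <= radius ** 2:
                        hits.append((px, py))
        return hits

idx = GridIndex(cell=1.0)
for p in [(0.5, 0.5), (0.6, 0.4), (5.0, 5.0)]:
    idx.insert(*p)
close = idx.near(0.5, 0.5, radius=0.5)
```

    With billions of terrain points, the difference between scanning everything and touching a handful of cells is exactly the performance gap the abstract describes.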

  12. [Utilization of Multi-Institutional Laboratory Data as an Evidence Database: The Current Status and Future Tasks--Chairmen's Introductory Remarks].

    Science.gov (United States)

    Miyake, Kazunori; Nishibori, Masahiro

    2015-01-01

    The construction of a database that integrates raw laboratory data and diagnostic information with patient backgrounds is an effective tool in the practice of Evidence-Based Laboratory Medicine (EBLM). By exploring this type of database, it is possible to understand the diagnostic characteristics of the tests for a specific patient subgroup or condition. Although several studies have been carried out recently, these databases contain single-hospital data, and are thus limited regarding their external validity. In order to improve the reliability of the evidence, joint multi-institutional research is required. Therefore, the EBLM Committee of the Japanese Society of Clinical Laboratory Medicine arranged the symposium, entitled: "Utilization of multi-institutional laboratory data as an evidence database", which discusses current problems and solutions for the integration of multi-institutional laboratory data. In the symposium, five speakers presented on the following subjects: 1) Standardization of laboratory test coding (JLAC10); 2) The construction of a data warehouse in the hospital; 3) Multi-institutional study on long-term data changes; 4) Multi-institutional study on diagnostic accuracy; and 5) The construction of databases for the practice of EBLM and the need for the standardization/harmonization of laboratory data.

  13. Research on the J2EE-based product database management system

    Institute of Scientific and Technical Information of China (English)

    LIN Lin; YAO Yu; ZHONG Shi-sheng

    2007-01-01

    The basic frame and the design idea of a J2EE-based Product Data Management (PDM) system are presented. This paper adopts object-oriented technology to realize the database design and builds the information model of this PDM system. The key technologies for integrating the PDM and CAD systems are discussed, the heterogeneous interface characteristics between the CAD and PDM systems are analyzed, and finally, the integration mode of the PDM and CAD systems is given. Using these technologies, the integration of the PDM and CAD systems is realized and the consistency of data between the PDM and CAD systems is kept. Finally, the Product Data Management system was developed and tested on the development process of a hydraulic generator. It runs stably and safely.

  14. The water vapour continuum in near-infrared windows - Current understanding and prospects for its inclusion in spectroscopic databases

    Science.gov (United States)

    Shine, Keith P.; Campargue, Alain; Mondelain, Didier; McPheat, Robert A.; Ptashnik, Igor V.; Weidmann, Damien

    2016-09-01

    Spectroscopic catalogues, such as GEISA and HITRAN, do not yet include information on the water vapour continuum that pervades the visible, infrared and microwave spectral regions. This is partly because, in some spectral regions, there are rather few laboratory measurements in conditions close to those in the Earth's atmosphere; hence understanding of the characteristics of the continuum absorption is still emerging. This is particularly so in the near-infrared and visible, where there has been renewed interest and activity in recent years. In this paper we present a critical review focusing on recent laboratory measurements in two near-infrared window regions (centred on 4700 and 6300 cm-1) and include reference to the window centred on 2600 cm-1, where more measurements have been reported. The rather few available measurements have used Fourier transform spectroscopy (FTS), cavity ring-down spectroscopy, optical-feedback cavity-enhanced laser spectroscopy and, in very narrow regions, calorimetric interferometry. These systems have different advantages and disadvantages. Fourier transform spectroscopy can measure the continuum across both these and neighbouring windows; by contrast, the cavity laser techniques are limited to fewer wavenumbers, but have a much higher inherent sensitivity. The available results present a diverse view of the characteristics of continuum absorption, with differences in continuum strength exceeding a factor of 10 in the cores of these windows. In individual windows, the temperature dependence of the water vapour self-continuum differs significantly in the few sets of measurements that allow an analysis. The available data also indicate that the temperature dependence differs significantly between different near-infrared windows. These pioneering measurements provide an impetus for further measurements.
Improvements and/or extensions in existing techniques would aid progress to a full characterisation of the continuum - as an example, we

  15. A microcomputer based system for current-meter data acquisition

    Science.gov (United States)

    Cheng, R.T.; Gartner, J.W.

    1979-01-01

    The U.S. Geological Survey is conducting current measurements as part of an interdisciplinary study of the San Francisco Bay estuarine system. The current meters used in the study record current speed, direction, temperature, and conductivity in digital codes on magnetic tape cartridges. Upon recovery of the current meters, the data tapes are translated by a tape reader into computer codes for further analyses. Quite often the importance of the data processing phase of a current-measurement program is underestimated and downplayed. In this paper a data-processing system which performs the complete data processing and analyses is described. The system, which is configured around an LSI-11 microcomputer, has been assembled to provide the capabilities of data translation, reduction, and tabulation and graphical display immediately following recovery of current meters. The flexibility inherent in a microcomputer has made it possible to perform many other research functions which would normally be done on an institutional computer.

  16. NoSQL Databases

    OpenAIRE

    2013-01-01

    This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...

  17. Collecting Taxes Database

    Data.gov (United States)

    US Agency for International Development — The Collecting Taxes Database contains performance and structural indicators about national tax systems. The database contains quantitative revenue performance...

  18. Multimedia human brain database system for surgical candidacy determination in temporal lobe epilepsy with content-based image retrieval

    Science.gov (United States)

    Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid; Fotouhi, Farshad A.; Elisevich, Kost

    2003-01-01

    This paper presents the development of a human brain multimedia database for surgical candidacy determination in temporal lobe epilepsy. The focus of the paper is on content-based image management, navigation and retrieval. Several medical image-processing methods, including our newly developed segmentation method, are utilized for information extraction/correlation and indexing. The input data include T1- and T2-weighted MRI and FLAIR MRI, and ictal and interictal SPECT modalities, with associated clinical data and EEG data analysis. The database can answer queries regarding issues such as the correlation between the attribute X of the entity Y and the outcome of a temporal lobe epilepsy surgery. The entity Y can be a brain anatomical structure such as the hippocampus. The attribute X can be either a functionality feature of the anatomical structure Y, calculated with SPECT modalities, such as signal average, or a volumetric/morphological feature of the entity Y, such as volume or average curvature. The outcome of the surgery can be any surgery assessment such as memory quotient. A determination is made regarding surgical candidacy by analysis of both textual and image data. The current database system suggests a surgical determination for cases with a relatively small hippocampus and a high signal intensity average on FLAIR images within the hippocampus. This indication largely fits with the surgeons' expectations/observations. Moreover, as the database becomes more populated with patient profiles and individual surgical outcomes, data mining methods may uncover partially invisible correlations between the contents of different modalities of data and the outcome of the surgery.
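    The kind of query described above, relating an attribute X of an entity Y (here, hippocampal volume and FLAIR signal average) to surgical candidacy, reduces to a filter over stored features. The schema, thresholds and values below are invented for illustration; they are not the paper's actual criteria:

```python
import sqlite3

# Hypothetical per-patient features extracted by the segmentation pipeline.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE hippocampus (patient_id INTEGER, volume_mm3 REAL, flair_avg REAL);
INSERT INTO hippocampus VALUES
    (1, 2100.0, 180.0),
    (2, 3400.0, 110.0),
    (3, 1900.0, 175.0);
""")

# "Relatively small hippocampus and high FLAIR signal average" as a query
# (the numeric cutoffs are placeholders, not clinical values):
candidates = [r[0] for r in con.execute(
    "SELECT patient_id FROM hippocampus "
    "WHERE volume_mm3 < 2500 AND flair_avg > 150 ORDER BY patient_id")]
```

    The value of the database design is that such filters can be combined freely with clinical and EEG attributes stored alongside the image-derived features.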

  19. TheSNPpit—A High Performance Database System for Managing Large Scale SNP Data

    Science.gov (United States)

    Groeneveld, Eildert; Lichtenberg, Helmut

    2016-01-01

    The fast development of high-throughput genotyping has opened up new possibilities in genetics while at the same time producing considerable data handling issues. TheSNPpit is a database system for managing large amounts of multi-panel SNP genotype data from any genotyping platform. With an increasing rate of genotyping in areas like animal and plant breeding as well as human genetics, hundreds of thousands of individuals already need to be managed. While the common database design with one row per SNP can manage hundreds of samples, this approach becomes progressively slower as the size of the data sets increases, until it finally fails completely once tens or even hundreds of thousands of individuals need to be managed. TheSNPpit has implemented three ideas to accommodate such large scale experiments: highly compressed vector storage in a relational database, set-based data manipulation, and a very fast export written in C, with Perl as the base for the framework and PostgreSQL as the database backend. Its novel subset system allows the creation of named subsets based on the filtering of SNPs (based on major allele frequency, no-calls, and chromosomes) and manually applied sample and SNP lists at negligible storage costs, thus avoiding the issue of proliferating file copies. The named subsets are exported for downstream analysis. PLINK ped and map files are processed as in- and outputs. TheSNPpit allows management of different panel sizes in the same population of individuals when higher density panels replace previous lower density versions, as occurs in animal and plant breeding programs. A completely generalized procedure allows storage of phenotypes. TheSNPpit only occupies 2 bits for storing a single SNP, implying a capacity of 4 mio SNPs per 1MB of disk storage. To investigate performance scaling, a database with more than 18.5 mio samples has been created with 3.4 trillion SNPs from 12 panels ranging from 1000 through 20 mio SNPs resulting in a

  20. TheSNPpit-A High Performance Database System for Managing Large Scale SNP Data.

    Science.gov (United States)

    Groeneveld, Eildert; Lichtenberg, Helmut

    2016-01-01

    The fast development of high-throughput genotyping has opened up new possibilities in genetics while at the same time producing considerable data handling issues. TheSNPpit is a database system for managing large amounts of multi-panel SNP genotype data from any genotyping platform. With an increasing rate of genotyping in areas like animal and plant breeding as well as human genetics, hundreds of thousands of individuals already need to be managed. While the common database design with one row per SNP can manage hundreds of samples, this approach becomes progressively slower as the size of the data sets increases, until it finally fails completely once tens or even hundreds of thousands of individuals need to be managed. TheSNPpit has implemented three ideas to accommodate such large scale experiments: highly compressed vector storage in a relational database, set-based data manipulation, and a very fast export written in C, with Perl as the base for the framework and PostgreSQL as the database backend. Its novel subset system allows the creation of named subsets based on the filtering of SNPs (based on major allele frequency, no-calls, and chromosomes) and manually applied sample and SNP lists at negligible storage costs, thus avoiding the issue of proliferating file copies. The named subsets are exported for downstream analysis. PLINK ped and map files are processed as in- and outputs. TheSNPpit allows management of different panel sizes in the same population of individuals when higher density panels replace previous lower density versions, as occurs in animal and plant breeding programs. A completely generalized procedure allows storage of phenotypes. TheSNPpit only occupies 2 bits for storing a single SNP, implying a capacity of 4 mio SNPs per 1MB of disk storage. To investigate performance scaling, a database with more than 18.5 mio samples has been created with 3.4 trillion SNPs from 12 panels ranging from 1000 through 20 mio SNPs resulting in a

  1. Study on Mandatory Access Control in a Secure Database Management System

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper proposes a security policy model for mandatory access control in a class B1 database management system whose level of labeling is the tuple. The relation-hierarchical data model is extended to a multilevel relation-hierarchical data model. Based on the multilevel relation-hierarchical data model, the concept of upper-lower layer relational integrity is presented after we analyze and eliminate the covert channels caused by database integrity. Two SQL statements are extended to process polyinstantiation in the multilevel secure environment. The system is based on the multilevel relation-hierarchical data model and is capable of integratively storing and manipulating multilevel complicated objects (e.g., multilevel spatial data) and multilevel conventional data (e.g., integer, real number and character string).
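    Tuple-level mandatory access control with a "no read up" rule can be sketched as a filter on row labels, including a polyinstantiated tuple that presents a different value at a lower level; the levels and data below are invented, not from the paper:

```python
# Illustrative security lattice (a real B1 system also has categories).
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2}

rows = [
    {"flight": "F1", "dest": "Oslo",  "label": "unclassified"},
    {"flight": "F2", "dest": "Kabul", "label": "secret"},
    # polyinstantiation: a second F2 tuple (a cover story) at a lower level
    {"flight": "F2", "dest": "Paris", "label": "unclassified"},
]

def select(clearance):
    """A reader sees only tuples at or below its clearance ('no read up')."""
    limit = LEVELS[clearance]
    return [r for r in rows if LEVELS[r["label"]] <= limit]

low_view = [r["dest"] for r in select("unclassified")]   # sees the cover story
high_view = [r["dest"] for r in select("secret")]        # sees both F2 tuples
```

    Polyinstantiation exists precisely to close the covert channel that would otherwise arise if a low-level insert on key F2 were rejected because a secret F2 tuple already exists.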

  2. Development of a comprehensive infrared-material database for the Opticam fabrication system

    Science.gov (United States)

    Cerqua-Richardson, Kathleen A.; Schmidt, Stacy; Platt, George R.; Vakiner, John G.

    1993-12-01

    A comprehensive infrared material database has been created to aid in the development of a rule-based deterministic microgrinding process for infrared optics. The process science effort on infrared materials taking place at the University of Central Florida has compiled a concise collation of easily accessible physical property data to more thoroughly understand the material/processing interactions which occur during optical fabrication. The information exists in a user-friendly form whereby designers and system operators utilizing the Opticam system can quickly and conveniently access and incorporate material property data into the design and fabrication process. We report on the motivation, organization and application of such a database within the Opticam network. The use of this information in the establishment of part design, tooling design and machine parameters, and in predicting the form, figure and surface quality of the resulting optic, is presented.

  3. An integrated medical image database and retrieval system using a web application server.

    Science.gov (United States)

    Cao, Pengyu; Hashiba, Masao; Akazawa, Kouhei; Yamakawa, Tomoko; Matsuto, Takayuki

    2003-08-01

    We developed an Integrated Medical Image Database and Retrieval System (INIS) for easy access by medical staff. The INIS consisted mainly of four parts: specific servers to save medical images from multi-vendor modalities of CT, MRI, CR, ECG and endoscopy; an integrated image database (DB) server to save various kinds of images in DICOM format; a Web application server to connect clients to the integrated image DB; and Web browser terminals connected to an HIS system. The INIS provided a common screen design to retrieve CT, MRI, CR, endoscopic and ECG images, and radiological reports, which would allow doctors to retrieve radiological images and corresponding reports, or ECG images of a patient, simultaneously on a screen. Doctors working in internal medicine accessed information on average 492 times a month; doctors working in cardiology and gastroenterology accessed information 308 times a month. Using the INIS, medical staff could browse all or parts of a patient's medical images and reports.

  4. Semantic – Based Querying Using Ontology in Relational Database of Library Management System

    Directory of Open Access Journals (Sweden)

    Ayesha Banu

    2011-11-01

    Full Text Available The traditional Web stores huge amounts of data in the form of Relational Databases (RDB), as they are good at storing objects and the relationships between them. Relational Databases are dynamic in nature, which allows bringing tables together, helping the user search for related material across multiple tables. RDBs are scalable to expand as the data grows. The RDB uses a Structured Query Language called SQL to access the databases for several data retrieval purposes. As the world is moving today from the syntactic form to the semantic form, the Web is also taking its new form of the Semantic Web. The structured query of the RDB on the Web can be a semantic query on the Semantic Web. SPARQL is the query language recommended by W3C for RDF (Resource Description Framework). RDF is a directed, labeled graph data format for representing information in the Web and is a very important layer of the Semantic Web architecture. In this paper we consider the Library Management System (LMS) database, taking some tuples of the LMS relational schema. We discuss how the RDF code is scripted and validated using the RDF Validator and how RDF triples are generated. Later we give the graphical representation of the RDF triples and see the process of extracting ontology from the RDF Schema and the application of the semantic query.
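    The tuple-to-triple generation step can be sketched by emitting one N-Triples line per non-key column of a relational row; the base URI and LMS column names below are invented for illustration:

```python
# Hypothetical namespace for the Library Management System resources.
BASE = "http://example.org/lms/"

def row_to_triples(table, pk, row):
    """Map one relational tuple to N-Triples: the primary key mints the
    subject URI, each remaining column becomes predicate + literal object."""
    subject = f"<{BASE}{table}/{row[pk]}>"
    triples = []
    for col, val in row.items():
        if col == pk:
            continue
        triples.append(f'{subject} <{BASE}{table}#{col}> "{val}" .')
    return triples

book = {"book_id": 7, "title": "Semantic Web Primer", "author": "Antoniou"}
ntriples = row_to_triples("book", "book_id", book)
```

    Foreign keys would instead yield URI objects linking to other rows' subjects, which is how the flat tables become the directed, labeled graph that SPARQL queries traverse.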

  5. An Arabic Natural Language Interface System for a Database of the Holy Quran

    Directory of Open Access Journals (Sweden)

    Khaled Nasser ElSayed

    2015-07-01

    Full Text Available At the present time, the need for searching the words, objects, subjects, and statistics of words and parts of the Holy Quran has grown rapidly, concurrently with the growth of the number of Muslims and the huge usage of smart mobiles, tablets and laptops. Because databases are used in almost all activities of our life, some DBs have been built to store information about the words and surahs of the Quran. Accessing Quran DBs has become very important and widely used, and can be done through database applications or using SQL commands, directly from the database site or indirectly in a special format through a LAN or even through the Web. Most people are not experienced in the SQL language, but they need to build SQL commands for their retrievals. The proposed system will translate their natural Arabic requests, such as questions or imperative sentences, into SQL commands to retrieve answers from a Quran DB. It will perform parsing and small morphological processes according to a subset of Arabic context-free grammar rules, working as an interface layer between users and the database.
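    A toy version of the request-to-SQL translation can be sketched as keyword templates over a small schema. The real system parses Arabic with context-free grammar rules and morphological analysis; the schema, tokens and template below are invented (though the verse count of Al-Fatiha is indeed 7):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE surah (name TEXT, verse_count INTEGER);
INSERT INTO surah VALUES ('Al-Fatiha', 7), ('Al-Baqara', 286);
""")

# Each template pairs trigger keywords (here English stand-ins for the
# parsed Arabic tokens) with a parameterized SQL command.
TEMPLATES = {
    ("how", "many", "verses"): "SELECT verse_count FROM surah WHERE name = ?",
}

def answer(tokens, argument):
    """Pick the first template whose keywords all occur in the request."""
    for keys, sql in TEMPLATES.items():
        if all(k in tokens for k in keys):
            return con.execute(sql, (argument,)).fetchone()[0]
    return None

n = answer(["how", "many", "verses", "in"], "Al-Fatiha")
```

    Grammar-based parsing improves on this by extracting the argument ("Al-Fatiha") and the question type from the sentence structure itself rather than relying on keyword spotting.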

  6. Environmental Factor™ system: Superfund site information from five EPA databases

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    Environmental Factor puts today's technology to work to provide a better, more cost-efficient and time-saving way to access EPA information on hazardous waste sites. Environmental consultants, insurers and reinsurers, corporate risk assessors, and companies actively involved in the generation, transport, storage or cleanup of hazardous waste materials can use its user-friendly information retrieval system to gain rapid access to vital information in immediately usable form. Search, retrieve, and export information in real time. No more waiting for the mail or overnight delivery services to deliver hard copies of voluminous listings and individual site reports. More than 200,000 pages of EPA hazardous waste site information are contained in 5 related databases: (1) Site data from the National Priority List (NPL) and CERCLIS databases, Potentially Responsible Parties (PRP) and Records of Decision (RODs) summaries; (2) Complete PRP information; (3) EPA Records of Decision (Full Text); (4) entire Civil Enforcement Docket; and (5) Glossary of EPA terms, abbreviations and acronyms. Environmental Factor's powerful database management engine gives even the most inexperienced computer user extensive search capabilities, including wildcard, phonetic and direct cross-reference searches across multiple databases.

  7. The database of the PREDICTS (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems) project.

    Science.gov (United States)

    Hudson, Lawrence N; Newbold, Tim; Contu, Sara; Hill, Samantha L L; Lysenko, Igor; De Palma, Adriana; Phillips, Helen R P; Alhusseini, Tamera I; Bedford, Felicity E; Bennett, Dominic J; Booth, Hollie; Burton, Victoria J; Chng, Charlotte W T; Choimes, Argyrios; Correia, David L P; Day, Julie; Echeverría-Londoño, Susy; Emerson, Susan R; Gao, Di; Garon, Morgan; Harrison, Michelle L K; Ingram, Daniel J; Jung, Martin; Kemp, Victoria; Kirkpatrick, Lucinda; Martin, Callum D; Pan, Yuan; Pask-Hale, Gwilym D; Pynegar, Edwin L; Robinson, Alexandra N; Sanchez-Ortiz, Katia; Senior, Rebecca A; Simmons, Benno I; White, Hannah J; Zhang, Hanbin; Aben, Job; Abrahamczyk, Stefan; Adum, Gilbert B; Aguilar-Barquero, Virginia; Aizen, Marcelo A; Albertos, Belén; Alcala, E L; Del Mar Alguacil, Maria; Alignier, Audrey; Ancrenaz, Marc; Andersen, Alan N; Arbeláez-Cortés, Enrique; Armbrecht, Inge; Arroyo-Rodríguez, Víctor; Aumann, Tom; Axmacher, Jan C; Azhar, Badrul; Azpiroz, Adrián B; Baeten, Lander; Bakayoko, Adama; Báldi, András; Banks, John E; Baral, Sharad K; Barlow, Jos; Barratt, Barbara I P; Barrico, Lurdes; Bartolommei, Paola; Barton, Diane M; Basset, Yves; Batáry, Péter; Bates, Adam J; Baur, Bruno; Bayne, Erin M; Beja, Pedro; Benedick, Suzan; Berg, Åke; Bernard, Henry; Berry, Nicholas J; Bhatt, Dinesh; Bicknell, Jake E; Bihn, Jochen H; Blake, Robin J; Bobo, Kadiri S; Bóçon, Roberto; Boekhout, Teun; Böhning-Gaese, Katrin; Bonham, Kevin J; Borges, Paulo A V; Borges, Sérgio H; Boutin, Céline; Bouyer, Jérémy; Bragagnolo, Cibele; Brandt, Jodi S; Brearley, Francis Q; Brito, Isabel; Bros, Vicenç; Brunet, Jörg; Buczkowski, Grzegorz; Buddle, Christopher M; Bugter, Rob; Buscardo, Erika; Buse, Jörn; Cabra-García, Jimmy; Cáceres, Nilton C; Cagle, Nicolette L; Calviño-Cancela, María; Cameron, Sydney A; Cancello, Eliana M; Caparrós, Rut; Cardoso, Pedro; Carpenter, Dan; Carrijo, Tiago F; Carvalho, Anelena L; Cassano, Camila R; Castro, Helena; Castro-Luna, Alejandro A; Rolando, Cerda B; Cerezo, 
Alexis; Chapman, Kim Alan; Chauvat, Matthieu; Christensen, Morten; Clarke, Francis M; Cleary, Daniel F R; Colombo, Giorgio; Connop, Stuart P; Craig, Michael D; Cruz-López, Leopoldo; Cunningham, Saul A; D'Aniello, Biagio; D'Cruze, Neil; da Silva, Pedro Giovâni; Dallimer, Martin; Danquah, Emmanuel; Darvill, Ben; Dauber, Jens; Davis, Adrian L V; Dawson, Jeff; de Sassi, Claudio; de Thoisy, Benoit; Deheuvels, Olivier; Dejean, Alain; Devineau, Jean-Louis; Diekötter, Tim; Dolia, Jignasu V; Domínguez, Erwin; Dominguez-Haydar, Yamileth; Dorn, Silvia; Draper, Isabel; Dreber, Niels; Dumont, Bertrand; Dures, Simon G; Dynesius, Mats; Edenius, Lars; Eggleton, Paul; Eigenbrod, Felix; Elek, Zoltán; Entling, Martin H; Esler, Karen J; de Lima, Ricardo F; Faruk, Aisyah; Farwig, Nina; Fayle, Tom M; Felicioli, Antonio; Felton, Annika M; Fensham, Roderick J; Fernandez, Ignacio C; Ferreira, Catarina C; Ficetola, Gentile F; Fiera, Cristina; Filgueiras, Bruno K C; Fırıncıoğlu, Hüseyin K; Flaspohler, David; Floren, Andreas; Fonte, Steven J; Fournier, Anne; Fowler, Robert E; Franzén, Markus; Fraser, Lauchlan H; Fredriksson, Gabriella M; Freire, Geraldo B; Frizzo, Tiago L M; Fukuda, Daisuke; Furlani, Dario; Gaigher, René; Ganzhorn, Jörg U; García, Karla P; Garcia-R, Juan C; Garden, Jenni G; Garilleti, Ricardo; Ge, Bao-Ming; Gendreau-Berthiaume, Benoit; Gerard, Philippa J; Gheler-Costa, Carla; Gilbert, Benjamin; Giordani, Paolo; Giordano, Simonetta; Golodets, Carly; Gomes, Laurens G L; Gould, Rachelle K; Goulson, Dave; Gove, Aaron D; Granjon, Laurent; Grass, Ingo; Gray, Claudia L; Grogan, James; Gu, Weibin; Guardiola, Moisès; Gunawardene, Nihara R; Gutierrez, Alvaro G; Gutiérrez-Lamus, Doris L; Haarmeyer, Daniela H; Hanley, Mick E; Hanson, Thor; Hashim, Nor R; Hassan, Shombe N; Hatfield, Richard G; Hawes, Joseph E; Hayward, Matt W; Hébert, Christian; Helden, Alvin J; Henden, John-André; Henschel, Philipp; Hernández, Lionel; Herrera, James P; Herrmann, Farina; Herzog, Felix; Higuera-Diaz, 
Diego; Hilje, Branko; Höfer, Hubert; Hoffmann, Anke; Horgan, Finbarr G; Hornung, Elisabeth; Horváth, Roland; Hylander, Kristoffer; Isaacs-Cubides, Paola; Ishida, Hiroaki; Ishitani, Masahiro; Jacobs, Carmen T; Jaramillo, Víctor J; Jauker, Birgit; Hernández, F Jiménez; Johnson, McKenzie F; Jolli, Virat; Jonsell, Mats; Juliani, S Nur; Jung, Thomas S; Kapoor, Vena; Kappes, Heike; Kati, Vassiliki; Katovai, Eric; Kellner, Klaus; Kessler, Michael; Kirby, Kathryn R; Kittle, Andrew M; Knight, Mairi E; Knop, Eva; Kohler, Florian; Koivula, Matti; Kolb, Annette

    2017-01-01

    The PREDICTS project-Projecting Responses of Ecological Diversity In Changing Terrestrial Systems (www.predicts.org.uk)-has collated from published studies a large, reasonably representative database of comparable samples of biodiversity from multiple sites that differ in the nature or intensity of human impacts relating to land use. We have used this evidence base to develop global and regional statistical models of how local biodiversity responds to these measures. We describe and make freely available this 2016 release of the database, containing more than 3.2 million records sampled at over 26,000 locations and representing over 47,000 species. We outline how the database can help in answering a range of questions in ecology and conservation biology. To our knowledge, this is the largest and most geographically and taxonomically representative database of spatial comparisons of biodiversity that has been collated to date; it will be useful to researchers and international efforts wishing to model and understand the global status of biodiversity.

  8. Analysis and Design of Soils and Terrain Digital Database (SOTER) Management System Based on Object-Oriented Method

    Institute of Scientific and Technical Information of China (English)

    ZHANG HAITAO; ZHOU YONG; R. V. BIRNIE; A. SIBBALD; REN YI

    2003-01-01

    A SOTER management system was developed through analysis, design, programming, testing, and repeated iteration and refinement based on the object-oriented method. The attribute database management functions are inherited and expanded in the new system, and the integrity and security of the SOTER database are enhanced. Attribute database management, spatial database management, and the model base are integrated into SOTER based on the Component Object Model (COM), and a graphical user interface (GUI) for Windows is used to interact with clients, making it easy to create and maintain SOTER data and convenient to promote the quantification and automation of soil information applications.

  9. Several Thoughts on Improving the Teaching Quality of Database Systems

    Institute of Scientific and Technical Information of China (English)

    程录庆

    2011-01-01

    Based on a study of the ability structure and characteristics of database technology professionals, this paper proposes a knowledge structure for database systems, analyzes the current state of teaching in university courses related to database systems, and discusses ways to improve the teaching quality of database systems from several aspects: the knowledge structure of the database system, the differentiation of teaching levels, innovation in practice teaching, and so on.

  10. Planning the future of JPL's management and administrative support systems around an integrated database

    Science.gov (United States)

    Ebersole, M. M.

    1983-01-01

    JPL's management and administrative support systems have been developed piecemeal, and without consistency in design approach, over the past twenty years. These systems are now proving inadequate for supporting effective management of tasks and administration of the Laboratory, and new approaches are needed. Modern database management technology has the potential to provide the foundation for more effective administrative tools for JPL managers and administrators. Plans for upgrading JPL's management and administrative systems over a six-year period, evolving around the development of an integrated management and administrative database, are discussed.

  11. Data-based controllability analysis of discrete-time linear time-delay systems

    Science.gov (United States)

    Liu, Yang; Chen, Hong-Wei; Lu, Jian-Quan

    2014-11-01

    In this paper, a data-based method is used to analyse the controllability of discrete-time linear time-delay systems. With this method, one can directly construct a controllability matrix from the measured state data without identifying the system parameters. Hence, one can save time in practice and avoid the corresponding identification errors. Moreover, its calculation precision is higher than that of some traditional approaches, which need to identify unknown parameters. The method is applicable to the study of the characteristics of deterministic systems. A numerical example is given to show the advantage of the results.
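
    To see why a controllability matrix can be built from measured data without identifying A or B, consider the delay-free, single-input special case x(k+1) = A x(k) + B u(k): starting from x(0) = 0 and applying a unit impulse u = (1, 0, 0, ...), the measured states are x(1) = B, x(2) = AB, x(3) = A²B, so the columns of the controllability matrix are read off directly from the data. The sketch below illustrates only this simplified idea, not the paper's method for time-delay systems; the example plant is invented and used solely to generate the "measurements".

    ```python
    # Sketch: data-based controllability check for x(k+1) = A x(k) + B u(k)
    # (delay-free, single-input). No parameter identification: the columns
    # of the controllability matrix are measured states from an impulse run.

    def simulate(A, B, u, steps):
        """Measured state sequence of x(k+1) = A x(k) + B u(k), x(0) = 0."""
        n = len(A)
        x = [0.0] * n
        states = []
        for k in range(steps):
            x = [sum(A[i][j] * x[j] for j in range(n)) + B[i] * u[k]
                 for i in range(n)]
            states.append(x)
        return states

    def rank(M, eps=1e-9):
        """Matrix rank by Gaussian elimination (no external libraries)."""
        M = [row[:] for row in M]
        r = 0
        for c in range(len(M[0])):
            piv = next((i for i in range(r, len(M)) if abs(M[i][c]) > eps), None)
            if piv is None:
                continue
            M[r], M[piv] = M[piv], M[r]
            for i in range(len(M)):
                if i != r and abs(M[i][c]) > eps:
                    f = M[i][c] / M[r][c]
                    M[i] = [a - f * b for a, b in zip(M[i], M[r])]
            r += 1
        return r

    # "True" plant, unknown to the analysis; used only to produce data.
    A = [[0.0, 1.0], [-0.5, 1.5]]
    B = [0.0, 1.0]
    x1, x2 = simulate(A, B, [1.0, 0.0], 2)    # x(1) = B, x(2) = AB
    ctrl = [[x1[0], x2[0]], [x1[1], x2[1]]]   # columns are measured states
    print("controllable:", rank(ctrl) == len(A))
    # → controllable: True
    ```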

  12. Planning the future of JPL's management and administrative support systems around an integrated database

    Science.gov (United States)

    Ebersole, M. M.

    1983-01-01

    JPL's management and administrative support systems have been developed piecemeal, and without consistency in design approach, over the past twenty years. These systems are now proving inadequate for supporting effective management of tasks and administration of the Laboratory, and new approaches are needed. Modern database management technology has the potential to provide the foundation for more effective administrative tools for JPL managers and administrators. Plans for upgrading JPL's management and administrative systems over a six-year period, evolving around the development of an integrated management and administrative database, are discussed.

  13. The database of the PREDICTS (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems) project

    OpenAIRE

    Hudson, Lawrence N; Newbold, Tim; Contu, Sara; Hill, Samantha L L; Lysenko, Igor; De Palma, Adriana; Phillips, Helen R P; Alhusseini, Tamera I.; Bedford, Felicity E.; Bennett, Dominic J.; Booth, Hollie; Burton, Victoria J.; Chng, Charlotte W. T.; Choimes, Argyrios; Correia, David L.P.

    2016-01-01

    The PREDICTS project—Projecting Responses of Ecological Diversity In Changing Terrestrial Systems (www.predicts.org.uk)—has collated from published studies a large, reasonably representative database of comparable samples of biodiversity from multiple sites that differ in the nature or intensity of human impacts relating to land use. We have used this evidence base to develop global and regional statistical models of how local biodiversity responds to these measures. We describe and make free...

  14. The database of the PREDICTS (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems) project

    OpenAIRE

    Hudson, Lawrence N; Newbold, Tim; Contu, Sara; Hill, Samantha L.L.; Lysenko, Igor; De Palma, Adriana; Phillips, Helen R. P.; Alhusseini, Tamera I.; Bedford, Felicity E.; Bennett, Dominic J.; Booth, Hollie; Burton, Victoria J.; Chng , Charlotte W. T.; Choimes, Argyrios; Correia, David L.P.

    2017-01-01

    The PREDICTS project-Projecting Responses of Ecological Diversity In Changing Terrestrial Systems (www.predicts.org.uk)-has collated from published studies a large, reasonably representative database of comparable samples of biodiversity from multiple sites that differ in the nature or intensity of human impacts relating to land use. We have used this evidence base to develop global and regional statistical models of how local biodiversity responds to these measures. We describe and make free...

  15. The database of the PREDICTS (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems) project

    OpenAIRE

    Hudson, Lawrence N; Newbold, Tim; Contu, Sara; Hill, Samantha L.L.; Lysenko, Igor; De Palma, Adriana; Phillips, Helen R. P.; Alhusseini, Tamera I.; Bedford, Felicity E.; Bennett, Dominic J.; Booth, Hollie; Burton, Victoria J.; Chng , Charlotte W. T.; Choimes, Argyrios; Correia, David L.P.

    2016-01-01

    Abstract The PREDICTS project—Projecting Responses of Ecological Diversity In Changing Terrestrial Systems (www.predicts.org.uk)—has collated from published studies a large, reasonably representative database of comparable samples of biodiversity from multiple sites that differ in the nature or intensity of human impacts relating to land use. We have used this evidence base to develop global and regional statistical models of how local biodiversity responds to these measures. We describe and ...

  16. The database of the PREDICTS (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems) project

    OpenAIRE

    Hudson, Lawrence N; Newbold, Tim; Contu, Sara; Hill, Samantha L L; Lysenko, Igor; De Palma, Adriana; Phillips, Helen R P; Alhusseini, Tamera I.; Bedford, Felicity E.; Bennett, Dominic J.; Booth, Hollie; Burton, Victoria J.; Chng, Charlotte W. T.; Choimes, Argyrios; Correia, David L.P.

    2017-01-01

    The PREDICTS project—Projecting Responses of Ecological Diversity In Changing Terrestrial Systems (www.predicts.org.uk)—has collated from published studies a large, reasonably representative database of comparable samples of biodiversity from multiple sites that differ in the nature or intensity of human impacts relating to land use. We have used this evidence base to develop global and regional statistical models of how local biodiversity responds to these measures. We describe and make free...

  17. Knowledge Representation and Inference for Analysis and Design of Database and Tabular Rule-Based Systems

    Directory of Open Access Journals (Sweden)

    Antoni Ligeza

    2001-01-01

    Full Text Available Rule-based systems constitute a powerful tool for the specification of knowledge in the design and implementation of knowledge-based systems. They also provide a universal programming paradigm for domains such as intelligent control, decision support, situation classification, and operational knowledge encoding. In order to assure safe and reliable performance, such systems should satisfy certain formal requirements, including completeness and consistency. This paper addresses the analysis and verification of selected properties of a class of such systems in a systematic way. A uniform, tabular scheme of single-level rule-based systems is considered. Such systems can be applied as a generalized form of databases for the specification of data patterns (unconditional knowledge), or can be used to define attributive decision tables (conditional knowledge in the form of rules). They can also serve as lower-level components of hierarchical, multi-level control and decision-support knowledge-based systems. An algebraic knowledge representation paradigm using an extended tabular representation, similar to relational database tables, is presented, and algebraic bases for system analysis, verification, and design support are outlined.
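
    The completeness and consistency requirements named in the abstract can be checked mechanically for small tabular rule bases over finite attribute domains. The sketch below is a hedged illustration of that kind of check, not the paper's algebraic method: attribute names, domains, and rules are invented, and it simply enumerates all input combinations.

    ```python
    # Sketch: completeness (every input case covered by some rule) and
    # consistency (no two rules match the same case with different
    # conclusions) for a single-level tabular rule base. All names invented.

    from itertools import product

    DOMAINS = {"temp": ["low", "high"], "pressure": ["low", "high"]}

    # Each rule: ({attribute: required value, ...}, conclusion);
    # an attribute absent from the condition acts as a "don't care".
    RULES = [
        ({"temp": "low"}, "heat_on"),
        ({"temp": "high", "pressure": "low"}, "vent"),
        ({"temp": "high", "pressure": "high"}, "alarm"),
    ]

    def matches(cond, case):
        return all(case[a] == v for a, v in cond.items())

    def check(domains, rules):
        attrs = sorted(domains)
        uncovered, conflicts = [], []
        for values in product(*(domains[a] for a in attrs)):
            case = dict(zip(attrs, values))
            conclusions = {c for cond, c in rules if matches(cond, case)}
            if not conclusions:
                uncovered.append(case)
            elif len(conclusions) > 1:
                conflicts.append((case, conclusions))
        return uncovered, conflicts

    uncovered, conflicts = check(DOMAINS, RULES)
    print("complete:", not uncovered, "consistent:", not conflicts)
    # → complete: True consistent: True
    ```

    Exhaustive enumeration only scales to small domains; the paper's algebraic treatment is precisely what makes such verification systematic for larger tables.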

  18. Technical report on implementation of reactor internal 3D modeling and visual database system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yeun Seung; Eom, Young Sam; Lee, Suk Hee; Ryu, Seung Hyun [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1996-06-01

    In this report, a prototype of a reactor internals 3D modeling and VDB system for NSSS design quality improvement is described. To improve NSSS design quality, several integrated computer-aided engineering systems from nuclear-developed nations were studied, such as Mitsubishi's NUWINGS (Japan), AECL's CANDID (Canada), and Duke Power's PASCE (USA). On the basis of these studies, the strategy for the NSSS design improvement system was derived and the detailed work scope implemented as follows: 3D models of the reactor internals were created using a parametric solid modeler; a prototype system for design document computerization and database was suggested; and a walk-through simulation integrated with the 3D models and VDB was accomplished. The major effects of an NSSS design quality improvement system using 3D modeling and a VDB are plant design optimization through simulation, improved reliability through a single design database system, and engineering cost reduction through improved productivity and efficiency. For applying the VDB to the full scope of NSSS system design, 3D models of the reactor coolant system and of the nuclear fuel assembly and fuel rod are attached as appendices. 2 tabs., 31 figs., 7 refs. (Author)

  19. The GEISA 2009 Spectroscopic Database System and its CNES/CNRS Ether Products and Services Center Interactive Distribution

    Science.gov (United States)

    Jacquinet-Husson, Nicole; Crépeau, Laurent; Capelle, Virginie; Scott, Noëlle; Armante, Raymond; Chédin, Alain; Boonne, Cathy; Poulet-Crovisier, Nathalie

    2010-05-01

    The GEISA (1) (Gestion et Etude des Informations Spectroscopiques Atmosphériques: Management and Study of Atmospheric Spectroscopic Information) computer-accessible database, initiated in 1976, is developed and maintained at LMD (Laboratoire de Météorologie Dynamique, France). It is a system comprising three independent sub-databases devoted respectively to: line transition parameters; infrared and ultraviolet/visible absorption cross-sections; and microphysical and optical properties of atmospheric aerosols. The updated 2009 edition (GEISA-09) archives, in its line transition parameters sub-section, 50 molecules, corresponding to 111 isotopes, for a total of 3,807,997 entries in the spectral range from 10^-6 to 35,877.031 cm^-1. A detailed description of the whole database contents will be documented. GEISA and GEISA/IASI are implemented on the CNES/CNRS Ether Products and Services Centre WEB site (http://ether.ipsl.jussieu.fr), where all archived spectroscopic data can be handled through general and user-friendly associated management software facilities. These facilities will be described and widely illustrated as well. Interactive demonstrations will be given if technically feasible at the time of the Poster Display Session. More than 350 researchers are registered for online use of GEISA on Ether. Currently, GEISA is involved in activities (2) related to the remote sensing of the terrestrial atmosphere, thanks to the sounding performance of the new generation of hyperspectral Earth atmospheric sounders, like AIRS (Atmospheric Infrared Sounder - http://www-airs.jpl.nasa.gov/) in the USA and IASI (Infrared Atmospheric Sounding Interferometer - http://earth-sciences.cnes.fr/IASI/) in Europe, using the 4A radiative transfer model (3) (4A/LMD http://ara.lmd.polytechnique.fr; 4A/OP co-developed by LMD and NOVELTIS - http://www.noveltis.fr/) with the support of CNES (2006). Refs: (1) Jacquinet-Husson N., N.A. Scott, A. Chédin, L. Crépeau, R. Armante, V. Capelle

  20. A Method to Ease the Deployment of Web Applications that Involve Database Systems

    Directory of Open Access Journals (Sweden)

    Antonio Vega Corona

    2012-02-01

    Full Text Available The continuous growth of the Internet has driven people all around the globe to perform transactions online, search for information, or navigate using a browser. As more people feel comfortable using a Web browser, more software companies try to offer Web interfaces as an alternative way to provide access to their applications. The nature of the Web connection and the restrictions imposed by the available bandwidth make the successful integration of Web applications and database systems critical. Because popular database applications provide a user interface to edit and maintain the information in the database, and because each column in a database table maps to a graphical user interface control, the deployment of these applications can be time consuming: appropriate field validation and referential integrity rules must be observed. An object-oriented design is proposed to ease the development of applications that use database systems.

  1. The global rock art database: developing a rock art reference model for the RADB system using the CIDOC CRM and Australian heritage examples

    Science.gov (United States)

    Haubt, R. A.

    2015-08-01

    The Rock Art Database (RADB) is a virtual organisation that aims to build a global rock art community. It brings together rock art enthusiasts and professionals from around the world in one centralized location through the deployed publicly available RADB Management System. This online platform allows users to share, manage and discuss rock art information and offers a new look at rock art data through the use of new technologies in rich media formats. Full access to the growing platform is currently only available for a selected group of users but it already links over 200 rock art projects around the globe. This paper forms a part of the larger Rock Art Database (RADB) project. It discusses the design stage of the RADB System and the development of a conceptual RADB Reference Model (RARM) that is used to inform the design of the Rock Art Database Management System. It examines the success and failure of international and national systems and uses the Australian heritage sector and Australian rock art as a test model to develop a method for the RADB System design. The system aims to help improve rock art management by introducing the CIDOC CRM in conjunction with a rock art specific domain model. It seeks to improve data compatibility and data sharing to help with the integration of a variety of resources to create the global Rock Art Database Management System.

  2. Current development of UAV sense and avoid system

    Science.gov (United States)

    Zhahir, A.; Razali, A.; Mohd Ajir, M. R.

    2016-10-01

    As unmanned aerial vehicles (UAVs) are now gaining high interest from the civil and commercial market, the automatic sense and avoid (SAA) system is currently one of the essential features in the research spotlight of UAVs. Several sensor types employed in current SAA research, as well as sensor fusion technology, which offers a great opportunity to improve detection and tracking systems, are presented here. The purpose of this paper is to provide an overview of SAA system development in general, as well as the current challenges facing UAV researchers and designers.

  3. Data model and relational database design for the New England Water-Use Data System (NEWUDS)

    Science.gov (United States)

    Tessler, Steven

    2001-01-01

    The New England Water-Use Data System (NEWUDS) is a database for the storage and retrieval of water-use data. NEWUDS can handle data covering many facets of water use, including (1) tracking various types of water-use activities (withdrawals, returns, transfers, distributions, consumptive-use, wastewater collection, and treatment); (2) the description, classification and location of places and organizations involved in water-use activities; (3) details about measured or estimated volumes of water associated with water-use activities; and (4) information about data sources and water resources associated with water use. In NEWUDS, each water transaction occurs unidirectionally between two site objects, and the sites and conveyances form a water network. The core entities in the NEWUDS model are site, conveyance, transaction/rate, location, and owner. Other important entities include water resources (used for withdrawals and returns), data sources, and aliases. Multiple water-exchange estimates can be stored for individual transactions based on different methods or data sources. Storage of user-defined details is accommodated for several of the main entities. Numerous tables containing classification terms facilitate detailed descriptions of data items and can be used for routine or custom data summarization. NEWUDS handles single-user and aggregate-user water-use data, can be used for large or small water-network projects, and is available as a stand-alone Microsoft Access database structure. Users can customize and extend the database, link it to other databases, or implement the design in other relational database applications.
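
    The core entities named in the abstract (sites, conveyances linking two sites, and unidirectional transactions over a conveyance) can be sketched as a small relational schema. This is not the actual NEWUDS design; the table and column names below are assumptions for illustration, shown with SQLite.

    ```python
    # Minimal relational sketch of site / conveyance / transaction entities.
    # Column names and sample values are invented, not the NEWUDS schema.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE site (
        site_id   INTEGER PRIMARY KEY,
        name      TEXT NOT NULL,
        site_type TEXT                    -- e.g. well, treatment plant
    );
    CREATE TABLE conveyance (
        conveyance_id INTEGER PRIMARY KEY,
        from_site     INTEGER NOT NULL REFERENCES site(site_id),
        to_site       INTEGER NOT NULL REFERENCES site(site_id)
    );
    CREATE TABLE water_transaction (
        transaction_id INTEGER PRIMARY KEY,
        conveyance_id  INTEGER NOT NULL REFERENCES conveyance(conveyance_id),
        volume_mgd     REAL,              -- million gallons per day
        method         TEXT               -- measured or estimated
    );
    """)
    con.execute("INSERT INTO site VALUES (1, 'Town Well 3', 'well')")
    con.execute("INSERT INTO site VALUES (2, 'Treatment Plant A', 'treatment')")
    con.execute("INSERT INTO conveyance VALUES (1, 1, 2)")
    con.execute("INSERT INTO water_transaction VALUES (1, 1, 0.75, 'measured')")

    row = con.execute("""
        SELECT s1.name, s2.name, t.volume_mgd
        FROM water_transaction t
        JOIN conveyance c ON c.conveyance_id = t.conveyance_id
        JOIN site s1 ON s1.site_id = c.from_site
        JOIN site s2 ON s2.site_id = c.to_site
    """).fetchone()
    print(row)
    # → ('Town Well 3', 'Treatment Plant A', 0.75)
    ```

    The abstract's support for multiple estimates per transaction would correspond to moving volume and method into a separate estimate table keyed to the transaction, one row per method or data source.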

  4. Designing and Implementing a Navicat Database System for a Call Center

    Directory of Open Access Journals (Sweden)

    Thair M. Hamtini

    2011-02-01

    Full Text Available A call center is a physical place where customer and other telephone calls are handled by an organization, usually with some amount of computer automation. Typically, a call center has the capacity to handle a considerable volume of calls at the same time, to screen calls, to forward calls to a qualified person, and to log calls. In this article we propose an architecture and show how the system works. The Carolina Call Center used to keep track of its call information manually through different programs: data were entered in Microsoft Excel and results were displayed in Microsoft Word, which made it hard to keep track of the data in an organized manner. Since many call centers may encounter these problems, we found a solution by creating a database, using Navicat as the database client and Dreamweaver for the web-interface design. Every agent now has an employee account with a password. Each account gives the agent access to the different campaigns they are working on, and also has a timer as well as a break button that automatically keeps track of logins, logouts, and breaks. Not only did the database automate the employees' work, it also benefited the business: the decreased number of errors, combined with the reduced need for employees, helped the call center save money. Since the database is an efficient time saver, it decreased the number of working hours for both management and employees and greatly improved the overall quality of the Carolina Call Center.

  5. ISIS (Inventory and Security Information System): A prototype using the FOCUS 4GL and an ORACLE database

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, J.T.; Beckwith, A.L.; Stewart, C.R.; Kilgore, D.G.; Fortune, R. (Oak Ridge Gaseous Diffusion Plant, TN (USA); Oak Ridge National Lab., TN (USA); Maxima Corp., Oak Ridge, TN (USA))

    1989-01-01

    Of interest in many corporate data processing environments is the ability to use both fourth-generation languages and relational databases to achieve flexible and integrated information systems. Another concern in planning corporate management information systems is the ability to access multiple database software environments with consistent end-user programming tools. A study was conducted for the Pacific Missile Test Center that tested the use of FOCUS 4GL code, developed on a PC and ported to a MicroVAX, to access an ORACLE relational database on the MicroVAX. The prototype developed gave insight into the viability of porting code, the development of integrated systems using two different vendors' products, and the complexities that arise when using information retrieval techniques for hierarchical data structures with relational databases. The experience gained from developing the prototype resulted in a decision to continue prototype development in a single-vendor software environment and stressed the importance of a relational database in developing information systems.

  6. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault tolerance, scalability and performance. The failure of one database replica does not stop the system from working, as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and by adding new replicas should the load increase. Finally, database replication can provide fast local access, even for geographically distributed clients, if data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and

  7. Moving Observer Support for Databases

    DEFF Research Database (Denmark)

    Bukauskas, Linas

    Interactive visual data explorations impose rigid requirements on database and visualization systems. Systems that visualize huge amounts of data tend to request large amounts of memory resources and heavily use the CPU to process and visualize data. Current systems employ a loosely coupled archi...

  8. A structured database and image acquisition system in support of palynological studies: CHITINOS.

    Science.gov (United States)

    Achab, A; Asselin, E; Liang, B

    2000-12-01

    CHITINOS is a microfossil image and data acquisition system developed to support palynologists from field work to report production. The system is intended for chitinozoans, but it can also accommodate other fossil groups. Thanks to its client-server architecture, the system can be accessed by multiple users. The database can be filled with data acquired during palynological work or taken from the literature. The system allows for the easy input, update, management, analysis and retrieval of paleontological data to enable the paleontologist to elucidate paleogeographic patterns, changes in biodiversity and taxonomic differentiations. Query and plot interfaces are intended for report production. The system was designed as the basis of a knowledge expert system by providing a new perspective in the interpretation of interrelated data.

  9. DMPD: Interferons at age 50: past, current and future impact on biomedicine. [Dynamic Macrophage Pathway CSML Database

    Lifescience Database Archive (English)

    Full Text Available 18049472 Interferons at age 50: past, current and future impact on biomedicine. Bor...975-90. (.png) (.svg) (.html) (.csml) Show Interferons at age 50: past, current and future impact on biomedi...cine. PubmedID 18049472 Title Interferons at age 50: past, current and future imp

  10. Laser Therapy and Dementia: A Database Analysis and Future Aspects on LED-Based Systems

    Directory of Open Access Journals (Sweden)

    Daniela Litscher

    2014-01-01

    Full Text Available Mainly because of the movement in the age pyramid, one can assume that the incidence of Alzheimer’s disease, or dementia in general, will increase in the coming decades. This paper employs a database analysis to examine the profile of publication activity related to this topic. Two databases were searched: PubMed and the Cochrane Library. About 600 papers related to the research area “dementia and laser” and about 450 papers related to the search terms “Alzheimer and laser” were found in these two most commonly used databases. Ten plus one papers are described in detail and are discussed in the context of the laser research performed at the Medical University of Graz. First results concerning the measurement of the transmission factor (TF) through the human skull of a new LED (light-emitting diode) based system are presented (TF = 0.0434 ± 0.0104 (SD)). The measurements show that this LED system (using the QIT (quantum optical induced transparency) effect) might be used in the treatment of dementia.

  11. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available [ Credits ] BLAST Search Image Search Home About Archive Update History Contact us Trypanosomes Database... Database Description General information of database Database name Trypanosomes Database...rmation and Systems Yata 1111, Mishima, Shizuoka 411-8540, JAPAN E mail: Database... classification Protein sequence databases Organism Taxonomy Name: Trypanosoma Taxonomy ID: 5690 Taxonomy Na...me: Homo sapiens Taxonomy ID: 9606 Database description The Trypanosomes database is a database providing th

  12. Diffusion current in a system of coupled Josephson junctions

    Science.gov (United States)

    Shukrinov, Yu. M.; Rahmonov, I. R.

    2012-08-01

    The role of a diffusion current in the phase dynamics of a system of coupled Josephson junctions (JJs) has been analyzed. It is shown that, by studying the temporal dependences of the superconducting, quasi-particle, diffusion, and displacement currents and the dependences of average values of these currents on the total current, it is possible to explain the main features of the current-voltage characteristic (CVC) of the system. The effect of a diffusion current on the character of CVC branching in the vicinity of a critical current and in the region of hysteresis, as well as on the part of CVC branch corresponding to a parametric resonance in the system is demonstrated. A clear interpretation of the differences in the character of CVC branching in a model of capacitively coupled JJs (CCJJ model) and a model of capacitive coupling with diffusion current (CCJJ+DC model) is proposed. It is shown that a decrease in the diffusion current in a JJ leads to the switching of this junction to an oscillating state. The results of model calculations are qualitatively consistent with the experimental data.
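
The current balance the abstract refers to can be sketched with the standard RCSJ (resistively and capacitively shunted junction) relations for junction $l$; the coupling and diffusion-current terms specific to the CCJJ+DC model are omitted here:

```latex
% Total current through junction l as the sum of superconducting,
% quasiparticle, and displacement contributions (RCSJ picture)
I = I_c \sin\varphi_l + \frac{V_l}{R} + C\,\frac{dV_l}{dt},
\qquad
\frac{d\varphi_l}{dt} = \frac{2e}{\hbar}\, V_l
```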

  13. Diffusion current in a system of coupled Josephson junctions

    Energy Technology Data Exchange (ETDEWEB)

    Shukrinov, Yu. M., E-mail: shukrinv@theor.jinr.ru; Rahmonov, I. R. [Joint Institute for Nuclear Research (Russian Federation)

    2012-08-15

    The role of a diffusion current in the phase dynamics of a system of coupled Josephson junctions (JJs) has been analyzed. It is shown that, by studying the temporal dependences of the superconducting, quasi-particle, diffusion, and displacement currents and the dependences of average values of these currents on the total current, it is possible to explain the main features of the current-voltage characteristic (CVC) of the system. The effect of a diffusion current on the character of CVC branching in the vicinity of a critical current and in the region of hysteresis, as well as on the part of CVC branch corresponding to a parametric resonance in the system is demonstrated. A clear interpretation of the differences in the character of CVC branching in a model of capacitively coupled JJs (CCJJ model) and a model of capacitive coupling with diffusion current (CCJJ+DC model) is proposed. It is shown that a decrease in the diffusion current in a JJ leads to the switching of this junction to an oscillating state. The results of model calculations are qualitatively consistent with the experimental data.

  14. Spin currents and magnetization dynamics in multilayer systems

    NARCIS (Netherlands)

    van der Bijl, E.

    2014-01-01

    In this Thesis the interplay between spin currents and magnetization dynamics is investigated theoretically. With the help of a simple model the relevant physical phenomena are introduced. From this model it can be deduced that in systems with small spin-orbit coupling, current-induced torques on

  15. Geophysical log database for the Floridan aquifer system and southeastern Coastal Plain aquifer system in Florida and parts of Georgia, Alabama, and South Carolina

    Science.gov (United States)

    Williams, Lester J.; Raines, Jessica E.; Lanning, Amanda E.

    2013-01-01

    The U.S. Geological Survey (USGS) Groundwater Resources Program began two regional studies in the southeastern United States in the fall of 2009 to investigate the availability of fresh and brackish groundwater resources: (1) groundwater availability of the Floridan aquifer system (http://water.usgs.gov/ogw/gwrp/activities/regional.html), and (2) saline water aquifer mapping in the southeastern United States. A common goal for both studies was to gather available geophysical logs and related data from the State geological surveys and the USGS to use as a basis for developing a hydrogeologic framework for the study area. Similar efforts were undertaken by the USGS Floridan and Southeastern Coastal Plain Regional Aquifer-System Analysis (RASA) Program from the 1970s to the mid-1990s (Miller, 1986; Renken, 1996). The logs compiled for these older efforts were difficult to access from the paper files, however, and partly because of this, older and newer logs were compiled into a single digital database for the current study. The purpose of this report is to summarize the different types of logs and related data contained in the database and to provide these logs in a digital format that can be accessed online through the database and files accompanying this report (http://pubs.usgs.gov/ds/760/).

  16. West Coast Observing System (WCOS) ADCP Currents Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The West Coast Observing System (WCOS) project provides access to temperature and currents data collected at four of the five National Marine Sanctuary sites,...

  17. Population vulnerability of marine birds within the California Current System

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Six metrics were used to determine Population Vulnerability: global population size, annual occurrence in the California Current System (CCS), percent of the...

  19. Database development of chemical thermodynamics of protactinium for performance assessment of HLW geological disposal system

    Energy Technology Data Exchange (ETDEWEB)

    Shibutani, Tomoki; Shibutani, Sanae; Yui, Mikazu [Power Reactor and Nuclear Fuel Development Corp., Tokai, Ibaraki (Japan). Tokai Works

    1998-03-01

    In the performance analysis of a geological disposal system for high-level radioactive waste (HLW), solubilities of radioactive elements are estimated by thermodynamic calculation. A reliable thermodynamic database (TDB) is needed for solubility estimation. In this report, thermodynamic data for protactinium solid and aqueous species were selected for performance assessment. For the refinement of the previous PNC in-house thermodynamic database (PNC-TDB), existing literature data were surveyed and reliable thermodynamic data were selected, considering both scientific defensibility and consistency with the whole PNC-TDB. The solubility estimated using the refined PNC-TDB was higher than the measured value. We have confirmed the refined data-set of Pa to be conservative for solubility estimation in performance assessment. (author)

  20. Weighted Moore–Penrose generalized matrix inverse: MySQL vs. Cassandra database storage system

    Indian Academy of Sciences (India)

    DANIJELA MILOSEVIC; SELVER PEPIC; MUZAFER SARACEVIC; MILAN TASIC

    2016-08-01

    The research in this paper covers two areas: programming and data storage (database) for computing the weighted Moore–Penrose inverse. The main aim of this paper is an analysis of the execution speed of the computation across programming approaches and data stores. The research shows that the execution speed differs considerably between procedural and object-oriented PHP on the middle layer of the three-tier web architecture. The research also compares, on the database layer, a relational database system, MySQL, with a NoSQL key-value store system, Apache Cassandra. The CPU times are compared and discussed.
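
For reference, the weighted Moore–Penrose inverse the paper computes can be obtained from the ordinary pseudoinverse via the standard reduction A+_{M,N} = N^{-1/2} (M^{1/2} A N^{-1/2})+ M^{1/2}, for positive definite weight matrices M and N. The sketch below (with arbitrarily chosen weights, for illustration only) checks the weighted Penrose conditions:

```python
import numpy as np

def weighted_pinv(A, M, N):
    """Weighted Moore-Penrose inverse A+_{M,N} for positive definite
    row weights M and column weights N, via the reduction
    A+_{M,N} = N^{-1/2} (M^{1/2} A N^{-1/2})+ M^{1/2}."""
    def sqrtm_spd(W):
        # symmetric positive definite square root via eigendecomposition
        vals, vecs = np.linalg.eigh(W)
        return (vecs * np.sqrt(vals)) @ vecs.T
    Mh = sqrtm_spd(M)
    Nh_inv = np.linalg.inv(sqrtm_spd(N))
    B = Mh @ A @ Nh_inv
    return Nh_inv @ np.linalg.pinv(B) @ Mh

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
M = np.diag([1.0, 2.0, 3.0])   # row weights (illustrative)
N = np.diag([2.0, 1.0])        # column weights (illustrative)
X = weighted_pinv(A, M, N)
# X satisfies the first Penrose condition A X A = A
print(np.allclose(A @ X @ A, A))   # True
```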

  1. Expressiveness of the Breast Imaging Reporting and Database System (BI-RADS).

    OpenAIRE

    Starren, J.; Johnson, S.M.

    1997-01-01

    The Breast Imaging Reporting and Database System (BI-RADS) was developed by the American College of Radiology and is used by a number of computerized mammography tracking systems. The ability of BI-RADS to encode the data contained in 300 mammography reports at the Columbia-Presbyterian Medical Center was examined. BI-RADS was able to encode normal reports and "special masses" (such as lymph nodes) without difficulty. However, none of the general masses and only 17% of the calcifications coul...

  3. Designing an efficient electroencephalography system using database with embedded images management approach.

    Science.gov (United States)

    Yu, Tzu-Yi; Ho, Hsu-Hua

    2014-01-01

    Many diseases associated with mental deterioration among aged patients can be effectively treated using neurological treatments. Research shows that electroencephalography (EEG) can be used as an independent prognostic indicator of morbidity and mortality. Unfortunately, EEG data are typically inaccessible to modern software. It is therefore important to design a comprehensive approach to integrate EEG results into institutional medical systems. A customized EEG system utilizing a database management approach was designed to bridge the gap between commercial EEG software and hospital data management platforms. Practical and useful medical findings, drawn from statistical analysis of large amounts of EEG data, are discussed. © 2013 Published by Elsevier Ltd.

  4. Performance Improvement with Web Based Database on Library Information System of SMK Yadika 5

    Directory of Open Access Journals (Sweden)

    Pualam Dipa Nusantara

    2015-12-01

    Full Text Available The difficulty of managing the data on the library's book collection is a problem often faced by librarians, and one that affects the quality of service. Previously, the arrangement and recording of the book collection were kept in separate Word and Excel files, and the handling of borrowing and returning transactions had no integrated records. The library system presented here can manage the book collection. This system can reduce the problems library staff often experience when serving students who borrow books, such as the frequent difficulty of tracking books still on loan. The system also records late fees for overdue or lost books owed by students (borrowers). The conclusion of this study is that library performance can be improved with a library system using a web database.

  5. Information schema constructs for defining warehouse databases of genotypes and phenotypes of system manifestation features

    Institute of Scientific and Technical Information of China (English)

    Shahab POURTALEBI‡; Imre HORVÁTH

    2016-01-01

    Our long-term objective is to develop a software toolbox for pre-embodiment design of complex and heterogeneous systems, such as cyber-physical systems. The novelty of this toolbox is that it uses system manifestation features (SMFs) for transdisciplinary modeling of these systems. The main challenges of implementation of the toolbox are functional design- and language-independent computational realization of the warehouses, and systematic development and management of the various evolving implements of SMFs (genotypes, phenotypes, and instances). Therefore, an information schema construct (ISC) based approach is proposed to create the schemata of the associated warehouse databases and the above-mentioned SMF implements. ISCs logically arrange the data contents of SMFs in a set of relational tables of varying semantics. In this article we present the ISCs necessary for creation of genotypes and phenotypes. They increase the efficiency of the database development process and make the data relationships transparent. Our follow-up research focuses on the elaboration of the SMF instances based system modeling methodology.
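
The kind of relational arrangement an ISC prescribes — genotype definitions in one table, phenotype parameter values referencing them — might be sketched as follows; all table and column names here are hypothetical, not those of the actual toolbox:

```python
import sqlite3

# Hypothetical minimal schema in the spirit of ISCs: a genotype defines a
# feature type, a phenotype instantiates it with concrete parameter values.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE genotype (
    id    INTEGER PRIMARY KEY,
    name  TEXT NOT NULL              -- e.g. 'temperature_sensor'
);
CREATE TABLE phenotype (
    id          INTEGER PRIMARY KEY,
    genotype_id INTEGER NOT NULL REFERENCES genotype(id),
    parameter   TEXT NOT NULL,
    value       TEXT NOT NULL
);
""")
con.execute("INSERT INTO genotype VALUES (1, 'temperature_sensor')")
con.execute("INSERT INTO phenotype VALUES (1, 1, 'range_celsius', '-40..125')")
row = con.execute("""
    SELECT g.name, p.parameter, p.value
    FROM phenotype p JOIN genotype g ON g.id = p.genotype_id
""").fetchone()
print(row)   # ('temperature_sensor', 'range_celsius', '-40..125')
```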

  6. Analysis and comparison of NoSQL databases with an introduction to consistent references in big data storage systems

    Science.gov (United States)

    Dziedzic, Adam; Mulawka, Jan

    2014-11-01

    NoSQL is a new approach to data storage and manipulation. The aim of this paper is to gain more insight into NoSQL databases, as we are still in the early stages of understanding when and how to use them appropriately. In this submission, descriptions of selected NoSQL databases are presented. Each of the databases is analysed with a primary focus on its data model, data access, architecture, and practical usage in real applications. Furthermore, the NoSQL databases are compared in the field of data references: relational databases offer foreign keys, whereas NoSQL databases provide only limited references. An intermediate model between graph theory and relational algebra which can address the problem should be created. Finally, a proposal of a new approach to the problem of inconsistent references in Big Data storage systems is introduced.
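
The reference problem named above can be made concrete with a toy key-value store: without foreign keys, a dangling reference is only found by an application-side scan. The record layout below is hypothetical:

```python
# Toy key-value store: no foreign keys, so nothing stops a record from
# pointing at a key that does not exist.
store = {
    "user:1":  {"name": "Ada"},
    "post:10": {"author": "user:1", "title": "Graphs vs relations"},
    "post:11": {"author": "user:99", "title": "Orphaned"},  # dangling ref
}

def dangling_references(db, ref_field="author"):
    """Scan every record and report references to missing keys."""
    return [key for key, rec in db.items()
            if ref_field in rec and rec[ref_field] not in db]

print(dangling_references(store))   # ['post:11']
```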

  7. Some methods for determining the limit of potential image quality of optical systems of various complexities using the database

    Science.gov (United States)

    Bezdidko, S.

    2016-09-01

    In this article, some methods for processing the information contained in a database are offered, with the purpose of extracting the knowledge, experience, and intuition of the designers encoded in the database. Much attention is given to methods for determining the limit of potential image quality of optical systems of various complexities.

  8. Curating and Preserving the Big Canopy Database System: an Active Curation Approach using SEAD

    Science.gov (United States)

    Myers, J.; Cushing, J. B.; Lynn, P.; Weiner, N.; Ovchinnikova, A.; Nadkarni, N.; McIntosh, A.

    2015-12-01

    Modern research is increasingly dependent upon highly heterogeneous data and on the associated cyberinfrastructure developed to organize, analyze, and visualize that data. However, due to the complexity and custom nature of such combined data-software systems, it can be very challenging to curate and preserve them for the long term at reasonable cost and in a way that retains their scientific value. In this presentation, we describe how this challenge was met in preserving the Big Canopy Database (CanopyDB) system using an agile approach and leveraging the Sustainable Environment - Actionable Data (SEAD) DataNet project's hosted data services. The CanopyDB system was developed over more than a decade at Evergreen State College to address the needs of forest canopy researchers. It is an early yet sophisticated exemplar of the type of system that has become common in biological research and science in general, including multiple relational databases for different experiments, a custom database generation tool used to create them, an image repository, and desktop and web tools to access, analyze, and visualize this data. SEAD provides secure project spaces with a semantic content abstraction (typed content with arbitrary RDF metadata statements and relationships to other content), combined with a standards-based curation and publication pipeline resulting in packaged research objects with Digital Object Identifiers. Using SEAD, our cross-project team was able to incrementally ingest CanopyDB components (images, datasets, software source code, documentation, executables, and virtualized services) and to iteratively define and extend the metadata and relationships needed to document them. We believe that both the process, and the richness of the resultant standards-based (OAI-ORE) preservation object, hold lessons for the development of best-practice solutions for preserving scientific data in association with the tools and services needed to derive value from it.

  9. An Integrative Database System of Agro-Ecology for the Black Soil Region of China

    Directory of Open Access Journals (Sweden)

    Cuiping Ge

    2007-12-01

    Full Text Available The comprehensive database system of the Northeast agro-ecology of black soil (CSDB_BL) is user-friendly software designed to store and manage large amounts of data on agriculture. The data were collected in an efficient and systematic way through long-term experiments and observations of black land and statistics information. It is based on the ORACLE database management system, and the interface is written in the PB language. The database has the following main facilities: (1) runs on Windows platforms; (2) facilitates data entry from *.dbf to ORACLE or creates ORACLE tables directly; (3) has a metadata facility that describes the methods used in the laboratory or in the observations; (4) data can be transferred to an expert system for simulation analysis and estimates made with Visual C++ and Visual Basic; (5) can be connected with GIS, so it is easy to analyze changes in land use; and (6) allows metadata and data entities to be shared on the internet. The following datasets are included in CSDB_BL: long-term experiments and observations of water, soil, climate, and biology; special research projects; a natural resource survey of Hailun County in the 1980s; images from remote sensing; graphs of vectors and grids; and statistics from the Northeast of China. CSDB_BL can be used in the research and evaluation of agricultural sustainability nationally, regionally, or locally. Also, it can be used as a tool to assist the government in planning for agricultural development. Expert systems connected with CSDB_BL can give farmers directions for farm planting management.

  10. DBPower: Measuring Energy Efficiency for Green Database Systems

    Institute of Scientific and Technical Information of China (English)

    金培权; 杨濮源; 陈恺萌; 岳丽华

    2011-01-01

    Energy-efficient green databases have become one of the most important challenges in the information technology area. Traditional database systems aim at providing high performance and give little consideration to controlling the energy consumption of the whole system, which leads to the fact that high performance ≠ low energy consumption. In this paper we present a tool called DBPower to measure the energy efficiency of database systems. DBPower was designed as a package integrating both software and hardware: a dedicated device for energy measurement and visual analysis software were designed. DBPower can test the whole energy consumption of a database system, as well as the energy cost of its individual components. Therefore, it can be used as a basic benchmark tool for novel database algorithms that balance performance and energy consumption.

  11. A database paradigm for the management of DICOM-RT structure sets using a geographic information system

    Science.gov (United States)

    Shao, Weber; Kupelian, Patrick A.; Wang, Jason; Low, Daniel A.; Ruan, Dan

    2014-03-01

    We devise a paradigm for representing DICOM-RT structure sets in a database management system in such a way that secondary calculations of geometric information can be performed quickly from the existing contour definitions. The implementation of this paradigm is achieved using the PostgreSQL database system and the PostGIS extension, a geographic information system commonly used for encoding geographical map data. The proposed paradigm eliminates the overhead of retrieving large data records from the database, as well as the need to implement various numerical and data-parsing routines, when additional information related to the geometry of the anatomy is desired.
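
The sort of secondary geometric calculation such a paradigm delegates to the database (PostGIS exposes it as ST_Area over a stored contour polygon) reduces, for a planar contour, to the shoelace formula; the contour coordinates below are hypothetical:

```python
def polygon_area(points):
    """Shoelace formula: area of a closed planar contour given as
    [(x0, y0), (x1, y1), ...] vertices in order."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

# A hypothetical rectangular axial contour slice, coordinates in mm
contour = [(0, 0), (40, 0), (40, 30), (0, 30)]
print(polygon_area(contour))   # 1200.0
```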

  12. Construction of a bibliographic information database and development of retrieval system for research reports in nuclear science and technology (II)

    Energy Technology Data Exchange (ETDEWEB)

    Han, Duk Haeng; Kim, Tae Whan; Choi, Kwang; Yoo, An Na; Keum, Jong Yong; Kim, In Kwon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1996-05-01

    The major goal of this project is to construct a bibliographic information database in nuclear engineering and to develop a prototype retrieval system. To give easy access to microfiche research reports, this project accomplished the construction of a microfiche research report database and the development of a retrieval system. The results of the project are as follows: (1) the microfiche research report database was constructed by downloading from DOE Energy, NTIS, and INIS; (2) the retrieval system was developed in host and web versions using access points such as title, abstract, keyword, and report number. 6 tabs., 8 figs., 11 refs. (Author)

  13. HPD: an online integrated human pathway database enabling systems biology studies.

    Science.gov (United States)

    Chowbina, Sudhir R; Wu, Xiaogang; Zhang, Fan; Li, Peter M; Pandey, Ragini; Kasamsetty, Harini N; Chen, Jake Y

    2009-10-08

    Pathway-oriented experimental and computational studies have led to a significant accumulation of biological knowledge concerning three major types of biological pathway events: molecular signaling events, gene regulation events, and metabolic reaction events. A pathway consists of a series of molecular pathway events that link molecular entities such as proteins, genes, and metabolites. There are approximately 300 biological pathway resources as of April 2009 according to the Pathguide database; however, these pathway databases generally have poor coverage or poor quality, and are difficult to integrate, due to syntactic-level and semantic-level data incompatibilities. We developed the Human Pathway Database (HPD) by integrating heterogeneous human pathway data that are either curated at the NCI Pathway Interaction Database (PID), Reactome, BioCarta, KEGG or indexed from the Protein Lounge Web sites. Integration of pathway data at syntactic, semantic, and schematic levels was based on a unified pathway data model and data warehousing-based integration techniques. HPD provides a comprehensive online view that connects human proteins, genes, RNA transcripts, enzymes, signaling events, metabolic reaction events, and gene regulatory events. At the time of this writing HPD includes 999 human pathways and more than 59,341 human molecular entities. The HPD software provides both a user-friendly Web interface for online use and a robust relational database backend for advanced pathway querying. This pathway tool enables users to 1) search for human pathways from different resources by simply entering genes/proteins involved in pathways or words appearing in pathway names, 2) analyze pathway-protein association, 3) study pathway-pathway similarity, and 4) build integrated pathway networks. We demonstrated the usage and characteristics of the new HPD through three breast cancer case studies. 
HPD http://bio.informatics.iupui.edu/HPD is a new resource for searching, managing

  14. De novo transcriptome assembly databases for the central nervous system of the medicinal leech

    Science.gov (United States)

    Hibsh, Dror; Schori, Hadas; Efroni, Sol; Shefi, Orit

    2015-01-01

    The study of non-model organisms stands to benefit greatly from genetic and genomic data. For a better understanding of the molecular mechanisms driving neuronal development, and to characterize the entire leech Hirudo medicinalis central nervous system (CNS) transcriptome we combined Trinity for de-novo assembly and Illumina HiSeq2000 for RNA-Seq. We present a set of 73,493 de-novo assembled transcripts for the leech, reconstructed from RNA collected, at a single ganglion resolution, from the CNS. This set of transcripts greatly enriches the available data for the leech. Here, we share two databases, such that each dataset allows a different type of search for candidate homologues. The first is the raw set of assembled transcripts. This set allows a sequence-based search. A comprehensive analysis of which revealed 22,604 contigs with high e-values, aligned versus the Swiss-Prot database. This analysis enabled the production of the second database, which includes correlated sequences to annotated transcript names, with the confidence of BLAST best hit. PMID:25977819

  15. Nationwide incidence of motor neuron disease using the French health insurance information system database.

    Science.gov (United States)

    Kab, Sofiane; Moisan, Frédéric; Preux, Pierre-Marie; Marin, Benoît; Elbaz, Alexis

    2017-08-01

    There are no estimates of the nationwide incidence of motor neuron disease (MND) in France. We used the French health insurance information system to identify incident MND cases (2012-2014), and compared incidence figures to those from three external sources. We identified incident MND cases (2012-2014) based on three data sources (riluzole claims, hospitalisation records, long-term chronic disease benefits), and computed MND incidence by age, gender, and geographic region. We used French mortality statistics, Limousin ALS registry data, and previous European studies based on administrative databases to perform external comparisons. We identified 6553 MND incident cases. After standardisation to the United States 2010 population, the age/gender-standardised incidence was 2.72/100,000 person-years (males, 3.37; females, 2.17; male:female ratio = 1.53, 95% CI = 1.46-1.61). There was no major spatial difference in MND distribution. Our data were in agreement with the French death database (standardised mortality ratio = 1.01, 95% CI = 0.96-1.06) and Limousin ALS registry (standardised incidence ratio = 0.92, 95% CI = 0.72-1.15). Incidence estimates were in the same range as those from previous studies. We report French nationwide incidence estimates of MND. Administrative databases including hospital discharge data and riluzole claims offer an interesting approach to identify large population-based samples of patients with MND for epidemiologic studies and surveillance.
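
Direct standardisation, as used for the incidence figure above (standardised to the US 2010 population), weights each stratum-specific rate by the standard population's share of that stratum; the stratum counts below are invented for illustration, not the study's data:

```python
# Direct standardisation: sum of stratum rates weighted by standard
# population shares. All numbers here are hypothetical.
strata = [
    # (cases, person_years, standard_population_weight)
    (120, 9_000_000, 0.30),   # e.g. ages 40-59
    (300, 6_000_000, 0.45),   # e.g. ages 60-74
    (180, 2_000_000, 0.25),   # e.g. ages 75+
]

std_rate_per_100k = sum(w * (cases / py) for cases, py, w in strata) * 100_000
print(round(std_rate_per_100k, 2))   # 4.9
```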

  17. Signal Detection of Adverse Drug Reaction of Amoxicillin Using the Korea Adverse Event Reporting System Database.

    Science.gov (United States)

    Soukavong, Mick; Kim, Jungmee; Park, Kyounghoon; Yang, Bo Ram; Lee, Joongyub; Jin, Xue Mei; Park, Byung Joo

    2016-09-01

    We conducted pharmacovigilance data mining for a β-lactam antibiotic, amoxicillin, and compared the adverse events (AEs) with the drug labels of nine countries: Korea, the USA, the UK, Japan, Germany, Switzerland, Italy, France, and Laos. We used the Korea Adverse Event Reporting System (KAERS) database, a nationwide database of AE reports, between December 1988 and June 2014. Frequentist and Bayesian methods were used to calculate the disproportionality distribution of drug-AE pairs. An AE detected by all three indices of proportional reporting ratio (PRR), reporting odds ratio (ROR), and information component (IC) was defined as a signal. The KAERS database contained a total of 807,582 AE reports, among which 1,722 reports were attributed to amoxicillin. Among the 192,510 antibiotics-AE pairs, the number of amoxicillin-AE pairs was 2,913. Among 241 AEs, 52 adverse events were detected as amoxicillin signals. Comparing the drug labels of the nine countries, 12 adverse events including ineffective medicine, bronchitis, rhinitis, sinusitis, dry mouth, gastroesophageal reflux, hypercholesterolemia, gastric carcinoma, abnormal crying, induration, pulmonary carcinoma, and influenza-like symptoms were not listed on any of the labels. In conclusion, we detected 12 new signals of amoxicillin which were not listed on the labels of the nine countries. These should be followed by signal evaluation including causal association, clinical significance, and preventability.
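
The disproportionality indices named above are simple functions of a 2x2 contingency table for one drug-event pair. The sketch below uses the textbook definitions of PRR and ROR and the simplest (non-shrunk) form of the IC; the counts are hypothetical, not taken from the KAERS data:

```python
import math

def disproportionality(a, b, c, d):
    """2x2 contingency table for one drug-event pair:
         a = reports with this drug & this event   b = this drug, other events
         c = other drugs, this event               d = other drugs, other events
    Returns (PRR, ROR, IC). IC is shown in its simplest form, log2 of
    observed over expected under independence, without Bayesian shrinkage."""
    prr = (a / (a + b)) / (c / (c + d))
    ror = (a * d) / (b * c)
    n = a + b + c + d
    ic = math.log2(a * n / ((a + b) * (a + c)))
    return prr, ror, ic

# Hypothetical counts for an illustrative drug-event pair
prr, ror, ic = disproportionality(a=20, b=980, c=40, d=8960)
print(round(prr, 2), round(ror, 2))   # 4.5 4.57
```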

  18. Signal Detection of Adverse Drug Reaction of Amoxicillin Using the Korea Adverse Event Reporting System Database

    Science.gov (United States)

    2016-01-01

    We conducted pharmacovigilance data mining for the β-lactam antibiotic amoxicillin and compared the adverse events (AEs) with the drug labels of 9 countries: Korea, the USA, the UK, Japan, Germany, Switzerland, Italy, France, and Laos. We used the Korea Adverse Event Reporting System (KAERS) database, a nationwide database of AE reports, covering December 1988 to June 2014. Frequentist and Bayesian methods were used to calculate the disproportionality distribution of drug-AE pairs. An AE was defined as a signal when it was detected by all three indices: proportional reporting ratio (PRR), reporting odds ratio (ROR), and information component (IC). The KAERS database contained a total of 807,582 AE reports, among which 1,722 reports were attributed to amoxicillin. Among the 192,510 antibiotics-AE pairs, the number of amoxicillin-AE pairs was 2,913. Of 241 AEs, 52 were detected as amoxicillin signals. Comparing the drug labels of the 9 countries, 12 AEs, including ineffective medicine, bronchitis, rhinitis, sinusitis, dry mouth, gastroesophageal reflux, hypercholesterolemia, gastric carcinoma, abnormal crying, induration, pulmonary carcinoma, and influenza-like symptoms, were not listed on any of the labels. In conclusion, we detected 12 new signals of amoxicillin that were not listed on the labels of the 9 countries. These signals should therefore be evaluated further for causal association, clinical significance, and preventability. PMID:27510377

  19. A case study in specifying data requirements for a decision support system database

    Energy Technology Data Exchange (ETDEWEB)

    Ng, V.

    1990-08-01

    An atomic database is a collection of detailed and archival data primarily used to support a decision support system (DSS). Typically, atomic data are generated externally by other sources. To build an atomic database whose data represent the information the DSS users expect, detailed data requirements must be specified to the source system. The basic types of information a source system needs are a list of required data items, the frequency and terms of data needs, and the method of interface. A detailed list of required items recommended for inclusion in any specification of external data requirements is presented in this paper. Because of the volume of information involved in such specifications, a matrix presentation is believed to be the best organizational format for describing the requirements concisely and precisely. The specifications of external data requirements written for the Worldwide Household Goods Information System for Transportation Modernization (WHIST-MOD) project for the Personal Property Directorate (MTPP) of the Military Traffic Management Command (MTMC) are presented as a case study. The specifications in this case study have been shown to serve the objective of such a document effectively. It is recommended that the concepts presented in this paper be used as a guideline in specifying data requirements for an atomic database. 5 refs., 2 figs.

  20. Reengineering a database for clinical trials management: lessons for system architects.

    Science.gov (United States)

    Brandt, C A; Nadkarni, P; Marenco, L; Karras, B T; Lu, C; Schacter, L; Fisk, J M; Miller, P L

    2000-10-01

    This paper describes the process of enhancing Trial/DB, a database system for clinical studies management. The system's enhancements have been driven by the need to maximize the effectiveness of developer personnel in supporting numerous and diverse users, of study designers in setting up new studies, and of administrators in managing ongoing studies. Trial/DB was originally designed to work over a local area network within a single institution, and basic architectural changes were necessary to make it work over the Internet efficiently as well as securely. Further, as its use spread to diverse communities of users, changes were made to let the processes of study design and project management adapt to the working styles of the principal investigators and administrators for each study. The lessons learned in the process should prove instructive for system architects as well as managers of electronic patient record systems.

  1. Drug-induced gingival hyperplasia: a retrospective study using spontaneous reporting system databases.

    Science.gov (United States)

    Hatahira, Haruna; Abe, Junko; Hane, Yuuki; Matsui, Toshinobu; Sasaoka, Sayaka; Motooka, Yumi; Hasegawa, Shiori; Fukuda, Akiho; Naganuma, Misa; Ohmori, Tomofumi; Kinosada, Yasutomi; Nakamura, Mitsuhiro

    2017-01-01

    Drug-induced gingival hyperplasia (DIGH) causes problems with chewing, aesthetics, and pronunciation, and leads to deterioration of the patient's quality of life (QOL). The aim of this study was therefore to evaluate the incidence of DIGH using spontaneous reporting system (SRS) databases. We analyzed reports of DIGH from SRS databases and calculated the reporting odds ratios (RORs) of suspected drugs (immunosuppressants, calcium channel blockers, and anticonvulsants). The SRS databases used were the US Food and Drug Administration (FDA) Adverse Event Reporting System (FAERS) and the Japanese Adverse Drug Event Report (JADER) database. With these data, we evaluated the time-to-onset profile and the hazard type using the Weibull shape parameter (WSP). Furthermore, we used the association rule mining technique to discover undetected relationships such as possible risk factors. The FAERS contained 5,821,716 reports. The RORs (95% confidence interval: CI) for cyclosporine, everolimus, sirolimus, mycophenolate mofetil, amlodipine, nifedipine, carbamazepine, clobazam, levetiracetam, phenobarbital, phenytoin, primidone, topiramate, and valproic acid were 39.4 (95% CI: 30.3-51.2), 4.2 (1.7-10.0), 6.6 (2.5-17.7), 13.1 (7.2-23.2), 94.8 (80.0-112.9), 57.9 (35.7-94.0), 15.1 (10.3-22.3), 65.4 (33.8-126.7), 6.5 (3.6-11.8), 19.7 (8.8-44.0), 65.4 (52.4-82.9), 56.5 (21.1-151.7), 2.9 (1.1-7.7), and 17.5 (12.6-24.4), respectively. The JADER database contained 430,587 reports. The median times-to-onset of gingival hyperplasia for immunosuppressant, calcium channel blocker, and anticonvulsant use were 71, 262, and 37 days, respectively. Furthermore, the 95% CI of the WSP β for anticonvulsants exceeded and excluded 1, indicating a wear-out failure type. Our results suggest that DIGH monitoring of patients administered immunosuppressants, calcium channel blockers, or anticonvulsants is important. We demonstrated the potential risk of DIGH following the long
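The Weibull shape parameter β used to classify hazard type (β < 1 early failure, β ≈ 1 random failure, β > 1 wear-out failure) can be estimated from time-to-onset data by maximum likelihood. A pure-Python sketch follows, solving the profile score equation by bisection; `weibull_shape` is an illustrative helper, not the study's code, and it omits the confidence interval the authors compute.

```python
import math

def weibull_shape(times, lo=1e-3, hi=50.0, tol=1e-9):
    """MLE of the Weibull shape parameter beta from time-to-onset data.
    The scale parameter is profiled out, leaving a one-dimensional score
    equation in beta that is solved by bisection."""
    n = len(times)
    mean_log = sum(math.log(t) for t in times) / n
    def score(b):
        s = sum(t ** b for t in times)
        sl = sum((t ** b) * math.log(t) for t in times)
        return sl / s - 1.0 / b - mean_log  # increasing in b
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

With an estimate β̂ and its interval in hand, the classification rule in the abstract applies directly: an interval lying entirely above 1 indicates a wear-out failure type.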

  2. Tidal current turbine based on hydraulic transmission system

    Institute of Scientific and Technical Information of China (English)

    Hong-wei LIU; Wei LI; Yong-gang LIN; Shun MA

    2011-01-01

    Tidal current turbines (TCTs) are newly developed electricity-generating devices. Aiming at stabilizing the power output of TCTs, this paper introduces hydraulic transmission technologies into TCTs. The hydrodynamics of the turbine was analyzed first and its power output characteristics were predicted. A hydraulic power transmission system and a hydraulic pitch-control system were designed, and related simulations were conducted. Finally, a TCT prototype was manufactured and tested in the workshop. The test results confirm the correctness of the current design and the feasibility of installing the hydraulic system in TCTs.

  3. A new database sub-system for grain-size analysis

    Science.gov (United States)

    Suckow, Axel

    2013-04-01

    Detailed grain-size analyses of large depth profiles for palaeoclimate studies create large amounts of data. For instance, Novothny et al. (2011) presented a depth profile of grain-size analyses with 2 cm resolution and a total depth of more than 15 m, where each sample was measured with 5 repetitions on a Beckman Coulter LS13320 with 116 channels. This adds up to a total of more than four million numbers. Such amounts of data are not easily post-processed by spreadsheets or standard software, and MS Access databases would face serious performance problems. The poster describes a database sub-system dedicated to grain-size analyses. It extends the LabData database and laboratory management system published by Suckow and Dumke (2001). Compatibility with this very flexible database system makes it easy to import the grain-size data and provides the overall infrastructure for storing geographic context and organizing content, e.g. grouping several samples into one set or project. It also allows easy export and direct plot generation of final data in MS Excel. The sub-system allows automated import of raw data from the Beckman Coulter LS13320 Laser Diffraction Particle Size Analyzer. During post-processing, MS Excel is used as a data display, but no number crunching is implemented in Excel. Raw grain-size spectra can be exported and checked as number, surface, and volume fractions, while single spectra can be locked for further post-processing. From the spectra, the usual statistical values (i.e. mean, median) can be computed, as well as fractions larger than a grain size, smaller than a grain size, fractions between any two grain sizes, or any ratio of such values. These derived values can be easily exported into Excel for one or more depth profiles. However, such reprocessing of large amounts of data also allows new display possibilities: normally depth profiles of grain-size data are displayed only with summarized parameters like the clay
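The derived statistics mentioned above can be computed directly from a channel spectrum. A hedged sketch follows (channel sizes, fraction units, and function names are illustrative, not the sub-system's API); the median is interpolated log-linearly on the cumulative curve, as is conventional for grain sizes.

```python
import math

def cumulative_finer(vol_frac):
    """Cumulative volume fraction finer than each channel's upper bound,
    normalized so the last entry is 1.0."""
    total = sum(vol_frac)
    cum, running = [], 0.0
    for f in vol_frac:
        running += f
        cum.append(running / total)
    return cum

def d50(sizes_um, vol_frac):
    """Median grain size (d50) by log-linear interpolation of the
    cumulative finer-than curve over the channel upper bounds."""
    cum = cumulative_finer(vol_frac)
    for i, c in enumerate(cum):
        if c >= 0.5:
            if i == 0:
                return sizes_um[0]
            c0, s0, s1 = cum[i - 1], sizes_um[i - 1], sizes_um[i]
            w = (0.5 - c0) / (c - c0)
            return math.exp(math.log(s0) + w * (math.log(s1) - math.log(s0)))
    return sizes_um[-1]
```

Fractions between any two grain sizes follow the same pattern: interpolate the cumulative curve at both bounds and take the difference.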

  4. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with the creation of a database design for a standard kindergarten, the installation of the designed database into the database system Oracle Database 10g Express Edition, and a demonstration of administration tasks in this database system. The database was verified by means of a purpose-developed access application.

  5. Modeling and strain gauging of eddy current repulsion deicing systems

    Science.gov (United States)

    Smith, Samuel O.

    1993-01-01

    Work described in this paper confirms and extends work done by Zumwalt et al. on a variety of in-flight deicing systems that use eddy current repulsion to repel ice. Two such systems are known as electro-impulse deicing (EIDI) and the eddy current repulsion deicing strip (EDS). Mathematical models for these systems are discussed in terms of their capabilities and limitations. The author duplicates a particular model of the EDS. Theoretical voltage, current, and force results are compared directly with experimental results. Dynamic strain measurement results are presented for the EDS system. Dynamic strain measurements near EDS or EIDI coils are complicated by the high magnetic fields in the vicinity of the coils, which induce false voltage signals in the gauges.

  6. A novel processed food classification system applied to Australian food composition databases.

    Science.gov (United States)

    O'Halloran, S A; Lacy, K E; Grimes, C A; Woods, J; Campbell, K J; Nowson, C A

    2017-08-01

    The extent of food processing can affect the nutritional quality of foodstuffs. Categorising foods by the level of processing emphasises the differences in nutritional quality between foods within the same food group and is likely useful for determining dietary processed food consumption. The present study aimed to categorise foods within Australian food composition databases according to the level of food processing using a processed food classification system, as well as to assess the variation in the levels of processing within food groups. A processed food classification system was applied to food and beverage items contained within Australian Food and Nutrient (AUSNUT) 2007 (n = 3874) and AUSNUT 2011-13 (n = 5740). The proportions of Minimally Processed (MP), Processed Culinary Ingredients (PCI), Processed (P) and Ultra Processed (ULP) items by AUSNUT food group, and the overall proportions of the four processed food categories across AUSNUT 2007 and AUSNUT 2011-13, were calculated. Across the food composition databases, the overall proportions of foods classified as MP, PCI, P and ULP were 27%, 3%, 26% and 44% for AUSNUT 2007 and 38%, 2%, 24% and 36% for AUSNUT 2011-13. Although there was wide variation in the classifications of food processing within the food groups, approximately one-third of foodstuffs were classified as ULP food items across both the 2007 and 2011-13 AUSNUT databases. This Australian processed food classification system will allow researchers to easily quantify the contribution of processed foods within the Australian food supply to assist in assessing the nutritional quality of the dietary intake of population groups. © 2017 The British Dietetic Association Ltd.
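Once each item carries one of the four processing-level codes, the per-database proportions reported above reduce to a tally. A small sketch (the actual MP/PCI/P/ULP assignments come from the study's classification system; the function name and input shape here are assumptions):

```python
from collections import Counter

LEVELS = ("MP", "PCI", "P", "ULP")  # minimally processed, culinary ingredient,
                                    # processed, ultra-processed

def processing_proportions(classified_items):
    """classified_items: iterable of processing-level codes, one per food item.
    Returns the percentage share of each level, rounded to whole percent."""
    counts = Counter(classified_items)
    n = sum(counts.values())
    return {lvl: round(100 * counts.get(lvl, 0) / n) for lvl in LEVELS}
```

Grouping the same tally by AUSNUT food group instead of over the whole database gives the within-group variation the study also reports.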

  7. A Novel Lightweight Main Memory Database for Telecom Network Performance Management System

    Directory of Open Access Journals (Sweden)

    Lina Lan

    2012-04-01

    Today's telecom networks are growing complex. Although the amount of network performance data has increased dramatically, telecom network operators require better performance in network performance data collection and analysis. The database is an important component in the modern network management model. Since a main memory database (MMDB) stores data in main physical memory and provides very high-speed access, an MMDB can satisfy the requirements for data-intensive, real-time response in a network performance management system. This paper presents a novel lightweight MMDB design for network performance data persistence, which improves data access performance in the following respects. The data persistence mechanism employs the user-mode memory map provided by the UNIX OS. To reduce the cost of data copying and data interpretation, the storage format is kept consistent with the binary format in application memory. The database is provided as a program library, and applications access data in shared memory, avoiding inter-process communication costs. Once data are updated in memory, query applications see the updated data without disk I/O. The data access methods adopt a multi-level RB-tree structure: in the best case the algorithmic complexity is O(N), in the worst case it is O(N·lg N), and for real performance-data distributions it is nearly O(N). The approach has been tested in the laboratory using benchmark data. The results show that the application fully meets the requirements of the product index, and CPU and memory consumption are also lower than the network management system requirements.

  8. Subsurface interpretation based on geophysical data set using geothermal database system `GEOBASE`; Chinetsu database system `GEOBASE` wo riyoshita Kakkonda chinetsu chiiki no chika kozo kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Osato, K.; Sato, T.; Miura, Y.; Yamane, K. [Geothermal Energy Research and Development Co. Ltd., Tokyo (Japan); Doi, N. [Japan Metals and Chemicals Co. Ltd., Tokyo (Japan); Uchida, T. [New Energy and Industrial Technology Development Organization, Tokyo, (Japan)

    1996-05-01

    This paper reports the application of a geothermal database system (GEOBASE) to analyzing the subsurface structure of the Kakkonda geothermal area. To analyze the specific-resistance structure of this area, depth information (well tracks and electric logs of existing wells), three-dimensional discretization data (two-dimensional MT-method analysis cross sections and the distribution of micro-earthquake epicenters), and two-dimensional discretization data (altitude, and depth to the top of the Kakkonda granite) were registered in GEOBASE. GEOBASE is capable of three-dimensional interpolation and three-dimensional display of the three-dimensional discretization data and the depth information table, respectively. The paper presents a compiled depth plan drawing for 2000 m below sea level and a compiled SE-NE cross-sectional drawing. It also shows that the three-dimensional interpolation function of GEOBASE enables spatial data to be compared freely and quickly, demonstrating its power in comprehensive analyses of this kind. 3 refs., 8 figs., 2 tabs.

  9. Development of database systems for safety of repositories for disposal of radioactive wastes

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yeong Hoon; Han, Jeong Sang; Shin, Hyeon Joon; Ham, Sang Won; Moon, Sang Kee [Yonsei Univ., Seoul (Korea, Republic of)

    1998-03-15

    In this study, contents and survey and supervision items in each part were selected to avoid overlap between different parts, referring to national laws, criteria, and guidance related to atomic energy. The items cover climatology, hydrology, geology, seismology, engineering geology, geochemistry, and civil and social aspects. Based on these items, a general study and systematic control framework related to the stability of disposal sites was established. As a specific region with properties similar to those of radioactive waste disposal sites, the Ulsan region, equipped with an LPG underground storage facility, was selected, and its data were surveyed and entered, thereby demonstrating the suitability of the established database system.

  10. Design of Nutrition Catering System for Athletes Based on Access Database

    Directory of Open Access Journals (Sweden)

    Hongjiang Wu

    2015-08-01

    In order to monitor and adjust athletes' dietary nutrition scientifically, ActiveX Data Objects (ADO) and Structured Query Language (SQL) were used to develop a program under Visual Basic 6.0 with an Access database. A consulting system on food nutrition and diet was developed by combining the two languages and organizing the latest nutrition information. Nutrition balancing for physiological characteristics, assessment of nutrition intake, inquiry into the nutrition of common foods, and recommendation of functional nourishing foods can be provided for athletes of different events and levels.
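The "inquiry into the nutrition of common foods" function amounts to parameterized SQL queries against the food table. A minimal sketch follows, with `sqlite3` standing in for the Access/ADO stack of the original system; the schema, column names, and sample rows are assumptions for illustration.

```python
import sqlite3

# Illustrative stand-in for the Access food-nutrition table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE food (name TEXT, kcal REAL, protein_g REAL)")
conn.executemany(
    "INSERT INTO food VALUES (?, ?, ?)",
    [("rice", 130, 2.7), ("chicken breast", 165, 31.0), ("egg", 155, 13.0)],
)

def high_protein_foods(min_protein_g):
    """Return foods at or above a protein threshold, highest first,
    e.g. when recommending intake for strength-event athletes."""
    rows = conn.execute(
        "SELECT name, protein_g FROM food "
        "WHERE protein_g >= ? ORDER BY protein_g DESC",
        (min_protein_g,),
    )
    return rows.fetchall()
```

Using parameterized queries (the `?` placeholders) rather than string concatenation is the same discipline ADO encourages, and it avoids SQL injection.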

  11. Building a highly available and intrusion tolerant database security and protection system (DSPS)

    Institute of Scientific and Technical Information of China (English)

    蔡亮; 杨小虎; 董金祥

    2003-01-01

    The Database Security and Protection System (DSPS) is a security platform for defending the DBMS against malicious attacks. Security and performance are both critical to DSPS. The authors suggest a key management scheme that combines the server-group structure, to improve availability, with the key distribution structure needed for proactive security. This paper details the implementation of proactive security in DSPS. After a thorough performance analysis, the authors conclude that the performance difference between the replicated mechanism and the proactive mechanism becomes smaller and smaller as the number of concurrent connections increases, and that proactive security is very useful and practical for large, critical applications.

  12. Building a highly available and intrusion tolerant database security and protection system ( DSPS)

    Institute of Scientific and Technical Information of China (English)

    蔡亮; 杨小虎; 董金祥

    2003-01-01

    The Database Security and Protection System (DSPS) is a security platform for defending the DBMS against malicious attacks. Security and performance are both critical to DSPS. The authors suggest a key management scheme that combines the server-group structure, to improve availability, with the key distribution structure needed for proactive security. This paper details the implementation of proactive security in DSPS. After a thorough performance analysis, the authors conclude that the performance difference between the replicated mechanism and the proactive mechanism becomes smaller and smaller as the number of concurrent connections increases, and that proactive security is very useful and practical for large, critical applications.

  13. Design and Implementation of CNEOST Image Database Based on NoSQL System

    Science.gov (United States)

    Wang, Xin

    2014-04-01

    The China Near Earth Object Survey Telescope is the largest Schmidt telescope in China, and it has acquired more than 3 TB of astronomical image data since first light in 2006. After the CCD camera upgrade in 2013, over 10 TB of data will be obtained every year. The management of these massive images is not only an indispensable part of the data processing pipeline but also the basis of data sharing. Based on an analysis of the requirements, an image management system was designed and implemented using a non-relational database.

  14. Anesthesia information management systems marketplace and current vendors.

    Science.gov (United States)

    Stonemetz, Jerry

    2011-09-01

    This article briefly reviews the history of anesthesia information management systems (AIMS) and discusses the vendors that currently market AIMS. The current market penetration, based on information provided by these vendors, is presented, and the rationale for purchasing an AIMS is discussed, along with the considerations to be evaluated when selecting a vendor. Copyright © 2011 Elsevier Inc. All rights reserved.

  15. ARRAY PULSED EDDY CURRENT IMAGING SYSTEM USED TO DETECT CORROSION

    Institute of Scientific and Technical Information of China (English)

    Yang Binfeng; Luo Feilu; Cao Xiongheng; Xu Xiaojie

    2005-01-01

    A theoretical model is established to describe the voltage-current response function. The peak amplitude and the zero-crossing time of the transient signal are extracted as imaging features, and array pulsed eddy current (PEC) imaging is proposed for corrosion detection. Test results show that the system offers fast scanning speed, multiple imaging modes, and quantitative detection, giving it broad applicability in aviation nondestructive testing.
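Extracting the two imaging features named above from a sampled transient is straightforward: the peak is the sample of largest magnitude, and the zero-crossing time after it can be refined by linear interpolation between the two samples that straddle zero. A hedged sketch (the function name and sampling convention are ours):

```python
def pec_features(t, v):
    """Extract (peak_amplitude, first_zero_crossing_time_after_peak) from a
    sampled transient PEC signal; the crossing is linearly interpolated."""
    ipk = max(range(len(v)), key=lambda i: abs(v[i]))
    peak = v[ipk]
    for i in range(ipk, len(v) - 1):
        if v[i] == 0:
            return peak, t[i]
        if v[i] * v[i + 1] < 0:
            # linear interpolation between the straddling samples
            w = v[i] / (v[i] - v[i + 1])
            return peak, t[i] + w * (t[i + 1] - t[i])
    return peak, None  # signal never crosses zero after the peak
```

Scanning the array of coils and mapping each element's two features over position yields the two imaging modes.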

  16. National Carbon Sequestration Database and Geographic Information System (NatCarb)

    Energy Technology Data Exchange (ETDEWEB)

    Kenneth Nelson; Timothy Carr

    2009-03-31

    This annual and final report describes the results of the multi-year project entitled 'NATional CARBon Sequestration Database and Geographic Information System (NatCarb)' (http://www.natcarb.org). The original project assembled a consortium of five states (Indiana, Illinois, Kansas, Kentucky and Ohio) in the midcontinent of the United States (MIDCARB) to construct an online distributed Relational Database Management System (RDBMS) and Geographic Information System (GIS) covering aspects of carbon dioxide (CO{sub 2}) geologic sequestration. The NatCarb system built on the technology developed in the initial MIDCARB effort. The NatCarb project linked the GIS information of the Regional Carbon Sequestration Partnerships (RCSPs) into a coordinated regional database system consisting of datasets useful to industry, regulators and the public. The project includes access to national databases and GIS layers maintained by the NatCarb group (e.g., brine geochemistry) and publicly accessible servers (e.g., USGS, and Geography Network) into a single system where data are maintained and enhanced at the local level, but are accessed and assembled through a single Web portal to facilitate query, assembly, analysis and display. This project improves the flow of data across servers and increases the amount and quality of available digital data. The purpose of NatCarb is to provide a national view of the carbon capture and storage potential in the U.S. and Canada. The digital spatial database allows users to estimate the amount of CO{sub 2} emitted by sources (such as power plants, refineries and other fossil-fuel-consuming industries) in relation to geologic formations that can provide safe, secure storage sites over long periods of time. The NatCarb project worked to provide all stakeholders with improved online tools for the display and analysis of CO{sub 2} carbon capture and storage data through a single website portal (http://www.natcarb.org/). While the external

  17. National Carbon Sequestration Database and Geographic Information System (NatCarb)

    Energy Technology Data Exchange (ETDEWEB)

    Kenneth Nelson; Timothy Carr

    2009-03-31

    This annual and final report describes the results of the multi-year project entitled 'NATional CARBon Sequestration Database and Geographic Information System (NatCarb)' (http://www.natcarb.org). The original project assembled a consortium of five states (Indiana, Illinois, Kansas, Kentucky and Ohio) in the midcontinent of the United States (MIDCARB) to construct an online distributed Relational Database Management System (RDBMS) and Geographic Information System (GIS) covering aspects of carbon dioxide (CO{sub 2}) geologic sequestration. The NatCarb system built on the technology developed in the initial MIDCARB effort. The NatCarb project linked the GIS information of the Regional Carbon Sequestration Partnerships (RCSPs) into a coordinated regional database system consisting of datasets useful to industry, regulators and the public. The project includes access to national databases and GIS layers maintained by the NatCarb group (e.g., brine geochemistry) and publicly accessible servers (e.g., USGS, and Geography Network) into a single system where data are maintained and enhanced at the local level, but are accessed and assembled through a single Web portal to facilitate query, assembly, analysis and display. This project improves the flow of data across servers and increases the amount and quality of available digital data. The purpose of NatCarb is to provide a national view of the carbon capture and storage potential in the U.S. and Canada. The digital spatial database allows users to estimate the amount of CO{sub 2} emitted by sources (such as power plants, refineries and other fossil-fuel-consuming industries) in relation to geologic formations that can provide safe, secure storage sites over long periods of time. The NatCarb project worked to provide all stakeholders with improved online tools for the display and analysis of CO{sub 2} carbon capture and storage data through a single website portal (http://www.natcarb.org/). While the external

  18. Nonequilibrium Microscopic Distribution of Thermal Current in Particle Systems

    KAUST Repository

    Yukawa, Satoshi

    2009-02-15

    A nonequilibrium distribution function of the microscopic thermal current is studied by direct numerical simulation in a thermally conducting steady state of particle systems. Two characteristic temperatures of the thermal current are investigated on the basis of this distribution. It is confirmed that the temperature depends on the current direction: the temperature parallel to the heat flux is higher than the antiparallel one, and the difference between the two is proportional to the macroscopic temperature gradient. ©2009 The Physical Society of Japan.

  19. Output Current Ripple Reduction Algorithms for Home Energy Storage Systems

    Directory of Open Access Journals (Sweden)

    Jin-Hyuk Park

    2013-10-01

    This paper proposes an output current ripple reduction algorithm using a proportional-integral (PI) controller for an energy storage system (ESS). In single-phase systems, pulsation of the DC-link voltage gives the DC/AC inverter a second-order harmonic at twice the grid frequency, so the output current of the DC/DC converter carries a ripple component; this second-order harmonic adversely affects battery lifetime. The proposed algorithm has the advantage of reducing the second-order harmonic of the output current in a variable-frequency system, and it is verified by PSIM simulation and experiments on a 3 kW ESS model.
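The effect of PI control on a second-order-harmonic disturbance can be seen in a toy discrete-time loop: a unit-gain plant outputs the command plus a 100 Hz ripple (twice a 50 Hz grid frequency), and the PI controller builds the counter-signal. This is a generic sketch, not the paper's algorithm or gains; all names and numbers are illustrative.

```python
import math

def simulate(kp, ki, dt=1e-4, steps=2000, ref=10.0, amp=1.0, f2=100.0):
    """Discrete PI loop against an additive 100 Hz ripple on a unit-gain
    plant. Returns the worst deviation from the reference over the second
    half of the run, i.e. after the start-up transient has died out."""
    integ, y = 0.0, 0.0
    worst = 0.0
    for k in range(steps):
        d = amp * math.sin(2 * math.pi * f2 * k * dt)  # DC-link-induced ripple
        e = ref - y               # error against previous output sample
        integ += ki * e * dt      # integral action removes steady-state error
        u = kp * e + integ
        y = u + d                 # plant output with additive ripple
        if k >= steps // 2:
            worst = max(worst, abs(y - ref))
    return worst
```

A plain PI loop only attenuates the harmonic by its loop gain at 100 Hz, which is why practical designs (including variable-frequency ones like the paper's) tune or augment the controller specifically at the second-order harmonic.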

  20. Towards a Database System for Large-scale Analytics on Strings

    KAUST Repository

    Sahli, Majed A.

    2015-07-23

    Recent technological advances are causing an explosion in the production of sequential data. Biological sequences, web logs and time series are represented as strings. Currently, strings are stored, managed and queried in an ad-hoc fashion because they lack a standardized data model and query language. String queries are computationally demanding, especially when strings are long and numerous. Existing approaches cannot handle the growing number of strings produced by environmental, healthcare, bioinformatic, and space applications. There is a trade-off between performing analytics efficiently and scaling to thousands of cores to finish in reasonable times. In this thesis, we introduce a data model that unifies the input and output representations of core string operations. We define a declarative query language for strings where operators can be pipelined to form complex queries. A rich set of core string operators is described to support string analytics. We then demonstrate a database system for string analytics based on our model and query language. In particular, we propose the use of a novel data structure augmented by efficient parallel computation to strike a balance between preprocessing overheads and query execution times. Next, we delve into repeated motifs extraction as a core string operation for large-scale string analytics. Motifs are frequent patterns used, for example, to identify biological functionality, periodic trends, or malicious activities. Statistical approaches are fast but inexact while combinatorial methods are sound but slow. We introduce ACME, a combinatorial repeated motifs extractor. We study the spatial and temporal locality of motif extraction and devise a cache-aware search space traversal technique. ACME is the only method that scales to gigabyte-long strings, handles large alphabets, and supports interesting motif types with minimal overhead. While ACME is cache-efficient, it is limited by being serial.
We devise a lightweight
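Repeated-motif extraction, the core string operation discussed above, can be illustrated with a toy exact extractor: enumerate all fixed-length substrings and keep those occurring at least a minimum number of times. This naive sliding-window pass is a stand-in for combinatorial extractors like ACME, which instead traverse the search space cache-consciously and handle maximal motifs over gigabyte-long strings.

```python
from collections import defaultdict

def repeated_motifs(s, length, min_occ):
    """All substrings of `s` with the given length that occur at least
    `min_occ` times, mapped to their start positions (exact, O(n * length))."""
    positions = defaultdict(list)
    for i in range(len(s) - length + 1):
        positions[s[i:i + length]].append(i)
    return {m: p for m, p in positions.items() if len(p) >= min_occ}
```

Even this toy version shows why the operation is demanding: candidate substrings grow with both string length and alphabet size, which is exactly the search space the combinatorial traversal must prune.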