WorldWideScience

Sample records for network database final

  1. Wisconsin Inventors' Network Database final report

    Energy Technology Data Exchange (ETDEWEB)

    1991-12-04

    The Wisconsin Innovation Service Center at UW-Whitewater received a DOE grant to create an Inventors' Network Database to assist independent inventors and entrepreneurs with new product development. Since 1980, the Wisconsin Innovation Service Center (WISC) at the University of Wisconsin-Whitewater has assisted independent and small business inventors in estimating the marketability of their new product ideas and inventions. The purpose of the WISC as an economic development entity is to encourage inventors who appear to have commercially viable inventions, based on preliminary market research, to invest in the next stages of development, perhaps investigating prototype development, legal protection, or more in-depth market research. To address inventors' information needs, WISC developed an electronic database with search capabilities by geographic region and by product category/industry. It targets both public and private resources capable of, and interested in, working with individual and small business inventors. At present, the project includes resources in Wisconsin only.

  3. Freshwater Biological Traits Database (Final Report)

    Science.gov (United States)

    EPA announced the release of the final report, Freshwater Biological Traits Database. This report discusses the development of a database of freshwater biological traits. The database combines several existing traits databases into an online format. The database is also...

  4. Rett networked database

    DEFF Research Database (Denmark)

    Grillo, Elisa; Villard, Laurent; Clarke, Angus

    2012-01-01

    underlie some (usually variant) cases. There is only limited correlation between genotype and phenotype. The Rett Networked Database (http://www.rettdatabasenetwork.org/) has been established to share clinical and genetic information. Through an "adaptor" process of data harmonization, a set of 293...... clinical items and 16 genetic items was generated; 62 clinical and 7 genetic items constitute the core dataset; 23 clinical items contain longitudinal information. The database contains information on 1838 patients from 11 countries (December 2011), with or without mutations in known genes. These numbers...

  5. Network-based Database Course

    DEFF Research Database (Denmark)

    Nielsen, J.N.; Knudsen, Morten; Nielsen, Jens Frederik Dalsgaard

    A course in database design and implementation has been designed, utilizing existing network facilities. The course is an elementary course for students of computer engineering. Its purpose is to give the students a theoretical database knowledge as well as practical experience with design...... and implementation. A tutorial relational database and the students' self-designed databases are implemented on the UNIX system of Aalborg University, thus giving the teacher the possibility of live demonstrations in the lecture room, and the students the possibility of interactive learning in their working rooms...

  7. The NASA Fireball Network Database

    Science.gov (United States)

    Moser, Danielle E.

    2011-01-01

    The NASA Meteoroid Environment Office (MEO) has been operating an automated video fireball network since late-2008. Since that time, over 1,700 multi-station fireballs have been observed. A database containing orbital data and trajectory information on all these events has recently been compiled and is currently being mined for information. Preliminary results are presented here.

  8. Distributed Structure-Searchable Toxicity Database Network

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Distributed Structure-Searchable Toxicity (DSSTox) Database Network provides a public forum for search and publishing downloadable, structure-searchable,...

  9. Knowledge Discovery from Communication Network Alarm Databases

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The technique of Knowledge Discovery in Databases (KDD) is introduced to learn valuable knowledge hidden in network alarm databases. To extract such knowledge, we propose an efficient method based on sliding windows (named Slidwin) to discover different episode rules from time-sequential alarm data. The experimental results show that, given different threshold parameters, a large number of different rules can be discovered quickly.
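The core sliding-window idea can be illustrated with a minimal sketch. This is not the authors' Slidwin algorithm, whose details the abstract does not give; it only shows the windowed co-occurrence counting that episode mining builds on: slide a time window over a sorted alarm sequence, count how often pairs of alarm types fall in the same window, and keep the pairs that meet a support threshold.

```python
from collections import Counter

def episode_frequencies(alarms, window, min_support):
    """Count alarm-type pairs co-occurring within a sliding time window.

    `alarms` is a list of (timestamp, alarm_type) tuples sorted by time;
    pairs whose co-occurrence count meets `min_support` are returned.
    """
    counts = Counter()
    start = 0
    for end, (t_end, new_type) in enumerate(alarms):
        # Advance the left edge until the window spans at most `window` units.
        while alarms[start][0] < t_end - window:
            start += 1
        # Pair the newest alarm with each distinct earlier type in the window.
        for other in {a for _, a in alarms[start:end]}:
            if other != new_type:
                counts[tuple(sorted((other, new_type)))] += 1
    return {pair: c for pair, c in counts.items() if c >= min_support}

# Hypothetical alarm stream (timestamps in seconds, SDH-style alarm names).
alarms = [(0, "LOS"), (2, "AIS"), (3, "LOS"), (10, "BER"), (11, "AIS")]
print(episode_frequencies(alarms, window=5, min_support=2))  # → {('AIS', 'LOS'): 2}
```

Full episode mining (e.g. WINEPI-style methods) additionally handles ordered episodes and rule confidence; this sketch captures only the counting step at its core.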

  10. Electronic Reference Library: Silverplatter's Database Networking Solution.

    Science.gov (United States)

    Millea, Megan

    Silverplatter's Electronic Reference Library (ERL) provides wide area network access to its databases using TCP/IP communications and client-server architecture. ERL has two main components: The ERL clients (retrieval interface) and the ERL server (search engines). ERL clients provide patrons with seamless access to multiple databases on multiple…

  11. The final COS-B database now publicly available

    Science.gov (United States)

    Mayer-Hasselwander, H. A.; Bennett, K.; Bignami, G. F.; Bloemen, J. B. G. M.; Buccheri, R.; Caraveo, P. A.; Hermsen, W.; Kanbach, G.; Lebrun, F.; Paul, J. A.

    1985-01-01

    The data obtained by the gamma ray satellite COS-B was processed, condensed and integrated together with the relevant mission and experiment parameters into the Final COS-B Database. The database contents and the access programs available with the database are outlined. The final sky coverage and a presentation of the large scale distribution of the observed Milky Way emission are given. The database is announced to be available through the European Space Agency.

  12. FINAL DFIRM DATABASE, SHARP COUNTY, ARKANSAS, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  13. FINAL DFIRM DATABASE, LIMESTONE COUNTY, TEXAS, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  14. FINAL DFIRM DATABASE, UNION PARISH, LOUISIANA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  15. FINAL Database, LAKE COUNTY, FLORIDA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  16. FINAL DFIRM DATABASE, BRYAN COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  17. Virtualized Network Control. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Ghani, Nasir [Univ. of New Mexico, Albuquerque, NM (United States)

    2013-02-01

    This document is the final report for the Virtualized Network Control (VNC) project, which was funded by the United States Department of Energy (DOE) Office of Science. This project was also informally referred to as Advanced Resource Computation for Hybrid Service and TOpology NEtworks (ARCHSTONE). This report provides a summary of the project's activities, tasks, deliverables, and accomplishments. It also summarizes the documents, software, and presentations generated as part of the project's activities; the Appendix contains an archive of these deliverables, documents, and presentations.

  18. Final Results of Shuttle MMOD Impact Database

    Science.gov (United States)

    Hyde, J. L.; Christiansen, E. L.; Lear, D. M.

    2015-01-01

    The Shuttle Hypervelocity Impact Database documents damage features on each Orbiter thought to be from micrometeoroids (MM) or orbital debris (OD). Data is divided into tables for crew module windows, payload bay door radiators and thermal protection systems along with other miscellaneous regions. The combined number of records in the database is nearly 3000. Each database record provides impact feature dimensions, location on the vehicle and relevant mission information. Additional detail on the type and size of particle that produced the damage site is provided when sampling data and definitive spectroscopic analysis results are available. Guidelines are described which were used in determining whether impact damage is from micrometeoroid or orbital debris impact based on the findings from scanning electron microscopy chemical analysis. Relationships assumed when converting from observed feature sizes in different shuttle materials to particle sizes will be presented. A small number of significant impacts on the windows, radiators and wing leading edge will be highlighted and discussed in detail, including the hypervelocity impact testing performed to estimate particle sizes that produced the damage.

  19. Network and Database Security: Regulatory Compliance, Network, and Database Security - A Unified Process and Goal

    Directory of Open Access Journals (Sweden)

    Errol A. Blake

    2007-12-01

    Full Text Available Database security has evolved; data security professionals have developed numerous techniques and approaches to assure data confidentiality, integrity, and availability. This paper will show that traditional database security, which has focused primarily on creating user accounts and managing user privileges to database objects, is not enough to protect data confidentiality, integrity, and availability. This paper, a compilation of different journals, articles, and classroom discussions, will focus on unifying the process of securing data or information whether it is in use, in storage, or being transmitted. Promoting a change in database curriculum development trends may also play a role in helping secure databases. This paper will take the approach that making a conscientious effort to unify the database security process, which includes the database management system (DBMS) selection process, following regulatory compliance, analyzing and learning from the mistakes of others, implementing networking security technologies, and securing the database, may prevent database breaches.

  20. The Danish Collaborative Bacteraemia Network (DACOBAN) database.

    Science.gov (United States)

    Gradel, Kim Oren; Schønheyder, Henrik Carl; Arpi, Magnus; Knudsen, Jenny Dahl; Ostergaard, Christian; Søgaard, Mette

    2014-01-01

    The Danish Collaborative Bacteraemia Network (DACOBAN) research database includes microbiological data obtained from positive blood cultures from a geographically and demographically well-defined population serviced by three clinical microbiology departments (1.7 million residents, 32% of the Danish population). The database also includes data on comorbidity from the Danish National Patient Registry, vital status from the Danish Civil Registration System, and clinical data on 31% of nonselected records in the database. Use of the unique civil registration number given to all Danish residents enables linkage to additional registries for specific research projects. The DACOBAN database is continuously updated, and it currently comprises 39,292 patients with 49,951 bacteremic episodes from 2000 through 2011. The database is part of an international network of population-based bacteremia registries from five developed countries on three continents. The main purpose of the DACOBAN database is to study surveillance, risk, and prognosis. Sex- and age-specific data on background populations enables the computation of incidence rates. In addition, the high number of patients facilitates studies of rare microorganisms. Thus far, studies on Staphylococcus aureus, enterococci, computer algorithms for the classification of bacteremic episodes, and prognosis and risk in relation to socioeconomic factors have been published.
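Given a well-defined background population such as the 1.7 million residents above, incidence rates follow directly from case counts and person-time. A minimal sketch (the case count below is hypothetical, purely for illustration):

```python
def incidence_rate(cases, population, person_years=1.0, per=100_000):
    """Crude incidence rate per `per` person-years of observation."""
    return cases / (population * person_years) * per

# Hypothetical figure for illustration: 4,000 bacteremic episodes in one
# year among the 1.7 million residents covered by the database.
rate = incidence_rate(cases=4_000, population=1_700_000)
print(f"{rate:.1f} episodes per 100,000 person-years")  # → 235.3 episodes per 100,000 person-years
```

The sex- and age-specific population data mentioned in the abstract would simply be applied stratum by stratum with the same formula.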

  1. The Danish Collaborative Bacteraemia Network (DACOBAN) database

    DEFF Research Database (Denmark)

    Gradel, Kim Oren; Schønheyder, Henrik Carl; Arpi, Magnus

    2014-01-01

    registries from five developed countries on three continents. The main purpose of the DACOBAN database is to study surveillance, risk, and prognosis. Sex- and age-specific data on background populations enables the computation of incidence rates. In addition, the high number of patients facilitates studies......The Danish Collaborative Bacteraemia Network (DACOBAN) research database includes microbiological data obtained from positive blood cultures from a geographically and demographically well-defined population serviced by three clinical microbiology departments (1.7 million residents, 32......% of the Danish population). The database also includes data on comorbidity from the Danish National Patient Registry, vital status from the Danish Civil Registration System, and clinical data on 31% of nonselected records in the database. Use of the unique civil registration number given to all Danish residents...

  2. Final Report: Efficient Databases for MPC Microdata

    Energy Technology Data Exchange (ETDEWEB)

    Michael A. Bender; Martin Farach-Colton; Bradley C. Kuszmaul

    2011-08-31

    The purpose of this grant was to develop the theory and practice of high-performance databases for massive streamed datasets. Over the last three years, we have developed fast indexing technology, that is, technology for rapidly ingesting data and storing that data so that it can be efficiently queried and analyzed. During this project we developed the technology so that high-bandwidth data streams can be indexed and queried efficiently. Our technology has been proven to work on data sets composed of tens of billions of rows when the data stream arrives at over 40,000 rows per second. We achieved these numbers even on a single disk driven by two cores. Our work comprised (1) new write-optimized data structures with better asymptotic complexity than traditional structures, (2) implementation, and (3) benchmarking. We furthermore developed a prototype of TokuFS, a middleware layer that can handle microdata I/O packaged up in an MPI-IO abstraction.
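The gap between write-optimized and traditional index structures can be sketched in miniature: buffer incoming rows and merge them into sorted storage in batches, instead of updating a search tree in place on every insert. This toy stands in for the idea only; the project's actual data structures (and TokuFS) are far more sophisticated.

```python
import bisect

class BufferedIndex:
    """Toy write-optimized index: inserts land in an unsorted buffer and
    are batch-merged into a sorted run, amortizing the cost of keeping
    data ordered across many inserts."""

    def __init__(self, buffer_limit=1024):
        self.buffer = []          # recent, unsorted writes
        self.run = []             # sorted bulk of the data
        self.buffer_limit = buffer_limit

    def insert(self, key):
        self.buffer.append(key)   # O(1) per insert
        if len(self.buffer) >= self.buffer_limit:
            self.flush()

    def flush(self):
        # One batch merge pays for `buffer_limit` cheap inserts.
        self.run = sorted(self.run + self.buffer)
        self.buffer = []

    def contains(self, key):
        # Queries must consult both the buffer and the sorted run.
        if key in self.buffer:
            return True
        i = bisect.bisect_left(self.run, key)
        return i < len(self.run) and self.run[i] == key

idx = BufferedIndex(buffer_limit=4)
for key in [5, 1, 9, 3, 7]:
    idx.insert(key)
print(idx.contains(7), idx.contains(2))  # → True False
```

The same buffering trade-off (cheap writes, slightly costlier reads) underlies log-structured merge trees and the fractal-tree indexes this line of work is associated with.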

  3. BNDB – The Biochemical Network Database

    Directory of Open Access Journals (Sweden)

    Kaufmann Michael

    2007-10-01

    Full Text Available Abstract Background Technological advances in high-throughput techniques and efficient data acquisition methods have resulted in a massive amount of life science data. The data is stored in numerous databases that have been established over the last decades and are essential resources for scientists nowadays. However, the diversity of the databases and the underlying data models make it difficult to combine this information for solving complex problems in systems biology. Currently, researchers typically have to browse several, often highly focused, databases to obtain the required information. Hence, there is a pressing need for more efficient systems for integrating, analyzing, and interpreting these data. The standardization and virtual consolidation of the databases is a major challenge resulting in a unified access to a variety of data sources. Description We present the Biochemical Network Database (BNDB), a powerful relational database platform, allowing a complete semantic integration of an extensive collection of external databases. BNDB is built upon a comprehensive and extensible object model called BioCore, which is powerful enough to model most known biochemical processes and at the same time easily extensible to be adapted to new biological concepts. Besides a web interface for the search and curation of the data, a Java-based viewer (BiNA) provides a powerful platform-independent visualization and navigation of the data. BiNA uses sophisticated graph layout algorithms for an interactive visualization and navigation of BNDB. Conclusion BNDB allows a simple, unified access to a variety of external data sources. Its tight integration with the biochemical network library BN++ offers the possibility for import, integration, analysis, and visualization of the data. BNDB is freely accessible at http://www.bndb.org.

  4. The Danish Collaborative Bacteraemia Network (DACOBAN) database

    Directory of Open Access Journals (Sweden)

    Gradel KO

    2014-09-01

    Full Text Available Kim Oren Gradel,1,2 Henrik Carl Schønheyder,3,4 Magnus Arpi,5 Jenny Dahl Knudsen,6 Christian Østergaard,6 Mette Søgaard7 for the Danish Collaborative Bacteraemia Network (DACOBAN). 1Center for Clinical Epidemiology, Odense University Hospital, 2Research Unit of Clinical Epidemiology, Institute of Clinical Research, University of Southern Denmark, Odense, Denmark; 3Department of Clinical Microbiology, Aalborg University Hospital, 4Department of Clinical Medicine, Aalborg University, Aalborg, 5Department of Clinical Microbiology, Herlev Hospital, Copenhagen University Hospital, Herlev, 6Department of Clinical Microbiology, Hvidovre Hospital, Copenhagen University Hospital, Hvidovre, 7Department of Clinical Epidemiology, Institute of Clinical Medicine, Aarhus University Hospital, Aarhus University, Aarhus, Denmark. Abstract: The Danish Collaborative Bacteraemia Network (DACOBAN) research database includes microbiological data obtained from positive blood cultures from a geographically and demographically well-defined population serviced by three clinical microbiology departments (1.7 million residents, 32% of the Danish population). The database also includes data on comorbidity from the Danish National Patient Registry, vital status from the Danish Civil Registration System, and clinical data on 31% of nonselected records in the database. Use of the unique civil registration number given to all Danish residents enables linkage to additional registries for specific research projects. The DACOBAN database is continuously updated, and it currently comprises 39,292 patients with 49,951 bacteremic episodes from 2000 through 2011. The database is part of an international network of population-based bacteremia registries from five developed countries on three continents. The main purpose of the DACOBAN database is to study surveillance, risk, and prognosis. Sex- and age-specific data on background populations enables the computation of incidence rates. In

  5. Database Submission—The Evolving Social Network of Marketing Scholars

    OpenAIRE

    Jacob Goldenberg; Barak Libai; Eitan Muller; Stefan Stremersch

    2010-01-01

    The interest in social networks among marketing scholars and practitioners has sharply increased in the last decade. One social network of which network scholars increasingly recognize the unique value is the academic collaboration (coauthor) network. We offer a comprehensive database of the collaboration network among marketing scholars over the last 40 years (available at http://mktsci.pubs.informs.org). Based on the ProQuest database, it documents the social collaboration among researchers ...

  6. Multi databases in Health Care Networks

    CERN Document Server

    Salih, Nadir K; Sun, Mingrui

    2011-01-01

    E-Health is a relatively recent term for healthcare practice supported by electronic processes and communication, dating back to at least 1999. E-Health is greatly impacting information distribution and availability within the health services, hospitals, and to the public. E-health was introduced as the death of telemedicine because, in the context of a broad availability of medical information systems that can interconnect and communicate, telemedicine will no longer exist as a specific field. The same could also be said for any other traditional field in medical informatics, including information systems and electronic patient records. E-health presents itself as a common name for all such technological fields. In this paper we focus on multidatabases, determining some sites and distributing them in a homogeneous way. This is followed by an illustrative example drawn from related work. Finally, the paper concludes with general remarks and a statement of further work.

  7. A database on electric vehicle use in Sweden. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Fridstrand, Niklas [Lund Univ. (Sweden). Dept. of Industrial Electrical Engineering and Automation

    2000-05-01

    The Department of Industrial Electrical Engineering and Automation (IEA) at the Lund Institute of Technology (LTH), has taken responsibility for developing and maintaining a database on electric and hybrid road vehicles in Sweden. The Swedish Transport and Communications Research Board, (KFB) initiated the development of this database. Information is collected from three major cities in Sweden: Malmoe, Gothenburg and Stockholm, as well as smaller cities such as Skellefteaa and Haernoesand in northern Sweden. This final report summarises the experience gained during the development and maintenance of the database from February 1996 to December 1999. Our aim was to construct a well-functioning database for the evaluation of electric and hybrid road vehicles in Sweden. The database contains detailed information on several years' use of electric vehicles (EVs) in Sweden (for example, 220 million driving records). Two data acquisition systems were used, one less and one more complex with respect to the number of quantities logged. Unfortunately, data collection was not complete, due to malfunctioning of the more complex system, and due to human factors for the less complex system.

  8. Good relationships are pivotal in nuclear databases. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Heger, A.S. [New Mexico Univ., Albuquerque, NM (United States)

    1995-10-01

    This report expounds the importance of the effective use of information in the nuclear industry. The article's tenet is that valuable information is stored in our nuclear experience databases and must be capitalized on for the enhanced operation of our plants, training, and rule making. Due to the large volume of data, the development of an automated information retrieval system is certainly preferable to other means. To this end, after an introduction, a method of adaptive information retrieval based on neural network methodology is introduced. A few examples are provided, and future plans are also discussed.

  9. Network Analysis Modeling Towards GIS Based on Object-Relation Database

    Institute of Scientific and Technical Information of China (English)

    YUE Peng; WANG Yandong; GONG Jianya; HUANG Xianfeng

    2004-01-01

    This paper compares the differences between the mathematical model in graph theory and the GIS network analysis model, and identifies the problems that the GIS network analysis model needs to solve. It then introduces the spatial data management methods of an object-relational database for GIS and discusses their effects on the network analysis model. Finally, it puts forward a GIS network analysis model based on the object-relational database. The structure of the model is introduced in detail, and research is done on the internal and external memory data structures of the model. The results show that it performs well in practice.

  10. The Network Configuration of an Object Relational Database Management System

    Science.gov (United States)

    Diaz, Philip; Harris, W. C.

    2000-01-01

    The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.

  11. Data Network Weather Service Reporting - Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Michael Frey

    2012-08-30

    This is the final report on a three-year effort to develop a new forecasting paradigm for computer network performance. The effort was made in coordination with Fermilab's construction of its e-Weather Center.

  12. Network-based statistical comparison of citation topology of bibliographic databases

    CERN Document Server

    Šubelj, Lovro; Bajec, Marko

    2015-01-01

    Modern bibliographic databases provide the basis for scientific research and its evaluation. While their content and structure differ substantially, there exist only informal notions on their reliability. Here we compare the topological consistency of citation networks extracted from six popular bibliographic databases including Web of Science, CiteSeer and arXiv.org. The networks are assessed through a rich set of local and global graph statistics. We first reveal statistically significant inconsistencies between some of the databases with respect to individual statistics. For example, the introduced field bow-tie decomposition of DBLP Computer Science Bibliography substantially differs from the rest due to the coverage of the database, while the citation information within arXiv.org is the most exhaustive. Finally, we compare the databases over multiple graph statistics using the critical difference diagram. The citation topology of DBLP Computer Science Bibliography is the least consistent with the rest, w...
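The kind of per-network statistics such a comparison rests on can be sketched as follows. This is a minimal illustration, not the paper's actual statistic set; it computes node/edge counts, mean out-degree, and reciprocity for a directed citation graph given as an edge list.

```python
def graph_stats(edges):
    """A few simple statistics of a directed graph given as (citing, cited)
    edge pairs: node/edge counts, mean out-degree, and reciprocity."""
    edge_set = set(edges)
    nodes = {u for u, _ in edge_set} | {v for _, v in edge_set}
    # Fraction of edges whose reverse edge also exists.
    reciprocal = sum(1 for u, v in edge_set if (v, u) in edge_set)
    n, m = len(nodes), len(edge_set)
    return {
        "nodes": n,
        "edges": m,
        "mean_out_degree": m / n if n else 0.0,
        "reciprocity": reciprocal / m if m else 0.0,
    }

# Two tiny graphs standing in for citation networks extracted from
# different bibliographic databases; citations should be nearly acyclic,
# so high reciprocity hints at inconsistent data.
print(graph_stats([(1, 2), (2, 3), (1, 3)])["reciprocity"])  # → 0.0
print(graph_stats([(1, 2), (2, 1), (2, 3)])["reciprocity"])
```

Comparing databases then reduces to computing such statistics on each extracted network and testing whether the differences are significant, as the abstract describes with the critical difference diagram.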

  13. Acquisition of CD-ROM Databases for Local Area Networks.

    Science.gov (United States)

    Davis, Trisha L.

    1993-01-01

    Discusses the acquisition of CD-ROM products for local area networks based on experiences at the Ohio State University libraries. Topics addressed include the historical development of CD-ROM acquisitions; database selection, including pricing and subscription options; the ordering process; and network licensing issues. (six references) (LRW)

  14. An Application to WIN/ISIS Database on Local Network

    Directory of Open Access Journals (Sweden)

    Robert Lechien

    2005-07-01

    Full Text Available A Translated Article containing an application to how WIN/ISIS database work on local network. It starts with main definitions, and how to install WIN/ISIS on PC, and how to install it on the local network server.

  15. The MOBI-DIK Approach to Searching in Mobile Ad Hoc Network Databases

    Science.gov (United States)

    Luo, Yan; Wolfson, Ouri; Xu, Bo

    In this chapter, we introduce the mobile ad hoc network (MANET) database by discussing its definition, historical background, and scientific fundamentals. Existing related projects are presented and classified into two main categories, pedestrian and vehicular, based on their target users. Two main paradigms for answering queries in MANET databases (i.e., report pulling and report pushing) are discussed in detail. Then we present the MOBI-DIK approach to searching in MANET databases and compare it with alternatives. Finally, the key applications and future research directions are addressed.
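The pulling/pushing contrast can be sketched in a few lines. This toy is not MOBI-DIK itself; the node names, report keys, and single-hop topology are invented for illustration. With pulling, a query travels to the nodes holding reports; with pushing, reports are disseminated ahead of time so queries are answered from a local cache.

```python
class Node:
    """Toy MANET node holding locally generated reports plus a cache
    of reports pushed to it by neighbours."""

    def __init__(self, name):
        self.name = name
        self.reports = {}   # locally generated reports
        self.cache = {}     # reports received via pushing

    def publish(self, key, value):
        self.reports[key] = value

    def push_to(self, neighbour):
        # Report pushing: proactively replicate reports to a neighbour.
        neighbour.cache.update(self.reports)

    def pull(self, key, neighbours):
        # Report pulling: send the query to neighbours on demand.
        for n in neighbours:
            if key in n.reports:
                return n.reports[key]
        return None

    def query_local(self, key):
        # With pushing, queries are answered from the local cache.
        return self.reports.get(key) or self.cache.get(key)

a, b = Node("a"), Node("b")
b.publish("parking@5th", "2 spots free")
print(a.pull("parking@5th", [b]))    # pulling: query travels to b
b.push_to(a)
print(a.query_local("parking@5th"))  # pushing: answered locally
```

The trade-off mirrors the chapter's discussion: pulling spends bandwidth per query, pushing spends it continuously on dissemination.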

  16. Optimal access to large databases via networks

    Energy Technology Data Exchange (ETDEWEB)

    Munro, J.K.; Fellows, R.L.; Phifer, D.; Carrick, M.R.; Tarlton, N.

    1997-10-01

    A CRADA with Stephens Engineering was undertaken in order to transfer knowledge and experience about access to information in large text databases, with results of queries and searches provided using the multimedia capabilities of the World Wide Web. Data access is optimized by the use of intelligent agents. Technology Logic Diagram documents published for the DOE facilities in Oak Ridge (K-25, X-10, Y-12) were chosen for this effort because of the large number of technologies identified, described, evaluated, and ranked for possible use in the environmental remediation of these facilities. Fast, convenient access to this information is difficult because of the volume and complexity of the data. WAIS software used to provide full-text, field-based search capability can also be used, through the development of an appropriate hierarchy of menus, to provide tabular summaries of technologies satisfying a wide range of criteria. The menu hierarchy can also be used to regenerate dynamically many of the tables that appeared in the original hardcopy publications, all from a single text database of the technology descriptions. Use of the Web environment permits linking many of the Technology Logic Diagram references to on-line versions of these publications, particularly the DOE Orders and related directives providing the legal requirements that were the basis for undertaking the Technology Logic Diagram studies in the first place.

  17. The European Narcolepsy Network (EU-NN) database

    DEFF Research Database (Denmark)

    Khatami, Ramin; Luca, Gianina; Baumann, Christian R

    2016-01-01

    a few European countries have registered narcolepsy cases in databases of the International Classification of Diseases or in registries of the European health authorities. A promising approach to identify disease-specific adverse health effects and needs in healthcare delivery in the field of rare...... diseases is to establish a distributed expert network. A first and important step is to create a database that allows collection, storage and dissemination of data on narcolepsy in a comprehensive and systematic way. Here, the first prospective web-based European narcolepsy database hosted by the European...... Narcolepsy Network is introduced. The database structure, standardization of data acquisition and quality control procedures are described, and an overview provided of the first 1079 patients from 18 European specialized centres. Due to its standardization this continuously increasing data pool is most...

  18. Representing Non-Relational Databases with Darwinian Networks

    Directory of Open Access Journals (Sweden)

    Paulo Roberto Martins de Andrade

    2017-05-01

    Full Text Available Darwinian networks (DNs) were first introduced by Butz [1] to simplify and clarify work with Bayesian networks (BNs). DNs unify modeling and reasoning tasks on a single platform, using a graphical manipulation of probability tables that takes on a biological feel. Building on this view of DNs, we propose a graphical library to represent and depict non-relational databases using DNs. Because this kind of database is growing rapidly in use, more tools are needed to support management tasks, and DNs can help with them.

  19. FINAL DIGITAL FLOOD INSURANCE RATE MAP DATABASE, CHEROKEE COUNTY, SC

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  20. FINAL DFIRM DATABASE, PALO PINTO COUNTY, TEXAS, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  1. FINAL DIGITAL FLOOD INSURANCE RATE MAP DATABASE, INYO COUNTY, CA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  2. FINAL DFIRM DATABASE, RIO ARRIBA COUNTY, NEW MEXICO, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  3. FINAL DIGITAL FLOOD INSURANCE RATE MAP DATABASE, GREENWOOD COUNTY, SC

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  4. SoyFN: a knowledge database of soybean functional networks.

    Science.gov (United States)

    Xu, Yungang; Guo, Maozu; Liu, Xiaoyan; Wang, Chunyu; Liu, Yang

    2014-01-01

    Many databases for soybean genomic analysis have been built and made publicly available, but few of them contain knowledge specifically targeting the omics-level gene-gene, gene-microRNA (miRNA) and miRNA-miRNA interactions. Here, we present SoyFN, a knowledge database of soybean functional gene networks and miRNA functional networks. SoyFN provides user-friendly interfaces to retrieve, visualize, analyze and download the functional networks of soybean genes and miRNAs. In addition, it incorporates much information about KEGG pathways, gene ontology annotations and 3'-UTR sequences as well as many useful tools including SoySearch, ID mapping, Genome Browser, eFP Browser and promoter motif scan. SoyFN is a schema-free database that can be accessed as a Web service from any modern programming language using a simple Hypertext Transfer Protocol call. The Web site is implemented in Java, JavaScript, PHP, HTML and Apache, with all major browsers supported. We anticipate that this database will be useful for members of research communities both in soybean experimental science and bioinformatics. Database URL: http://nclab.hit.edu.cn/SoyFN.

  5. Network Management Temporal Database System Based on XTACACS Protocol

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper first analyzes the basic concepts of Cisco router user authentication and the XTACACS authentication protocol, and describes the data architecture of the user data protocol. It then defines a kind of doubly temporal database and log-in transaction processing based on network management properties, and finally introduces the implementation technology and methods.

  6. An Image Database on a Parallel Processing Network.

    Science.gov (United States)

    Philip, G.; And Others

    1991-01-01

    Describes the design and development of an image database for photographs in the Ulster Museum (Northern Ireland) that used parallelism from a transputer network. Topics addressed include image processing techniques; documentation needed for the photographs, including indexing, classifying, and cataloging; problems; hardware and software aspects;…

  7. THEREDA. Thermodynamic reference database. Summary of final report

    Energy Technology Data Exchange (ETDEWEB)

    Altmaier, Marcus; Bube, Christiane; Marquardt, Christian [Karlsruher Institut fuer Technologie (KIT), Eggenstein-Leopoldshafen (Germany). Institut fuer Nukleare Entsorgung; Brendler, Vinzenz; Richter, Anke [Helmholtz-Zentrum Dresden-Rossendorf (Germany). Inst. fuer Radiochemie; Moog, Helge C.; Scharge, Tina [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Koeln (Germany); Voigt, Wolfgang [TU Bergakademie Freiberg (Germany). Inst. fuer Anorganische Chemie; Wilhelm, Stefan [AF-Colenco AG, Baden (Switzerland)

    2011-03-15

    A long-term safety assessment of a repository for radioactive waste requires evidence that all relevant processes which might have a significant positive or negative impact on its safety are known and understood. In 2002, a working group of five institutions was established to create a common thermodynamic database for nuclear waste disposal in deep geological formations. The common database was named THEREDA: Thermodynamic Reference Database. The following institutions are members of the working group: Helmholtz-Zentrum Dresden-Rossendorf, Institute of Radiochemistry; Karlsruhe Institute of Technology, Institute for Nuclear Waste Disposal; Technische Universitaet Bergakademie Freiberg, Institute of Inorganic Chemistry; AF-Colenco AG, Baden, Switzerland, Department of Groundwater Protection and Waste Disposal; and Gesellschaft fuer Anlagen- und Reaktorsicherheit, Braunschweig. It is intended that in the future its use will become mandatory for geochemical model calculations for nuclear waste disposal in Germany. Furthermore, it was agreed that the new database should be established in accordance with the following guidelines. Long-term usability: the disposal of radioactive waste is a task encompassing decades, and the database is projected to operate on a long-term basis; this has influenced the choice of software (which is open source), the documentation and the data structure. THEREDA is adapted to present-day necessities and computational codes but also leaves many degrees of freedom for varying demands in the future. Easy access: the database is freely accessible via the World Wide Web. Applicability: to promote the usage of the database in a wide community, THEREDA provides ready-to-use parameter files for the most common codes, at present PHREEQC, EQ3/6, Geochemist's Workbench, and CHEMAPP. Internal consistency: a distinction is made between dependent and independent data. To ensure the required internal consistency of THEREDA, the

  8. Towards future electricity networks - Final report

    Energy Technology Data Exchange (ETDEWEB)

    Papaemmanouil, A.

    2008-07-01

    This comprehensive final report for the Swiss Federal Office of Energy (SFOE) reviews work done on the development of new power transmission planning tools for restructured power networks. These are needed to face the challenges arising from economic, environmental and social issues. The integration of transmission, generation and energy policy planning in support of a common strategy for sustainable electricity networks is discussed. In the first phase of the project, the main focus was placed on the definition of criteria and inputs that are most likely to affect sustainable transmission expansion plans. Models, concepts and methods developed to study the impact of internalising external costs in power production are examined. To consider external costs in the planning process, a concurrent software tool capable of studying possible development scenarios has been implemented. The report examines a concept developed to identify congested transmission lines or corridors and evaluates the dependencies between the various market participants. The paper includes three appendices: a paper from the 28th USAEE North American Conference, an abstract from Powertech 2009, and an SFOE report from July 2008.

  9. Automated tools for cross-referencing large databases. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Clapp, N E; Green, P L; Bell, D [and others]

    1997-05-01

    A Cooperative Research and Development Agreement (CRADA) was funded with TRESP Associates, Inc., to develop a limited prototype software package operating on one platform (e.g., a personal computer, small workstation, or other selected device) to demonstrate the concepts of using an automated database application to improve the process of detecting fraud and abuse of the welfare system. An analysis was performed on Tennessee's welfare administration system. This analysis was undertaken to determine if the incidence of welfare waste, fraud, and abuse could be reduced and if the administrative process could be improved to reduce benefits overpayment errors. The analysis revealed a general inability to obtain timely data to support the verification of a welfare recipient's economic status and eligibility for benefits. It was concluded that the provision of more modern computer-based tools and the establishment of electronic links to other state and federal data sources could increase staff efficiency, reduce the incidence of out-of-date information provided to welfare assistance staff, and make much of the new data required available in real time. Electronic data links have been proposed to allow near-real-time access to data residing in databases located in other states and at federal agency data repositories. Providing these improvements to local office staff would require additional computers, software, and electronic data links within each of the offices, and the establishment of approved methods for accessing remote databases and transferring potentially sensitive data. In addition, investigations will be required to ascertain whether existing laws would allow such data transfers, and if not, what changed or new laws would be required. The benefits, in both cost and efficiency, to the state of Tennessee of electronically enhanced welfare system administration and control are expected to result in a rapid return on investment.

  10. The European Narcolepsy Network (EU-NN) database.

    Science.gov (United States)

    Khatami, Ramin; Luca, Gianina; Baumann, Christian R; Bassetti, Claudio L; Bruni, Oliviero; Canellas, Francesca; Dauvilliers, Yves; Del Rio-Villegas, Rafael; Feketeova, Eva; Ferri, Raffaele; Geisler, Peter; Högl, Birgit; Jennum, Poul; Kornum, Birgitte R; Lecendreux, Michel; Martins-da-Silva, Antonio; Mathis, Johannes; Mayer, Geert; Paiva, Teresa; Partinen, Markku; Peraita-Adrados, Rosa; Plazzi, Guiseppe; Santamaria, Joan; Sonka, Karel; Riha, Renata; Tafti, Mehdi; Wierzbicka, Aleksandra; Young, Peter; Lammers, Gert Jan; Overeem, Sebastiaan

    2016-06-01

    Narcolepsy with cataplexy is a rare disease with an estimated prevalence of 0.02% in European populations. Narcolepsy shares many features of rare disorders, in particular the lack of awareness of the disease with serious consequences for healthcare supply. Similar to other rare diseases, only a few European countries have registered narcolepsy cases in databases of the International Classification of Diseases or in registries of the European health authorities. A promising approach to identify disease-specific adverse health effects and needs in healthcare delivery in the field of rare diseases is to establish a distributed expert network. A first and important step is to create a database that allows collection, storage and dissemination of data on narcolepsy in a comprehensive and systematic way. Here, the first prospective web-based European narcolepsy database hosted by the European Narcolepsy Network is introduced. The database structure, standardization of data acquisition and quality control procedures are described, and an overview provided of the first 1079 patients from 18 European specialized centres. Due to its standardization this continuously increasing data pool is most promising to provide a better insight into many unsolved aspects of narcolepsy and related disorders, including clear phenotype characterization of subtypes of narcolepsy, more precise epidemiological data and knowledge on the natural history of narcolepsy, expectations about treatment effects, identification of post-marketing medication side-effects, and will contribute to improve clinical trial designs and provide facilities to further develop phase III trials.

  11. Global Terrestrial Network for Glaciers: Databases and Web interfaces

    Science.gov (United States)

    Raup, B.; Armstrong, R.; Fetterer, F.; Gartner-Roer, I.; Haeberli, W.; Hoelzle, M.; Khalsa, S. J. S.; Nussbaumer, S.; Weaver, R.; Zemp, M.

    2012-04-01

    The Global Terrestrial Network for Glaciers (GTN-G) is an umbrella organization, with links to the Global Climate Observing System (GCOS), the Global Terrestrial Observing System (GTOS), and UNESCO (all United Nations organizations), for the curation of several glacier-related databases. It is composed of the World Glacier Monitoring Service (WGMS), the U.S. National Snow and Ice Data Center (NSIDC), and the Global Land Ice Measurements from Space (GLIMS) initiative. The glacier databases include the World Glacier Inventory (WGI), the GLIMS Glacier Database, the Glacier Photograph Collection at NSIDC, and the Fluctuations of Glaciers (FoG) and Mass Balance databases at WGMS. We are working toward increased interoperability between these related databases. For example, the Web interface to the GLIMS Glacier Database has included queryable layers for the WGI and FoG databases since 2008. To improve this further, we have produced a new GTN-G web portal (http://www.gtn-g.org/), which includes a glacier metadata browsing application. This web application allows browsing of the metadata behind the main GTN-G databases, as well as querying the metadata in order to reach the source data, no matter which database holds the data in question. A new glacier inventory, the Randolph Glacier Inventory 1.0, has recently been compiled. This compilation, which, unlike the GLIMS data, includes glacier outlines without attributes, IDs, or links to other data, was motivated by the tight deadline schedule of the sea-level chapter of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). Now served from the GLIMS website (http://glims.org/), it is designed to serve that narrowly focused research goal in the near term, and in the longer term it will be incorporated into the multi-temporal glacier database of GLIMS. For the required merging of large sets of glacier outlines and association of proper IDs that tie together outlines

  12. [FY 2014 Final report]: Native Prairie Adaptive Management Database Archive and Competing Model Linkage

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — The final report for the Native Prairie Adaptive Management Database Archive and Competing Model Linkage project covers activities during FY2014. The overall goal of...

  13. Handling of network and database instabilities in CORAL

    CERN Document Server

    Trentadue, R; Kalkhof, A

    2012-01-01

    The CORAL software is widely used by the LHC experiments for storing and accessing data using relational database technologies. CORAL provides a C++ abstraction layer that supports data persistency for several back-ends and deployment models, direct client access to Oracle servers being one of the most important use cases. Since 2010, several problems have been reported by the LHC experiments in their use of Oracle through CORAL, involving application errors, hangs or crashes after the network or the database servers became temporarily unavailable. CORAL already provided some level of handling of these instabilities, which are due to external causes and cannot be avoided, but this proved to be insufficient in some cases and to be itself the cause of other problems, such as the hangs and crashes mentioned before, in other cases. As a consequence, a major redesign of the CORAL plugins has been implemented, with the aim of making the software more robust against these database and network glitches. The new imple...

  14. Network-based statistical comparison of citation topology of bibliographic databases

    Science.gov (United States)

    Šubelj, Lovro; Fiala, Dalibor; Bajec, Marko

    2014-09-01

    Modern bibliographic databases provide the basis for scientific research and its evaluation. While their content and structure differ substantially, there exist only informal notions on their reliability. Here we compare the topological consistency of citation networks extracted from six popular bibliographic databases including Web of Science, CiteSeer and arXiv.org. The networks are assessed through a rich set of local and global graph statistics. We first reveal statistically significant inconsistencies between some of the databases with respect to individual statistics. For example, the introduced field bow-tie decomposition of DBLP Computer Science Bibliography substantially differs from the rest due to the coverage of the database, while the citation information within arXiv.org is the most exhaustive. Finally, we compare the databases over multiple graph statistics using the critical difference diagram. The citation topology of DBLP Computer Science Bibliography is the least consistent with the rest, while, not surprisingly, Web of Science is significantly more reliable from the perspective of consistency. This work can serve either as a reference for scholars in bibliometrics and scientometrics or a scientific evaluation guideline for governments and research agencies.
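The kind of local and global graph statistics used in this comparison can be computed directly from an edge list; the toy citation network below is invented and not drawn from any of the six databases.

```python
# Toy sketch of local and global statistics on a directed citation
# network, analogous to those used to compare bibliographic databases.
# The edge list is invented; edges run from citing paper to cited paper.
from collections import Counter

edges = [("p1", "p2"), ("p1", "p3"), ("p2", "p3"), ("p4", "p3"), ("p4", "p2")]
nodes = {n for e in edges for n in e}

in_degree = Counter(cited for _, cited in edges)      # local: citations received
out_degree = Counter(citing for citing, _ in edges)   # local: references made

n = len(nodes)
density = len(edges) / (n * (n - 1))                  # global: fraction of possible edges

print(in_degree["p3"], out_degree["p1"])
print(round(density, 2))
```

Comparing distributions of such statistics across databases (rather than single values) is what reveals the consistency differences described in the abstract.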

  15. Network-based statistical comparison of citation topology of bibliographic databases

    Science.gov (United States)

    Šubelj, Lovro; Fiala, Dalibor; Bajec, Marko

    2014-01-01

    Modern bibliographic databases provide the basis for scientific research and its evaluation. While their content and structure differ substantially, there exist only informal notions on their reliability. Here we compare the topological consistency of citation networks extracted from six popular bibliographic databases including Web of Science, CiteSeer and arXiv.org. The networks are assessed through a rich set of local and global graph statistics. We first reveal statistically significant inconsistencies between some of the databases with respect to individual statistics. For example, the introduced field bow-tie decomposition of DBLP Computer Science Bibliography substantially differs from the rest due to the coverage of the database, while the citation information within arXiv.org is the most exhaustive. Finally, we compare the databases over multiple graph statistics using the critical difference diagram. The citation topology of DBLP Computer Science Bibliography is the least consistent with the rest, while, not surprisingly, Web of Science is significantly more reliable from the perspective of consistency. This work can serve either as a reference for scholars in bibliometrics and scientometrics or a scientific evaluation guideline for governments and research agencies. PMID:25263231

  16. Data mining the EXFOR database using network theory

    CERN Document Server

    Hirdt, John A

    2013-01-01

    The EXFOR database contains the largest collection of experimental nuclear reaction data available as well as the data's bibliographic information and experimental details. We created an undirected graph from the EXFOR datasets with graph nodes representing single observables and graph links representing the various types of connections between these observables. This graph is an abstract representation of the connections in EXFOR, similar to graphs of social networks, authorship networks, etc. By analyzing this abstract graph, we are able to address very specific questions such as 1) what observables are being used as reference measurements by the experimental nuclear science community? 2) are these observables given the attention needed by various nuclear data evaluation projects? 3) are there classes of observables that are not connected to these reference measurements? In addressing these questions, we propose several (mostly cross section) observables that should be evaluated and made into reaction refer...

  17. Gigabit network technology. Final technical report

    Energy Technology Data Exchange (ETDEWEB)

    Davenport, C.M.C. [ed.]

    1996-10-01

    Current digital networks are evolving toward distributed multimedia, with a wide variety of applications whose individual data rates range from kb/sec to tens and hundreds of Mb/sec. Link speed requirements are pushing into the Gb/sec range and beyond the envelope of electronic networking capabilities. There is a vast amount of untapped bandwidth available in the low-attenuation communication bands of an optical fiber: the capacity of a single fiber thread is enough to carry more than two thousand times as much information as all the current radio and microwave frequencies. And while fiber optics has replaced copper wire as the transmission medium of choice, the communication capacity of conventional fiber optic networks is ultimately limited by electronic processing speeds.

  18. Provider Services Network Project. Draft Final Report.

    Science.gov (United States)

    Urban and Rural Systems Associates, San Francisco, CA.

    This draft report on the development and testing of a child care Provider Services Network (PSN) model in Santa Clara County, California, includes a handbook (Manual to Optimize a PSN) designed to provide the State Department of Education and regional or local child care coordinating agencies with information needed to develop PSN optimization…

  19. HEP Science Network Requirements--Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Bakken, Jon; Barczyk, Artur; Blatecky, Alan; Boehnlein, Amber; Carlson, Rich; Chekanov, Sergei; Cotter, Steve; Cottrell, Les; Crawford, Glen; Crawford, Matt; Dart, Eli; Dattoria, Vince; Ernst, Michael; Fisk, Ian; Gardner, Rob; Johnston, Bill; Kent, Steve; Lammel, Stephan; Loken, Stewart; Metzger, Joe; Mount, Richard; Ndousse-Fetter, Thomas; Newman, Harvey; Schopf, Jennifer; Sekine, Yukiko; Stone, Alan; Tierney, Brian; Tull, Craig; Zurawski, Jason

    2010-04-27

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In August 2009, ESnet and the Office of High Energy Physics (HEP) of the DOE Office of Science organized a workshop to characterize the networking requirements of the programs funded by HEP. The international HEP community has been a leader in data-intensive science from the beginning. HEP data sets have historically been the largest of all scientific data sets, and the community of interest the most distributed. The HEP community was also the first to embrace Grid technologies. The requirements identified at the workshop are summarized below, and described in more detail in the case studies and the Findings section: (1) There will be more LHC Tier-3 sites than originally thought, and likely more Tier-2 to Tier-2 traffic than was envisioned. It is not yet known what the impact of this will be on ESnet, but we will need to keep an eye on this traffic. (2) The LHC Tier-1 sites (BNL and FNAL) predict the need for 40-50 Gbps of data movement capacity in 2-5 years, and 100-200 Gbps in 5-10 years, for HEP program related traffic. Other key HEP sites include LHC Tier-2 and Tier-3 sites, many of which are located at universities. To support the LHC, ESnet must continue its collaborations with university and international networks. (3) While in all cases the deployed 'raw' network bandwidth must exceed the user requirements in order to meet the data transfer and reliability requirements, network engineering for trans

  20. Microsimulation Model Estimating Czech Farm Income from Farm Accountancy Data Network Database

    Directory of Open Access Journals (Sweden)

    Z. Hloušková

    2014-09-01

    Full Text Available Agricultural income is one of the most important measures of the economic status of agricultural farms and of the whole agricultural sector. This work focuses on finding the optimal method of estimating national agricultural income from the micro-economic database managed by the Farm Accountancy Data Network (FADN). Use of the FADN database is relevant because its results are representative for the whole country and it allows micro-level analysis. The main motivation for this study was a first forecast of national agricultural income from FADN data, undertaken 9 months before the final official FADN results were published. Our own income estimation method and simulation procedure were established and successfully tested on the whole database using data from two preceding years. The present paper also describes the method used for agricultural income prediction and the tests of its suitability.
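The scaling step from farm-level sample data to a national income figure can be sketched as a weighted aggregation, where each sampled farm carries a weight giving the number of farms in the population it represents; the incomes and weights below are invented for illustration.

```python
# Sketch of estimating a national aggregate from FADN-style sample data:
# each sampled farm carries a representativeness weight (how many farms
# in the population it stands for). All numbers are invented.
farms = [
    {"income": 40_000.0, "weight": 120.0},   # income per farm, weight = farms represented
    {"income": 75_000.0, "weight": 60.0},
    {"income": 15_000.0, "weight": 300.0},
]

# Weighted national total and weighted mean income per farm.
national_income = sum(f["income"] * f["weight"] for f in farms)
mean_income = national_income / sum(f["weight"] for f in farms)

print(national_income)
print(round(mean_income, 2))
```

The actual FADN methodology involves more elaborate simulation of the income components; this only illustrates the weighting principle.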

  1. Strong Ground Motion Database System for the Mexican Seismic Network

    Science.gov (United States)

    Perez-Yanez, C.; Ramirez-Guzman, L.; Ruiz, A. L.; Delgado, R.; Macías, M. A.; Sandoval, H.; Alcántara, L.; Quiroz, A.

    2014-12-01

    A web-based system for the dissemination and archival of Mexican strong ground motion records is presented. More than 50 years of continuous strong ground motion instrumentation and monitoring in Mexico have provided a fundamental resource, several thousand accelerograms, for better understanding earthquakes and their effects in the region. Led by the Institute of Engineering (IE) of the National Autonomous University of Mexico (UNAM), the engineering strong ground motion monitoring program at IE relies on a continuously growing network that at present includes more than 100 free-field stations and provides coverage of the seismic zones of the country. Approximately 25% of the stations send the observed acceleration to a processing center in Mexico City in real time; the rest require manual access, remote or in situ, for later processing and cataloguing. As part of a collaboration agreement between UNAM and the National Center for Disaster Prevention regarding the construction and operation of a unified seismic network, a web system was developed to allow access to UNAM's engineering strong motion archive and to host data from other institutions. The system allows data searches under a relational database schema, following a general structure relying on four databases containing: 1) the free-field stations, 2) the epicentral locations associated with the available strong motion records, 3) the strong motion catalogue, and 4) the acceleration files, the core of the system. To locate and easily access one or several records of the data bank, the web system offers a variety of query parameters (seismic event, region boundary, station name or ID, radial distance to source, or peak acceleration). This homogeneous platform has been designed to facilitate dissemination and processing of the information worldwide. Each file, in a standard format, contains information regarding the recording instrument, the station, the corresponding earthquake
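The relational structure described in this record (stations, events, catalogued records with query parameters such as peak acceleration) can be sketched with SQLite; the table and column names, and all sample values, are assumptions for illustration, not the actual UNAM schema.

```python
# Sketch of a strong-motion relational schema like the one described
# above, using SQLite. Table/column names and data are invented, not
# the actual UNAM system.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE stations (id TEXT PRIMARY KEY, name TEXT, lat REAL, lon REAL);
CREATE TABLE events   (id INTEGER PRIMARY KEY, origin_time TEXT, magnitude REAL);
CREATE TABLE records  (id INTEGER PRIMARY KEY, station_id TEXT, event_id INTEGER,
                       peak_accel REAL);
""")
db.execute("INSERT INTO stations VALUES ('ST01', 'Sample Station', 19.0, -99.0)")
db.execute("INSERT INTO events VALUES (1, '2000-01-01T00:00:00Z', 7.0)")
db.execute("INSERT INTO records VALUES (1, 'ST01', 1, 0.035)")

# A typical query: locate records above a peak-acceleration threshold,
# joining station and event metadata, as the web interface would.
row = db.execute("""
    SELECT s.name, e.magnitude, r.peak_accel
    FROM records r
    JOIN stations s ON r.station_id = s.id
    JOIN events e   ON r.event_id = e.id
    WHERE r.peak_accel > 0.01
""").fetchone()
print(row)
```

Additional query parameters (region boundary, radial distance to source) would add WHERE clauses on the station and event coordinates.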

  2. PL/SQL and Bind Variable: the two ways to increase the efficiency of Network Databases

    Directory of Open Access Journals (Sweden)

    Hitesh KUMAR SHARMA

    2011-12-01

    Full Text Available Modern data analysis applications are driven by network databases. They are pushing traditional database and data warehousing technologies beyond their limits due to massively increasing data volumes and demands for low latency. There are three major challenges in working with network databases: interoperability, due to heterogeneous data repositories; proactivity, due to the autonomy of data sources; and high efficiency, to meet application demand. This paper provides two ways to meet the third challenge. A network database administrator can achieve this goal through the use of PL/SQL blocks and bind variables. The paper explains the effect of PL/SQL blocks and bind variables on network database efficiency in meeting the demands of modern data analysis applications.
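The bind-variable idea, keeping the statement text constant and binding only the values so the database can reuse one parsed execution plan, is not Oracle-specific; the sketch below shows the same pattern with Python's sqlite3 placeholders, on an invented table.

```python
# Sketch of the bind-variable idea: the SQL text stays constant and only
# the bound values change, so the engine can reuse a single parsed plan
# instead of hard-parsing a new literal statement per query.
# Shown with sqlite3 "?" placeholders; the users table is invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.executemany("INSERT INTO users VALUES (?, ?)",
               [(1, "ana"), (2, "ben"), (3, "eva")])

# One statement, many executions with different bound values.
stmt = "SELECT name FROM users WHERE id = ?"
names = [db.execute(stmt, (uid,)).fetchone()[0] for uid in (1, 3)]
print(names)
```

In Oracle the same pattern appears as `:id` bind variables inside PL/SQL blocks; the efficiency gain the paper discusses comes from avoiding repeated hard parses and reducing client-server round trips.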

  3. Application of CAPEC Lipid Property Databases in the Synthesis and Design of Biorefinery Networks

    DEFF Research Database (Denmark)

    Bertran, Maria-Ona; Cunico, Larissa; Gani, Rafiqul

    processes are not. Lipids are present in biorefinery processes: they represent feedstock (vegetable oil, waste cooking oil, microalgal oil), intermediate products (fatty acids, glycerol) and final products in biorefineries, thus the prediction of their properties is of relevance for the synthesis and design......]. The wide variety and complex nature of components in biorefineries poses a challenge with respect to the synthesis and design of these types of processes. Whereas physical and thermodynamic property data or models for petroleum-based processes are widely available, most data and models for biobased...... of biorefinery networks. The objective of this work is to show the application of databases of physical and thermodynamic properties of lipid components to the synthesis and design of biorefinery networks....

  4. Exploring the Ligand-Protein Networks in Traditional Chinese Medicine: Current Databases, Methods, and Applications

    Directory of Open Access Journals (Sweden)

    Mingzhu Zhao

    2013-01-01

    Full Text Available Traditional Chinese medicine (TCM), which has thousands of years of clinical application in China and other Asian countries, is the pioneer of the “multicomponent-multitarget” approach and of network pharmacology. Although there is no doubt about its efficacy, it is difficult to elucidate a convincing underlying mechanism for TCM due to its complex composition and unclear pharmacology. The use of ligand-protein networks has been gaining significant value in drug discovery, while its application to TCM is still at an early stage. This paper first surveys TCM databases for virtual screening, which have been greatly expanded in size and data diversity in recent years. On that basis, different screening methods and strategies for identifying active ingredients and targets of TCM are outlined, based on the amount of network information available on both the ligand bioactivity and protein structure sides. Furthermore, successful in silico target identification attempts are discussed in detail, along with experiments exploring the ligand-protein networks of TCM. Finally, it is concluded that ligand-protein networks can prospectively be used not only to predict the protein targets of a small molecule, but also to explore the mode of action of TCM.

  5. Five years database of landslides and floods affecting Swiss transportation networks

    Science.gov (United States)

    Voumard, Jérémie; Derron, Marc-Henri; Jaboyedoff, Michel

    2017-04-01

    Switzerland is a country threatened by many natural hazards. Many events occur in the built environment, affecting infrastructure, buildings or transportation networks and occasionally producing expensive damage. For this reason, large landslides are generally well studied and monitored in Switzerland to reduce financial and human risks. However, we have noticed a lack of data on the small events that have impacted roads and railways in recent years. We have therefore collected, in a database, all reported natural hazard events that have affected the Swiss transportation networks since 2012. More than 800 road and railway closures have been recorded in the five years from 2012 to 2016. These events are classified into six classes: earth flow, debris flow, rockfall, flood, avalanche and others. Data come from Swiss online press articles sorted by Google Alerts. The search is based on more than thirty keywords in three languages (Italian, French, German). After verifying that an article indeed relates to an event that affected a road or railway track, it is studied in detail. We finally obtain information on about sixty attributes per event, covering the event date, type and localisation, meteorological conditions, as well as impacts and damage to the track and human casualties. From this database, many trends over the five years of data collection can be outlined: in particular, the spatial and temporal distributions of the events, as well as their consequences in terms of traffic (closure duration, deviation, etc.). Even if the database is imperfect (because of the way it was built and the short time period considered), it highlights the non-negligible impact of small natural hazard events on roads and railways in Switzerland at a national level. This database helps to better understand and quantify these events and to better integrate them into risk assessment.

  6. Databases

    Data.gov (United States)

    National Aeronautics and Space Administration — The databases of computational and experimental data from the first Aeroelastic Prediction Workshop are located here. The databases file names tell their contents by...

  7. The Animal Genetic Resource Information Network (AnimalGRIN) Database: A Database Design & Implementation Case

    Science.gov (United States)

    Irwin, Gretchen; Wessel, Lark; Blackman, Harvey

    2012-01-01

    This case describes a database redesign project for the United States Department of Agriculture's National Animal Germplasm Program (NAGP). The case provides a valuable context for teaching and practicing database analysis, design, and implementation skills, and can be used as the basis for a semester-long team project. The case demonstrates the…

  8. The research of network database security technology based on web service

    Science.gov (United States)

    Meng, Fanxing; Wen, Xiumei; Gao, Liting; Pang, Hui; Wang, Qinglin

    2013-03-01

    Database technology is one of the most widely applied computer technologies, and its security is becoming more and more important. This paper introduces database security and network database security levels, studies the security technology of the network database, analyzes in particular a sub-key encryption algorithm, and applies this algorithm successfully in a campus one-card system. The realization process of the encryption algorithm is discussed; the method is widely applicable as a reference in many fields, particularly in management information system security and e-commerce.
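The sub-key idea, deriving a separate key per protected field from a single master key, can be sketched as follows. The paper's actual algorithm is not given here, so this is a generic illustration assuming HMAC-based derivation and a toy XOR keystream (not production-grade cryptography); the field name and master key are invented:

```python
import hashlib
import hmac

def derive_subkey(master_key: bytes, field_id: str) -> bytes:
    # One sub-key per database field, derived from the master key.
    return hmac.new(master_key, field_id.encode(), hashlib.sha256).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy keystream: repeatedly hash the sub-key to cover the data length.
    stream, block = b"", key
    while len(stream) < len(data):
        block = hashlib.sha256(block).digest()
        stream += block
    return bytes(a ^ b for a, b in zip(data, stream))

master = b"campus-one-card-master-key"          # hypothetical master key
cipher = xor_stream(derive_subkey(master, "student.balance"), b"142.50")
plain = xor_stream(derive_subkey(master, "student.balance"), cipher)
assert plain == b"142.50"                       # round trip restores the field
```

The point of the scheme is that leaking one field's sub-key does not expose other fields, since each sub-key is derived independently from the master key.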

  9. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  10. Improving Network Scalability Using NoSql Database

    Directory of Open Access Journals (Sweden)

    Ankita Bhatewara, Kalyani Waghmare

    2012-12-01

    Full Text Available The traditional database is designed for structured data and complex queries. In the cloud environment the scale of data is very large, the data are unstructured, and requests for data are dynamic; these characteristics raise new challenges for data storage and administration. In this context, the NoSQL database comes into the picture. This paper discusses some non-structured databases. It also discusses the advantages and disadvantages of Cassandra and how Cassandra is used to improve the scalability of the network compared to an RDBMS.
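Cassandra's scalability rests largely on partitioning rows across nodes by key hash on a ring, so adding a node moves only a fraction of the data. A minimal consistent-hashing sketch, with invented node names and a simplified ring (not Cassandra's actual partitioner):

```python
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes=8):
        # Each physical node owns several virtual positions on the ring.
        self.ring = sorted(
            (self._hash(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(s: str) -> int:
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, row_key: str) -> str:
        # The first ring position clockwise from the key's hash owns the row.
        i = bisect.bisect(self.keys, self._hash(row_key)) % len(self.keys)
        return self.ring[i][1]

ring = HashRing(["node-a", "node-b", "node-c"])
placement = {k: ring.node_for(k) for k in ("user:1", "user:2", "user:3")}
```

Because placement depends only on the hash ring, a traditional RDBMS-style central index is unnecessary, which is one reason such stores scale out more easily.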

  11. Application and Network-Cognizant Proxies - Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Antonio Ortega; Daniel C. Lee

    2003-03-24

    OAK B264 Application and Network-Cognizant Proxies - Final Report. Current networks show increasing heterogeneity both in terms of their bandwidths/delays and the applications they are required to support. This is a trend that is likely to intensify in the future, as real-time services, such as video, become more widely available and networking access over wireless links becomes more widespread. For this reason they propose that application-specific proxies, intermediate network nodes that broker the interactions between server and client, will become an increasingly important network element. These proxies will allow adaptation to changes in network characteristics without requiring a direct intervention of either server or client. Moreover, it will be possible to locate these proxies strategically at those points where a mismatch occurs between subdomains (for example, a proxy could be placed so as to act as a bridge between a reliable network domain and an unreliable one). This design philosophy favors scalability in the sense that the basic network infrastructure can remain unchanged while new functionality can be added to proxies, as required by the applications. While proxies can perform numerous generic functions, such as caching or security, they concentrate here on media-specific, and in particular video-specific, tasks. The goal of this project was to demonstrate that application- and network-specific knowledge at a proxy can improve overall performance, especially under changing network conditions. They summarize below the work performed to address these issues. Particular effort was spent in studying caching techniques and video classification to enable DiffServ delivery. Other work included analysis of traffic characteristics, optimized media scheduling, coding techniques based on multiple description coding, and use of proxies to reduce computation costs. This work covered much of what was originally proposed, but with a necessarily reduced scope.

  12. A Database Query Processing Model in Peer-To-Peer Network ...

    African Journals Online (AJOL)

    This paper presents an extensive evaluation of a database query processing model in peer-to-peer networks using a top-k query processing technique, implemented in Java and MySQL.
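Top-k query processing over peers is commonly realized by having each peer return its locally ranked results and merging them at the querying node. A minimal sketch of such a merge, with hypothetical peer result lists (the paper's own model is not reproduced here):

```python
import heapq

def top_k(peer_results, k):
    # peer_results: per-peer lists of (score, item), each sorted descending.
    # heapq.merge combines them lazily; negating the score key yields
    # highest-score-first order across all peers.
    merged = heapq.merge(*peer_results, key=lambda t: -t[0])
    best, seen = [], set()
    for score, item in merged:
        if item not in seen:          # de-duplicate items held by several peers
            seen.add(item)
            best.append((score, item))
        if len(best) == k:
            break
    return best

peer_a = [(0.9, "doc1"), (0.4, "doc3")]   # hypothetical peer rankings
peer_b = [(0.8, "doc2"), (0.4, "doc3")]
print(top_k([peer_a, peer_b], 2))  # → [(0.9, 'doc1'), (0.8, 'doc2')]
```

Because the merge is lazy, the querying node can stop pulling results from peers as soon as k distinct items have been emitted.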

  13. Network-based statistical comparison of citation topology of bibliographic databases

    OpenAIRE

    Lovro Šubelj; Dalibor Fiala; Marko Bajec

    2014-01-01

    Modern bibliographic databases provide the basis for scientific research and its evaluation. While their content and structure differ substantially, there exist only informal notions on their reliability. Here we compare the topological consistency of citation networks extracted from six popular bibliographic databases including Web of Science, CiteSeer and arXiv.org. The networks are assessed through a rich set of local and global graph statistics. We first reveal statistically significant i...

  14. Developing an Automated Technique for Translating a Relational Database into an Equivalent Network One.

    Science.gov (United States)

    1984-12-01

    be superior in terms of the logical description of databases, whereas the network model is more efficient in space and time, and provides a more... as more familiar low-level operators are available. The high-level operators are those of the relational algebra and equivalent languages. The number... Principles of Database Systems (Second Edition). Maryland: Computer Science Press, 1982. 8. Cardenas, Alfonso F. Database Management Systems. Boston

  15. Fish Karyome: A karyological information network database of Indian Fishes.

    Science.gov (United States)

    Nagpure, Naresh Sahebrao; Pathak, Ajey Kumar; Pati, Rameshwar; Singh, Shri Prakash; Singh, Mahender; Sarkar, Uttam Kumar; Kushwaha, Basdeo; Kumar, Ravindra

    2012-01-01

    'Fish Karyome', a database of karyological information on Indian fishes, has been developed that serves as a central source of karyotype data about Indian fishes compiled from the published literature. Fish Karyome is intended to serve as a liaison tool for researchers; it contains karyological information about 171 of the 2438 finfish species reported in India and is publicly available via the World Wide Web. The database provides information on chromosome number, morphology, sex chromosomes, karyotype formula and cytogenetic markers, etc. Additionally, it provides phenotypic information that includes the species name, its classification, locality of sample collection, common name, local name, sex, geographical distribution, and IUCN Red List status. Besides fish and karyotype images, references for the 171 finfish species have been included in the database. Fish Karyome has been developed using SQL Server 2008, a relational database management system, Microsoft's ASP.NET 2008 and Macromedia's Flash technology under a Windows 7 operating environment. The system also enables users to input new information and images into the database, and to search and view the information and images of interest using various search options. Fish Karyome has a wide range of applications in species characterization and identification, sex determination, chromosomal mapping, karyo-evolution and the systematics of fishes.

  16. A new global river network database for macroscale hydrologic modeling

    Science.gov (United States)

    Wu, Huan; Kimball, John S.; Li, Hongyi; Huang, Maoyi; Leung, L. Ruby; Adler, Robert F.

    2012-09-01

    Coarse-resolution (upscaled) river networks are critical inputs for runoff routing in macroscale hydrologic models. Recently, Wu et al. (2011) developed a hierarchical dominant river tracing (DRT) algorithm for automated extraction and spatial upscaling of river networks using fine-scale hydrography inputs. We applied the DRT algorithms using combined HydroSHEDS and HYDRO1k global fine-scale hydrography inputs and produced a new series of upscaled global river network data at multiple (1/16° to 2°) spatial resolutions. The new upscaled results are internally consistent and congruent with the baseline fine-scale inputs and should facilitate improved regional to global scale hydrologic simulations.

  17. Trajectory Based Optimal Segment Computation in Road Network Databases

    DEFF Research Database (Denmark)

    Li, Xiaohui; Ceikute, Vaida; Jensen, Christian S.

    Given a road network, a set of existing facilities, and a collection of customer route traversals, an optimal segment query returns the optimal road network segment(s) for a new facility. We propose a practical framework for computing this query, where each route... The proposed algorithms, shown empirically to be scalable, adopt different approaches to computing the query: Algorithm AUG uses graph augmentation, and ITE uses iterative road-network partitioning. Empirical studies with real data sets demonstrate that the algorithms are capable of offering high performance in realistic settings.

  19. Location in Mobile Networks Using Database Correlation (DCM)

    Directory of Open Access Journals (Sweden)

    Michal Mada

    2008-01-01

    Full Text Available The article presents one of the methods of location, Database Correlation (DCM), which is suitable for use in urban environments where other methods are less accurate. The principle of this method is the comparison of measured signal samples with samples stored in a database. The article then deals with methods for processing the data and other possibilities for correcting the location.

  20. Content-based organization of the information space in multi-database networks

    NARCIS (Netherlands)

    Papazoglou, M.; Milliner, S.

    1998-01-01

    Abstract. Rapid growth in the volume of network-available data, complexity, diversity and terminological fluctuations, at different data sources, render network-accessible information increasingly difficult to achieve. The situation is particularly cumbersome for users of multi-database systems who

  1. Analyzing GAIAN Database (GaianDB) on a Tactical Network

    Science.gov (United States)

    2015-11-30

    US Army Research Laboratory, ATTN: RDRL-CIN-T, 2800 Powder Mill Road, Adelphi, MD 20783-1138. ...with the second-generation CSRs, we discovered that Internet Protocol (IP) encapsulation of GaianDB traffic in radio frequency (RF) transmissions... create a network bridge or, in our case, create an ad-hoc network by encapsulating raw packets (which contain preambles and Ethernet frames) inside...

  2. Beyond emotion archetypes: databases for emotion modelling using neural networks.

    Science.gov (United States)

    Cowie, Roddy; Douglas-Cowie, Ellen; Cox, Cate

    2005-05-01

    There has been rapid development in conceptions of the kind of database that is needed for emotion research. Familiar archetypes are still influential, but the state of the art has moved beyond them. There is concern to capture emotion as it occurs in action and interaction ('pervasive emotion') as well as in short episodes dominated by emotion, and therefore in a range of contexts, which shape the way it is expressed. Context links to modality: different contexts favour different modalities. The strategy of using acted data is not suited to those aims, and has been supplemented by work on both fully natural emotion and emotion induced by various techniques that allow more controlled records. Applications for that kind of work go far beyond the 'trouble shooting' that has been the focus for application: 'really natural language processing' is a key goal. The descriptions included in such a database ideally cover quality, emotional content, emotion-related signals and signs, and context. Several schemes are emerging as candidates for describing pervasive emotion. The major contemporary databases are listed, emphasising those which are naturalistic or induced, multimodal, and influential.

  3. Multiple k Nearest Neighbor Query Processing in Spatial Network Databases

    DEFF Research Database (Denmark)

    Xuegang, Huang; Jensen, Christian Søndergaard; Saltenis, Simonas

    2006-01-01

    This paper concerns the efficient processing of multiple k nearest neighbor queries in a road-network setting. The assumed setting covers a range of scenarios such as the one where a large population of mobile service users that are constrained to a road network issue nearest-neighbor queries for points of interest that are accessible via the road network. Given multiple k nearest neighbor queries, the paper proposes progressive techniques that selectively cache query results in main memory and subsequently reuse these for query processing. The paper initially proposes techniques for the case where an upper bound on k is known a priori and then extends the techniques to the case where this is not so. Based on empirical studies with real-world data, the paper offers insight into the circumstances under which the different proposed techniques can be used with advantage for multiple k nearest neighbor queries.
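The network-constrained nearest-neighbor search that underlies such queries can be computed by a Dijkstra-style expansion from the query vertex that stops once k points of interest have been settled. A minimal sketch, with an invented toy road graph (the paper's caching techniques are not reproduced here):

```python
import heapq

def knn_road_network(graph, pois, source, k):
    # graph: vertex -> list of (neighbor, edge_length); pois: set of vertices
    # that hold a point of interest. Vertices are settled in order of network
    # distance from source; the search stops once k POIs are settled.
    dist, heap, found, visited = {source: 0.0}, [(0.0, source)], [], set()
    while heap and len(found) < k:
        d, v = heapq.heappop(heap)
        if v in visited:
            continue
        visited.add(v)
        if v in pois:
            found.append((d, v))
        for u, w in graph.get(v, ()):
            nd = d + w
            if nd < dist.get(u, float("inf")):
                dist[u] = nd
                heapq.heappush(heap, (nd, u))
    return found

# Toy road graph: edge lengths in arbitrary units.
g = {"a": [("b", 2.0), ("c", 5.0)], "b": [("d", 2.0)], "c": [("d", 1.0)], "d": []}
print(knn_road_network(g, {"c", "d"}, "a", 2))  # → [(4.0, 'd'), (5.0, 'c')]
```

Caching, as the paper proposes, would store such result lists keyed by query vertex so that subsequent overlapping queries can reuse them instead of re-expanding the network.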

  4. Final report on the meteorological database, December 1944--1949. Hanford Environmental Dose Reconstruction Project

    Energy Technology Data Exchange (ETDEWEB)

    Stage, S.A.; Ramsdell, J.V. Jr.; Simonen, C.A.; Burk, K.W.; Berg, L.K.

    1993-11-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project is estimating radiation doses that individuals may have received from operations at Hanford from 1944 to the present. A number of computer programs are being developed by the HEDR Project to estimate doses and confidence ranges associated with radionuclides transported through the atmosphere and the Columbia River. One computer program is the Regional Atmospheric Transport Code for Hanford Emissions Tracking (RATCHET). RATCHET combines release data with information on atmospheric conditions including wind direction and speed. The RATCHET program uses these data to produce estimates of time-integrated air concentrations and surface contamination. These estimates are used in calculating dose by the Dynamic EStimates of Concentrations And Radionuclides in Terrestrial EnvironmentS (DESCARTES) and the Calculations of Individual Doses from Environmental Radionuclides (CIDER) computer programs. This report describes the final status of the meteorological database used by RATCHET. Data collection procedures and the preparation and control of the meteorological database are described, along with an assessment of the data quality.

  5. Causal biological network database: a comprehensive platform of causal biological network models focused on the pulmonary and vascular systems.

    Science.gov (United States)

    Boué, Stéphanie; Talikka, Marja; Westra, Jurjen Willem; Hayes, William; Di Fabio, Anselmo; Park, Jennifer; Schlage, Walter K; Sewer, Alain; Fields, Brett; Ansari, Sam; Martin, Florian; Veljkovic, Emilija; Kenney, Renee; Peitsch, Manuel C; Hoeng, Julia

    2015-01-01

    With the wealth of publications and data available, powerful and transparent computational approaches are required to represent measured data and scientific knowledge in a computable and searchable format. We developed a set of biological network models, scripted in the Biological Expression Language, that reflect causal signaling pathways across a wide range of biological processes, including cell fate, cell stress, cell proliferation, inflammation, tissue repair and angiogenesis in the pulmonary and cardiovascular context. This comprehensive collection of networks is now freely available to the scientific community in a centralized web-based repository, the Causal Biological Network database, which is composed of over 120 manually curated and well annotated biological network models and can be accessed at http://causalbionet.com. The website accesses a MongoDB, which stores all versions of the networks as JSON objects and allows users to search for genes, proteins, biological processes, small molecules and keywords in the network descriptions to retrieve biological networks of interest. The content of the networks can be visualized and browsed. Nodes and edges can be filtered and all supporting evidence for the edges can be browsed and is linked to the original articles in PubMed. Moreover, networks may be downloaded for further visualization and evaluation. Database URL: http://causalbionet.com
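The keyword search described above, matching genes or terms inside JSON network documents, can be mimicked with a simple filter over documents; in production, MongoDB would serve this via a find() query. The document shape below is a guess for illustration, not the database's actual schema:

```python
# Hypothetical network documents in the style of JSON network models
# such as those the Causal Biological Network database stores in MongoDB.
networks = [
    {"name": "Cell Stress", "description": "oxidative stress response",
     "nodes": ["NFE2L2", "KEAP1"]},
    {"name": "Inflammation", "description": "cytokine signalling",
     "nodes": ["TNF", "IL6"]},
]

def search(docs, term):
    # Case-insensitive match against node names and free-text descriptions.
    t = term.lower()
    return [d["name"] for d in docs
            if t in d["description"].lower()
            or any(t == n.lower() for n in d["nodes"])]

print(search(networks, "tnf"))  # → ['Inflammation']
```

A real deployment would index the node and description fields so that the filter does not scan every document on each query.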

  6. Nuclear Physics Science Network Requirements Workshop, May 2008 - Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, Ed., Brian L; Dart, Ed., Eli; Carlson, Rich; Dattoria, Vince; Ernest, Michael; Hitchcock, Daniel; Johnston, William; Kowalski, Andy; Lauret, Jerome; Maguire, Charles; Olson, Douglas; Purschke, Martin; Rai, Gulshan; Watson, Chip; Vale, Carla

    2008-11-10

    to manage those transfers effectively. Network reliability is also becoming more important as there is often a narrow window between data collection and data archiving when transfer and analysis can be done. The instruments do not stop producing data, so extended network outages can result in data loss due to analysis pipeline stalls. Finally, as the scope of collaboration continues to increase, collaboration tools such as audio and video conferencing are becoming ever more critical to the productivity of scientific collaborations.

  8. STITCH 2: an interaction network database for small molecules and proteins

    DEFF Research Database (Denmark)

    Kuhn, Michael; Szklarczyk, Damian; Franceschini, Andrea

    2010-01-01

    Over the last years, the publicly available knowledge on interactions between small molecules and proteins has been steadily increasing. To create a network of interactions, STITCH aims to integrate the data dispersed over the literature and various databases of biological pathways, drug-target relationships and binding affinities. In STITCH 2, the number of relevant interactions is increased by incorporation of BindingDB, PharmGKB and the Comparative Toxicogenomics Database. The resulting network can be explored interactively or used as the basis for large-scale analyses. To facilitate links to other...

  9. An online database for informing ecological network models: http://kelpforest.ucsc.edu

    Science.gov (United States)

    Beas-Luna, Rodrigo; Tinker, M. Tim; Novak, Mark; Carr, Mark H.; Black, August; Caselle, Jennifer E.; Hoban, Michael; Malone, Dan; Iles, Alison C.

    2014-01-01

    Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available and located at the following link (https://github.com/kelpforest-cameo/databaseui).

  11. Assessing Database and Network Threats in Traditional and Cloud Computing

    Directory of Open Access Journals (Sweden)

    Katerina Lourida

    2015-05-01

    Full Text Available Cloud computing is currently one of the most widely discussed terms in IT. While it offers a range of technological and financial benefits, it has not yet been widely accepted by organizations. Security concerns are a main reason for this, and this paper studies the data and network threats posed in both the traditional and cloud paradigms in an effort to assess in which areas cloud computing addresses security issues and where it introduces new ones. The evaluation is based on Microsoft's STRIDE threat model and discusses the stakeholders, the impact and recommendations for tackling each threat.

  12. Databases as policy instruments. About extending networks as evidence-based policy

    Directory of Open Access Journals (Sweden)

    Stoevelaar Herman

    2007-12-01

    Full Text Available Background This article seeks to identify the role of databases in health policy. Access to information and communication technologies has changed traditional relationships between the state and professionals, creating new systems of surveillance and control. As a result, databases may have a profound effect on controlling clinical practice. Methods We conducted three case studies to reconstruct the development and use of databases as policy instruments. Each database was intended to be employed to control the use of one particular pharmaceutical in the Netherlands (growth hormone, antiretroviral drugs for HIV and Taxol, respectively). We studied the archives of the Dutch Health Insurance Board, conducted in-depth interviews with key informants and organized two focus groups, all focused on the use of databases both in policy circles and in clinical practice. Results Our results demonstrate that policy makers hardly used the databases, either for cost control or for quality assurance. Further analysis revealed that these databases facilitated self-regulation and quality assurance by (national) bodies of professionals, resulting in restrictive prescription behavior amongst physicians. Conclusion The databases fulfill control functions that were formerly located within the policy realm. The databases facilitate collaboration between policy makers and physicians, since they enable quality assurance by professionals. Delegating regulatory authority downwards into a network of physicians who control the use of pharmaceuticals seems to be a good alternative to centralized control on the basis of monitoring data.

  13. Design of special purpose database for credit cooperation bank business processing network system

    Science.gov (United States)

    Yu, Yongling; Zong, Sisheng; Shi, Jinfa

    2011-12-01

    With the popularization of e-finance in cities, its construction is transferring to the vast rural market and quickly developing in depth. Developing a business processing network system suitable for rural credit cooperative banks makes business processing convenient and has good application prospects. In this paper, we analyse the necessity of adopting a special purpose distributed database in the credit cooperative bank system, give the corresponding distributed database system structure, and design the special purpose database and its interface technology. The application in Tongbai Rural Credit Cooperatives has shown that the system has better performance and higher efficiency.

  14. Sailor: Maryland's Online Public Information Network. Sailor Network Assessment Final Report Compendium.

    Science.gov (United States)

    Bertot, John Carlo; McClure, Charles R.

    This compendium is a companion document to the Maryland Sailor Online Public Information Network assessment final report, and contains detailed study findings, study data collection activity write-ups, detailed methodologies, data collection tools, and consultant notes on the uses of the study's data collection instruments. The purpose of the…

  15. Experience and Lessons learnt from running High Availability Databases on Network Attached Storage

    CERN Document Server

    Guijarro, Manuel

    2008-01-01

The Database and Engineering Services Group of CERN's Information Technology Department supplies the Oracle central database services used in many activities at CERN. In order to provide high availability and ease of management for those services, a NAS (Network Attached Storage) based infrastructure has been set up. It runs several instances of Oracle RAC (Real Application Cluster) using NFS (Network File System) as shared disk space for RAC purposes and data hosting. It is composed of two private LANs (Local Area Networks), one providing access to the NAS filers and a second implementing the Oracle RAC private interconnect, both using network bonding. NAS filers are configured in partnership to avoid single points of failure and to provide automatic filer fail-over.

  16. Feature Selection in Detection of Adverse Drug Reactions from the Health Improvement Network (THIN) Database

    Directory of Open Access Journals (Sweden)

    Yihui Liu

    2015-02-01

Adverse drug reactions (ADRs) are a widely recognized public health issue and one of the most common reasons for withdrawing drugs from the market. Prescription event monitoring (PEM) is an important approach to detecting adverse drug reactions. The main problem with this method is how to automatically extract medical events or side effects from the high-throughput medical events collected in day-to-day clinical practice. In this study we propose the novel concept of a feature matrix to detect ADRs. The feature matrix, extracted from large-scale medical data in The Health Improvement Network (THIN) database, characterizes the medical events recorded for patients who take a drug, and provides a foundation for handling this irregular, large-scale medical data. Feature selection methods are then applied to the feature matrix to identify significant features, and the ADRs are located based on those significant features. Experiments were carried out on three drugs: atorvastatin, alendronate, and metoclopramide. Major side effects for each drug were detected, and better performance was achieved compared with other computerized methods. Since the detected ADRs are based on computerized methods, further investigation is needed.
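The feature-matrix idea can be illustrated with a toy sketch: rows are patients exposed to a drug, columns are medical event codes, and a simple two-proportion z-statistic stands in for the paper's feature selection methods. All patient data, event names, and thresholds here are hypothetical.

```python
from math import sqrt

def feature_matrix(patients, event_vocab):
    """Binary feature matrix: one row per patient, one column per
    medical event code (1 = event recorded after drug exposure)."""
    return [[1 if ev in p else 0 for ev in event_vocab] for p in patients]

def two_proportion_z(k1, n1, k2, n2):
    """z-statistic comparing event rates in exposed vs. baseline groups."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se if se else 0.0

# Invented toy data: events recorded for patients on a hypothetical drug
exposed = [{"nausea", "headache"}, {"nausea"}, {"nausea", "rash"}, {"headache"}]
baseline = [{"headache"}, set(), {"rash"}, {"headache"}]
vocab = sorted({e for p in exposed + baseline for e in p})

X = feature_matrix(exposed, vocab)
for j, ev in enumerate(vocab):
    k1 = sum(row[j] for row in X)                 # exposed count
    k2 = sum(1 for p in baseline if ev in p)      # baseline count
    z = two_proportion_z(k1, len(exposed), k2, len(baseline))
    print(ev, round(z, 2))                        # large z -> candidate ADR
```

In this toy run "nausea" gets the largest z-score, mimicking how a significant feature would flag a candidate side effect for manual review.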

  17. A Bayesian network approach to the database search problem in criminal proceedings

    Directory of Open Access Journals (Sweden)

    Biedermann Alex

    2012-08-01

Background The 'database search problem', that is, the strengthening of a case (in terms of probative value) against an individual who is found as a result of a database search, has been approached during the last two decades with substantial mathematical analyses, accompanied by lively debate and centrally opposing conclusions. This represents a challenging obstacle in teaching and also hinders a balanced and coherent discussion of the topic within the wider scientific and legal community. This paper revisits and tracks the associated mathematical analyses in terms of Bayesian networks. Their derivation and discussion for capturing probabilistic arguments that explain the database search problem are outlined in detail. The resulting Bayesian networks offer a distinct view of the main debated issues, along with further clarity. Methods As a general framework for representing and analyzing formal arguments in probabilistic reasoning about uncertain target propositions (that is, whether or not a given individual is the source of a crime stain), this paper relies on graphical probability models, in particular Bayesian networks. This graphical probability modeling approach is used to capture, within a single model, a series of key variables, such as the number of individuals in a database, the size of the population of potential crime stain sources, and the rarity of the corresponding analytical characteristics in a relevant population. Results This paper demonstrates the feasibility of deriving Bayesian network structures for analyzing, representing, and tracking the database search problem. The output of the proposed models can be shown to agree with existing but exclusively formulaic approaches. Conclusions The proposed Bayesian networks allow one to capture and analyze the currently most well-supported but reputedly counter-intuitive and difficult solution to the database search problem in a way that goes beyond the traditional
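The well-supported solution referred to above can be illustrated numerically. Under uniform priors, excluding the other n-1 database members shifts some probability mass onto the remaining candidates, so a database-search match is slightly stronger than a probable-cause match with no search. The sketch below computes that posterior; the population size, database size, and match probability are invented for illustration.

```python
def posterior_source(N, n, gamma):
    """Posterior probability that the single database match is the
    source of the crime stain, assuming a uniform prior over N
    potential sources, a database of n profiles (n - 1 of which are
    excluded by the search), and match probability gamma for a
    random non-source individual."""
    # N - n untested individuals remain; each could match with prob. gamma.
    return 1.0 / (1.0 + (N - n) * gamma)

# Hypothetical numbers: 10,000 potential sources, database of 1,000
# profiles, match probability 1 in 10,000.
with_search = posterior_source(10_000, 1_000, 1e-4)
# Compare a match found without a search (only the suspect tested,
# so N - 1 alternative candidates remain):
no_search = posterior_source(10_000, 1, 1e-4)
print(round(with_search, 3), round(no_search, 3))
```

The search posterior exceeds the no-search posterior, which is the counter-intuitive conclusion the abstract alludes to: the database search, by excluding candidates, does not weaken the case against the matching individual.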

  18. Neural network for intelligent query of an FBI forensic database

    Science.gov (United States)

    Uvanni, Lee A.; Rainey, Timothy G.; Balasubramanian, Uma; Brettle, Dean W.; Weingard, Fred; Sibert, Robert W.; Birnbaum, Eric

    1997-02-01

Examiner is an automated fired-cartridge-case identification system utilizing a dual-use neural network pattern recognition technology, the statistical multiple object detection and location system (S-MODALS), developed by Booz Allen & Hamilton, Inc. in conjunction with Rome Laboratory. S-MODALS was originally designed for automatic target recognition (ATR) of tactical and strategic military targets using fusion of multisensor data from electro-optical (EO), infrared (IR), and synthetic aperture radar (SAR) sensors. Since S-MODALS is a learning system readily adaptable to problem domains other than automatic target recognition, the pattern-matching problem of microscopic marks on firearms evidence was analyzed using S-MODALS. The physics, phenomenology, discrimination and search strategies, robustness requirements, and error-level and confidence-level propagation that apply to the pattern-matching problem for military targets were found to be applicable to the ballistic domain as well. The Examiner system uses S-MODALS to rank a set of queried cartridge case images from the most similar to the least similar image in reference to an investigative fired cartridge case image. The paper presents three independent test and evaluation studies of the Examiner system utilizing the S-MODALS technology for the Federal Bureau of Investigation.

  19. BioFNet: biological functional network database for analysis and synthesis of biological systems.

    Science.gov (United States)

    Kurata, Hiroyuki; Maeda, Kazuhiro; Onaka, Toshikazu; Takata, Takenori

    2014-09-01

In synthetic biology and systems biology, a bottom-up approach can be used to construct a complex, modular, hierarchical structure of biological networks. To analyze or design such networks, it is critical to understand the relationship between network structure and function, that is, the mechanism through which biological parts or biomolecules are assembled into building blocks or functional networks. A functional network is defined as a subnetwork of biomolecules that performs a particular function. Understanding the mechanism of building functional networks would help develop a methodology for analyzing the structure of large-scale networks and for designing robust biological circuits to perform a target function. We propose a biological functional network database, named BioFNet, which can cover the whole cell at the level of molecular interactions. BioFNet has the advantage of implementing simulation programs for the mathematical models of the functional networks and visualizing the simulated results. It presents a sound basis for the rational design of biochemical networks and for understanding how functional networks are assembled to create complex high-level functions, which would reveal the design principles underlying molecular architectures.
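To make concrete what simulating a stored functional network involves, the sketch below integrates a hypothetical two-gene negative-feedback motif (A activates B, B represses A via a Hill function) with Euler's method. The model, parameters, and function names are invented for illustration and are not taken from BioFNet.

```python
def simulate_feedback(k1=1.0, k2=1.0, d=0.5, n=4, dt=0.01, steps=5000):
    """Euler integration of a toy two-component negative-feedback motif:
    A activates B, B represses A (Hill coefficient n, degradation d)."""
    a, b = 0.1, 0.1
    traj = []
    for _ in range(steps):
        da = k1 / (1.0 + b ** n) - d * a   # production of A repressed by B
        db = k2 * a - d * b                # production of B driven by A
        a += dt * da
        b += dt * db
        traj.append((a, b))
    return traj

traj = simulate_feedback()
a_final, b_final = traj[-1]
# After t = 50 the damped oscillations have settled to the steady state
# where production balances degradation for both species.
print(round(a_final, 3), round(b_final, 3))
```

A database entry for such a motif would store exactly this kind of rate-law structure plus parameters, so the simulation layer only has to assemble and integrate the equations.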

  20. BioSYNTHESIS: access to a knowledge network of health sciences databases.

    Science.gov (United States)

    Broering, N C; Hylton, J S; Guttmann, R; Eskridge, D

    1991-04-01

Users of the IAIMS Knowledge Network at the Georgetown University Medical Center have access to multiple in-house and external databases from a single point of entry through BioSYNTHESIS. The IAIMS project has developed a rich environment of biomedical information resources that represents a medical decision support system for campus physicians and students. The BioSYNTHESIS system is an information navigator that provides transparent access to a Knowledge Network of over a dozen databases. These multiple health sciences databases consist of bibliographic, informational, diagnostic, and research systems which reside on diverse computers such as DEC VAXs, a SUN 490, AT&T 3B2s, Macintoshes, IBM PC/PS2s, and the AT&T ISN and SYTEK network systems. Ethernet and TCP/IP protocols are used in the network architecture. BioSYNTHESIS also provides network links to the other campus libraries and to external institutions. As additional knowledge resources and technological advances have become available, BioSYNTHESIS has evolved from a two-phase to a three-phase program. Major components of the system, including recent achievements and future plans, are described.

  1. Networking and Information Technology Workforce Study: Final Report

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This report presents the results of a study of the global Networking and Information Technology NIT workforce undertaken for the Networking and Information...

  2. A Novel Lightweight Main Memory Database for Telecom Network Performance Management System

    Directory of Open Access Journals (Sweden)

    Lina Lan

    2012-04-01

Today's telecom networks are growing more complex. As the amount of network performance data has increased dramatically, telecom network operators require better performance in network performance data collection and analysis. The database is an important component in the modern network management model. Since a main memory database (MMDB) stores data in main physical memory and provides very high-speed access, an MMDB can satisfy the data-intensive, real-time requirements of a network performance management system. This paper presents a novel lightweight MMDB design for network performance data persistence. The design improves data access performance in the following ways. The data persistence mechanism employs user-mode memory mapping provided by the UNIX OS. To reduce the cost of data copying and data interpretation, the on-disk storage format is identical to the binary format in application memory. The database is provided as a program library, and applications access data in shared memory, avoiding inter-process communication costs. Once data is updated in memory, query applications see the update without disk I/O. The data access methods adopt a multi-level red-black-tree structure: in the best case the algorithm complexity is O(N), in the worst case O(N log N), and for realistic performance-data distributions it is nearly O(N). The approach has been tested in the laboratory using benchmark data. The results show that the application fully meets the requirements of the product index, and CPU and memory consumption stay below the network management system's limits.
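The persistence mechanism described above (a user-mode memory map whose on-disk layout matches the in-memory binary format, so updates become visible to readers without per-access syscalls) can be sketched briefly. This is an illustrative Python sketch using mmap and struct, not the paper's C/UNIX implementation; the fixed record layout and counter IDs are invented.

```python
import mmap
import os
import struct
import tempfile

# Hypothetical fixed-width record: counter id (uint32), value (int32).
RECORD = struct.Struct("<Ii")

path = os.path.join(tempfile.mkdtemp(), "perf.mmdb")
n_records = 4
with open(path, "wb") as f:                  # pre-size the file
    f.write(b"\0" * RECORD.size * n_records)

f = open(path, "r+b")
buf = mmap.mmap(f.fileno(), 0)               # user-mode map of the whole file

def put(slot, counter_id, value):
    """Write a record in place; no read()/write() syscall per access."""
    RECORD.pack_into(buf, slot * RECORD.size, counter_id, value)

def get(slot):
    """Read a record directly from the mapped memory."""
    return RECORD.unpack_from(buf, slot * RECORD.size)

put(0, 101, 250)    # e.g. counter 101 = dropped packets (invented)
put(1, 102, -1)
# A query process mapping the same file would see these updates without
# disk I/O in the hot path; the OS pager handles durability.
print(get(0), get(1))
buf.flush()
```

Because the stored bytes are already in the application's binary layout, a reader needs no deserialization step, which is exactly the copy/interpretation saving the abstract claims.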

  3. CoryneRegNet 4.0 – A reference database for corynebacterial gene regulatory networks

    Directory of Open Access Journals (Sweden)

    Baumbach Jan

    2007-11-01

Background Detailed information on DNA-binding transcription factors (the key players in the regulation of gene expression) and on transcriptional regulatory interactions of microorganisms, deduced from literature-derived knowledge, computer predictions and global DNA microarray hybridization experiments, has opened the way for the genome-wide analysis of transcriptional regulatory networks. The large-scale reconstruction of these networks allows the in silico analysis of cell behavior in response to changing environmental conditions. We previously published CoryneRegNet, an ontology-based data warehouse of corynebacterial transcription factors and regulatory networks. Initially, it was designed to provide methods for the analysis and visualization of the gene regulatory network of Corynebacterium glutamicum. Results We now introduce CoryneRegNet release 4.0, which integrates data on the gene regulatory networks of 4 corynebacteria, 2 mycobacteria and the model organism Escherichia coli K12. As in previous versions, CoryneRegNet provides a web-based user interface to access the database content, to allow various queries, and to support the reconstruction, analysis and visualization of regulatory networks at different hierarchical levels. In this article, we present the further improved database content of CoryneRegNet along with novel analysis features. The network visualization feature GraphVis now allows inter-species comparison of reconstructed gene regulatory networks and the projection of gene expression levels onto those networks. Therefore, we added stimulon data directly into the database, and also provide Web Service access to the DNA microarray analysis platform EMMA. Additionally, CoryneRegNet now provides a SOAP-based Web Service server, which can easily be consumed by other bioinformatics software systems. Stimulons (imported from the database, or uploaded by the user) can be analyzed in the context of known

  4. Research on Performance Evaluation of Biological Database based on Layered Queuing Network Model under the Cloud Computing Environment

    OpenAIRE

    Zhengbin Luo; Dongmei Sun

    2013-01-01

Evaluating the performance of a biological database based on a layered queuing network model in a cloud computing environment is a prerequisite, as well as an important step, for biological database optimization. Building on previous research on computer software and hardware performance evaluation in cloud environments, this study constructs a model system to evaluate the performance of a biological database based on a layered queuing network model under a cloud environme...

  5. Characteristics of networks of interventions: a description of a database of 186 published networks.

    Science.gov (United States)

    Nikolakopoulou, Adriani; Chaimani, Anna; Veroniki, Areti Angeliki; Vasiliadis, Haris S; Schmid, Christopher H; Salanti, Georgia

    2014-01-01

Systematic reviews that employ network meta-analysis are undertaken and published with increasing frequency while the related statistical methodology is evolving. Future statistical developments and evaluation of existing methodologies could be motivated by the characteristics of the networks of interventions published so far, in order to tackle real rather than theoretical problems. Based on the recently formed network meta-analysis literature, we aim to provide an insight into the characteristics of networks in healthcare research. We searched PubMed until the end of 2012 for meta-analyses that used any form of indirect comparison. We collected data from networks that compared at least four treatments regarding their structural characteristics as well as characteristics of their analysis. We then conducted a descriptive analysis of the various network characteristics. We included 186 networks, of which 35 (19%) were star-shaped (treatments were compared to a common comparator but not between themselves). The median number of studies per network was 21 and the median number of treatments compared was 6. The majority (85%) of the non-star-shaped networks included at least one multi-arm study. Synthesis of data was primarily done via network meta-analysis fitted within a Bayesian framework (113 (61%) networks). We were unable to identify the exact method used to perform indirect comparison in a sizeable number of networks (18 (9%)). In 32% of the networks the investigators employed appropriate statistical methods to evaluate the consistency assumption; this percentage is larger among recently published articles. Our descriptive analysis provides useful information about the characteristics of networks of interventions published over the last 16 years and the methods for their analysis. Although the validity of network meta-analysis results highly depends on some basic assumptions, most authors did not report and evaluate them adequately. Reviewers and editors need to be aware

  6. Application of a Database in the Monitoring of Workstations in a Local Area Network

    Directory of Open Access Journals (Sweden)

    Eyo O. Ukem

    2009-01-01

Problem statement: Computer hardware fault management and repair can be a big challenge, especially if the number of staff available for the job is small. The task becomes more complicated if remote sites are managed and an engineer or technician has to be dispatched. Approach: The availability of relevant information when needed could ease the burden of maintenance by removing uncertainties. Such information could be accumulated in a database and accessed as needed. Results: This study considered such a database, intended to help a third-party hardware maintenance firm keep track of its operations, including the machines it services and their owners. A software application was developed in the Java programming language, in the form of a database application using Microsoft Access as the database management system. It was designed to run on a local area network and to allow remote workstations to log on to a central computer in a client/server configuration. With this application it was possible to enter fault reports into the database residing on the central computer from any workstation on the network. Conclusion/Recommendations: The information generated from this data can be used by the third-party hardware maintenance firm to speed up its service delivery, putting the firm in a position to render more responsive and efficient service to its customers.
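The central fault-report store can be sketched as follows. The study used Java with Microsoft Access over a LAN, so this Python/sqlite3 version is only a stand-in to show the kind of schema and queries involved; the table and column names are invented.

```python
import sqlite3

# In-memory stand-in for the central fault database on the server.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE fault_report (
    id          INTEGER PRIMARY KEY,
    workstation TEXT NOT NULL,
    owner       TEXT NOT NULL,
    fault       TEXT NOT NULL,
    reported_at TEXT DEFAULT CURRENT_TIMESTAMP,
    resolved    INTEGER DEFAULT 0)""")

def log_fault(ws, owner, fault):
    """Entered from any workstation on the LAN in the real system."""
    db.execute(
        "INSERT INTO fault_report (workstation, owner, fault) VALUES (?, ?, ?)",
        (ws, owner, fault))

def open_faults():
    """What a technician would check before dispatching to a site."""
    return db.execute("SELECT workstation, fault FROM fault_report "
                      "WHERE resolved = 0 ORDER BY id").fetchall()

log_fault("WS-07", "Accounts", "no POST beep")
log_fault("WS-12", "Registry", "HDD SMART errors")
db.execute("UPDATE fault_report SET resolved = 1 WHERE workstation = 'WS-07'")
print(open_faults())
```

In the client/server setup described, only the connection step differs: each workstation opens a connection to the shared database on the central machine instead of an in-memory one.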

  7. An online database for informing ecological network models: http://kelpforest.ucsc.edu.

    Directory of Open Access Journals (Sweden)

    Rodrigo Beas-Luna

Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel, yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species- and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available at the following link (https://github.com/kelpforest-cameo/databaseui).

  8. A systems biology approach to construct the gene regulatory network of systemic inflammation via microarray and databases mining

    Directory of Open Access Journals (Sweden)

    Lan Chung-Yu

    2008-09-01

Background Inflammation is a hallmark of many human diseases. Elucidating the mechanisms underlying systemic inflammation has long been an important topic in basic and clinical research. When the primary pathogenetic events remain unclear due to their immense complexity, construction and analysis of the gene regulatory network of inflammation at times becomes the best way to understand the detrimental effects of disease. However, it is difficult to recognize and evaluate relevant biological processes from the huge quantities of experimental data. It is hence appealing to find an algorithm which can generate a gene regulatory network of systemic inflammation from high-throughput genomic studies of human diseases. Such a network will be essential for extracting valuable information from the complex and chaotic network under diseased conditions. Results In this study, we construct a gene regulatory network of inflammation using data extracted from the Ensembl and JASPAR databases. We also integrate and apply a number of systematic algorithms, such as a cross-correlation threshold, maximum likelihood estimation and the Akaike Information Criterion (AIC), to time-course microarray data to refine the genome-wide transcriptional regulatory network in response to bacterial endotoxins, in the context of dynamically activated genes that are regulated by transcription factors (TFs) such as NF-κB. This systematic approach is used to investigate the stochastic interactions represented by the dynamic leukocyte gene expression profiles of human subjects exposed to an inflammatory stimulus (bacterial endotoxin). Based on the kinetic parameters of the dynamic gene regulatory network, we identify important properties of the immune system (such as susceptibility to infection), which may be useful for translational research. Finally, the robustness of the inflammatory gene network is inferred by analyzing the hubs and "weak ties" structures of the gene network
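The model-selection step mentioned above (maximum likelihood fits compared via AIC) can be made concrete with a toy example: a constant (unregulated) model versus a linear response model fitted by least squares to a short, invented expression time course, with the lower AIC indicating the preferred model. The data and model forms are illustrative only.

```python
from math import log

def aic(rss, n, k):
    """AIC for a least-squares fit with Gaussian errors:
    n*ln(RSS/n) + 2k, where k is the number of fitted parameters."""
    return n * log(rss / n) + 2 * k

def fit_linear(xs, ys):
    """Ordinary least-squares line y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Invented expression of one gene over time (target responding to a TF).
t = [0, 1, 2, 3, 4, 5]
y = [0.1, 1.2, 1.9, 3.1, 3.9, 5.2]

# Model 0: constant (no regulation). Model 1: linear response.
my = sum(y) / len(y)
rss0 = sum((v - my) ** 2 for v in y)
a, b = fit_linear(t, y)
rss1 = sum((v - (a + b * x)) ** 2 for x, v in zip(t, y))

aic0 = aic(rss0, len(y), 1)
aic1 = aic(rss1, len(y), 2)
# The linear model pays a +2 penalty for its extra parameter but cuts
# the residual sum of squares enough to win decisively here.
print(aic1 < aic0)
```

Scanning candidate regulators this way, the AIC penalty discourages adding TF-target links that do not substantially improve the fit, which is how the refinement step prunes the network.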

  9. Universal data-based method for reconstructing complex networks with binary-state dynamics

    Science.gov (United States)

    Li, Jingwen; Shen, Zhesi; Wang, Wen-Xu; Grebogi, Celso; Lai, Ying-Cheng

    2017-03-01

To understand, predict, and control complex networked systems, a prerequisite is to reconstruct the network structure from observable data. Despite recent progress in network reconstruction, binary-state dynamics that are ubiquitous in nature, technology, and society still present an outstanding challenge in this field. Here we offer a framework for reconstructing complex networks with binary-state dynamics by developing a universal data-based linearization approach that is applicable to systems with linear, nonlinear, discontinuous, or stochastic dynamics governed by monotonic functions. The linearization procedure enables us to convert the network reconstruction into a sparse signal reconstruction problem that can be resolved through convex optimization. We demonstrate generally high reconstruction accuracy for a number of complex networks associated with distinct binary-state dynamics, using binary data contaminated by noise and missing data. Our framework is completely data driven, efficient, and robust, and does not require any a priori knowledge about the detailed dynamical process on the network. The framework represents a general paradigm for reconstructing, understanding, and exploiting complex networked systems with binary-state dynamics.
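The linearization-plus-convex-optimization step can be illustrated with a toy sparse recovery. The sketch below implements plain iterative soft-thresholding (ISTA) for the lasso in pure Python and recovers one node's sparse interaction row from synthetic linear measurements; the matrix, coefficients, and parameters are invented, and the paper's actual linearization of binary-state dynamics is more involved than this stand-in.

```python
def ista(A, y, lam=0.05, step=0.05, iters=3000):
    """Iterative soft-thresholding for the lasso:
    min_x 0.5*||A x - y||^2 + lam*||x||_1 (a basic convex solver)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]  # A^T r
        x = [xj - step * gj for xj, gj in zip(x, g)]                   # gradient step
        x = [max(abs(v) - step * lam, 0.0) * (1 if v >= 0 else -1)
             for v in x]                                               # soft threshold
    return x

# Toy linearized system: node 0's dynamics depend only on nodes 1 and 3
# (a sparse row of the interaction matrix, invented for the example).
true_x = [0.0, 1.0, 0.0, -0.8, 0.0]
A = [[1.0, 0.2, 0.1, 0.0, 0.3],
     [0.0, 1.0, 0.2, 0.1, 0.0],
     [0.1, 0.0, 1.0, 0.2, 0.1],
     [0.2, 0.1, 0.0, 1.0, 0.2],
     [0.3, 0.0, 0.1, 0.2, 1.0],
     [0.1, 0.3, 0.2, 0.0, 1.0]]
y = [sum(a * t for a, t in zip(row, true_x)) for row in A]

x_hat = ista(A, y)
print([round(v, 2) for v in x_hat])   # nonzeros land on indices 1 and 3
```

The l1 penalty drives the truly absent links to (near) zero, which is why a sparse-recovery formulation scales to networks where each node has few neighbors.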

  10. Collaboration networks from a large CV database: dynamics, topology and bonus impact

    CERN Document Server

    Araújo, E B; Furtado, V; Pequeno, T H C; Andrade, J S

    2013-01-01

Understanding the dynamics of research production and collaboration may reveal better strategies for scientific careers, academic institutions and funding agencies. Here we propose the use of a large and multidisciplinary database of scientific curricula in Brazil, namely the Lattes Platform, to study patterns of scientific production and collaboration. In this database, detailed information about publications and researchers is made available by the researchers themselves, so that coauthorship is unambiguous and individuals can be evaluated by scientific productivity, geographical location and field of expertise. Our results show that the collaboration network has been growing exponentially for the last three decades, with a distribution of the number of collaborators per researcher that approaches a power law as the network gets older. Moreover, both the distributions of the number of collaborators and of production per researcher obey power-law behaviors, regardless of geographical location or field, suggesting that the same universal...

  11. The designing and implementation of PE teaching information resource database based on broadband network

    Science.gov (United States)

    Wang, Jian

    2017-01-01

In order to change the traditional PE teaching mode and realize the interconnection and sharing of PE teaching resources, a distance PE teaching platform based on a broadband network is designed and a PE teaching information resource database is set up. The database design takes Windows NT 4/2000 Server as the operating system platform and Microsoft SQL Server 7.0 as the RDBMS, and adopts NAS technology for data storage and streaming technology for video service. The analysis of the system design and implementation shows that a dynamic PE teaching information resource sharing platform based on Web Services can realize loosely coupled collaboration as well as dynamic and active integration, and has good integration, openness and encapsulation. The distance PE teaching platform based on Web Services and the design scheme of the PE teaching information resource database can effectively realize the interconnection and sharing of PE teaching resources and adapt to the demands of informatization in PE teaching.

  12. HBVPathDB: A database of HBV infection-related molecular interaction network

    Institute of Scientific and Technical Information of China (English)

    Yi Zhang; Xiao-Chen Bo; Jing Yang; Sheng-Qi Wang

    2005-01-01

AIM: To describe the interactions between molecules or genes of hepatitis B virus (HBV) and its host, in order to understand how viral and host genes and molecules are networked to form a biological system and to perceive the mechanism of HBV infection. METHODS: Knowledge of HBV infection-related reactions was organized into various kinds of pathways with carefully drawn graphs in HBVPathDB. Pathway information is stored in a relational database management system (DBMS), which is currently the most efficient way to manage large amounts of data, and queries are implemented with the powerful Structured Query Language (SQL). The search engine is written in PHP (Personal Home Page) with embedded SQL, and a web retrieval interface was developed for searching with Hypertext Markup Language (HTML). RESULTS: We present the first version of HBVPathDB, an HBV infection-related molecular interaction network database composed of 306 pathways involving 1050 molecules. With carefully drawn graphs, pathway information stored in HBVPathDB can be browsed in an intuitive way. We developed an easy-to-use interface for flexible access to the details of the database. Convenient software is implemented to query and browse the pathway information. Four search page layouts (category search, gene search, description search, and unitized search) are supported by the search engine of the database. The database is freely available at http://www.bio-inf.net/HBVPathDB/HBV/. CONCLUSION: HBVPathDB already contains a considerable amount of HBV infection-related pathway information, which is suitable for in-depth analysis of the molecular interaction network of virus and host. HBVPathDB integrates pathway datasets with convenient software for query, browsing and visualization, giving users more opportunity to identify key regulatory molecules as potential drug targets and to explore possible mechanisms of HBV infection based on gene expression datasets.
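The relational storage and SQL querying described can be sketched with an in-memory database. The schema below (pathway, molecule, and member tables, plus the sample rows) is hypothetical rather than HBVPathDB's actual schema, and Python's sqlite3 stands in for the site's DBMS/PHP stack.

```python
import sqlite3

# Hypothetical minimal schema in the spirit of a pathway database:
# pathways, molecules, and a membership link table queried with SQL.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE pathway  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE molecule (id INTEGER PRIMARY KEY, symbol TEXT, origin TEXT);
CREATE TABLE member   (pathway_id  INTEGER REFERENCES pathway(id),
                       molecule_id INTEGER REFERENCES molecule(id));
INSERT INTO pathway  VALUES (1, 'NF-kB signaling'), (2, 'Interferon response');
INSERT INTO molecule VALUES (1, 'HBx',   'virus'),
                            (2, 'NFKB1', 'host'),
                            (3, 'STAT1', 'host');
INSERT INTO member VALUES (1, 1), (1, 2), (2, 3);
""")

# Gene-search style query: every pathway containing a given molecule.
rows = db.execute("""
    SELECT p.name FROM pathway p
    JOIN member m   ON m.pathway_id  = p.id
    JOIN molecule x ON m.molecule_id = x.id
    WHERE x.symbol = ?""", ("HBx",)).fetchall()
print(rows)
```

Each of the four search page layouts mentioned in the abstract would map to a parameterized query of this kind against the same link tables.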

  13. FINAL DIGITAL FLOOD INSURANCE RATE MAP DATABASE, LOGAN COUNTY, OK USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  14. FINAL DIGITAL FLOOD INSURANCE RATE MAP DATABASE, CLEVELAND COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  15. FINAL DIGITAL FLOOD INSURANCE RATE MAP DATABASE, SAN DIEGO COUNTY, CALIFORNIA (AND INCORPORATED AREAS)

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  16. Final DIGITAL FLOOD INSURANCE RATE MAP DATABASE, McLean County, ILLINOIS USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  17. FINAL DIGITAL FLOOD INSURANCE RATE MAP DATABASE, POTTAWATOMIE COUNTY, OK, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  18. FINAL DIGITAL FLOOD INSURANCE RATE MAP DATABASE, CLEVELAND COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  19. FINAL DIGITAL FLOOD INSURANCE RATE MAP DATABASE, TULSA COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  20. Construction and evaluation of yeast expression networks by database-guided predictions

    Directory of Open Access Journals (Sweden)

    Katharina Papsdorf

    2016-05-01

DNA microarrays are powerful tools for obtaining expression data on a genome-wide scale. We performed microarray experiments to elucidate the transcriptional networks which are up- or down-regulated in response to the expression of toxic polyglutamine proteins in yeast. Such experiments initially generate hit lists containing differentially expressed genes. To look into transcriptional responses, we constructed networks from these genes. We therefore developed an algorithm which is capable of dealing with very small numbers of microarrays by clustering the hits based on co-regulatory relationships obtained from the SPELL database. Here, we evaluate this algorithm according to several criteria and further develop its statistical capabilities. Initially, we define how the number of SPELL-derived co-regulated genes and the number of input hits influence the quality of the networks. We then show the ability of our networks to accurately predict further differentially expressed genes. Including these predicted genes in the networks improves network quality and allows quantifying the predictive strength of the networks based on a newly implemented scoring method. We find that this approach is useful for our own experimental data sets and also for many other data sets we tested from the SPELL microarray database. Furthermore, the clusters obtained by the described algorithm greatly improve the assignment of biological processes and transcription factors to the individual clusters. Thus, the described clustering approach, which will be available through the ClusterEx web interface, and the evaluation parameters derived from it represent valuable tools for the fast and informative analysis of yeast microarray data.

  1. Caucasus Seismic Information Network: Data and Analysis Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Randolph Martin; Mary Krasovec; Spring Romer; Timothy O'Connor; Emanuel G. Bombolakis; Youshun Sun; Nafi Toksoz

    2007-02-22

    The geology and tectonics of the Caucasus region (Armenia, Azerbaijan, and Georgia) are highly variable. Consequently, generating a structural model and characterizing seismic wave propagation in the region require data from local seismic networks. As of eight years ago, there was only one broadband digital station operating in the region – an IRIS station at Garni, Armenia – and few analog stations. The Caucasus Seismic Information Network (CauSIN) project is part of a multi-national effort to build a knowledge base of seismicity and tectonics in the region. During this project, three major tasks were completed: 1) collection of seismic data, both event catalogues and phase arrival-time picks; 2) development of a 3-D P-wave velocity model of the region obtained through crustal tomography; 3) advances in geological and tectonic models of the region. The first two tasks are interrelated. A large suite of historical and recent seismic data was collected for the Caucasus. These data were mainly analog prior to 2000; more recently, in Georgia and Azerbaijan, the data are digital. Based on the most reliable data from regional networks, a crustal model was developed using 3-D tomographic inversion. The results of the inversion are presented, and the supporting seismic data are reported. The third task was carried out on several fronts. Geologically, work was initiated toward an integrated geological map of the Caucasus at a scale of 1:500,000. The map for Georgia has been completed. This map serves as a guide for the final incorporation of the data from Armenia and Azerbaijan. Description of the geological units across borders has been worked out and formation boundaries across borders have been agreed upon. Currently, Armenia and Azerbaijan are working with scientists in Georgia to complete this task. The successful integration of the geologic data also required addressing and mapping active faults throughout the greater Caucasus. Each of the major

  2. A national internet-linked based database for pediatric interstitial lung diseases: the French network

    Directory of Open Access Journals (Sweden)

    Nathan Nadia

    2012-06-01

    Full Text Available Abstract Background Interstitial lung diseases (ILDs) in children represent a heterogeneous group of rare respiratory disorders that affect the lung parenchyma. After the launch of the French Reference Centre for Rare Lung Diseases (RespiRare®), we created a national network and a web-linked database to collect data on pediatric ILD. Methods Since 2008, the database has been set up in all RespiRare® centres. After the patients' parents' oral consent is obtained, physicians enter the data of children with ILD: identity, social data and environmental data; the specific aetiological diagnosis of the ILD if known, genetics, patient visits to the centre, and all medical examinations and tests done for the diagnosis and/or during follow-up. Each participating centre has free access to its own patients' data only, and cross-centre studies require mutual agreement. Physicians may use the system as a daily aid for patient care through a web-linked medical file backed on this database. Results Data was collected for 205 cases of ILD. The M/F sex ratio was 0.9. Median age at diagnosis was 1.5 years [0–16.9]. A specific aetiology was identified in 149 (72.7%) patients, while 56 (27.3%) cases remain undiagnosed. Surfactant deficiencies and alveolar proteinosis, haemosiderosis, and sarcoidosis represent almost half of the diagnoses. Median length of follow-up is 2.9 years [0–17.2]. Conclusions We introduce here the French network and the largest national database in pediatric ILDs. The diagnostic spectrum and the estimated incidence are consistent with other European databases. An important challenge will be to reduce the proportion of unclassified ILDs by a standardized diagnostic work-up. This database is a great opportunity to improve patient care and knowledge of disease pathogenesis. A European network including physicians and European foundations is now emerging with the initial aim of devising a simplified European database/register as a first step to

  3. EcoliNet: a database of cofunctional gene network for Escherichia coli.

    Science.gov (United States)

    Kim, Hanhae; Shim, Jung Eun; Shin, Junha; Lee, Insuk

    2015-01-01

    During the past several decades, Escherichia coli has been a treasure chest for molecular biology. The molecular mechanisms of many fundamental cellular processes have been discovered through research on this bacterium. Although much basic research now focuses on more complex model organisms, E. coli still remains important in metabolic engineering and synthetic biology. Despite its long history as a subject of molecular investigation, more than one-third of the E. coli genome has no pathway annotation supported by either experimental evidence or manual curation. Recently, a network-assisted genetics approach to the efficient identification of novel gene functions has increased in popularity. To accelerate the speed of pathway annotation for the remaining uncharacterized part of the E. coli genome, we have constructed a database of cofunctional gene network with near-complete genome coverage of the organism, dubbed EcoliNet. We find that EcoliNet is highly predictive for diverse bacterial phenotypes, including antibiotic response, indicating that it will be useful in prioritizing novel candidate genes for a wide spectrum of bacterial phenotypes. We have implemented a web server where biologists can easily run network algorithms over EcoliNet to predict novel genes involved in a pathway or novel functions for a gene. All integrated cofunctional associations can be downloaded, enabling orthology-based reconstruction of gene networks for other bacterial species as well. Database URL: http://www.inetbio.org/ecolinet.
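Network-assisted function prediction of the kind EcoliNet supports is often implemented as a weighted guilt-by-association vote over a gene's network neighbours. The sketch below illustrates the idea in Python on invented data; the gene names, link weights, and pathway labels are hypothetical, and EcoliNet's own network algorithms are more elaborate than this.

```python
from collections import defaultdict

# Hypothetical co-functional links (gene pairs with association weights)
# and known pathway annotations for a few genes.
links = [("geneA", "geneB", 0.9), ("geneB", "geneC", 0.8),
         ("geneA", "geneD", 0.4), ("geneD", "geneE", 0.7)]
annotations = {"geneA": "DNA repair", "geneC": "DNA repair",
               "geneE": "motility"}

def predict_function(gene, links, annotations):
    """Guilt-by-association: sum link weights from `gene` to annotated
    neighbours and return the best-supported pathway label (None if the
    gene has no annotated neighbours)."""
    votes = defaultdict(float)
    for a, b, w in links:
        if a == gene and b in annotations:
            votes[annotations[b]] += w
        elif b == gene and a in annotations:
            votes[annotations[a]] += w
    return max(votes, key=votes.get) if votes else None

label = predict_function("geneB", links, annotations)
# geneB's annotated neighbours (geneA, geneC) both vote for "DNA repair".
```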

  4. The European Network of Health Economic Evaluation Databases (EURO NHEED) Project.

    Science.gov (United States)

    Nixon, John; Ulmann, Philippe; Glanville, Julie; Boulenger, Stéphanie; Drummond, Michael; de Pouvourville, Gérard

    2004-06-01

    This paper provides a first outline of the European Network of Health Economic Evaluation Databases (EURO NHEED) project. The project is funded by the European Commission and will implement, in 7 European centres based in France, Germany, Italy, The Netherlands, Spain, Sweden and the United Kingdom, databases on the economic evaluation of healthcare interventions. The network will be based on two existing and well-established resources, namely the UK's NHS Economic Evaluation Database (NHS EED), and France's Connaissances et Décision en EConomie de la Santé (CODECS) database. EURO NHEED will initially cover 17 European countries and will provide its users with bibliographic records, detailing the main characteristics of all included studies. In addition, structured abstracts will be provided for articles identified as full economic evaluations (cost-benefit, cost-effectiveness or cost-utility), which will offer a detailed critique of the findings and the methodology used. These databases will be accessible free of charge on the Internet. The EURO NHEED project is the first attempt to develop such a resource on a multi-national basis. The project will bring together Health Economists and Information Scientists from the European Union and beyond and is anticipated to facilitate a number of benefits and advances in the field of Health Economics. These include harmonisation and increased understanding of the theory and methodology of economic evaluation in healthcare, the interpretation of the generalisability of studies to target settings, and the influence of healthcare system variations among the European countries. The project will therefore advance the state of the art in collecting, summarising, critiquing and disseminating economic evaluations of healthcare conducted within Europe.

  5. A Quality-Control-Oriented Database for a Mesoscale Meteorological Observation Network

    Science.gov (United States)

    Lussana, C.; Ranci, M.; Uboldi, F.

    2012-04-01

    In the operational context of a local weather service, data accessibility and quality-related issues must be managed by taking into account a wide set of user needs. This work describes the structure and the operational choices made for the implementation of a database system storing data from highly automated observing stations, metadata and information on data quality. Lombardy's environmental protection agency, ARPA Lombardia, manages a highly automated mesoscale meteorological network. A Quality Assurance System (QAS) ensures that reliable observational information is collected and disseminated to the users. The weather unit in ARPA Lombardia, at the same time an important QAS component and an intensive data user, has developed a database specifically aimed at: 1) providing quick access to data for operational activities and 2) ensuring data quality for real-time applications, by means of an Automatic Data Quality Control (ADQC) procedure. Quantities stored in the archive include hourly aggregated observations of: precipitation amount, temperature, wind, relative humidity, pressure, global and net solar radiation. The ADQC performs several independent tests on raw data and compares their results in a decision-making procedure. An important ADQC component is the Spatial Consistency Test based on Optimal Interpolation. Interpolated and Cross-Validation analysis values are also stored in the database, providing further information to human operators and useful estimates in case of missing data. The technical solution adopted is based on a LAMP (Linux, Apache, MySQL and Php) system, constituting an open source environment suitable for both development and operational practice. The ADQC procedure itself is performed by R scripts directly interacting with the MySQL database. Users and network managers can access the database by using a set of web-based Php applications.
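The spirit of a spatial consistency test can be shown with a minimal sketch: estimate each station's value from its neighbours by leave-one-out cross-validation and flag large deviations. This is a simplified stand-in (inverse-distance weighting instead of Optimal Interpolation), with invented station coordinates, values, and tolerance, not ARPA Lombardia's ADQC code. Note that a single gross outlier also biases its neighbours' estimates, which is one reason the real procedure combines several independent tests.

```python
import math

# Hypothetical station records: (x_km, y_km, temperature_C).
# The central station carries a suspect value far from its neighbours.
stations = [
    (0.0, 0.0, 20.1),
    (5.0, 0.0, 19.8),
    (0.0, 5.0, 20.4),
    (5.0, 5.0, 20.0),
    (2.5, 2.5, 35.0),
]

def idw_estimate(target, others, power=2.0):
    """Inverse-distance-weighted estimate at `target` from the other
    stations (stations are assumed to have distinct coordinates)."""
    num = den = 0.0
    tx, ty, _ = target
    for x, y, v in others:
        w = 1.0 / math.hypot(x - tx, y - ty) ** power
        num += w * v
        den += w
    return num / den

def spatial_consistency_flags(records, tolerance):
    """Flag a station when its observed value deviates from the
    leave-one-out cross-validation estimate by more than `tolerance`."""
    flags = []
    for i, rec in enumerate(records):
        cv = idw_estimate(rec, records[:i] + records[i + 1:])
        flags.append(abs(rec[2] - cv) > tolerance)
    return flags

flags = spatial_consistency_flags(stations, tolerance=10.0)
# Only the suspect central station exceeds the tolerance.
```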

  6. Final report for the network authentication investigation and pilot.

    Energy Technology Data Exchange (ETDEWEB)

    Eldridge, John M.; Dautenhahn, Nathan; Miller, Marc M.; Wiener, Dallas J; Witzke, Edward L.

    2006-11-01

    New network based authentication mechanisms are beginning to be implemented in industry. This project investigated different authentication technologies to see if and how Sandia might benefit from them. It also investigated how these mechanisms can integrate with the Sandia Two-Factor Authentication Project. The results of these investigations and a network authentication path forward strategy are documented in this report.

  7. Enhanced diagnostic accuracy for quantitative bone scan using an artificial neural network system: a Japanese multi-center database project

    National Research Council Canada - National Science Library

    Nakajima, Kenichi; Nakajima, Yasuo; Horikoshi, Hiroyuki; Ueno, Munehisa; Wakabayashi, Hiroshi; Shiga, Tohru; Yoshimura, Mana; Ohtake, Eiji; Sugawara, Yoshifumi; Matsuyama, Hideyasu; Edenbrandt, Lars

    2013-01-01

    Artificial neural network (ANN)-based bone scan index (BSI), a marker of the amount of bone metastasis, has been shown to enhance diagnostic accuracy and reproducibility but is potentially affected by training database...

  8. LmSmdB: an integrated database for metabolic and gene regulatory network in Leishmania major and Schistosoma mansoni.

    Science.gov (United States)

    Patel, Priyanka; Mandlik, Vineetha; Singh, Shailza

    2016-03-01

    A database that integrates all the information required for biological processing in one platform is essential. We have attempted to create one such integrated database that can serve as a one-stop shop for the essential features required to fetch valuable results. LmSmdB (L. major and S. mansoni database) is an integrated database that accounts for the biological networks and regulatory pathways computationally determined by integrating knowledge of the genome sequences of the two organisms. It is the first database of its kind to combine network design with simulation of the resulting products. The database intends to provide a comprehensive canopy for the regulation of lipid metabolism reactions in the parasites by integrating the transcription factors, regulatory genes and the protein products controlled by the transcription factors, thereby describing how the metabolism operates at the genetic level.

  9. CLASCN: Candidate Network Selection for Efficient Top-k Keyword Queries over Databases

    Institute of Scientific and Technical Information of China (English)

    Jun Zhang; Zhao-Hui Peng; Shan Wang; Hui-Jing Nie

    2007-01-01

    Keyword Search Over Relational Databases (KSORD) enables casual or Web users to access databases easily through free-form keyword queries. Improving the performance of KSORD systems is a critical issue in this area. In this paper, a new approach, CLASCN (Classification, Learning And Selection of Candidate Network), is developed to efficiently perform top-k keyword queries in schema-graph-based online KSORD systems. In this approach, the Candidate Networks (CNs) from trained keyword queries or executed user queries are classified and stored in the databases, and the top-k results from the CNs are learned for constructing CN Language Models (CNLMs). The CNLMs are used to compute similarity scores between a new user query and the CNs from the query. The CNs with relatively large similarity scores, which are the most promising ones to produce top-k results, are selected and executed. Currently, CLASCN is only applicable to past queries and New All-keyword-Used (NAU) queries, i.e., frequently submitted queries. Extensive experiments also show the efficiency and effectiveness of the CLASCN approach.
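The CN selection step can be illustrated with a minimal sketch: each CN Language Model is reduced to a unigram term-count vector, and a new query is matched to CNs by cosine similarity. The CN names and counts below are invented, and CLASCN's actual CNLMs and scoring are richer than plain cosine over raw counts.

```python
from collections import Counter
import math

# Hypothetical CN Language Models: unigram keyword counts learned from
# the top-k results previously produced by each Candidate Network.
cn_models = {
    "CN_author_paper": Counter({"author": 8, "paper": 6, "title": 3}),
    "CN_paper_conf":   Counter({"paper": 7, "conference": 5, "year": 4}),
}

def cosine_similarity(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_candidate_networks(query_terms, models, top_n=1):
    """Score each CN against the query; keep the most promising ones."""
    q = Counter(query_terms)
    scored = sorted(models.items(),
                    key=lambda kv: cosine_similarity(q, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_n]]

best = rank_candidate_networks(["author", "title"], cn_models)
# The author/title query matches the author-paper CN, not the
# paper-conference CN, which shares no terms with it.
```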

  10. Active system area networks for data intensive computations. Final report

    Energy Technology Data Exchange (ETDEWEB)

    None

    2002-04-01

    The goal of the Active System Area Networks (ASAN) project is to develop hardware and software technologies for the implementation of active system area networks (ASANs). The term 'active' refers to the ability of the network interfaces to perform application-specific as well as system-level computations in addition to their traditional role of data transfer. This project adopts the view that the network infrastructure should be an active computational entity capable of supporting certain classes of computations that would otherwise be performed on the host CPUs. The result is a unique network-wide programming model where computations are dynamically placed within the host CPUs or the NIs depending upon the quality-of-service demands and network/CPU resource availability. The project seeks to demonstrate that such an approach is a better match for data-intensive network-based applications and that the advent of low-cost powerful embedded processors and configurable hardware makes such an approach economically viable and desirable.

  11. Database use and technology in Japan: JTEC panel report. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Wiederhold, G.; Beech, D.; Bourne, C.; Farmer, N.; Jajodia, Sushil; Kahaner, D.; Minoura, Toshi; Smith, D.; Smith, J.M.

    1992-04-01

    This report presents the findings of a group of database experts, sponsored by the Japanese Technology Evaluation Center (JTEC), based on an intensive study trip to Japan during March 1991. Academic, industrial, and governmental sites were visited. The primary findings are that Japan is supporting its academic research establishment poorly, that industry is making progress in key areas, and that both academic and industrial researchers are well aware of current domestic and foreign technology. Information sharing between industry and academia is effectively supported by governmental sponsorship of joint planning and review activities, and enhances technology transfer. In two key areas, multimedia and object-oriented databases, the authors expect to see future export of Japanese database products, typically integrated into larger systems. Support for academic research is relatively modest. Nevertheless, the senior faculty are well-known and respected, and communicate frequently and in depth with each other, with government agencies, and with industry. In 1988 there were a total of 1,717 Ph.D.s in engineering and 881 in science. It appears that only about 30 of these were academic Ph.D.s in the basic computer sciences.

  12. National information network and database system of hazardous waste management in China

    Energy Technology Data Exchange (ETDEWEB)

    Ma Hongchang [National Environmental Protection Agency, Beijing (China)]

    1996-12-31

    Industries in China generate large volumes of hazardous waste, which makes it essential for the nation to pay more attention to hazardous waste management. National laws and regulations, waste surveys, and manifest tracking and permission systems have been initiated. Some centralized hazardous waste disposal facilities are under construction. China's National Environmental Protection Agency (NEPA) has also obtained valuable information on hazardous waste management from developed countries. To effectively share this information with local environmental protection bureaus, NEPA developed a national information network and database system for hazardous waste management. This information network will have such functions as information collection, inquiry, and connection. The long-term objective is to establish and develop a national and local hazardous waste management information network. This network will significantly help decision makers and researchers because it will be easy to obtain information (e.g., experiences of developed countries in hazardous waste management) to enhance hazardous waste management in China. The information network consists of five parts: technology consulting, import-export management, regulation inquiry, waste survey, and literature inquiry.

  13. A shortest path algorithm for moving objects in spatial network databases

    Institute of Scientific and Technical Information of China (English)

    Xiaolan Yin; Zhiming Ding; Jing Li

    2008-01-01

    One of the most important kinds of queries in Spatial Network Databases (SNDB) to support location-based services (LBS) is the shortest path query. Given an object in a network, e.g., the location of a car on a road network, and a set of objects of interest, e.g., hotels, gas stations, or cars, the shortest path query returns the shortest path from the query object to the objects of interest. Shortest path queries have been studied in two ways: online processing and preprocessing. Preprocessing studies assume that the objects of interest are static. This paper proposes a shortest path algorithm with a set of index structures to support the situation of moving objects, transforming a dynamic problem into a static one. The paper focuses on road networks; however, the algorithms do not use any domain-specific information and can therefore be applied to any network. The algorithm's complexity is O(k·log²i), while traditional Dijkstra's complexity is O((i + k)²).
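For reference, the static baseline the abstract compares against is classic Dijkstra. Below is a minimal heap-based Python version over a toy network; the node names and edge weights are invented, and the paper's index-based moving-object algorithm is not reproduced here.

```python
import heapq

def dijkstra(graph, source):
    """Classic Dijkstra over an adjacency dict
    {node: [(neighbour, weight), ...]}; returns shortest distances
    from `source` to every reachable node."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical toy road network (edge weights in km).
road = {
    "car":      [("junction", 2.0)],
    "junction": [("hotel", 3.0), ("gas", 1.0)],
    "gas":      [("hotel", 1.5)],
}
dist = dijkstra(road, "car")
# The detour via the gas station (2 + 1 + 1.5 = 4.5 km) beats the
# direct junction-to-hotel edge (2 + 3 = 5 km).
```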

  14. DATABASE SECURITY IN WIRELESS SENSOR NETWORK THROUGH PGP AND ID3*

    Directory of Open Access Journals (Sweden)

    Mandeep Kaur

    2012-06-01

    Full Text Available In the new era of this century, data is sent and received in electronic form, i.e., e-mail, so the risk of data theft and loss has increased, and data security is a core requirement of any organization. This paper proposes an idea of using a wireless sensor network for the security of a database. For this purpose two algorithms are used, PGP (Pretty Good Privacy) and ID3 (Iterative Dichotomiser 3), which help with both the security and the speed of data handling.

  15. Experience and Lessons learnt from running high availability databases on Network Attached Storage

    CERN Document Server

    Guijarro, Juan Manuel; Segura Chinchilla, Nilo

    2008-01-01

    The Database and Engineering Services Group of CERN's Information Technology Department provides the Oracle-based Central Database services used in many activities at CERN. In order to provide High Availability and ease of management for those services, a NAS (Network Attached Storage)-based infrastructure has been set up. It runs several instances of Oracle RAC (Real Application Cluster) using NFS as shared disk space for RAC purposes and data hosting. It is composed of two private LANs to provide access to the NAS file servers and the Oracle RAC interconnect, both using network bonding. NAS nodes are configured in partnership to prevent single points of failure and to provide automatic NAS fail-over. This presentation describes that infrastructure and gives some advice on how to automate its management and setup using a Fabric Management framework such as Quattor. It also covers aspects related to NAS performance and monitoring as well as data backup and archive of such a facility using already existing i...

  16. Dynamic Real Time Distributed Sensor Network Based Database Management System Using XML, JAVA and PHP Technologies

    Directory of Open Access Journals (Sweden)

    D. Sudharsan

    2012-03-01

    Full Text Available Wireless Sensor Network (WSN) is well known for distributed real-time systems for various applications. In order to handle the increasing functionality and complexity of high-resolution spatio-temporal sensory databases, there is a strong need for a system/tool to analyse real-time data associated with distributed sensor network systems. There are a few packages/systems available to maintain near real-time database management, but they are expensive and require expertise. Hence, there is a need for a cost-effective and easy-to-use dynamic real-time data repository system to provide real-time data (raw as well as in usable units) in a structured format. In the present study, a distributed sensor network system, with Agrisens (AS) and FieldServer (FS) as well as an FS-based Flux Tower and FieldTwitter, is used, which consists of a network of sensors and field images to observe/collect real-time weather, crop and environmental parameters for precision agriculture. The real-time FieldServer-based spatio-temporal high-resolution dynamic sensory data was converted into a Dynamic Real-Time Database Management System (DRTDBMS) in a structured format for both raw and converted (with usable units) data. A web interface has been developed to access the DRTDBMS and an exclusive domain has been created with the help of open/free Information and Communication Technology (ICT) tools in Extensible Markup Language (XML) using PHP (Hypertext Preprocessor) algorithms and with eXtensible HyperText Markup Language (XHTML) self-scripting. The proposed DRTDBMS prototype, called GeoSense DRTDBMS, which is a part of the ongoing Indo-Japan initiative ‘ICT and Sensor Network based Decision Support Systems in Agriculture and Environment Assessment’, will be integrated with the GeoSense cloud server to provide database services (dynamic real-time weather/soil/crop and environmental parameters) and modeling services (crop water requirement and simulated rice yield modeling). GeoSense-cloud server

  17. Implementation of the BDFGEOTHERM Database (Geothermal Fluids in Switzerland) on Google Earth - Final report

    Energy Technology Data Exchange (ETDEWEB)

    Sonney, R.; Vuataz, F.-D.; Cattin, S.

    2008-12-15

    The BDFGeotherm database, compiled in 2007 in ACCESS, was modified to improve its availability and attractiveness by using the free Google Earth software and the CREGE web site. This database gathers existing geothermal data, generally widely dispersed and often difficult to reach, into a user-friendly tool. Downloading the file 'BDFGeotherm.kmz' from the CREGE web site makes it possible to visualize a total of 84 geothermal sites from Switzerland and neighbouring areas. Each one is represented with a pinpoint of different colour for different temperature ranges. A large majority of sites is located in the northern part of the Jura Mountains and in the upper Rhone Valley. General information about water use, geology, flow rate, temperature and mineralization is given in a small window by clicking on the desired pinpoint. Moreover, two links to an Internet address are available for each site in each window, allowing a return to the CREGE web site and providing more details on each sampling point, such as: geographical description, reservoir geology, hydraulics, hydrochemistry, isotopes and geothermal parameters. For a limited number of sites, photos and a geological log can be viewed and exported. (author)
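Publishing database sites as Google Earth pinpoints amounts to generating KML placemarks inside a .kmz archive. The sketch below builds a minimal single-placemark KML document with Python's standard library; the site name, coordinates, and description are illustrative values, not taken from BDFGeotherm.

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def make_placemark_kml(name, lon, lat, description):
    """Build a minimal KML document with one placemark, as Google Earth
    expects inside a .kmz archive (a sketch, not the BDFGeotherm exporter)."""
    ET.register_namespace("", KML_NS)  # serialize KML as the default namespace
    kml = ET.Element(f"{{{KML_NS}}}kml")
    doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
    pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
    ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
    ET.SubElement(pm, f"{{{KML_NS}}}description").text = description
    point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
    # KML coordinates are "longitude,latitude[,altitude]" in that order.
    ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat}"
    return ET.tostring(kml, encoding="unicode")

xml_text = make_placemark_kml("Example thermal spring", 7.017, 46.183,
                              "Hypothetical geothermal site")
```

Zipping the resulting `.kml` file produces the `.kmz` that Google Earth opens directly.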

  18. A central database for the Global Terrestrial Network for Permafrost (GTN-P)

    Science.gov (United States)

    Elger, Kirsten; Lanckman, Jean-Pierre; Lantuit, Hugues; Karlsson, Ævar Karl; Johannsson, Halldór

    2013-04-01

    The Global Terrestrial Network for Permafrost (GTN-P) is the primary international observing network for permafrost, sponsored by the Global Climate Observing System (GCOS) and the Global Terrestrial Observing System (GTOS), and managed by the International Permafrost Association (IPA). It monitors the Essential Climate Variable (ECV) permafrost, which consists of permafrost temperature and active-layer thickness, with the long-term goal of obtaining a comprehensive view of the spatial structure, trends, and variability of changes in the active layer and permafrost. The network's two international monitoring components are (1) CALM (Circumpolar Active Layer Monitoring) and (2) the Thermal State of Permafrost (TSP), which is made up of an extensive borehole network covering all permafrost regions. Both programs were thoroughly overhauled during the International Polar Year 2007-2008 and extended their coverage to provide a true circumpolar network stretching over both hemispheres. GTN-P has gained considerable visibility in the science community by providing the baseline against which models are globally validated and incorporated in climate assessments. Yet it has until now been operated on a voluntary basis, and is being redesigned to meet the increasing expectations of the science community. To update the network's objectives and deliver the best possible products to the community, the IPA organized a workshop to define users' needs and requirements for the production, archival, storage and dissemination of the permafrost data products it manages. From the beginning, GTN-P data was "outfitted" with an open data policy with free data access via the World Wide Web. The existing data, however, is far from homogeneous: it is not yet optimized for databases, there is no framework for data reporting or archival, and data documentation is incomplete. 
As a result, and despite the utmost relevance of permafrost in the Earth's climate system, the data has not been

  19. Research on Performance Evaluation of Biological Database based on Layered Queuing Network Model under the Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Zhengbin Luo

    2013-06-01

    Full Text Available Evaluating the performance of a biological database based on a layered queuing network model under a cloud computing environment is a prerequisite for, and an important step in, biological database optimization. Building on previous research on evaluating computer software and hardware performance in cloud environments, this study constructs a model system to evaluate the performance of a biological database based on a layered queuing network model under a cloud environment; the traditional layered queuing network model is also optimized and upgraded in the process. After constructing the performance evaluation system, the study applies laboratory experiments to test the validity of the constructed performance model. The test results show that the model is effective in evaluating the performance of a biological system under a cloud environment, and the predicted results are quite close to the tested results, demonstrating the validity of the model in evaluating the performance of a biological database.

  20. SoyNet: a database of co-functional networks for soybean Glycine max.

    Science.gov (United States)

    Kim, Eiru; Hwang, Sohyun; Lee, Insuk

    2017-01-04

    Soybean (Glycine max) is a legume crop with substantial economic value, providing a source of oil and protein for humans and livestock. More than 50% of edible oils consumed globally are derived from this crop. Soybean plants are also important for soil fertility, as they fix atmospheric nitrogen by symbiosis with microorganisms. The latest soybean genome annotation (version 2.0) lists 56 044 coding genes, yet their functional contributions to crop traits remain mostly unknown. Co-functional networks have proven useful for identifying genes that are involved in a particular pathway or phenotype with various network algorithms. Here, we present SoyNet (available at www.inetbio.org/soynet), a database of co-functional networks for G. max and a companion web server for network-based functional predictions. SoyNet maps 1 940 284 co-functional links between 40 812 soybean genes (72.8% of the coding genome), which were inferred from 21 distinct types of genomics data including 734 microarrays and 290 RNA-seq samples from soybean. SoyNet provides a new route to functional investigation of the soybean genome, elucidating genes and pathways of agricultural importance. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  1. Ranking transitive chemical-disease inferences using local network topology in the comparative toxicogenomics database.

    Directory of Open Access Journals (Sweden)

    Benjamin L King

    Full Text Available Exposure to chemicals in the environment is believed to play a critical role in the etiology of many human diseases. To enhance understanding about environmental effects on human health, the Comparative Toxicogenomics Database (CTD; http://ctdbase.org provides unique curated data that enable development of novel hypotheses about the relationships between chemicals and diseases. CTD biocurators read the literature and curate direct relationships between chemicals-genes, genes-diseases, and chemicals-diseases. These direct relationships are then computationally integrated to create additional inferred relationships; for example, a direct chemical-gene statement can be combined with a direct gene-disease statement to generate a chemical-disease inference (inferred via the shared gene. In CTD, the number of inferences has increased exponentially as the number of direct chemical, gene and disease interactions has grown. To help users navigate and prioritize these inferences for hypothesis development, we implemented a statistic to score and rank them based on the topology of the local network consisting of the chemical, disease and each of the genes used to make an inference. In this network, chemicals, diseases and genes are nodes connected by edges representing the curated interactions. Like other biological networks, node connectivity is an important consideration when evaluating the CTD network, as the connectivity of nodes follows the power-law distribution. Topological methods reduce the influence of highly connected nodes that are present in biological networks. We evaluated published methods that used local network topology to determine the reliability of protein-protein interactions derived from high-throughput assays. We developed a new metric that combines and weights two of these methods and uniquely takes into account the number of common neighbors and the connectivity of each entity involved. We present several CTD inferences as case
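A minimal version of a topology-aware inference score can be sketched as a Jaccard-style overlap between the chemical's and the disease's curated gene neighbours: shared neighbours raise the score, while high connectivity of either node damps it. The snippet below is illustrative only; the chemical, disease, and gene sets are invented, and CTD's published statistic weights common neighbours and connectivity differently.

```python
# Hypothetical curated interactions: which genes each chemical or
# disease is directly linked to in the literature-derived network.
chemical_genes = {"chemX": {"TP53", "MYC", "CASP3"}}
disease_genes = {"diseaseY": {"TP53", "MYC", "CTNNB1", "AXIN1"}}

def inference_score(chem, disease, cg, dg):
    """Score a transitive chemical-disease inference by its shared gene
    neighbours, normalized by the combined neighbourhood size (a
    Jaccard-style statistic, not CTD's published metric)."""
    shared = cg[chem] & dg[disease]
    union = cg[chem] | dg[disease]
    return len(shared) / len(union) if union else 0.0

score = inference_score("chemX", "diseaseY", chemical_genes, disease_genes)
# Two shared genes (TP53, MYC) out of five distinct neighbours: 2/5.
```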

  2. Final Technical Report for Terabit-scale hybrid networking project.

    Energy Technology Data Exchange (ETDEWEB)

    Veeraraghavan, Malathi [Univ. of Virginia, Charlottesville, VA (United States)

    2015-12-12

    This report describes our accomplishments and activities for the project titled Terabit-Scale Hybrid Networking. The key accomplishment is that we developed, tested and deployed an Alpha Flow Characterization System (AFCS) in ESnet, where it has been running in production mode since September 2015. In addition, a new QoS class was added to ESnet5 to support alpha flows.

  3. Final Report. Analysis and Reduction of Complex Networks Under Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef M. [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Coles, T. [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Spantini, A. [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Tosatto, L. [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2013-09-30

    The project was a collaborative effort among MIT, Sandia National Laboratories (local PI Dr. Habib Najm), the University of Southern California (local PI Prof. Roger Ghanem), and The Johns Hopkins University (local PI Prof. Omar Knio, now at Duke University). Our focus was the analysis and reduction of large-scale dynamical systems emerging from networks of interacting components. Such networks underlie myriad natural and engineered systems. Examples important to DOE include chemical models of energy conversion processes and elements of national infrastructure, e.g., electric power grids. Time scales in chemical systems span orders of magnitude, while infrastructure networks feature both local and long-distance connectivity, with associated clusters of time scales. These systems also blend continuous and discrete behavior; examples include saturation phenomena in surface chemistry and catalysis, and switching in electrical networks. Reducing size and stiffness is essential to tractable and predictive simulation of these systems. Computational singular perturbation (CSP) has been effectively used to identify and decouple dynamics at disparate time scales in chemical systems, allowing reduction of model complexity and stiffness. In realistic settings, however, model reduction must contend with uncertainties, which are often greatest in the large-scale systems most in need of reduction. Uncertainty is not limited to parameters; one must also address structural uncertainties, e.g., whether a link is present in a network, and the impact of random perturbations, e.g., fluctuating loads or sources. Research under this project developed new methods for the analysis and reduction of complex multiscale networks under uncertainty, by combining computational singular perturbation (CSP) with probabilistic uncertainty quantification. CSP yields asymptotic approximations of reduced-dimensionality “slow manifolds” on which a multiscale dynamical system evolves.

  4. Kaliningrad regional district heating network 2004-2006. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-06-15

    This report concerns the Kaliningrad Regional District Heating Network project, which was implemented from 2004 to 2006. The task of the project was to establish and operate an association for district heating companies in the region in order to transfer and distribute district heating know-how to the sector and to strengthen the sector through the association's activities. The long-term aim was to contribute to the establishment of an association that would continue as a genuine association for the heat supply companies in the region. (au)

  5. Integration of glacier databases within the Global Terrestrial Network for Glaciers (GTN-G)

    Science.gov (United States)

    Zemp, M.; Raup, B. H.; Armstrong, R.; Ballagh, L.; Gärtner-Roer, I.; Haeberli, W.; Hoelzle, M.; Kääb, A.; Kargel, J.; Paul, F.

    2009-04-01

    Changes in glaciers and ice caps provide some of the clearest evidence of climate change and have impacts on global sea level fluctuations, regional hydrological cycles and local natural hazard situations. Internationally coordinated collection and distribution of standardized information about glaciers and ice caps was initiated in 1894 and is today coordinated within the Global Terrestrial Network for Glaciers (GTN-G). A recently established GTN-G Steering Committee coordinates, supports and advises the operational bodies responsible for international glacier monitoring: the World Glacier Monitoring Service (WGMS), the US National Snow and Ice Data Center (NSIDC) and the Global Land Ice Measurements from Space (GLIMS) initiative. In this presentation, we provide an overview of (i) the integration of the various operational databases, (ii) the development of a one-stop web interface to these databases, and (iii) the available datasets. Through joint efforts, the consistency and interoperability of the different glacier databases are being established. The lack of a complete, detailed worldwide glacier inventory, together with the different historical developments and methodological contexts of the datasets, remains a major challenge for linking individual glaciers across the databases. A map-based web interface, built on OpenLayers 2.0 and Web Map/Feature Services, spatially links the available data and provides data users a fast overview of all available data. With this new online service, GTN-G provides fast access to glacier inventory data from 100,000 glaciers based mainly on aerial photographs and from 80,000 glaciers based mainly on satellite images, length change series from 1,800 glaciers, mass balance series from 230 glaciers, special events (e.g., hazards, surges, calving instabilities) from 130 glaciers, as well as 10,000 photographs of some 470 glaciers.

  6. The diffusion of health economics knowledge in Europe : The EURONHEED (European Network of Health Economics Evaluation Database) project.

    Science.gov (United States)

    de Pouvourville, Gérard; Ulmann, Philippe; Nixon, John; Boulenger, Stéphanie; Glanville, Julie; Drummond, Michael

    2005-01-01

    This paper provides an overview of the EURONHEED (EUROpean Network of Health Economics Evaluation Databases) project. Launched in 2003, this project is funded by the EU. Its aim is to create a network of national and international databases dedicated to the health economic evaluation of health services and innovations. Seven centres (France, Germany, Italy, The Netherlands, Spain, Sweden and the UK) are involved, covering 17 countries. The network is based on two existing databases: the French CODECS (COnnaissance et Decision en EConomie de la Sante) database, created in 2000 by the French Health Economists Association (College des Economistes de la Sante), and the UK NHS EED (NHS Economic Evaluation Database), run by the Centre for Reviews and Dissemination, University of York, York, England. The network will provide bibliographic records of published full health economic evaluation studies (cost-benefit, cost-utility and cost-effectiveness studies) as well as cost studies, methodological articles and review papers. Moreover, a structured abstract of each full evaluation study will be provided to users, allowing them access to a detailed description of each study and to a commentary stressing its implications and limits for decision making. Access will be free of charge. The database features and ease of access (via the internet: http://www.euronheed.org) should facilitate the diffusion of existing economic evidence on health services and the generalisation of common standards in the field at the European level, thereby improving the quality, generalisability and transferability of results across countries.

  7. Effect of database drift on network topology and enrichment analyses: a case study for RegulonDB.

    Science.gov (United States)

    Beber, Moritz E; Muskhelishvili, Georgi; Hütt, Marc-Thorsten

    2016-01-01

    RegulonDB is a database storing the biological information behind the transcriptional regulatory network (TRN) of the bacterium Escherichia coli. It is one of the key bioinformatics resources for Systems Biology investigations of bacterial gene regulation. Like most biological databases, its content drifts with time, both due to the accumulation of new information and due to refinements in the underlying biological concepts. Conclusions based on previous database versions may no longer hold. Here, we study the change of some topological properties of the TRN of E. coli, as provided by RegulonDB across 16 versions, as well as a simple index, the digital control strength, quantifying the match between gene expression profiles and the transcriptional regulatory networks. While many network characteristics change dramatically across the different versions, the digital control strength remains rather robust and in tune with previous results for this index. Our study shows that: (i) results derived from network topology should, when possible, be studied across a range of database versions before detailed biological conclusions are drawn, and (ii) resorting to simple indices when interpreting high-throughput data from a network perspective may help achieve robustness of the findings against variation of the underlying biological information. Database URL: www.regulondb.ccg.unam.mx. © The Author(s) 2016. Published by Oxford University Press.
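    The kind of version-to-version topology comparison the study performs can be sketched with a toy example. The edge lists below are invented for illustration (the regulator names are real E. coli genes, but the interactions are not RegulonDB content), and the indices are deliberately simple:

```python
def topology_summary(edges):
    """Basic topological indices of a directed regulatory network:
    node count, edge count, and maximum out-degree (a rough proxy for
    the influence of global regulators)."""
    nodes = {n for e in edges for n in e}
    out_deg = {}
    for src, _dst in edges:
        out_deg[src] = out_deg.get(src, 0) + 1
    return {
        "nodes": len(nodes),
        "edges": len(edges),
        "max_out_degree": max(out_deg.values(), default=0),
    }

# Two illustrative "versions" of the same network: v2 has accumulated
# new interactions and lost one regulator to curation refinements.
v1 = [("crp", "lacZ"), ("crp", "malT"), ("fnr", "narG")]
v2 = [("crp", "lacZ"), ("crp", "malT"), ("crp", "araC"), ("arcA", "sdhC")]

# Drift between versions, index by index.
drift = {k: topology_summary(v2)[k] - topology_summary(v1)[k]
         for k in ("nodes", "edges", "max_out_degree")}
```

    Even this toy drift changes every index, which is the study's point: conclusions tied to topology should be re-checked across database versions.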

  8. Firewall Architectures for High-Speed Networks: Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Errin W. Fulp

    2007-08-20

    Firewalls are a key component for securing networks that are vital to government agencies and private industry. They enforce a security policy by inspecting and filtering traffic arriving at or departing from a secure network. While performing these critical security operations, firewalls must appear transparent to legitimate users, with little or no effect on perceived network performance (QoS). Packets must be inspected and compared against increasingly complex rule sets and tables, which is a time-consuming process. As a result, current firewall systems can introduce significant delays and are unable to maintain QoS guarantees. Furthermore, firewalls are susceptible to Denial of Service (DoS) attacks that merely overload/saturate the firewall with illegitimate traffic. Current firewall technology offers only a short-term solution that is not scalable; therefore, the objective of this DOE project was to develop new firewall optimization techniques and architectures that meet these important challenges. Firewall optimization concerns decreasing the number of comparisons required per packet, which reduces processing time and delay. This is done by reorganizing policy rules via special sorting techniques that maintain the original policy integrity. This research is important since it applies to current and future firewall systems. Another method for increasing firewall performance is new firewall designs. The architectures under investigation consist of multiple firewalls that collectively enforce a security policy. Our innovative distributed systems quickly divide traffic across different levels based on perceived threat, allowing traffic to be processed in parallel (beyond current firewall-sandwich technology). Traffic deemed safe is transmitted to the secure network, while remaining traffic is forwarded to lower levels for further examination. The result of this divide-and-conquer strategy is lower delays for legitimate traffic and higher throughput.
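    The rule-reordering idea can be illustrated with a toy model: adjacent rules may swap only when no packet matches both, so the enforced policy never changes, while frequently hit rules bubble toward the front to cut average comparisons per packet. This is a minimal sketch under invented rule fields and hit counts, not the optimization technique developed in the project:

```python
def can_swap(r1, r2):
    """Adjacent rules may swap only if no packet matches both; otherwise
    reordering would change which rule fires first and break the policy."""
    return not (r1["match"] & r2["match"])

def reorder(rules):
    """Bubble frequently-hit rules toward the front, swapping only
    non-overlapping neighbors, so the enforced policy is unchanged."""
    rules = list(rules)
    changed = True
    while changed:
        changed = False
        for i in range(len(rules) - 1):
            a, b = rules[i], rules[i + 1]
            if b["hits"] > a["hits"] and can_swap(a, b):
                rules[i], rules[i + 1] = b, a
                changed = True
    return rules

# Toy policy: rules match on destination-port sets.
rules = [
    {"name": "dns", "match": {53}, "hits": 10},
    {"name": "web", "match": {80}, "hits": 90},
    {"name": "any", "match": {53, 80, 22}, "hits": 5},  # overlaps both
]
ordered = reorder(rules)  # "web" moves first; "any" cannot jump past overlaps
```

    The popular "web" rule moves ahead of "dns" because their match sets are disjoint, while the catch-all rule stays pinned behind everything it overlaps, preserving policy integrity.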

  9. Virginia Regional Seismic Network. Final report (1986--1992)

    Energy Technology Data Exchange (ETDEWEB)

    Bollinger, G.A.; Sibol, M.S.; Chapman, M.C.; Snoke, J.A. [Virginia Polytechnic Inst. and State Univ., Blacksburg, VA (US). Seismological Observatory

    1993-07-01

    In 1986, the Virginia Regional Seismic Network was one of the few fully calibrated digital seismic networks in the United States. Continued operation has resulted in the archival of signals from 2,000+ local, regional and teleseismic sources. Seismotectonic studies of the central Virginia seismic zone showed the activity in the western part to be related to a large antiformal structure, while seismicity in the eastern portion is associated spatially with dike swarms. The eastern Tennessee seismic zone extends over a 300x50 km area and is the result of a compressive stress field acting at the intersection between two large crustal blocks. Hydroseismicity, which proposes a significant role for meteoric water in intraplate seismogenesis, found support in the observation of common cyclicities between streamflow and earthquake strain data. Seismic hazard studies have provided the following results: (1) Damage areas in the eastern United States are three to five times larger than those observed in the west. (2) Judged solely on the basis of cataloged earthquake recurrence rates, the next major shock in the southeast region will probably occur outside the Charleston, South Carolina area. (3) Investigations yielded necessary hazard parameters (for example, maximum magnitudes) for several sites in the southeast. Basic to these investigations was the development and maintenance of several seismological databases.

  10. Epilepsy in Rett syndrome--lessons from the Rett networked database

    DEFF Research Database (Denmark)

    Nissenkorn, Andreea; Levy-Drummer, Rachel S; Bondi, Ori

    2015-01-01

    OBJECTIVE: Rett syndrome is an X-linked dominant neurodevelopmental disorder caused by mutations in the MECP2 gene, and characterized by cognitive and communicative regression, loss of hand use, and midline hand stereotypies. Epilepsy is a core symptom, but the literature is controversial regarding ... genotype-phenotype correlation. Analysis of data from a large cohort should overcome this shortcoming. METHODS: Data from the Rett Syndrome Networked Database on 1,248 female patients were included. Data on phenotypic and genotypic parameters, age of onset, severity of epilepsy, and type of seizures were ... collected. Statistical analysis was done using the IBM SPSS Version 21 software, logistic regression, and Kaplan-Meier survival curves. RESULTS: Epilepsy was present in 68.1% of the patients, with uncontrolled seizures in 32.6% of the patients with epilepsy. Mean age of onset of epilepsy was 4...

  11. Fast reproducible identification and large-scale databasing of individual functional cognitive networks

    Directory of Open Access Journals (Sweden)

    Jobert Antoinette

    2007-10-01

    Abstract Background Although cognitive processes such as reading and calculation are associated with reproducible cerebral networks, inter-individual variability is considerable. Understanding the origins of this variability will require the elaboration of large multimodal databases compiling behavioral, anatomical, genetic and functional neuroimaging data over hundreds of subjects. With this goal in mind, we designed a simple and fast acquisition procedure based on a 5-minute functional magnetic resonance imaging (fMRI) sequence that can be run as easily and as systematically as an anatomical scan, and is therefore used in every subject undergoing fMRI in our laboratory. This protocol captures the cerebral bases of auditory and visual perception, motor actions, reading, language comprehension and mental calculation at an individual level. Results 81 subjects were successfully scanned. Before describing inter-individual variability, we demonstrated in the present study the reliability of individual functional data obtained with this short protocol. Given the anatomical variability, we then needed to correctly describe individual functional networks in a voxel-free space. We therefore applied non-voxel-based methods that automatically extract the main features of individual patterns of activation: group analyses performed on these individual data not only converge with those reported by a more conventional voxel-based random-effect analysis, but also retain information concerning variance in location and degree of activation across subjects. Conclusion This collection of individual fMRI data will help describe the cerebral inter-subject variability of the correlates of some language, calculation and sensorimotor tasks. In association with demographic, anatomical, behavioral and genetic data, this protocol will serve as the cornerstone to establish a hybrid database of hundreds of subjects suitable to study the range and causes of variation in the

  12. DATABASE STRUCTURE FOR THE INTEGRATION OF RS WITH GIS BASED ON SEMANTIC NETWORK

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The integration of remote sensing (RS) with geographical information systems (GIS) is a hotspot in geographical information science. A good database structure is important to the integration of RS with GIS: it should be beneficial to the complete integration of RS with GIS, able to deal with the disagreement between the resolution of remote sensing images and the precision of GIS data, and also helpful to knowledge discovery and exploitation. In this paper, a database structure storing spatial data based on a semantic network is presented. This database structure has several advantages. Firstly, the spatial data is stored as raster data with a space index, so image processing can be done directly on the GIS data, which is stored hierarchically according to the distinguishing precision. Secondly, simple objects are aggregated into complex ones. Thirdly, because an indexing tree depicts the relationship of aggregation and indexing pictures expressed by 2-D strings describe the topological structure of the objects, the concepts of surrounding and region are expressed clearly and the semantic content of the landscape can be illustrated well. All the factors that affect the recognition of the objects are depicted in the factor space, which provides a uniform mathematical frame for the fusion of semantic and non-semantic information. Lastly, the object node, knowledge node and indexing node are integrated into one node. This feature enhances the ability of the system in knowledge expression, intelligent inference and association. The application shows that this database structure can benefit the interpretation of remote sensing images with the information of GIS.

  13. Artificial Neural Network Solutions to Eclipsing Binary Lightcurves from the Kepler Space Telescope Database

    Science.gov (United States)

    Hause, Connor; Prsa, Andrej; Matijevic, Gal; Guinan, Edward F.

    2017-01-01

    Fully automated methods of data analysis are necessary for surpassing the human bottleneck in astrophysical data processing and maximizing scientific results from the great volume of observations to be taken over the next few decades. Prsa et al. (2008, ApJ, 687:542) addressed this issue by introducing an artificial neural network (ANN) which estimates the principal parameters of detached eclipsing binary (EB) stars. Parameters obtained by the process can be passed on to advanced modeling engines to produce a qualified EB database. The ANN was originally developed and trained for the OGLE EBs. Our project focuses on retraining this ANN for EBs from NASA’s Kepler Space Telescope database and serves as an extension to the eclipsing binaries via artificial intelligence (EBAI) project. The Kepler photometry is much more precise than the photometry available from OGLE and other previous ground-based studies. For our training set, we generated theoretical lightcurves via a Monte Carlo based Python script utilizing PHOEBE which samples EB parameter values according to prior distribution functions. Novel to our analysis is the use of chi-squared statistical tests which serve to qualify the overlap between the calculated exemplars and observed data. This enables the trained ANN to more accurately parameterize each EB. We describe our training process, present principal parameter estimates of Kepler EBs obtained by the ANNs, and discuss ongoing endeavors to refine those solutions. This research was supported by the National Science Foundation grant #1517474 which we gratefully acknowledge.

  14. S-MODALS neural network query of medical and forensic imagery databases

    Science.gov (United States)

    Rainey, Timothy G.; Brettle, Dean W.; Lavin, Andrew; Weingard, Fred; Henschke, Claudia I.; Yankelevitz, David; Mateescu, Ioan; Uvanni, Lee A.; Sibert, Robert W.; Birnbaum, Eric

    1995-01-01

    A dual-use neural network technology, called the statistical-multiple object detection and location system (S-MODALS), was developed by Booz Allen & Hamilton, Inc. over a five-year period, funded by various U.S. Air Force organizations for automatic target recognition (ATR). S-MODALS performs multi-sensor fusion (visible/EO, IR, ASARS) and multi-look evidence accrual for tactical and strategic reconnaissance. This paper presents the promising findings of applying S-MODALS to the medical field of lung cancer and the S-MODALS investigation into intelligent database query of the FBI's ballistic forensic imagery. Since S-MODALS is a learning system, it is readily adaptable to object recognition problems other than ATR, as evidenced by this joint government-academia-industry investigation into S-MODALS automated lung nodule detection and characterization in CT imagery. This paper also presents the full results of an FBI test of the S-MODALS neural network's capabilities to perform an intelligent query of the FBI's ballistic forensic imagery.

  15. FunCoup 3.0: database of genome-wide functional coupling networks.

    Science.gov (United States)

    Schmitt, Thomas; Ogris, Christoph; Sonnhammer, Erik L L

    2014-01-01

    We present an update of the FunCoup database (http://FunCoup.sbc.su.se) of functional couplings, or functional associations, between genes and gene products. Identifying these functional couplings is an important step in the understanding of higher level mechanisms performed by complex cellular processes. FunCoup distinguishes between four classes of couplings: participation in the same signaling cascade, participation in the same metabolic process, co-membership in a protein complex and physical interaction. For each of these four classes, several types of experimental and statistical evidence are combined by Bayesian integration to predict genome-wide functional coupling networks. The FunCoup framework has been completely re-implemented to allow for more frequent future updates. It contains many improvements, such as a regularization procedure to automatically down-weight redundant evidence and a novel method to incorporate phylogenetic profile similarity. Several datasets have been updated and new data have been added in FunCoup 3.0. Furthermore, we have developed a new Web site, which provides powerful tools to explore the predicted networks and to retrieve detailed information about the data underlying each prediction.
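    FunCoup's actual Bayesian integration is considerably more elaborate, but the core naive-Bayes idea, that independent evidence types contribute additive log-likelihood ratios to a coupling score, can be sketched as follows. The evidence types and LLR values here are invented for illustration, not FunCoup's trained parameters:

```python
from math import log

def combined_llr(evidence, llr_tables):
    """Naive-Bayes integration: each evidence type contributes a
    log-likelihood ratio log(P(obs|coupled) / P(obs|not coupled));
    assuming the evidence types are independent, the ratios simply add."""
    return sum(llr_tables[kind][obs] for kind, obs in evidence.items())

# Illustrative LLR tables for two evidence types (values are made up).
llr_tables = {
    "coexpression":  {"high": log(4.0), "low": log(0.5)},
    "shared_domain": {"yes":  log(3.0), "no":  log(0.8)},
}

pair_a = {"coexpression": "high", "shared_domain": "yes"}  # supportive
pair_b = {"coexpression": "low",  "shared_domain": "no"}   # contradictory

score_a = combined_llr(pair_a, llr_tables)  # positive: likely coupled
score_b = combined_llr(pair_b, llr_tables)  # negative: likely not coupled
```

    A positive combined score means the joint evidence favors functional coupling; a redundancy-aware scheme like FunCoup's regularization would additionally shrink correlated evidence before summing.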

  16. myGRN: a database and visualisation system for the storage and analysis of developmental genetic regulatory networks

    Directory of Open Access Journals (Sweden)

    Bacha Jamil

    2009-06-01

    Full Text Available Abstract Background Biological processes are regulated by complex interactions between transcription factors and signalling molecules, collectively described as Genetic Regulatory Networks (GRNs. The characterisation of these networks to reveal regulatory mechanisms is a long-term goal of many laboratories. However compiling, visualising and interacting with such networks is non-trivial. Current tools and databases typically focus on GRNs within simple, single celled organisms. However, data is available within the literature describing regulatory interactions in multi-cellular organisms, although not in any systematic form. This is particularly true within the field of developmental biology, where regulatory interactions should also be tagged with information about the time and anatomical location of development in which they occur. Description We have developed myGRN (http://www.myGRN.org, a web application for storing and interrogating interaction data, with an emphasis on developmental processes. Users can submit interaction and gene expression data, either curated from published sources or derived from their own unpublished data. All interactions associated with publications are publicly visible, and unpublished interactions can only be shared between collaborating labs prior to publication. Users can group interactions into discrete networks based on specific biological processes. Various filters allow dynamic production of network diagrams based on a range of information including tissue location, developmental stage or basic topology. Individual networks can be viewed using myGRV, a tool focused on displaying developmental networks, or exported in a range of formats compatible with third party tools. Networks can also be analysed for the presence of common network motifs. We demonstrate the capabilities of myGRN using a network of zebrafish interactions integrated with expression data from the zebrafish database, ZFIN. 
Conclusion Here we

  17. Final Results From the Circumarctic Lakes Observation Network (CALON) Project

    Science.gov (United States)

    Hinkel, K. M.; Arp, C. D.; Eisner, W. R.; Frey, K. E.; Grosse, G.; Jones, B. M.; Kim, C.; Lenters, J. D.; Liu, H.; Townsend-Small, A.

    2015-12-01

    Since 2012, the physical and biogeochemical properties of ~60 lakes in northern Alaska have been investigated under CALON, a project to document landscape-scale variability of Arctic lakes in permafrost terrain. The network has ten nodes along two latitudinal transects extending inland 200 km from the Arctic Ocean. A meteorological station is deployed at each node, and six representative lakes are instrumented and continuously monitored, with winter and summer visits for synoptic assessment of lake conditions. Over the 4-year period, winter and summer climatology varied, creating a rich range of lake responses. For example, winter 2012-13 was very cold with a thin snowpack, producing thick ice across the region. Subsequent years had relatively warm winters, yet regionally variable snow resulted in differing gradients of ice thickness. Ice-out timing was unusually late in 2014 and unusually early in 2015. Lakes are typically well-mixed and largely isothermal, with minor thermal stratification occurring in deeper lakes during calm, sunny periods in summer. Lake water temperature records and morphometric data were used to estimate the ground thermal condition beneath 28 lakes. Application of a steady-state thermal equilibrium model suggests a talik penetrating the permafrost under many larger lakes, but lake geochemical data do not indicate a significant contribution of sub-permafrost groundwater. Biogeochemical data reveal distinct spatial and seasonal variability in chlorophyll biomass, chromophoric dissolved organic matter (CDOM), and major cations/anions. Generally, waters sampled beneath ice in April had distinctly higher concentrations of inorganic solutes and methane compared with August. Chlorophyll concentrations and CDOM absorption were higher in April, suggesting significant biological/biogeochemical activity under lake ice. Lakes are a positive source of methane in summer, and some also emit nitrous oxide and carbon dioxide. As part of the

  18. Teradata University Network: A No Cost Web-Portal for Teaching Database, Data Warehousing, and Data-Related Subjects

    Science.gov (United States)

    Jukic, Nenad; Gray, Paul

    2008-01-01

    This paper describes the value that information systems faculty and students in classes dealing with database management, data warehousing, decision support systems, and related topics could derive from the use of the Teradata University Network (TUN), a free comprehensive web-portal. A detailed overview of TUN functionalities and content is…

  19. ENER European network for Energy Economics Research. Final report. Thematic Network (ENERGIE Programme), European Commission, DG Research

    Energy Technology Data Exchange (ETDEWEB)

    Eichhammer, W.

    2004-11-15

    Objectives of the Forum of the European Network for Energy Economics Research (ENER): to bring to debate the latest research results, based on both qualitative and quantitative (modelling) analysis, in fields concerning the relationship between energy, climate change and the economy, with relevant stakeholders in policy, industry, academia and NGOs in forums; to strengthen the links between national centres in energy/environment policy and economics research, in particular with Eastern European countries, in view of their accession to the European Union, and with Switzerland. The Network thus expands links that have previously grown among EU centres of competence in energy economics research to the EU accession countries and close neighbours; to use the network as a springboard for collaborative research on a European scale, which, as the final objective of the Network, will go beyond the limited activities proposed in the current framework. (orig.)

  20. Inclusion Of Road Network In The Spatial Database For Features Searching Using Dynamic Index

    Directory of Open Access Journals (Sweden)

    S. Sivasubramanian

    2012-05-01

    Spatial database systems manage large collections of geographic entities that contain, apart from spatial attributes, non-spatial information (e.g., name, size, type, price, etc.). An attractive type of preference query selects the best spatial locations with respect to the quality of facilities in their spatial neighborhood. Given a set D of interesting objects (e.g., candidate locations), a top-k spatial preference query retrieves the k objects in D with the highest scores. The score of a given object is derived from the quality of features (e.g., nearby facilities) in its spatial neighborhood. For example, using a property agency's database of flats for sale, a customer may want to rank the flats with respect to the appropriateness of their location, defined by aggregating the qualities of other features (e.g., restaurants, bus stops, hospitals, markets, schools, etc.) within their spatial neighborhood. This neighborhood concept can be defined by different functions by the user: it can be an explicit circular region within a given distance from the flat, or, alternatively, higher weights can be assigned to features based on their proximity to the flat. In this paper, we formally define spatial preference queries and propose suitable dynamic index techniques and search algorithms for them. We extend earlier results [1] with a dynamic index structure in order to accommodate time-variant changes in the spatial data. Our current work addresses the top-k spatial preference query on road networks, in which the distance between an object and a feature is defined by their shortest-path distance.
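    A brute-force version of the range-based scoring described above might look like the sketch below. The coordinates, qualities and the max-quality aggregate per feature class are illustrative choices; the paper's contribution is the index structures and algorithms that avoid this exhaustive scan:

```python
def score(obj, features, radius):
    """Range-based preference score: the best quality among each feature
    class found within `radius` of the object, summed over classes."""
    sx, sy = obj
    total = 0.0
    for cls in features:
        best = 0.0
        for (fx, fy), quality in features[cls]:
            # Euclidean range check (compare squared distances).
            if (fx - sx) ** 2 + (fy - sy) ** 2 <= radius ** 2:
                best = max(best, quality)
        total += best
    return total

def top_k(objects, features, radius, k):
    """Brute-force top-k by preference score, highest first."""
    return sorted(objects, key=lambda o: score(o, features, radius),
                  reverse=True)[:k]

# Toy data: two feature classes with (location, quality) entries.
features = {
    "school":   [((1.0, 1.0), 0.9), ((8.0, 8.0), 0.4)],
    "hospital": [((1.5, 1.0), 0.7)],
}
flats = [(1.2, 1.1), (8.1, 8.0), (5.0, 5.0)]
best = top_k(flats, features, radius=1.0, k=2)
```

    The flat near both a good school and a hospital ranks first; the isolated flat scores zero because nothing falls inside its neighborhood circle.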

  1. Advanced Scientific Computing Research Network Requirements: ASCR Network Requirements Review Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Bacon, Charles [Argonne National Lab. (ANL), Argonne, IL (United States); Bell, Greg [ESnet, Berkeley, CA (United States); Canon, Shane [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dart, Eli [ESnet, Berkeley, CA (United States); Dattoria, Vince [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Goodwin, Dave [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Lee, Jason [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hicks, Susan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Holohan, Ed [Argonne National Lab. (ANL), Argonne, IL (United States); Klasky, Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lauzon, Carolyn [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Rogers, Jim [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shipman, Galen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Skinner, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Tierney, Brian [ESnet, Berkeley, CA (United States)

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  2. The BIOSCI electronic newsgroup network for the biological sciences. Final report, October 1, 1992--June 30, 1996

    Energy Technology Data Exchange (ETDEWEB)

    Kristofferson, D.; Mack, D.

    1996-10-01

    This is the final report for a DOE funded project on BIOSCI Electronic Newsgroup Network for the biological sciences. A usable network for scientific discussion, major announcements, problem solving, etc. has been created.

  3. Updates on drug-target network; facilitating polypharmacology and data integration by growth of DrugBank database.

    Science.gov (United States)

    Barneh, Farnaz; Jafari, Mohieddin; Mirzaie, Mehdi

    2016-11-01

    Network pharmacology elucidates the relationship between drugs and targets. As the number of identified targets for each drug increases, the corresponding drug-target network (DTN) evolves from a mere reflection of pharmaceutical industry trends into a portrait of polypharmacology. The aim of this study was to evaluate the potential of the DrugBank database in advancing systems pharmacology. We constructed and analyzed a DTN from drug-target associations in the DrugBank 4.0 database. Our results showed that in the bipartite DTN, an increased ratio of identified targets per drug augmented the density and connectivity of drugs and targets and decreased the modular structure. To clarify the details of the network structure, the DTN was projected into two networks, namely a drug similarity network (DSN) and a target similarity network (TSN). In the DSN, various classes of Food and Drug Administration-approved drugs with distinct therapeutic categories were linked together based on shared targets. The projected TSN also showed complexity because of the promiscuity of the drugs. By including investigational drugs currently being tested in clinical trials, the networks manifested more connectivity and pictured the pharmacological space of the coming years. Diverse biological processes and protein-protein interactions were manipulated by the new drugs, which can extend possible target combinations. We conclude that network-based organization of DrugBank 4.0 data not only reveals the potential for repurposing existing drugs, but also allows generating novel predictions about drug off-targets, drug-drug interactions and their side effects. Our results also encourage further efforts toward high-throughput identification of targets to build networks that can be integrated into disease networks. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  4. iGNM 2.0: the Gaussian network model database for biomolecular structural dynamics.

    Science.gov (United States)

    Li, Hongchun; Chang, Yuan-Yu; Yang, Lee-Wei; Bahar, Ivet

    2016-01-04

    Gaussian network model (GNM) is a simple yet powerful model for investigating the dynamics of proteins and their complexes. GNM analysis became a broadly used method for assessing the conformational dynamics of biomolecular structures with the development of a user-friendly interface and database, iGNM, in 2005. We present here an updated version, iGNM 2.0 http://gnmdb.csb.pitt.edu/, which covers more than 95% of the structures currently available in the Protein Data Bank (PDB). Advanced search and visualization capabilities, both 2D and 3D, permit users to retrieve information on inter-residue and inter-domain cross-correlations, cooperative modes of motion, the location of hinge sites and energy localization spots. The ability of iGNM 2.0 to provide structural dynamics data on the large majority of PDB structures and, in particular, on their biological assemblies makes it a useful resource for establishing the bridge between structure, dynamics and function.

  5. Self-Organizing Genetic Algorithm Based Method for Constructing Bayesian Networks from Databases

    Institute of Scientific and Technical Information of China (English)

    郑建军; 刘玉树; 陈立潮

    2003-01-01

    The typical characteristic of the topology of Bayesian networks (BNs) is the interdependence among different nodes (variables), which makes it impossible to optimize one variable independently of the others, and the learning of BN structures by general genetic algorithms is liable to converge to a local extremum. To resolve this problem efficiently, a self-organizing genetic algorithm (SGA) based method for constructing BNs from databases is presented. This method makes use of a self-organizing mechanism to develop a genetic algorithm that extends the crossover operator from one to two, providing mutual competition between them, and even adjusts the number of parents in the recombination (crossover/recomposition) schemes. Together with the K2 algorithm, this method also optimizes the genetic operators and makes adequate use of domain knowledge. As a result, this method is able to find a global optimum of the topology of BNs, avoiding premature convergence to a local extremum. The experimental results proved the method to be effective, and the convergence of the SGA is discussed.

  6. The USA National Phenology Network's National Phenology Database Is a Resource Ripe for Picking

    Science.gov (United States)

    Crimmins, T. M.; Enquist, C.; Rosemartin, A.; Denny, E. G.; Weltzin, J. F.

    2011-12-01

    The National Phenology Database, maintained by the USA National Phenology Network (USA-NPN), is experiencing steady growth in the number of data records it houses. As of July 2011, over 200,000 observation records encompassing three years of plant phenology observations and two years of animal phenology observations have been contributed by participants in Nature's Notebook, the online phenology observation program developed by the National Coordinating Office of the USA-NPN, and are available for download and analysis (www.usanpn.org/results/data). Participants in Nature's Notebook follow protocols that employ phenological "status" monitoring, rather than "event" monitoring. On each visit to their site, the observer indicates the status of each phenophase for an individual plant or an animal species with a 'yes' if the phenophase is occurring and 'no' if it is not. This approach has a number of advantages over event monitoring, enabling researchers to move beyond a focus on first events (e.g., calculation of error, estimation of effort, "negative" or "absence" data, capture of multiple events and duration, flexibility of definitions for phenological metrics, adaptability for animal monitoring). These strengths will ultimately improve our understanding of changes in the timing of seasonal events. We will describe status monitoring and ways this rich form of data can be interpreted in detail in this presentation. Patterns in the data collected by Nature's Notebook participants are beginning to emerge, even at this early stage, demonstrating the value of this data resource. In addition to year to year variability in the dates of onset and commencement of various phenophases, the observations show
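The "status" protocol described in this record yields a yes/no series per phenophase per visit. A minimal sketch (with made-up dates, not USA-NPN data) of how such a series brackets an onset date between the last 'no' and the first 'yes':

```python
from datetime import date

# Hypothetical status records: (observation date, phenophase occurring?)
records = [
    (date(2011, 3, 1), False),
    (date(2011, 3, 8), False),
    (date(2011, 3, 15), True),   # first 'yes' observed on this visit
    (date(2011, 3, 22), True),
    (date(2011, 3, 29), False),
]

def onset_window(records):
    """Bracket phenophase onset between the last 'no' before the
    first 'yes' and that first 'yes' itself."""
    records = sorted(records)
    for i, (day, status) in enumerate(records):
        if status:
            prior = records[i - 1][0] if i > 0 else None
            return prior, day
    return None  # phenophase never observed at this site

print(onset_window(records))
```

This is exactly the kind of derived metric that event monitoring cannot support: the 'no' records supply the error bound on the onset estimate.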

  7. The new database of the Global Terrestrial Network for Permafrost (GTN-P)

    Science.gov (United States)

    Biskaborn, B. K.; Lanckman, J.-P.; Lantuit, H.; Elger, K.; Streletskiy, D. A.; Cable, W. L.; Romanovsky, V. E.

    2015-09-01

    The Global Terrestrial Network for Permafrost (GTN-P) provides the first dynamic database associated with the Thermal State of Permafrost (TSP) and the Circumpolar Active Layer Monitoring (CALM) programs, which extensively collect permafrost temperature and active layer thickness (ALT) data from Arctic, Antarctic and mountain permafrost regions. The purpose of GTN-P is to establish an early warning system for the consequences of climate change in permafrost regions and to provide standardized thermal permafrost data to global models. In this paper we introduce the GTN-P database and perform statistical analysis of the GTN-P metadata to identify and quantify the spatial gaps in the site distribution in relation to climate-effective environmental parameters. We describe the concept and structure of the data management system in regard to user operability, data transfer and data policy. We outline data sources and data processing including quality control strategies based on national correspondents. Assessment of the metadata and data quality reveals 63 % metadata completeness at active layer sites and 50 % metadata completeness for boreholes. Voronoi tessellation analysis on the spatial sample distribution of boreholes and active layer measurement sites quantifies the distribution inhomogeneity and provides a potential method to locate additional permafrost research sites by improving the representativeness of thermal monitoring across areas underlain by permafrost. The depth distribution of the boreholes reveals that 73 % are shallower than 25 m and 27 % are deeper, reaching a maximum of 1 km depth. Comparison of the GTN-P site distribution with permafrost zones, soil organic carbon contents and vegetation types exhibits different local to regional monitoring situations, which are illustrated with maps. Preferential slope orientation at the sites most likely causes a bias in the temperature monitoring and should be taken into account when using the data for global

  8. Sailor: Maryland's Online Public Information Network. Sailor Network Assessment Final Report: Findings and Future Sailor Network Development.

    Science.gov (United States)

    Bertot, John Carlo; McClure, Charles R.

    This report describes the results of an assessment of Sailor, Maryland's Online Public Information Network, which provides statewide Internet connection to 100% of Maryland public libraries. The concept of a "statewide networked environment" includes information services, products, hardware and software, telecommunications…

  9. Incremental View Maintenance for Deductive Graph Databases Using Generalized Discrimination Networks

    Directory of Open Access Journals (Sweden)

    Thomas Beyhl

    2016-12-01

    Full Text Available Nowadays, graph databases are employed when relationships between entities are in the scope of database queries, to avoid the performance-critical join operations of relational databases. Graph queries are used to query and modify graphs stored in graph databases. Graph queries employ graph pattern matching, which is NP-complete for subgraph isomorphism. Graph database views that keep ready answers, in terms of precalculated graph pattern matches, for frequently stated and complex graph queries can be employed to increase query performance. However, such graph database views must be kept consistent with the graphs stored in the graph database. In this paper, we describe how to use incremental graph pattern matching as a technique for maintaining graph database views. We present an incremental maintenance algorithm for graph database views, which works for imperatively and declaratively specified graph queries. The evaluation shows that our maintenance algorithm scales as the number of nodes and edges stored in the graph database increases. Furthermore, our evaluation shows that our approach can outperform existing approaches for the incremental maintenance of graph query results.

  10. Investigating the Potential Impacts of Energy Production in the Marcellus Shale Region Using the Shale Network Database

    Science.gov (United States)

    Brantley, S.; Pollak, J.

    2016-12-01

    The Shale Network's extensive database of water quality observations in the Marcellus Shale region enables educational experiences about the potential impacts of resource extraction and energy production with real data. Through tools that are open source and free to use, interested parties can access and analyze the very same data that the Shale Network team has used in peer-reviewed publications about the potential impacts of hydraulic fracturing on water. The development of the Shale Network database has been made possible through efforts led by an academic team and involving numerous individuals from government agencies, citizen science organizations, and private industry. With these tools and data, the Shale Network team has engaged high school students, university undergraduate and graduate students, as well as citizens so that all can discover how energy production impacts the Marcellus Shale region, which includes Pennsylvania and other nearby states. This presentation will describe these data tools, how the Shale Network has used them in educational settings, and the resources available to learn more.

  11. End-System Network Interface Controller for 100 Gb/s Wide Area Networks: Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Wen, Jesse

    2013-08-30

    In recent years, network bandwidth requirements have scaled multiple folds, pushing the need for the development of data exchange mechanisms at 100 Gb/s and beyond. High performance computing, climate modeling, large-scale storage, and collaborative scientific research are examples of applications that can greatly benefit by leveraging high bandwidth capabilities of the order of 100 Gb/s. Such requirements and advances in IEEE Ethernet standards, Optical Transport Unit 4 (OTU4), and host-system interconnects demand a network infrastructure supporting throughput rates of the order of 100 Gb/s with a single wavelength. To address such a demand, Acadia Optronics, in collaboration with the University of New Mexico, proposed and developed an end-system Network Interface Controller (NIC) for 100 Gb/s WANs. Acadia's 100G NIC employs an FPGA-based system with a high-performance processor interconnect (PCIe 3.0) and a high capacity optical transmission link (CXP) to provide data transmission at the rate of 100 Gb/s.

  12. Online Databases for Taxonomy and Identification of Pathogenic Fungi and Proposal for a Cloud-Based Dynamic Data Network Platform.

    Science.gov (United States)

    Prakash, Peralam Yegneswaran; Irinyi, Laszlo; Halliday, Catriona; Chen, Sharon; Robert, Vincent; Meyer, Wieland

    2017-04-01

    The increase in public online databases dedicated to fungal identification is noteworthy. This can be attributed to improved access to molecular approaches to characterize fungi, as well as to delineate species within specific fungal groups in the last 2 decades, leading to an ever-increasing complexity of taxonomic assortments and nomenclatural reassignments. Thus, well-curated fungal databases with substantial accurate sequence data play a pivotal role for further research and diagnostics in the field of mycology. This minireview aims to provide an overview of currently available online databases for the taxonomy and identification of human and animal-pathogenic fungi and calls for the establishment of a cloud-based dynamic data network platform. Copyright © 2017 American Society for Microbiology.

  13. Reflections on Database Concurrency Control Prompted by Two Network Incidents%由两个网络事件引发的数据库并发控制的思考

    Institute of Scientific and Technical Information of China (English)

    俞席忠

    2012-01-01

    This paper analyzes two network incidents in which database concurrency control played a vital role in system efficiency, compares optimistic locking with pessimistic locking, and indicates the concurrency control options suitable for databases in highly concurrent network environments.
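The optimistic locking discussed in this record can be sketched with a version-checked update (a minimal illustration, not tied to any particular DBMS; class and method names are invented for the example):

```python
import threading

class Record:
    """A row guarded by optimistic locking: a writer must present the
    version it read; a version mismatch signals a concurrent update."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self._lock = threading.Lock()  # protects only the compare-and-swap

    def read(self):
        return self.value, self.version

    def update(self, new_value, expected_version):
        with self._lock:
            if self.version != expected_version:
                return False  # stale read: the caller must re-read and retry
            self.value = new_value
            self.version += 1
            return True

row = Record(100)
value, ver = row.read()
print(row.update(value + 1, ver))  # succeeds: version still matches
print(row.update(value + 2, ver))  # fails: the version has moved on
```

A pessimistic scheme would instead hold `_lock` for the entire read-modify-write, trading concurrency for the guarantee that no retry is ever needed.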

  14. Segmentation of pulmonary nodules in computed tomography using a regression neural network approach and its application to the Lung Image Database Consortium and Image Database Resource Initiative dataset.

    Science.gov (United States)

    Messay, Temesguen; Hardie, Russell C; Tuinstra, Timothy R

    2015-05-01

    We present new pulmonary nodule segmentation algorithms for computed tomography (CT). These include a fully-automated (FA) system, a semi-automated (SA) system, and a hybrid system. Like most traditional systems, the new FA system requires only a single user-supplied cue point. On the other hand, the SA system represents a new algorithm class requiring 8 user-supplied control points. This does increase the burden on the user, but we show that the resulting system is highly robust and can handle a variety of challenging cases. The proposed hybrid system starts with the FA system. If improved segmentation results are needed, the SA system is then deployed. The FA segmentation engine has 2 free parameters, and the SA system has 3. These parameters are adaptively determined for each nodule in a search process guided by a regression neural network (RNN). The RNN uses a number of features computed for each candidate segmentation. We train and test our systems using the new Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) data. To the best of our knowledge, this is one of the first nodule-specific performance benchmarks using the new LIDC-IDRI dataset. We also compare the performance of the proposed methods with several previously reported results on the same data used by those other methods. Our results suggest that the proposed FA system improves upon the state-of-the-art, and the SA system offers a considerable boost over the FA system.

  15. Two neural network algorithms for designing optimal terminal controllers with open final time

    Science.gov (United States)

    Plumer, Edward S.

    1992-01-01

    Multilayer neural networks, trained by the backpropagation through time algorithm (BPTT), have been used successfully as state-feedback controllers for nonlinear terminal control problems. Current BPTT techniques, however, are not able to deal systematically with open final-time situations such as minimum-time problems. Two approaches which extend BPTT to open final-time problems are presented. In the first, a neural network learns a mapping from initial-state to time-to-go. In the second, the optimal number of steps for each trial run is found using a line-search. Both methods are derived using Lagrange multiplier techniques. This theoretical framework is used to demonstrate that the derived algorithms are direct extensions of forward/backward sweep methods used in N-stage optimal control. The two algorithms are tested on a Zermelo problem and the resulting trajectories compare favorably to optimal control results.

  16. Harvesting Covert Networks: The Case Study of the iMiner Database

    DEFF Research Database (Denmark)

    Memon, Nasrullah; Wiil, Uffe Kock; Alhajj, Reda

    2011-01-01

    collected by intelligence agencies and government organisations is inaccessible to researchers. To counter the information scarcity, we designed and built a database of terrorist-related data and information by harvesting such data from publicly available authenticated websites. The database...... was incorporated in the iMiner prototype tool, which makes use of investigative data mining techniques to analyse data. This paper will present the developed framework along with the form and structure of the terrorist data in the database. Selected cases will be referenced to highlight the effectiveness of the i...

  17. FoodMicrobionet: A database for the visualisation and exploration of food bacterial communities based on network analysis.

    Science.gov (United States)

    Parente, Eugenio; Cocolin, Luca; De Filippis, Francesca; Zotta, Teresa; Ferrocino, Ilario; O'Sullivan, Orla; Neviani, Erasmo; De Angelis, Maria; Cotter, Paul D; Ercolini, Danilo

    2016-02-16

    Amplicon targeted high-throughput sequencing has become a popular tool for the culture-independent analysis of microbial communities. Although the data obtained with this approach are portable and the number of sequences available in public databases is increasing, no tool has been developed yet for the analysis and presentation of data obtained in different studies. This work describes an approach for the development of a database for the rapid exploration and analysis of data on food microbial communities. Data from seventeen studies investigating the structure of bacterial communities in dairy, meat, sourdough and fermented vegetable products, obtained by 16S rRNA gene targeted high-throughput sequencing, were collated and analysed using Gephi, a network analysis software. The resulting database, which we named FoodMicrobionet, was used to analyse nodes and network properties and to build an interactive web-based visualisation. The latter allows the visual exploration of the relationships between Operational Taxonomic Units (OTUs) and samples and the identification of core- and sample-specific bacterial communities. It also provides additional search tools and hyperlinks for the rapid selection of food groups and OTUs and for rapid access to external resources (NCBI taxonomy, digital versions of the original articles). Microbial interaction network analysis was carried out using CoNet on datasets extracted from FoodMicrobionet: the complexity of interaction networks was much lower than that found for other bacterial communities (human microbiome, soil and other environments). This may reflect both a bias in the dataset (which was dominated by fermented foods and starter cultures) and the lower complexity of food bacterial communities. Although some technical challenges exist, and are discussed here, the net result is a valuable tool for the exploration of food bacterial communities by the scientific community and food industry.

  18. Vision of future energy networks - Final report; Vision of future energy networks - Schlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Froehlich, K.; Andersson, G.

    2008-07-01

    In the framework of the project 'Vision of Future Networks', models and methods have been developed that enable a greenfield approach for energy systems with multiple energy carriers. Applying a greenfield approach means that no existing infrastructure is taken into account when designing the energy system, i.e. the system is virtually put up on a green field. The developed models refer to the impacts of energy storage on power systems with stochastic generation, to the integrated modelling and optimization of multi-carrier energy systems, to reliability considerations of future energy systems as well as to possibilities of combined transmission of multiple energy carriers. Key concepts, which have been developed in the framework of this project, are the Energy Hub (for the conversion and storage of energy) and the Energy Interconnector (for energy transmission). By means of these concepts, it is possible to design structures for future energy systems being able to cope with the growing requirements regarding energy supply. (author)

  19. NABIC marker database: A molecular markers information network of agricultural crops

    OpenAIRE

    2013-01-01

    In 2013, the National Agricultural Biotechnology Information Center (NABIC) reconstructed a molecular marker database for useful genetic resources. The web-based marker database consists of three major functional categories: map viewer, RSN marker and gene annotation. It provides 7,250 marker locations, 3,301 RSN marker properties, and 3,280 molecular marker annotation entries for agricultural plants. Each individual molecular marker provides information such as marker name, expressed sequence tag number...

  20. Final Technical Report on the Genome Sequence DataBase (GSDB): DE-FG03 95 ER 62062 September 1997-September 1999

    Energy Technology Data Exchange (ETDEWEB)

    Harger, Carol A.

    1999-10-28

    Since September 1997 NCGR has produced two web-based tools for researchers to use to access and analyze data in the Genome Sequence DataBase (GSDB). These tools are: Sequence Viewer, a nucleotide sequence and annotation visualization tool, and MAR-Finder, a tool that predicts, based upon statistical inferences, the location of matrix attachment regions (MARs) within a nucleotide sequence. [The annual report for June 1996 to August 1997 is included as an attachment to this final report.]

  1. A meta-database comparison from various European research networks dedicated to forests sites

    NARCIS (Netherlands)

    Danielewska, A.; Clarke, N.; Olejnik, J.; Hansen, K.; Vries, de W.

    2013-01-01

    Of a wide variety of international forest research and monitoring networks, several networks are dedicated to the effects of climate change on forests, while the effects of anthropogenic pollutants on forests have been a major area for both monitoring and research for decades. The large amounts of d

  2. The Transformation of Schools’ Social Networks During a Data-Based Decision Making Reform

    NARCIS (Netherlands)

    Keuning, Trynke; Geel, van Marieke; Visscher, Adrie; Fox, Jean-Paul; Moolenaar, Nienke M.

    2016-01-01

    Context: Collaboration within school teams is considered to be important to build the capacity school teams need to work in a data-based way. In a school characterized by a strong collaborative culture, teachers may have more access to the knowledge and skills for analyzing data, teachers have more

  3. The Transformation of Schools' Social Networks during a Data-Based Decision Making Reform

    Science.gov (United States)

    Keuning, Trynke

    2016-01-01

    Context: Collaboration within school teams is considered to be important to build the capacity school teams need to work in a data-based way. In a school characterized by a strong collaborative culture, teachers may have more access to the knowledge and skills for analyzing data, teachers have more opportunity to discuss the performance goals to…

  4. Establishment of database and network for research of steam generator and state of the art technology review

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jae Bong; Hur, Nam Su; Moon, Seong In; Seo, Hyeong Won; Park, Bo Kyu; Park, Sung Ho; Kim, Hyung Geun [Sungkyunkwan Univ., Seoul (Korea, Republic of)

    2004-02-15

    A significant number of steam generator tubes are defective and are removed from service or repaired worldwide. This widespread damage has been caused by diverse degradation mechanisms, some of which are difficult to detect and predict. Regarding domestic nuclear power plants, the increase in the number of operating plants and in their operating periods may likewise result in an increase in steam generator tube failures. It is therefore important to carry out the integrity evaluation process to prevent steam generator tube damage. There are two objectives of this research. The first is to build a database for steam generator research at domestic research institutions. It will increase the efficiency and capability of limited domestic research resources by sharing data and information through a network organization. It will also enhance the current standard integrity evaluation procedure, which is considerably conservative but can be made more reasonable. The second objective is to establish a standard integrity evaluation procedure for steam generator tubes by reviewing state-of-the-art technology. The research resources related to steam generator tubes are managed by the established web-based database system. The following topics are covered in this project: development of a web-based network for research on steam generator tubes; review of state-of-the-art technology.

  5. CMRegNet-An interspecies reference database for corynebacterial and mycobacterial regulatory networks

    DEFF Research Database (Denmark)

    Abreu, Vinicius A C; Almeida, Sintia; Tiwari, Sandeep

    2015-01-01

    BACKGROUND: Organisms utilize a multitude of mechanisms for responding to changing environmental conditions, maintaining their functional homeostasis and to overcome stress situations. One of the most important mechanisms is transcriptional gene regulation. In-depth study of the transcriptional g......Net to date the most comprehensive database of regulatory interactions of CMNR bacteria. The content of CMRegNet is publicly available online via a web interface found at http://lgcm.icb.ufmg.br/cmregnet ....

  6. Airborne Network Data Availability Using Peer to Peer Database Replication on a Distributed Hash Table

    Science.gov (United States)

    2013-03-01

    ...(AODV) were used as the three routing protocols. All routing protocols were configured with the default values of their parameters. The ...concludes that one major aspect is the interaction between two routing systems: the ad-hoc routing protocol and the DHT routing algorithms. Since

  7. World-wide interactive access to scientific databases via satellite and terrestrial data network

    Science.gov (United States)

    Sanderson, T. R.; Albrecht, M. A.; Ciarlo, A.; Brett, M.; Blank, K.; Hughes, P. M. T.; Wallum, G.; Hills, H. K.; Green, J. L.; Mcguire, R. E.; hide

    1990-01-01

    In order to demonstrate the possibilities for scientific networking and data transfer, a first temporary satellite network link was installed between Czechoslovakia and the European Space Operations Centre in Darmstadt, during the meeting of the inter-agency consultative group for space science in Prague. Several experiments to show the interactive nature of the facility and the capability of the system were carried out, and it was proven that, despite the temporary nature of the installation, the planned demonstrations could be conducted in real time. Demonstrations included electronic mail messages, orbit prediction and solar X-ray data. The results of the experiment provided insight into the possibilities of data exchange.

  8. World-wide interactive access to scientific databases via satellite and terrestrial data network

    Science.gov (United States)

    Sanderson, T. R.; Albrecht, M. A.; Ciarlo, A.; Brett, M.; Blank, K.; Hughes, P. M. T.; Wallum, G.; Hills, H. K.; Green, J. L.; Mcguire, R. E.; Kamei, T.; Kiplinger, A.; Waite, J. H., Jr.

    1990-01-01

    In order to demonstrate the possibilities for scientific networking and data transfer, a first temporary satellite network link was installed between Czechoslovakia and the European Space Operations Centre in Darmstadt, during the meeting of the inter-agency consultative group for space science in Prague. Several experiments to show the interactive nature of the facility and the capability of the system were carried out, and it was proven that, despite the temporary nature of the installation, the planned demonstrations could be conducted in real time. Demonstrations included electronic mail messages, orbit prediction and solar X-ray data. The results of the experiment provided insight into the possibilities of data exchange.

  9. Networked neuroscience : brain scans and visual knowing at the intersection of atlases and databases

    NARCIS (Netherlands)

    Beaulieu, Anne; de Rijcke, Sarah; Coopmans, Catelijne; Woolgar, Steve

    2014-01-01

    This chapter discusses the development of authoritative collections of brain scans known as “brain atlases”, focusing in particular on how such scans are constituted as authoritative visual objects. Three dimensions are identified: first, brain scans are parts of suites of networked technologies rat

  10. PERANCANGAN MODEL NETWORK PADA MESIN DATABASE NON SPATIAL UNTUK MANUVER JARINGAN LISTRIK SEKTOR DISTRIBUSI DENGAN PL SQL

    Directory of Open Access Journals (Sweden)

    I Made Sukarsa

    2009-06-01

    Full Text Available Many GIS applications have now been developed on non-spatial DBMS (Database Management System) engines, which support client-server data presentation and can handle large volumes of data; one such application manages electricity network data. In practice, however, DBMS engines are not equipped with network analysis capabilities such as network maneuvering, which is the basis for developing many other applications. A network model for electricity distribution network maneuvering, with its various particularities, therefore needed to be developed. Through several research stages, a network model has been developed that can handle network maneuvers. The model was built with integration into the existing system in mind, minimizing changes to existing applications. Implementing it in PL/SQL (Procedural Language/Structured Query Language) offers several advantages, including system performance. The model has been tested for outage simulation and for computing changes in the network loading structure, and it can be extended to power system analyses such as losses and load flow, so that the GIS application can ultimately substitute for, and overcome the weaknesses of, widely used power system analysis applications such as EDSA (Electrical Design System Analysis).

  11. Building a Learning Database for the Neural Network Retrieval of Sea Surface Salinity from SMOS Brightness Temperatures

    CERN Document Server

    Ammar, Adel; Obligis, Estelle; Crépon, Michel; Thiria, Sylvie

    2016-01-01

    This article deals with an important aspect of the neural network retrieval of sea surface salinity (SSS) from SMOS brightness temperatures (TBs). The neural network retrieval method is an empirical approach that offers the possibility of being independent of any theoretical emissivity model during the in-flight phase. A previous study [1] proved that this approach is applicable to all pixels over the ocean, by designing a set of neural networks with different inputs. The present study focuses on the choice of the learning database and demonstrates that a judicious distribution of the geophysical parameters markedly reduces the systematic regional biases of the retrieved SSS, which are due to the high noise on the TBs. An equalization of the distribution of the geophysical parameters, followed by a new technique for boosting the learning process, makes the regional biases almost disappear for latitudes between 40°S and 40°N, while the global standard deviation remains between 0.6 psu (at t...
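    The equalization step mentioned in the abstract can be sketched as a simple resampling scheme. The function below is an illustrative assumption of ours, not the authors' implementation: the binning strategy, the function name `equalize`, and the resample-to-largest-bin rule are all choices made here for demonstration.

```python
import random

def equalize(samples, key, n_bins=20, seed=0):
    """Flatten the distribution of a geophysical parameter (e.g. SSS) by
    resampling every non-empty histogram bin, with replacement, up to the
    size of the largest bin."""
    rng = random.Random(seed)
    lo = min(key(s) for s in samples)
    hi = max(key(s) for s in samples)
    width = (hi - lo) / n_bins or 1.0      # guard against a degenerate range
    bins = [[] for _ in range(n_bins)]
    for s in samples:
        i = min(int((key(s) - lo) / width), n_bins - 1)
        bins[i].append(s)
    target = max(len(b) for b in bins)     # bring every bin up to the largest
    out = []
    for b in bins:
        if b:
            out.extend(rng.choice(b) for _ in range(target))
    return out
```

    After equalization, a parameter value that was rare in the raw archive is represented as often as a common one, which is the property that reduces regionally concentrated biases in the trained network.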

  12. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working, as the available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and by adding new replicas should the load increase. Finally, database replication can provide fast local access, even if clients are geographically distributed, if data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and

  13. Summary of Web-Database Technologies%Web数据库技术综述

    Institute of Scientific and Technical Information of China (English)

    周军

    2000-01-01

    Web-Database is the base of many network applications such as Web information retrieval system, Web information publishing and Electronic Commerce. This article focuses on several popular Web-Database technologies such as CGI, ISAPI, IDC, ASP and Java Applet, analyzing and comparing their structure, characteristics, advantages and disadvantages. Finally, it discusses the main structure of the Web-Database technology.

  14. Network discovery, characterization, and prediction : a grand challenge LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Kegelmeyer, W. Philip, Jr.

    2010-11-01

    This report is the final summation of Sandia's Grand Challenge LDRD Project No. 119351, 'Network Discovery, Characterization and Prediction' (the 'NGC'), which ran from FY08 to FY10. The aim of the NGC, in a nutshell, was to research, develop, and evaluate relevant analysis capabilities that address adversarial networks. Unlike some Grand Challenge efforts, that ambition created cultural subgoals as well as technical and programmatic ones, as the insistence on 'relevancy' required that the Sandia informatics research communities and the analyst user communities come to appreciate each other's needs and capabilities in a very deep and concrete way. The NGC generated a number of technical, programmatic, and cultural advances, detailed in this report. There were new algorithmic insights and research that resulted in fifty-three refereed publications and presentations; this report concludes with an abstract-annotated bibliography pointing to them all. The NGC generated three substantial prototypes that not only achieved their intended goals of testing our algorithmic integration, but also served as vehicles for customer education and program development. The NGC, as intended, has catalyzed future work in this domain; by its end it had already brought in as much new funding as had been invested in it. Finally, the NGC knit together previously disparate research staff and user expertise in a fashion that not only addressed our immediate research goals, but promises to have created an enduring cultural legacy of mutual understanding, in service of Sandia's national security responsibilities in cybersecurity and counterproliferation.

  15. SPAN: A Network Providing Integrated, End-to-End, Sensor-to-Database Solutions for Environmental Sciences

    Science.gov (United States)

    Benzel, T.; Cho, Y. H.; Deschon, A.; Gullapalli, S.; Silva, F.

    2009-12-01

    In recent years, advances in sensor network technology have shown great promise to revolutionize environmental data collection. Still, widespread adoption of these systems by domain experts has been lacking, and they have remained the purview of the engineers who design them. While there are currently many data-logging options for basic data collection in the field, scientists are often required to visit the deployment sites to retrieve their data and manually import it into spreadsheets. Some advanced commercial software systems do allow scientists to collect data remotely, but most of these systems only allow point-to-point access and require proprietary hardware. Furthermore, these commercial solutions preclude the use of sensors from other manufacturers or integration with internet-based database repositories and compute engines. Therefore, scientists often must download and manually reformat their data before uploading it to the repositories if they wish to share their data. We present an open-source, low-cost, extensible, turnkey solution called the Sensor Processing and Acquisition Network (SPAN), which provides a robust and flexible sensor network service. At the deployment site, SPAN leverages low-power generic embedded processors to integrate a variety of commercially available sensor hardware into the network of environmental observation systems. By bringing intelligence close to the sensed phenomena, we can remotely control configuration and re-use, establish rules to trigger sensor activity, manage power requirements, and control the two-way flow of sensed data as well as control information to the sensors. Key features of our design include (1) adoption of a hardware-agnostic architecture: our solutions are compatible with several programmable platforms, sensor systems, communication devices and protocols; (2) information standardization: our system supports several popular communication protocols and data formats; and (3) extensible data support: our

  16. Populating the i2b2 database with heterogeneous EMR data: a semantic network approach.

    Science.gov (United States)

    Mate, Sebastian; Bürkle, Thomas; Köpcke, Felix; Breil, Bernhard; Wullich, Bernd; Dugas, Martin; Prokosch, Hans-Ulrich; Ganslandt, Thomas

    2011-01-01

    In an ongoing effort to share heterogeneous electronic medical record (EMR) data in an i2b2 instance between the University Hospitals Münster and Erlangen for joint cancer research projects, an ontology based system for the mapping of EMR data to a set of common data elements has been developed. The system translates the mappings into local SQL scripts, which are then used to extract, transform and load the facts data from each EMR into the i2b2 database. By using Semantic Web standards, it is the authors' goal to reuse the laboriously compiled "mapping knowledge" in future projects, such as a comprehensive cancer ontology or even a hospital-wide clinical ontology.

  17. A COMPARISON STUDY FOR INTRUSION DATABASE (KDD99, NSL-KDD) BASED ON SELF ORGANIZATION MAP (SOM) ARTIFICIAL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    LAHEEB M. IBRAHIM

    2013-02-01

    Full Text Available Detecting anomalous traffic on the internet has remained an issue of concern for the community of security researchers over the years. Advances in computing performance, in terms of processing power and storage, have fostered the ability to host resource-intensive intelligent algorithms that detect intrusive activity in a timely manner. As part of this project, we study and analyse the performance of a Self Organization Map (SOM) Artificial Neural Network, when implemented as part of an Intrusion Detection System, in detecting anomalies on the Knowledge Discovery in Databases (KDD 99) and NSL-KDD datasets of simulated internet traffic activity. Results obtained are compared and analysed based on several performance metrics: the detection rate for the KDD 99 dataset is 92.37%, while the detection rate for the NSL-KDD dataset is 75.49%.
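    As a rough illustration of the SOM approach to anomaly detection (a minimal sketch, not the authors' implementation: the one-dimensional map, its size, the learning-rate schedule, and the use of quantization error as the anomaly score are all assumptions made here), a small map can be trained on "normal" traffic feature vectors, and records that land far from every map unit flagged as anomalous:

```python
import math, random

def train_som(data, grid=5, iters=500, seed=1):
    """Train a 1-D self-organizing map: `grid` units, each holding a weight
    vector with the same dimension as the input records."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = [[rng.random() for _ in range(dim)] for _ in range(grid)]
    for t in range(iters):
        x = data[rng.randrange(len(data))]
        # best-matching unit = unit with the closest weight vector
        bmu = min(range(grid),
                  key=lambda i: sum((w[i][d] - x[d]) ** 2 for d in range(dim)))
        lr = 0.5 * (1 - t / iters)                       # decaying learning rate
        radius = max(1.0, grid / 2 * (1 - t / iters))    # shrinking neighborhood
        for i in range(grid):
            h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
            for d in range(dim):
                w[i][d] += lr * h * (x[d] - w[i][d])
    return w

def bmu_distance(w, x):
    """Quantization error of x; large values flag anomalous records."""
    return min(math.sqrt(sum((wi[d] - x[d]) ** 2 for d in range(len(x))))
               for wi in w)
```

    In use, `bmu_distance` is compared against a threshold calibrated on held-out normal traffic; the KDD 99 / NSL-KDD records would first be encoded as numeric feature vectors.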

  18. Turning text into research networks: information retrieval and computational ontologies in the creation of scientific databases.

    Directory of Open Access Journals (Sweden)

    Flávio Ceci

    Full Text Available BACKGROUND: Web-based, free-text documents on science and technology have been growing rapidly on the web. However, most of these documents are not immediately processable by computers, which slows down the acquisition of useful information. Computational ontologies might represent a possible solution by enabling semantically machine-readable data sets. However, the process of ontology creation, instantiation and maintenance is still based on manual methodologies and is thus time and cost intensive. METHOD: We focused on a large corpus containing information on researchers, research fields, and institutions. We based our strategy on traditional entity recognition, social computing and correlation, and devised a semi-automatic approach for the recognition, correlation and extraction of named entities and relations from textual documents, which are then used to create, instantiate, and maintain an ontology. RESULTS: We present a prototype demonstrating the applicability of the proposed strategy, along with a case study describing how direct and indirect relations can be extracted from academic and professional activities registered in a database of curricula vitae in free-text format. We present evidence that this system can identify entities to assist in the process of knowledge extraction and representation to support ontology maintenance. We also demonstrate the extraction of relationships among ontology classes and their instances. CONCLUSION: We have demonstrated that our system can be used to convert research information in free-text format into a database with a semantic structure. Future studies should test this system using the growing amount of free-text information available at the institutional and national levels.

  19. Genetic Networks of Complex Disorders: from a Novel Search Engine for PubMed Article Database.

    Science.gov (United States)

    Jung, Jae-Yoon; Wall, Dennis Paul

    2013-01-01

    Finding genetic risk factors of complex disorders may involve reviewing hundreds of genes or thousands of research articles iteratively, but few tools have been available to facilitate this procedure. In this work, we built a novel publication search engine that can identify target-disorder-specific, genetics-oriented research articles and extract the genes with significant results. Preliminary test results showed that the output of this engine has better coverage, in terms of genes or publications, than other existing applications. We consider it an essential tool for understanding genetic networks of complex disorders.

  20. Validation of computerized diagnostic information in a clinical database from a national equine clinic network

    Directory of Open Access Journals (Sweden)

    Egenvall Agneta

    2009-12-01

    Full Text Available Abstract Background Computerized diagnostic information offers potential for epidemiological research; however, data accuracy must be addressed. The principal aim of this study was to evaluate the completeness and correctness of diagnostic information in a computerized equine clinical database compared to the corresponding handwritten veterinary clinical records, used as the gold standard, and to assess factors related to correctness. A further aim was to investigate completeness (epidemiologic sensitivity), correctness (positive predictive value), specificity and prevalence for diagnoses in four body systems, and correctness of affected-limb information for four joint diseases. Methods A random sample of 450 visits over the year 2002 (n visits = 49,591) was taken from 18 nationwide clinics headed under one company. Computerized information for the selected visits and copies of the corresponding veterinary clinical records were retrieved. Completeness and correctness were determined using semi-subjective criteria. Logistic regression was used to examine factors associated with correctness of diagnosis. Results Three hundred and ninety-six visits had retrievable veterinary clinical notes. The overall completeness and correctness were 91% and 92%, respectively; both values are considered high. Descriptive analyses showed a significantly higher degree of correctness for first visits compared to follow-up visits, and for cases with a diagnostic code recorded in the veterinary records compared to those with no code noted. The correctness was similar regardless of usage category (leisure/sport horse, racing trotter, racing thoroughbred) or gender. For the four body systems selected (joints; skin and hooves; respiratory; skeletal), completeness varied between 71% (respiratory) and 91% (joints), and correctness ranged from 87% (skin and hooves) to 96% (respiratory), whereas specificity was >95% for all systems. Logistic regression showed that
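    The validation measures used in the study map directly onto the standard 2x2 table of coded diagnoses against the written record. As a worked sketch (the counts below are invented for illustration, not taken from the study):

```python
def validation_metrics(tp, fp, fn, tn):
    """Epidemiologic validation measures for computerized diagnoses
    against the handwritten record (gold standard)."""
    completeness = tp / (tp + fn)   # sensitivity: coded among truly present
    correctness  = tp / (tp + fp)   # positive predictive value
    specificity  = tn / (tn + fp)
    prevalence   = (tp + fn) / (tp + fp + fn + tn)
    return completeness, correctness, specificity, prevalence

# Hypothetical counts: 90 true positives, 10 false positives,
# 10 false negatives, 290 true negatives.
c, p, s, prev = validation_metrics(tp=90, fp=10, fn=10, tn=290)
```

    With these made-up counts, completeness and correctness both come out at 90%, in the same range as the 91% and 92% the study reports for its real data.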

  1. SISSY: An example of a multi-threaded, networked, object-oriented database application

    Energy Technology Data Exchange (ETDEWEB)

    Scipioni, B.; Liu, D.; Song, T.

    1993-05-01

    The Systems Integration Support SYstem (SISSY) is presented and its capabilities and techniques are discussed. It is a fully automated data collection and analysis system supporting the SSCL's systems analysis activities as they relate to the Physics Detector and Simulation Facility (PDSF). SISSY itself is a paradigm of effective computing on the PDSF. It uses home-grown code (C++), network programming (RPC, SNMP), relational (SYBASE) and object-oriented (ObjectStore) DBMSs, UNIX operating system services (IRIX threads, cron, system utilities, shell scripts, etc.), and third-party software applications (NetCentral Station, Wingz, DataLink), all of which act together as a single application to monitor and analyze the PDSF.

  2. Historical database for estimating flows in a water supply network; Base de datos historica para estimacion de caudales en una red de abastecimiento de agua

    Energy Technology Data Exchange (ETDEWEB)

    Menendez Martinez, A.; Ariel Gomez Gutierrez, A.; Alvarez Ramos, I.; Biscarri Trivino, F. [Universidad de Sevilla (Spain)

    2000-07-01

    Monitoring the flows managed by water supply companies involves processing huge amounts of data. These data also have to correspond to the topology of the network in a way that is consistent with the data collection time. The specific purpose database described in this article was developed to meet such requirements. (Author) 4 refs.

  3. Recognition of morphometric vertebral fractures by artificial neural networks: analysis from GISMO Lombardia Database.

    Directory of Open Access Journals (Sweden)

    Cristina Eller-Vainicher

    Full Text Available BACKGROUND: It is known that bone mineral density (BMD) only partially predicts fracture risk, and that the severity and number of vertebral fractures are predictive of subsequent osteoporotic fractures (OF). The spinal deformity index (SDI) integrates the severity and number of morphometric vertebral fractures. There is current interest in developing algorithms that use traditional statistics for predicting OF, though some studies suggest their sensitivity is poor. Artificial Neural Networks (ANNs) could represent an alternative. So far, no study has investigated the ability of ANNs to predict OF and SDI. The aim of the present study is to compare ANNs and Logistic Regression (LR) in recognising, on the basis of osteoporotic risk factors and other clinical information, patients with SDI≥1 and SDI≥5 versus those with SDI = 0. METHODOLOGY: We compared the prognostic performance of ANNs with that of LR in identifying SDI≥1/SDI≥5 in 372 women with postmenopausal osteoporosis (SDI≥1, n = 176; SDI = 0, n = 196; SDI≥5, n = 51), using 45 variables (44 clinical parameters plus BMD). ANNs were allowed to choose relevant input data automatically (TWIST-system-Semeion). Of the 45 variables, 17 and 25 were selected by the TWIST-system-Semeion in the SDI≥1 vs SDI = 0 (first) and SDI≥5 vs SDI = 0 (second) analyses, respectively. In the first analysis, the sensitivity of LR and ANNs was 35.8% and 72.5%, specificity 76.5% and 78.5%, and accuracy 56.2% and 75.5%, respectively. In the second analysis, the sensitivity of LR and ANNs was 37.3% and 74.8%, specificity 90.3% and 87.8%, and accuracy 63.8% and 81.3%, respectively. CONCLUSIONS: ANNs showed better performance in identifying both SDI≥1 and SDI≥5, with higher sensitivity, suggesting a promising role in the development of algorithms for predicting OF.

  4. Berkeley Sensor Database, an Implementation of CUAHSI's ODM for the Keck HydroWatch Wireless Sensor Network

    Science.gov (United States)

    Ogle, G.; Bode, C.; Fung, I.

    2010-12-01

    The Keck HydroWatch Project is a multidisciplinary project devoted to understanding how water interacts with the atmosphere, vegetation, soil, and fractured bedrock. It is experimenting with novel techniques to monitor and trace water pathways through these media, including the development of an intensive wireless sensor network, in the Angelo Coast Range and Sagehen Reserves in California. The sensor time-series data are being supplemented with periodic campaigns experimenting with sampling and tracing techniques, including water chemistry, stable isotope analysis, electrical resistivity tomography (ERT), and neutron probes. Mechanistic and statistical modeling is being performed with these datasets. One goal of the HydroWatch project is to prototype technologies for intensive sampling that can be upscaled to the watershed scale. The Berkeley Sensor Database was designed to manage the large volumes of heterogeneous data coming from this sensor network. The system is based on the Observations Data Model (ODM) developed by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI). Due to the need for open-source software, UC Berkeley ported the ODM to a LAMP system (Linux, Apache, MySQL, Perl). As of August 2010, the Berkeley Sensor Database contains 33 million measurements from 1200 devices, with several thousand new measurements being added each hour. Data for this research are being collected from a wide variety of equipment. Some of this equipment is experimental and subject to constant modification; the rest are industry standards. Well pressure transducers, sap flow sensors, experimental microclimate motes, standard weather stations, and multiple rock and soil moisture sensors are some examples. While the Hydrologic Information System (HIS) and the ODM are optimized for data interoperability, they are not focused on the facility management and data quality control that occur at a complex research site. In this presentation, we describe our
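    The core of an ODM-style store is a star of sites, variables, and time-stamped data values. The sketch below uses SQLite instead of the project's MySQL, and a deliberately simplified subset of the schema: the table and column names, the site code, and the coordinates are illustrative assumptions, not the actual CUAHSI ODM definition.

```python
import sqlite3

# Minimal ODM-flavored schema: a time-series value table keyed to both a
# site and a variable, so heterogeneous sensors share one storage model.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sites (site_id INTEGER PRIMARY KEY, site_code TEXT,
                    latitude REAL, longitude REAL);
CREATE TABLE variables (variable_id INTEGER PRIMARY KEY,
                        variable_code TEXT, unit TEXT);
CREATE TABLE data_values (
    value_id        INTEGER PRIMARY KEY,
    data_value      REAL,
    local_date_time TEXT,
    site_id         INTEGER REFERENCES sites(site_id),
    variable_id     INTEGER REFERENCES variables(variable_id)
);
""")
conn.execute("INSERT INTO sites VALUES (1, 'ANGELO-01', 39.74, -123.63)")
conn.execute("INSERT INTO variables VALUES (1, 'SoilMoisture', 'm^3/m^3')")
conn.executemany(
    "INSERT INTO data_values (data_value, local_date_time, site_id, variable_id)"
    " VALUES (?, ?, 1, 1)",
    [(0.21, "2010-08-01T00:00"), (0.22, "2010-08-01T01:00")])
rows = conn.execute("""
    SELECT s.site_code, v.variable_code, d.local_date_time, d.data_value
    FROM data_values d
    JOIN sites s USING (site_id)
    JOIN variables v USING (variable_id)
    ORDER BY d.local_date_time""").fetchall()
```

    Because every measurement carries its site and variable keys, new sensor types only require new `variables` rows, not new tables, which is what makes the model attractive for a constantly changing instrument fleet.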

  5. Practical Utilization of OryzaExpress and Plant Omics Data Center Databases to Explore Gene Expression Networks in Oryza Sativa and Other Plant Species.

    Science.gov (United States)

    Kudo, Toru; Terashima, Shin; Takaki, Yuno; Nakamura, Yukino; Kobayashi, Masaaki; Yano, Kentaro

    2017-01-01

    Analysis of a gene expression network (GEN), which is constructed based on similarity of gene expression profiles, is a widely used approach to gain clues for new biological insights. The recent abundant availability of transcriptome data in public databases is enabling GEN analysis under various experimental conditions, and even comparative GEN analysis across species. To provide a platform to gain biological insights from public transcriptome data, valuable databases have been created and maintained. This chapter introduces the web database OryzaExpress, providing omics information on Oryza sativa (rice). The integrated database Plant Omics Data Center, supporting a wide variety of plant species, is also described to compare omics information among multiple plant species.
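    A GEN of the kind these databases expose is typically built by thresholding a similarity measure over expression profiles. A minimal sketch using Pearson correlation follows; the gene names, profiles, and the 0.9 threshold are invented for illustration and do not come from OryzaExpress or the Plant Omics Data Center.

```python
import math
from itertools import combinations

def pearson(xs, ys):
    """Pearson correlation coefficient between two expression profiles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def coexpression_edges(profiles, threshold=0.9):
    """Edges between genes whose profiles correlate above the threshold."""
    edges = []
    for a, b in combinations(sorted(profiles), 2):
        r = pearson(profiles[a], profiles[b])
        if abs(r) >= threshold:
            edges.append((a, b, round(r, 3)))
    return edges
```

    With real database exports, each profile would be a vector of expression levels across the experimental conditions, and the resulting edge list is what gets visualized or compared across species.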

  6. Developing a national database of recent Hispanic/minority graduates and professionals for employment, procurement, consulting, and educational research opportunities with the federal government. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Keller, G.D.; Garza, H.

    1996-09-30

    This report very briefly summarizes notable accomplishments of the grant collaboration between the Hispanic Experts Database/Minority Experts Database and the American Council on Education. The Directory of Hispanic Experts was compiled and distributed from the database. The database was expanded through several initiatives, and information dissemination was increased. The database is now available on the Internet and the World Wide Web. Restructuring of Web sites and other means of information dissemination was performed to coordinate the database with other government databases and eliminate duplication.

  7. Effect of diabetes and acute rejection on liver transplant outcomes: An analysis of the organ procurement and transplantation network/united network for organ sharing database.

    Science.gov (United States)

    Kuo, Hung-Tien; Lum, Erik; Martin, Paul; Bunnapradist, Suphamai

    2016-06-01

    The effects of diabetic status and acute rejection (AR) on liver transplant outcomes are largely unknown. We studied 13,736 liver recipients from the United Network for Organ Sharing/Organ Procurement and Transplantation Network database who underwent transplantation between 2004 and 2007 and had a functioning graft for greater than 1 year. The associations of pretransplant diabetes mellitus (PDM), new-onset diabetes after transplant (NODAT), and AR rates with allograft failure, all-cause mortality, and cardiovascular mortality were determined. To determine the differential and joint effects of diabetic status and AR on transplant outcomes, recipients were further stratified into 6 groups: neither (reference, n = 6600); NODAT alone (n = 2054); PDM alone (n = 2414); AR alone (n = 1448); NODAT and AR (n = 707); and PDM and AR (n = 513). An analysis by hepatitis C virus (HCV) serostatus was also performed (HCV recipients, n = 6384; non-HCV recipients, n = 5934). The median follow-up was 2537 days. The prevalence of PDM was 21.3%. At 1 year after transplant, the rates of NODAT and AR were 25.5% and 19.4%, respectively. Overall, PDM, NODAT, and AR were associated with increased risks for graft failure (PDM, hazard ratio [HR] = 1.31, P < 0.01; NODAT, HR = 1.11, P = 0.02; AR, HR = 1.28, P < 0.01). A multivariate Cox regression analysis of the 6 recipient groups demonstrated that NODAT alone was not significantly associated with any study outcome. The presence of PDM, AR, NODAT and AR, and PDM and AR was associated with higher overall graft failure and mortality risks, and the presence of PDM was associated with higher cardiovascular mortality risk. The analyses in both the HCV-positive and HCV-negative cohorts showed trends similar to the overall cohort. In conclusion, PDM and AR, but not NODAT, are associated with increased mortality and liver allograft failure. Liver Transplantation 22 796-804 2016 AASLD.

  8. Providing Access to CD-ROM Databases in a Campus Setting. Part II: Networking CD-ROMs via a LAN.

    Science.gov (United States)

    Koren, Judy

    1992-01-01

    The second part of a report on CD-ROM networking in libraries describes LAN (local area network) technology; networking software and towers; gateway software for connecting to campuswide networks; Macintosh LANs; and network licenses. Several product and software reviews are included, and a sidebar lists vendor addresses. (NRP)

  9. Fractal modeling of natural fracture networks. Final report, June 1994--June 1995

    Energy Technology Data Exchange (ETDEWEB)

    Ferer, M.V.; Dean, B.H.; Mick, C.

    1996-04-01

    Recovery from naturally fractured, tight-gas reservoirs is controlled by the fracture network. Reliable characterization of the actual fracture network in the reservoir is severely limited. The location and orientation of fractures intersecting the borehole can be determined, but the length of these fractures cannot be unambiguously determined. Fracture networks can be determined for outcrops, but there is little reason to believe that the network in the reservoir should be identical because of differences in stresses and history. Because of the lack of detailed information about the actual fracture network, modeling methods must represent the porosity and permeability associated with the fracture network as accurately as possible with very little a priori information. Three rather different types of approaches have been used: (1) dual-porosity simulations; (2) 'stochastic' modeling of fracture networks; and (3) fractal modeling of fracture networks. Stochastic models that assume a variety of probability distributions of fracture characteristics have been used with some success in modeling fracture networks. The advantage of these stochastic models over dual-porosity simulations is that real fracture heterogeneities are included in the modeling process. In this paper the authors (1) present a fractal analysis of the MWX site, using the box-counting procedure; (2) review evidence testing the fractal nature of fracture distributions and discuss the advantages of their fractal analysis over a stochastic analysis; and (3) present an efficient algorithm for producing self-similar fracture networks which mimic the real MWX outcrop fracture network.
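    The box-counting procedure named in the abstract can be sketched in a few lines: overlay grids of increasing fineness on the fracture map, count occupied boxes at each scale, and take the slope of log N(s) against log s as the dimension estimate. The synthetic straight "fracture trace" below is our own test input, not MWX data.

```python
import math

def box_count_dimension(points, scales=(2, 4, 8, 16, 32, 64)):
    """Box-counting dimension estimate for a point set in the unit square:
    the least-squares slope of log N(s) versus log s over the grid scales."""
    logs, logn = [], []
    for s in scales:
        # occupied boxes on an s-by-s grid
        boxes = {(int(x * s), int(y * s)) for x, y in points}
        logs.append(math.log(s))
        logn.append(math.log(len(boxes)))
    n = len(scales)
    mean_x, mean_y = sum(logs) / n, sum(logn) / n
    slope = (sum((a - mean_x) * (b - mean_y) for a, b in zip(logs, logn))
             / sum((a - mean_x) ** 2 for a in logs))
    return slope

# A straight fracture trace should come out close to dimension 1.
trace = [(i / 1000, i / 1000) for i in range(1000)]
```

    A digitized outcrop map would be supplied as the `points` set instead; a fractal fracture pattern yields a non-integer slope between 1 and 2.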

  10. Re: Pregabalin prescriptions in the United Kingdom - a drug utilisation study of The Health Improvement Network (THIN) primary care database by Asomaning et al

    DEFF Research Database (Denmark)

    Pottegård, A; Tjäderborn, M; Schjerning, O

    2016-01-01

    Aim In Europe, pregabalin is approved for treatment of neuropathic pain, general anxiety disorder (GAD) and as adjunctive therapy for epilepsy. The purpose of this study was to assess utilisation of pregabalin in the UK, including patients with a recorded history of substance abuse, from a large...... general practice database. Methods This observational drug utilisation study (DUS) analysed pregabalin prescription data from the UK Health Improvement Network primary care database between September 2004 and July 2009. Patient demographics, diagnoses (by READ codes) and pregabalin dosing data were...

  11. The new IAGOS Database Portal

    Science.gov (United States)

    Boulanger, Damien; Gautron, Benoit; Thouret, Valérie; Fontaine, Alain

    2016-04-01

    IAGOS (In-service Aircraft for a Global Observing System) is a European Research Infrastructure which aims at the provision of long-term, regular and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. It contains IAGOS-core data and IAGOS-CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data. The IAGOS Database Portal (http://www.iagos.fr, damien.boulanger@obs-mip.fr) is part of the French atmospheric chemistry data center AERIS (http://www.aeris-data.fr). The new IAGOS Database Portal was released in December 2015. The main improvement is the implementation of interoperability with international portals and other databases in order to improve IAGOS data discovery. In the frame of the IGAS project (IAGOS for the Copernicus Atmospheric Service), a data network has been set up. It is composed of three data centers: the IAGOS database in Toulouse; the HALO research aircraft database at DLR (https://halo-db.pa.op.dlr.de); and the CAMS data center in Jülich (http://join.iek.fz-juelich.de). The CAMS (Copernicus Atmospheric Monitoring Service) project is a prominent user of the IGAS data network. The new portal provides improved and new services such as downloads in NetCDF or NASA Ames formats, plotting tools (maps, time series, vertical profiles, etc.) and user management. Added-value products are available on the portal: back trajectories, origin of air masses, co-location with satellite data, etc. The link with the CAMS data center, through JOIN (Jülich OWS Interface), allows model outputs to be combined with IAGOS data for intercomparison. Finally, IAGOS metadata have been standardized (ISO 19115) and now provide complete information about data traceability and quality.

  12. Recovery Act: Energy Efficiency of Data Networks through Rate Adaptation (EEDNRA) - Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Matthew Andrews; Spyridon Antonakopoulos; Steve Fortune; Andrea Francini; Lisa Zhang

    2011-07-12

    This Concept Definition Study focused on developing a scientific understanding of methods to reduce energy consumption in data networks using rate adaptation. Rate adaptation is a collection of techniques that reduce energy consumption when traffic is light, and only require full energy when traffic is at full provisioned capacity. Rate adaptation is a very promising technique for saving energy: modern data networks are typically operated at average rates well below capacity, but network equipment has not yet been designed to incorporate rate adaptation. The Study concerns packet-switching equipment, routers and switches; such equipment forms the backbone of the modern Internet. The focus of the study is on algorithms and protocols that can be implemented in software or firmware to exploit hardware power-control mechanisms. Hardware power-control mechanisms are widely used in the computer industry, and are beginning to be available for networking equipment as well. Network equipment has different performance requirements than computer equipment because of the very fast rate of packet arrival; hence novel power-control algorithms are required for networking. This study resulted in five published papers, one internal report, and two patent applications, documented below. The specific technical accomplishments are the following: • A model for the power consumption of switching equipment used in service-provider telecommunication networks as a function of operating state, and measured power-consumption values for typical current equipment. • An algorithm for use in a router that adapts packet processing rate and hence power consumption to traffic load while maintaining performance guarantees on delay and throughput. • An algorithm that performs network-wide traffic routing with the objective of minimizing energy consumption, assuming that routers have less-than-ideal rate adaptivity. • An estimate of the potential energy savings in service-provider networks
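    The second accomplishment listed, a router algorithm that adapts processing rate to traffic load under a delay guarantee, can be caricatured in a few lines. This is a toy sketch under assumed units (packets per second, a fixed per-queue delay bound), not the algorithm from the published papers.

```python
def adapt_rate(backlog_pkts, arrival_rate, capacity, delay_bound, min_rate=0.05):
    """Pick the lowest service rate (as a fraction of full capacity) that
    still drains the current backlog within the delay bound -- a lower
    rate means lower power on rate-adaptive hardware."""
    # Rate r (pkts/s) must clear the backlog within delay_bound while new
    # traffic keeps arriving: r * delay_bound >= backlog + arrival * delay_bound.
    needed = arrival_rate + backlog_pkts / delay_bound
    return max(min_rate, min(1.0, needed / capacity))
```

    At light load the controller idles near the floor rate and power follows suit; only when arrivals plus backlog approach provisioned capacity does it ramp to full rate, which is the core intuition behind rate adaptation.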

  13. Technology solutions for computer network database security

    Institute of Scientific and Technical Information of China (English)

    伍军

    2015-01-01

    With the development of computer technology, network databases face security threats that are increasingly numerous and complex; network database security technology must therefore keep advancing with the times. Three-level identity authentication, database encryption, and active tracking and monitoring techniques can be adopted, evolving along both directions of passive defense and active tracking, in order to cope with the increasingly complex situation and to guarantee, to the greatest extent possible, the integrity and consistency of the data in network databases.

  14. Steiner minimal trees—the final destinations for lipid nanotube networks with three-way junctions

    Science.gov (United States)

    Yin, YaJun; Wu, JiYe; Yin, Jie; Fan, QinShan

    2011-04-01

    Through the combination of the minimum energy principle in physics and the Steiner minimal tree (SMT) theory in geometry, this paper proves a universal law for lipid nanotube networks (LNNs): at stable equilibrium state, the network of three-way lipid nanotube junctions is equivalent to a SMT. Besides, an arbitrary (usually non-equilibrium) network of lipid nanotube junctions may fission into a SMT through diffusions and dynamic self-organizations of lipid molecules. Potential applications of the law to the micromanipulations of LNNs are presented.
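
The three-way-junction geometry in the record above can be made concrete with a small numerical sketch (our own illustration, not code from the paper): for three terminals whose triangle has no angle of 120° or more, the Steiner point minimizes total edge length, the three edges meet there at 120°, and Weiszfeld's iteration converges to it.

```python
import math

# Illustrative sketch (not from the paper): Weiszfeld's iteration for the
# Steiner (Fermat) point of three terminals.

def steiner_point(pts, iters=1000):
    # Start slightly off the centroid so convergence is nontrivial.
    x = sum(p[0] for p in pts) / 3.0 + 0.1
    y = sum(p[1] for p in pts) / 3.0 - 0.1
    for _ in range(iters):
        nx = ny = den = 0.0
        for px, py in pts:
            d = math.hypot(x - px, y - py) or 1e-12  # guard zero distance
            nx += px / d
            ny += py / d
            den += 1.0 / d
        x, y = nx / den, ny / den
    return x, y

# Equilateral triangle: the Steiner point is the centroid.
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
sx, sy = steiner_point(pts)
```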

  15. Final Progress Report on Robust and/or Adaptive Filtering by Neural Networks

    Science.gov (United States)

    2007-11-02

    Conference on Artificial Neural Networks in Engineering, Nov. 4-7, 2001, St. Louis, Missouri. This paper shows that the risk-averting error criterion is...sensitive estimates, i.e., exhibiting an extremely high level of nonuniqueness. • Recurrent Multilayer Perceptrons for Discrete-Time Dynamic System...Proceedings of the 2001 Conference on Artificial Neural Networks in Engineering, St. Louis, Missouri, November 2001. 7. Avoiding Poor Local Minima in Training

  16. Fractured reservoir discrete feature network technologies. Final report, March 7, 1996 to September 30, 1998

    Energy Technology Data Exchange (ETDEWEB)

    Dershowitz, William S.; Einstein, Herbert H.; LaPoint, Paul R.; Eiben, Thorsten; Wadleigh, Eugene; Ivanova, Violeta

    1998-12-01

    This report summarizes research conducted for the Fractured Reservoir Discrete Feature Network Technologies Project. The five areas studied are development of hierarchical fracture models; fractured reservoir compartmentalization, block size, and tributary volume analysis; development and demonstration of fractured reservoir discrete feature data analysis tools; development of tools for data integration and reservoir simulation through application of discrete feature network technologies for tertiary oil production; quantitative evaluation of the economic value of this analysis approach.

  17. Social networking and the Olympic Movement: social media analysis, opportunities and trends : final report

    OpenAIRE

    Fernández Peña, Emilio

    2011-01-01

    Table of contents : 1: Introduction. - 2 : Sociodemographic data of social networking sites. - 3 : The Vancouver 2010 Olympic Winter Games on Facebook, Twitter and Orkut. - 4 : Singapore 2010 Youth Olympic Games communication strategies on Facebook and Twitter. - 5 : Sport organizations social networking strategies : case study analysis. - 6 : Olympic athletes and social media use during a non olympic-period. - 7. The Olympic Games, NBA and FC Barcelona on Facebook : content and fan participa...

  18. DMPD: Toll-like receptor (TLR)-based networks regulate neutrophilic inflammation in respiratory disease. [Dynamic Macrophage Pathway CSML Database

    Lifescience Database Archive (English)

    Full Text Available 18031251 Toll-like receptor (TLR)-based networks regulate neutrophilic inflammation...l) (.csml) Show Toll-like receptor (TLR)-based networks regulate neutrophilic inflammation in respiratory disease. PubmedID 18031251 Title Toll-like receptor (TLR)-based networks regulate ne

  19. The USA National Phenology Network's National Phenology Database: a multi-taxa, continental-scale dataset for scientific inquiry

    Science.gov (United States)

    Weltzin, J. F.

    2012-12-01

    The USA National Phenology Network (USA-NPN; www.usanpn.org) serves science and society by promoting a broad understanding of plant and animal phenology and the relationships among phenological patterns and all aspects of environmental change. The National Phenology Database, maintained by the USA-NPN, is experiencing steady growth in the number of data records it houses. As of August 2012, participants in the USA-NPN national-scale, multi-taxa phenology observation program Nature's Notebook had contributed over 1.3 million observation records (encompassing four and three years of observations for plants and animals, respectively). Data are freely available at www.usanpn.org/results/data, and include FGDC-compliant metadata, data-use and data-attribution policies, vetted and documented methodologies and protocols, and version control. Quality assurance and quality control, as well as metadata associated with field observations (e.g., effort and method reporting, site and organism condition), are also documented. Data are also available for exploration, visualization and preliminary analysis at www.usanpn.org/results/visualizations. Participants in Nature's Notebook, who include both professional and volunteer scientists, follow vetted protocols that employ phenological "status" monitoring rather than "event" monitoring: when sampling, observers indicate the status of each phenophase (e.g., "breaking leaf buds" or "active individuals"). This approach has a number of advantages over event monitoring (including estimation of error, estimation of effort, "negative" or "absence" data, and capture of multiple events and phenophase duration) and is especially well-suited for integrated multi-taxa monitoring. Further, protocols and a user interface to facilitate the description of development or abundance data (e.g., tree canopy development, animal abundance) create a robust ecological dataset.
We demonstrate several types of questions that can be addressed with this observing
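
The "status" monitoring approach described above can be illustrated with a minimal sketch (field names and dates are invented, not USA-NPN data): because each visit records a yes/no status per phenophase, onset, duration, and "absence" data can all be derived.

```python
from datetime import date

# Hypothetical status records for one plant and one phenophase:
# each visit stores a yes/no status, so "no" visits are data too.
records = [  # (visit date, "breaking leaf buds" observed?)
    (date(2012, 3, 1), False),
    (date(2012, 3, 8), True),
    (date(2012, 3, 15), True),
    (date(2012, 3, 22), False),
]

def phenophase_window(recs):
    """First and last visit dates with status True, or None if never seen."""
    yes = [d for d, status in recs if status]
    return (min(yes), max(yes)) if yes else None

onset, last = phenophase_window(records)
duration_days = (last - onset).days  # lower bound, limited by visit spacing
```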

  20. Remote facility sharing with ATM networks [PC based ATM Link Delay Simulator (LDS)]. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Kung, H. T.

    2001-06-01

    The ATM Link Delay Simulator (LDS) adds propagation delay to the ATM link on which it is installed, allowing control of link propagation delay in network protocol experiments by simulating an adjustable piece of optical fiber. Our LDS simulates a delay of between 1.5 and 500 milliseconds and is built with commodity PC hardware; only the ATM network interface card is not generally available. Our implementation is special in that it preserves the exact spacing of ATM data cells, a feature that requires sustained high performance. Our implementation shows that applications demanding sustained high performance are possible on commodity PC hardware. This illustrates the promise that PC hardware holds for adaptation to demanding, specialized testing of high-speed networks.
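
The cell-spacing property claimed above follows from the delay rule itself; a minimal sketch (our own illustration, not the LDS implementation) releases each cell exactly `delay` seconds after its arrival, so inter-cell gaps are preserved by construction:

```python
# Minimal sketch of the delay rule: every cell is shifted by the same
# constant, so the spacing between consecutive cells is unchanged.

def delay_link(cells, delay):
    """cells: list of (arrival_time_s, payload), in arrival order.
    Returns (release_time_s, payload) pairs."""
    return [(t + delay, payload) for t, payload in cells]

# Three cells 3 ms and 7 ms apart, through a 500 ms delay line.
cells = [(0.000, "a"), (0.003, "b"), (0.010, "c")]
released = delay_link(cells, 0.500)
```

The real difficulty, per the record above, is doing this at line rate in hardware/software without jitter, not the arithmetic itself.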

  1. Independent identification of meteor showers in EDMOND database

    CERN Document Server

    Rudawska, R; Tóth, J; Kornoš, L

    2014-01-01

    Cooperation and data sharing among national networks and the International Meteor Organization Video Meteor Database (IMO VMDB) resulted in the European viDeo MeteOr Network Database (EDMOND). The current version of the database (EDMOND 5.0) contains 144 751 orbits collected from 2001 to 2014. In our survey we used the EDMOND database to identify existing and new meteor showers. In the first step of the survey, using the Dsh criterion, we found groups around each meteor within a similarity threshold. Mean parameters of the groups were calculated and compared using a new function based on geocentric parameters (solar longitude, right ascension, declination, and geocentric velocity). Similar groups were merged into final clusters (representing meteor showers) and compared with the IAU Meteor Data Center list of meteor showers. This paper presents the results obtained by the proposed methodology.
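
The Dsh similarity measure used in the first step above is the Southworth-Hawkins D criterion; here is a sketch of it (the textbook 1963 formula, not the paper's exact pipeline, with invented example orbits):

```python
import math

def d_sh(o1, o2):
    """Southworth-Hawkins D criterion between two orbits.
    Orbits are dicts with eccentricity e, perihelion distance q (AU),
    inclination i, ascending node, and argument of perihelion (radians)."""
    e1, q1, i1, n1, w1 = (o1[k] for k in ("e", "q", "i", "node", "peri"))
    e2, q2, i2, n2, w2 = (o2[k] for k in ("e", "q", "i", "node", "peri"))
    # Angle I21 between the two orbital planes.
    sin_half_I = math.sqrt(math.sin((i2 - i1) / 2) ** 2
                           + math.sin(i1) * math.sin(i2)
                           * math.sin((n2 - n1) / 2) ** 2)
    I21 = 2 * math.asin(min(1.0, sin_half_I))
    # Difference Pi21 of the longitudes of perihelion.
    pi21 = w2 - w1 + 2 * math.asin(math.cos((i2 + i1) / 2)
                                   * math.sin((n2 - n1) / 2)
                                   / math.cos(I21 / 2))
    d2 = ((e2 - e1) ** 2 + (q2 - q1) ** 2
          + (2 * math.sin(I21 / 2)) ** 2
          + ((e1 + e2) / 2) ** 2 * (2 * math.sin(pi21 / 2)) ** 2)
    return math.sqrt(d2)

# Two similar example orbits (invented values, not EDMOND data).
o1 = {"e": 0.70, "q": 0.95, "i": 0.60, "node": 0.50, "peri": 2.00}
o2 = {"e": 0.72, "q": 0.96, "i": 0.61, "node": 0.52, "peri": 2.02}
d12 = d_sh(o1, o2)  # small value: candidate members of the same shower
```

Grouping then amounts to collecting, for each meteor, the orbits whose D falls under a chosen threshold.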

  2. Journal Article: EPA's National Dioxin Air Monitoring Network (Ndamn): Design, Implementation, and Final Results

    Science.gov (United States)

    The U.S. Environmental Protection Agency (U.S. EPA) established the National Dioxin Air Monitoring Network (NDAMN) in June of 1998, and operated it until November of 2004. The objective of NDAMN was to determine background air concentrations of polychlorinated dibenzo-p-dioxins (...

  3. Project on Application of Modern Communication Technologies to Educational Networking. Final Technical Report.

    Science.gov (United States)

    Morgan, Robert P.; Eastwood, Lester F., Jr.

    Research on this National Science Foundation grant to study the application of modern communications technology to educational networking was divided into three parts: assessment of the role of technology in non-traditional post-secondary education; assessment of communications technologies and educational services of current or potential future…

  4. Developing Statistics and Performance Measures for the Networked Environment: Final Report.

    Science.gov (United States)

    Bertot, John Carlo; McClure, Charles R.; Ryan, Joe

    This report summarizes the findings, issues, and lessons learned from the Developing National Public Library and Statewide Network Statistics and Performance Measures study conducted between January 1999 and August 2000. The overall goal of the study was to develop a core set of national statistics and performance measures that librarians,…

  5. Neurovascular Network Explorer 1.0: a database of 2-photon single-vessel diameter measurements with MATLAB® graphical user interface

    Directory of Open Access Journals (Sweden)

    Vishnu B Sridhar

    2014-05-01

    Full Text Available We present a database client software – Neurovascular Network Explorer 1.0 (NNE 1.0) – that uses a MATLAB® based Graphical User Interface (GUI) for interaction with a database of 2-photon single-vessel diameter measurements from our previous publication [1]. These data are of particular interest for modeling the hemodynamic response. NNE 1.0 is downloaded by the user and then runs either as a MATLAB script or as a standalone program on a Windows platform. The GUI allows browsing the database according to parameters specified by the user, simple manipulation and visualization of the retrieved records (such as averaging and peak-normalization), and export of the results. Further, we provide the NNE 1.0 source code. With this source code, the user can database their own experimental results, given the appropriate data structure and naming conventions, and thus share their data in a user-friendly format with other investigators. NNE 1.0 provides an example of a seamless and low-cost solution for sharing of experimental data by a regular-size neuroscience laboratory and may serve as a general template, facilitating dissemination of biological results and accelerating data-driven modeling approaches.

  6. Neurovascular Network Explorer 1.0: a database of 2-photon single-vessel diameter measurements with MATLAB(®) graphical user interface.

    Science.gov (United States)

    Sridhar, Vishnu B; Tian, Peifang; Dale, Anders M; Devor, Anna; Saisan, Payam A

    2014-01-01

    We present a database client software – Neurovascular Network Explorer 1.0 (NNE 1.0) – that uses a MATLAB(®)-based Graphical User Interface (GUI) for interaction with a database of 2-photon single-vessel diameter measurements from our previous publication (Tian et al., 2010). These data are of particular interest for modeling the hemodynamic response. NNE 1.0 is downloaded by the user and then runs either as a MATLAB script or as a standalone program on a Windows platform. The GUI allows browsing the database according to parameters specified by the user, simple manipulation and visualization of the retrieved records (such as averaging and peak-normalization), and export of the results. Further, we provide the NNE 1.0 source code. With this source code, the user can database their own experimental results, given the appropriate data structure and naming conventions, and thus share their data in a user-friendly format with other investigators. NNE 1.0 provides an example of a seamless and low-cost solution for sharing of experimental data by a regular-size neuroscience laboratory and may serve as a general template, facilitating dissemination of biological results and accelerating data-driven modeling approaches.
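
The record manipulations mentioned above (averaging and peak-normalization) are simple enough to sketch; this is our own illustration, not NNE 1.0 source code:

```python
# Sketch of the two record manipulations named above: point-wise
# averaging of traces and scaling a trace to a unit absolute peak.

def average(traces):
    """Point-wise mean of equal-length time series."""
    n = len(traces)
    return [sum(vals) / n for vals in zip(*traces)]

def peak_normalize(trace):
    """Scale a time series so its absolute peak is 1."""
    peak = max(abs(v) for v in trace)
    return [v / peak for v in trace] if peak else list(trace)

avg = average([[0.0, 2.0, 4.0], [0.0, 4.0, 0.0]])  # [0.0, 3.0, 2.0]
norm = peak_normalize(avg)                          # [0.0, 1.0, 2/3]
```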

  7. LiHe$^+$ in the early Universe: a full assessment of its reaction network and final abundances

    CERN Document Server

    Bovino, Stefano; Galli, Daniele; Tacconi, Mario; Gianturco, Francesco A

    2012-01-01

    We present the results of quantum calculations based on entirely ab initio methods for a variety of molecular processes and chemical reactions involving the LiHe$^+$ ionic polar molecule. With the aid of these calculations we derive accurate reaction rates and fitting expressions valid over a range of gas temperatures representative of the typical conditions of the pregalactic gas. With the help of a full chemical network, we then compute the evolution of the abundance of LiHe$^+$ as a function of redshift in the early Universe. Finally, we compare the relative abundance of LiHe$^+$ with that of other polar cations formed in the same redshift interval.

  8. Children's Culture Database (CCD)

    DEFF Research Database (Denmark)

    Wanting, Birgit

    A Dialogue-inspired database with documentation, network (individual and institutional profiles) and current news, paper presented at the research seminar: Electronic access to fiction, Copenhagen, November 11-13, 1996...

  9. Databases and tools for nuclear astrophysics applications. BRUSsels Nuclear LIBrary (BRUSLIB), Nuclear Astrophysics Compilation of REactions II (NACRE II) and Nuclear NETwork GENerator (NETGEN)

    Science.gov (United States)

    Xu, Y.; Goriely, S.; Jorissen, A.; Chen, G. L.; Arnould, M.

    2013-01-01

    An update of a previous description of the BRUSLIB + NACRE package of nuclear data for astrophysics and of the web-based nuclear network generator NETGEN is presented. The new version of BRUSLIB contains the latest predictions of a wide variety of nuclear data based on the most recent version of the Brussels-Montreal Skyrme-Hartree-Fock-Bogoliubov model. The nuclear masses, radii, spin/parities, deformations, single-particle schemes, matter densities, nuclear level densities, E1 strength functions, fission properties, and partition functions are provided for all nuclei lying between the proton and neutron drip lines over the 8 ≤ Z ≤ 110 range, whose evaluation is based on a unique microscopic model that ensures a good compromise between accuracy, reliability, and feasibility. In addition, these various ingredients are used to calculate about 100 000 Hauser-Feshbach neutron-, proton-, α-, and γ-induced reaction rates based on the reaction code TALYS. NACRE is superseded by the NACRE II compilation for 15 charged-particle transfer reactions and 19 charged-particle radiative captures on stable targets with mass numbers A < 16. NACRE II features the inclusion of experimental data made available after the publication of NACRE in 1999 and up to 2011. In addition, the extrapolation of the available data to the very low energies of astrophysical relevance is improved through the systematic use of phenomenological potential models. Uncertainties in the rates are also evaluated on this basis. Finally, the latest release v10.0 of the web-based tool NETGEN is presented. In addition to the data already used in the previous NETGEN package, it contains in a fully documented form the new BRUSLIB and NACRE II data, as well as new experiment-based radiative neutron capture cross sections. The full new versions of BRUSLIB, NACRE II, and NETGEN are available electronically from the nuclear database at http://www.astro.ulb.ac.be/NuclearData. The nuclear material is presented in

  10. BIOMASSCOMP: artificial neural networks and neurocomputers. Final report, 18 August 1987-18 February 1988

    Energy Technology Data Exchange (ETDEWEB)

    Dawes, R.L.

    1988-09-01

    BIOMASSCOMP is a project whose objective is to define and develop methods for automating the process of reverse engineering the brain for application to the development of intelligent sensors and controllers for avionic and other systems. This project quantified and applied concepts that many neural network and cognitive-science researchers have tacitly and qualitatively assumed to be at work in self-organizing systems. During this Phase I SBIR project, the author defined, developed, and implemented an entropy-based scalar measure, DMORPH, of the common structure between two systems, as evidenced by measurement of signals from the two systems. By design, DMORPH reflects only the cross-correlations between systems and not the intracorrelations within the separate systems. DMORPH was applied to the input and output signals from various artificial neural network architectures to attempt to determine which networks, and which parameter settings within each, induced the greatest structural similarity between input and output signals after learning had taken place. This research applies to the development and testing of real-time autonomous learning systems suitable for application to problems of avionics sensor fusion, adaptive sensor processing, and intelligent-resource management.

  11. DMPD: The interferon signaling network and transcription factor C/EBP-beta. [Dynamic Macrophage Pathway CSML Database

    Lifescience Database Archive (English)

    Full Text Available 18163952 The interferon signaling network and transcription factor C/EBP-beta. Li H..., Gade P, Xiao W, Kalvakolanu DV. Cell Mol Immunol. 2007 Dec;4(6):407-18. (.png) (.svg) (.html) (.csml) Show The... interferon signaling network and transcription factor C/EBP-beta. PubmedID 18163952 Title The interfero

  12. Web-based Network Database Security System Study

    Institute of Scientific and Technical Information of China (English)

    孙慧清

    2011-01-01

    This article presents a comprehensive analysis of the security architecture of Web-based network databases, discusses several security-affecting problems encountered in building such a security system, and proposes solutions to the problems encountered.

  13. Computer application for database management and networking of a radiophysics service

    Energy Technology Data Exchange (ETDEWEB)

    Ferrando Sanchez, A.; Cabello Murillo, E.; Diaz Fuentes, R.; Castro Novais, J.; Clemente Gutierrez, F.; Casa de Juan, M. A. de la; Adaimi Hernandez, P.

    2011-07-01

    Databases prove to be a powerful tool for recording, management and statistical process control in quality control. Developed in a Windows environment under Access (Microsoft Office), our service implements this philosophy on the center's computer network. A computer acting as the server provides the database to the treatment units for the daily recording of quality control measurements and incidents. To avoid common problems such as shortcuts that stop working after data migration, duplicate data, and erroneous data loss caused by failures in network connections, we proceeded to manage connections and database access centrally, easing maintenance and use for all service personnel.

  14. Cloud Databases: A Paradigm Shift in Databases

    Directory of Open Access Journals (Sweden)

    Indu Arora

    2012-07-01

    Full Text Available Relational databases ruled the Information Technology (IT) industry for almost 40 years. But the last few years have seen sea changes in the way IT is being used and viewed. Stand-alone applications have been replaced with web-based applications, dedicated servers with multiple distributed servers, and dedicated storage with network storage. Cloud computing has become a reality due to its lower cost, scalability and pay-as-you-go model. It is one of the biggest changes in IT since the rise of the World Wide Web. Cloud databases such as Big Table, Sherpa and SimpleDB are becoming popular. They address the limitations of existing relational databases related to scalability, ease of use and dynamic provisioning. Cloud databases are mainly used for data-intensive applications such as data warehousing, data mining and business intelligence. These applications are read-intensive, scalable and elastic in nature. Transactional data management applications such as banking, airline reservation, online e-commerce and supply chain management applications are write-intensive. Databases supporting such applications require ACID (Atomicity, Consistency, Isolation and Durability) properties, but these databases are difficult to deploy in the cloud. The goal of this paper is to review the state of the art in cloud databases and various architectures. It further assesses the challenges of developing cloud databases that meet user requirements and discusses popularly used cloud databases.

  15. Final Report for "Queuing Network Models of Performance of High End Computing Systems"

    Energy Technology Data Exchange (ETDEWEB)

    Buckwalter, J

    2005-09-28

    The primary objective of this project is to perform general research into queuing network models of performance of high end computing systems. A related objective is to investigate and predict how an increase in the number of nodes of a supercomputer will decrease the running time of a user's software package, which is often referred to as the strong scaling problem. We investigate the large, MPI-based Linux cluster MCR at LLNL, running the well-known NAS Parallel Benchmark (NPB) applications. Data is collected directly from NPB and also from the low-overhead LLNL profiling tool mpiP. For a run, we break the wall clock execution time of the benchmark into four components: switch delay, MPI contention time, MPI service time, and non-MPI computation time. Switch delay is estimated from message statistics. MPI service time and non-MPI computation time are calculated directly from measurement data. MPI contention is estimated by means of a queuing network model (QNM), based in part on MPI service time. This model of execution time validates reasonably well against the measured execution time, usually within 10%. Since the number of nodes used to run the application is a major input to the model, we can use the model to predict application execution times for various numbers of nodes. We also investigate how the four components of execution time scale individually as the number of nodes increases. Switch delay and MPI service time scale regularly. MPI contention is estimated by the QNM submodel and also has a fairly regular pattern. However, non-MPI compute time has a somewhat irregular pattern, possibly due to caching effects in the memory hierarchy. In contrast to some other performance modeling methods, this method is relatively fast to set up, fast to calculate, simple for data collection, and yet accurate enough to be quite useful.
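
The four-component decomposition described above can be sketched with a toy model (the component values and scaling assumptions are invented here, not taken from the report): non-MPI compute and MPI service time scale inversely with node count, per-message switch delay stays roughly constant, and contention grows with node count.

```python
# Toy model of the four-component wall-clock decomposition; all numbers
# and scaling laws are illustrative assumptions, not measured MCR data.

def predict_time(n_nodes, base):
    """Predict wall-clock time at n_nodes from component times
    measured at base['nodes'] nodes."""
    scale = base["nodes"] / n_nodes
    compute = base["compute"] * scale          # non-MPI compute: parallel
    service = base["service"] * scale          # MPI service time
    switch = base["switch"]                    # per-message switch delay
    contention = base["contention"] * (n_nodes / base["nodes"])  # grows
    return compute + service + switch + contention

# Component times (seconds) measured at 64 nodes, all invented:
base = {"nodes": 64, "compute": 120.0, "service": 30.0,
        "switch": 5.0, "contention": 10.0}
t128 = predict_time(128, base)  # 60 + 15 + 5 + 20 = 100.0 s
```

The report's actual contention term comes from a queuing network submodel rather than the linear growth assumed here.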

  16. Quantitative Tools for Dissection of Hydrogen-Producing Metabolic Networks-Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Rabinowitz, Joshua D.; Dismukes, G.Charles.; Rabitz, Herschel A.; Amador-Noguez, Daniel

    2012-10-19

    During this project we have pioneered the development of integrated experimental-computational technologies for the quantitative dissection of metabolism in hydrogen and biofuel producing microorganisms (i.e. C. acetobutylicum and various cyanobacteria species). The application of these new methodologies resulted in many significant advances in the understanding of the metabolic networks and metabolism of these organisms, and has provided new strategies to enhance their hydrogen or biofuel producing capabilities. As an example, using mass spectrometry, isotope tracers, and quantitative flux-modeling we mapped the metabolic network structure in C. acetobutylicum. This resulted in a comprehensive and quantitative understanding of central carbon metabolism that could not have been obtained using genomic data alone. We discovered that biofuel production in this bacterium, which only occurs during stationary phase, requires a global remodeling of central metabolism (involving large changes in metabolite concentrations and fluxes) that has the effect of redirecting resources (carbon and reducing power) from biomass production into solvent production. This new holistic, quantitative understanding of metabolism is now being used as the basis for metabolic engineering strategies to improve solvent production in this bacterium. In another example, making use of newly developed technologies for monitoring hydrogen and NAD(P)H levels in vivo, we dissected the metabolic pathways for photobiological hydrogen production by cyanobacteria Cyanothece sp. This investigation led to the identification of multiple targets for improving hydrogen production. Importantly, the quantitative tools and approaches that we have developed are broadly applicable and we are now using them to investigate other important biofuel producers, such as cellulolytic bacteria.

  17. Molecular Target-Oriented Phytochemical Database and Its Application to the Network Analysis of Action Mechanisms of Herbal Medicines

    Directory of Open Access Journals (Sweden)

    Yukihiro Eguchi

    2013-03-01

    Full Text Available Kampo medicines, the Japanese adaptation of traditional Chinese medicines, are formed by combining several herbs containing multiple phytochemicals. The considerable ambiguity of the pharmacological profiles of Kampo medicines is expected to be clarified by identifying the molecular targets of constituent phytochemicals and analyzing the combined effects of the phytochemicals on the pharmacological pathways formed by those targets. To facilitate this line of study, we constructed paired databases named PhytodamaTarget and PhytodamaTaxon DBs, which cover the molecular targets of phytochemicals and the constituent phytochemicals of plant taxa, respectively, by utilizing information from the literature. We then used the databases to explore possible mechanisms of synergism in analgesic activity between Glycyrrhiza glabra and Paeonia lactiflora

  18. Network analysis of geomagnetic substorms using the SuperMAG database of ground-based magnetometer stations

    CERN Document Server

    Dods, J; Gjerloev, J W

    2016-01-01

    The overall morphology and dynamics of magnetospheric substorms is well established in terms of the observed qualitative auroral features seen in ground-based magnetometers. This paper focuses on the quantitative characterization of substorm dynamics captured by ground-based magnetometer stations. We present the first analysis of substorms using dynamical networks obtained from the full available set of ground-based magnetometer observations in the Northern Hemisphere. The stations are connected in the network when the correlation between the vector magnetometer time series from pairs of stations within a running time window exceeds a threshold. Dimensionless parameters can then be obtained that characterize the network and by extension, the spatiotemporal dynamics of the substorm under observation. We analyze four isolated substorm test cases as well as a steady magnetic convection (SMC) event and a day in which no substorms occur. These test case substorms are found to give a consistent characteristic netwo...

  19. Final design of the Switching Network Units for the JT-60SA Central Solenoid

    Energy Technology Data Exchange (ETDEWEB)

    Lampasi, Alessandro, E-mail: alessandro.lampasi@enea.it [National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA), Frascati (Italy); Coletti, Alberto; Novello, Luca [Fusion for Energy (F4E) Broader Fusion Development Department, Garching (Germany); Matsukawa, Makoto [Japan Atomic Energy Agency, Naka Fusion Institute, Mukouyama, Naka-si, Ibaraki-ken (Japan); Burini, Filippo; Taddia, Giuseppe; Tenconi, Sandro [OCEM Energy Technology, San Giorgio Di Piano (Italy)

    2014-04-15

    This paper describes the approved detailed design of the four Switching Network Units (SNUs) of the superconducting Central Solenoid of JT-60SA, the satellite tokamak that will be built in Naka, Japan, in the framework of the “Broader Approach” cooperation agreement between Europe and Japan. The SNUs can interrupt a current of 20 kA DC in less than 1 ms in order to produce a voltage of 5 kV. Such performance is obtained by inserting an electronic static circuit breaker in parallel to an electromechanical contactor and by matching and coordinating their operations. Any undesired transient overvoltage is limited by an advanced snubber circuit optimized for this application. The SNU resistance values can be adapted to the specific operation scenario. In particular, after successful plasma breakdown, the SNU resistance can be reduced by a making switch. The design choices of the main SNU elements are justified by showing and discussing the performed calculations and simulations. In most cases, the developed design is expected to exceed the performances required by the JT-60SA project.

  20. Neural network recognition of nuclear power plant transients. Final report, April 15, 1992--April 15, 1995

    Energy Technology Data Exchange (ETDEWEB)

    Bartlett, E.B.

    1995-05-15

    The objective of this report is to describe results obtained during the second year of funding that will lead to the development of an artificial neural network (ANN) fault diagnostic system for the real-time classification of operational transients at nuclear power plants. The ultimate goal of this three-year project is to design, build, and test a prototype diagnostic adviser for use in the control room or technical support center at Duane Arnold Energy Center (DAEC); such a prototype could be integrated into the plant process computer or safety-parameter display system. The adviser could then warn and inform plant operators and engineers of plant component failures in a timely manner. This report describes the work accomplished in the second of three scheduled years for the project. Included herein is a summary of the second year's results as well as descriptions of each of the major topics undertaken by the researchers. Also included are reprints of the articles written under this funding as well as those that were published during the funded period.

  1. FINAL TECHNICAL REPORT: Underwater Active Acoustic Monitoring Network For Marine And Hydrokinetic Energy Projects

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Peter J. [Scientific Solutions, Inc, Nashua, NH (United States); Edson, Patrick L. [Scientific Solutions, Inc, Nashua, NH (United States)

    2013-12-20

    This project saw the completion of the design and development of a second-generation, high frequency (90-120 kHz) Subsurface-Threat Detection Sonar Network (SDSN). The system was deployed, operated, and tested in Cobscook Bay, Maine, near the site of the Ocean Renewable Power Company TidGen™ power unit. This effort resulted in a very successful demonstration of the SDSN detection, tracking, localization, and classification capabilities in a high-current, MHK environment, as measured by results from the detection and tracking trials in Cobscook Bay. The new high frequency node, designed to operate outside the hearing range of a subset of marine mammals, was shown to detect and track objects of marine-mammal-like target strength to ranges of approximately 500 meters. This performance range results in the SDSN system tracking objects for a significant duration, on the order of minutes, even in a tidal flow of 5-7 knots, potentially allowing time for MHK system or operator decision-making if marine mammals are present. Having demonstrated detection and tracking of synthetic targets with target strengths similar to some marine mammals, the primary hurdle to eventual automated monitoring is obtaining a dataset of actual marine mammal kinematic behavior and modifying the tracking algorithms and parameters, which are currently tuned to human diver kinematics and classification.

  2. Accident Database Early Warning Based on BP Neural Network%基于BP神经网络的车祸库预警技术

    Institute of Scientific and Technical Information of China (English)

    冯继妙; 胡立芳

    2011-01-01

    In order to achieve accident early warning, this paper presents a new method: establish a vehicle accident database and combine it with BP neural network technology. First, construct a suitable BP neural network. Second, train the BP neural network with accident feature information, after which the trained network can judge the likelihood of that specific type of accident. Finally, feed the vehicle driving information into the trained BP neural network to predict the likelihood of that type of accident. The author simulates this method in Matlab 7.0.1; the simulation results show that the method is feasible and effective.
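    The train-then-predict workflow described above can be sketched as a minimal one-hidden-layer backpropagation network in pure Python. This is a hedged illustration only: the network size, learning rate, and toy "accident feature" data below are assumptions, not taken from the paper (which uses Matlab 7.0.1).

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyBP:
    """Minimal one-hidden-layer BP network (illustrative sketch only)."""

    def __init__(self, n_in, n_hid, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
        self.w2 = [rng.uniform(-1, 1) for _ in range(n_hid)]

    def forward(self, x):
        # Hidden activations, then a single sigmoid output in [0, 1].
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in self.w1]
        return sigmoid(sum(w * hi for w, hi in zip(self.w2, self.h)))

    def train_step(self, x, y, lr=0.5):
        # One backpropagation step on a single (features, label) pair.
        o = self.forward(x)
        delta_o = (o - y) * o * (1 - o)
        for j, hj in enumerate(self.h):
            delta_h = delta_o * self.w2[j] * hj * (1 - hj)  # uses pre-update w2
            self.w2[j] -= lr * delta_o * hj
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * delta_h * xi
        return 0.5 * (o - y) ** 2  # squared error before the update

# Toy data: the "accident" label is 1 only when both risk features are present.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
net = TinyBP(n_in=2, n_hid=3, seed=1)
for _ in range(500):
    for x, y in data:
        net.train_step(x, y)
```

    After training, `net.forward(features)` returns a score in [0, 1] that can be read as the likelihood of that accident type, mirroring the train-then-predict split in the abstract.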

  3. Using Bayesian Belief Network (BBN) modelling for rapid source term prediction. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Knochenhauer, M.; Swaling, V.H.; Dedda, F.D.; Hansson, F.; Sjoekvist, S.; Sunnegaerd, K. [Lloyd's Register Consulting AB, Sundbyberg (Sweden)]

    2013-10-15

    The project presented in this report deals with a number of complex issues related to the development of a tool for rapid source term prediction (RASTEP), based on a plant model represented as a Bayesian belief network (BBN) and a source term module which is used for assigning relevant source terms to BBN end states. Thus, RASTEP uses a BBN to model severe accident progression in a nuclear power plant in combination with pre-calculated source terms (i.e., amount, composition, timing, and release path of released radio-nuclides). The output is a set of possible source terms with associated probabilities. One major issue addressed has been the integration of probabilistic and deterministic analyses, dealing with the challenge of making the source term determination flexible enough to give reliable and valid output throughout the accident scenario. The potential for connecting RASTEP to a fast-running source term prediction code has been explored, as well as alternative ways of improving the deterministic connections of the tool. As part of the investigation, a comparison of two deterministic severe accident analysis codes has been performed. A second important task has been to develop a general method by which experts' beliefs can be included in a systematic way when defining the conditional probability tables (CPTs) in the BBN. Using this iterative method results in a reliable BBN even though expert judgements, with their associated uncertainties, have been used. It also simplifies verification and validation of the considerable amounts of quantitative data included in a BBN. (Author)
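    The core BBN mechanics the abstract relies on, propagating conditional probability tables (CPTs) to obtain end-state probabilities, can be illustrated with a three-node chain and inference by enumeration. The node names and CPT numbers below are invented for illustration and are not RASTEP values.

```python
from itertools import product

# Hypothetical 3-node chain: Damage -> Breach -> Release (all boolean).
P_damage = {True: 0.1, False: 0.9}
P_breach = {True: {True: 0.3, False: 0.7},     # P(breach | damage=True)
            False: {True: 0.01, False: 0.99}}  # P(breach | damage=False)
P_release = {True: {True: 0.8, False: 0.2},    # P(release | breach=True)
             False: {True: 0.05, False: 0.95}} # P(release | breach=False)

def joint(d, b, r):
    """Joint probability of one full assignment of the three nodes."""
    return P_damage[d] * P_breach[d][b] * P_release[b][r]

def p_release_given_damage(damage=True):
    """P(release | damage) by enumeration over the hidden 'breach' node."""
    num = sum(joint(damage, b, True) for b in (True, False))
    den = sum(joint(damage, b, r) for b, r in product((True, False), repeat=2))
    return num / den
```

    Enumerating hidden nodes like this is exact but exponential in network size; production BBN tools use more efficient propagation, which is why the sketch stays at three nodes.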

  4. DMPD: Glucocorticoids and the innate immune system: crosstalk with the toll-like receptor signaling network. [Dynamic Macrophage Pathway CSML Database

    Lifescience Database Archive (English)

    Full Text Available 17576036 Glucocorticoids and the innate immune system: crosstalk with the toll-like receptor signaling network. Authors: Chinenov Y, Rog

  5. The GTN-P Data Management System: A central database for permafrost monitoring parameters of the Global Terrestrial Network for Permafrost (GTN-P) and beyond

    Science.gov (United States)

    Lanckman, Jean-Pierre; Elger, Kirsten; Karlsson, Ævar Karl; Johannsson, Halldór; Lantuit, Hugues

    2013-04-01

    Permafrost is a direct indicator of climate change and has been identified as an Essential Climate Variable (ECV) by the global observing community. The monitoring of permafrost temperatures, active-layer thicknesses and other parameters has been performed for several decades already, but it was brought together within the Global Terrestrial Network for Permafrost (GTN-P) only in the 1990s, including the development of measurement protocols to provide standardized data. GTN-P is the primary international observing network for permafrost sponsored by the Global Climate Observing System (GCOS) and the Global Terrestrial Observing System (GTOS), and managed by the International Permafrost Association (IPA). All GTN-P data is provided under an "open data policy", with free data access via the World Wide Web. The existing data, however, is far from being homogeneous: it is not yet optimized for databases, there is no framework for data reporting or archival, and data documentation is incomplete. As a result, and despite the utmost relevance of permafrost in the Earth's climate system, the data has not been used by as many researchers as intended by the initiators of the programs. While the monitoring of many other ECVs has been tackled by organized international networks (e.g. FLUXNET), there is still no central database for all permafrost-related parameters. The European Union project PAGE21 created opportunities to develop this central database for permafrost monitoring parameters of GTN-P during the duration of the project and beyond. The database aims to be the one location where the researcher can find data, metadata, and information of all relevant parameters for a specific site. Each component of the Data Management System (DMS), including parameters, data levels and metadata formats, was developed in cooperation with the GTN-P and the IPA. 
The general framework of the GTN-P DMS is based on an object oriented model (OOM), open for as many parameters as possible, and

  6. Numerical databases in marine biology

    Digital Repository Service at National Institute of Oceanography (India)

    Sarupria, J.S.; Bhargava, R.M.S.


  7. Network and Ensemble Enabled Entity Extraction in Informal Text (NEEEEIT) final report.

    Energy Technology Data Exchange (ETDEWEB)

    Kegelmeyer, W. Philip; Shead, Timothy M. [Sandia National Laboratories, Albuquerque, NM]; Dunlavy, Daniel M. [Sandia National Laboratories, Albuquerque, NM]

    2013-09-01

    This SAND report summarizes the activities and outcomes of the Network and Ensemble Enabled Entity Extraction in Informal Text (NEEEEIT) LDRD project, which addressed improving the accuracy of conditional random fields for named entity recognition through the use of ensemble methods. Conditional random fields (CRFs) are powerful, flexible probabilistic graphical models often used in supervised machine learning prediction tasks associated with sequence data. Specifically, they are currently the best known option for named entity recognition (NER) in text. NER is the process of labeling words in sentences with semantic identifiers such as "person", "date", or "organization". Ensembles are a powerful statistical inference meta-method that can make most supervised machine learning methods more accurate, faster, or both. Ensemble methods are normally best suited to "unstable" classification methods with high variance error. CRFs applied to NER are very stable classifiers, and as such, would initially seem to be resistant to the benefits of ensembles. The NEEEEIT project nonetheless worked out how to generalize ensemble methods to CRFs, demonstrated that accuracy can indeed be improved by proper use of ensemble techniques, and generated a new CRF code, "pyCrust", and a surrounding application environment, "NEEEEIT", which implement those improvements. The summary practical advice that results from this work, then, is: When making use of CRFs for label prediction tasks in machine learning, use the pyCrust CRF base classifier with NEEEEIT's bagging ensemble implementation. (If those codes are not available, then de-stabilize your CRF code via every means available, and generate the bagged training sets by hand.) If you have ample pre-processing computational time, do "forward feature selection" to find and remove counter-productive feature classes. Conversely
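    The bagging idea the report builds on, bootstrap-resampled training sets plus a vote over the resulting models, can be sketched generically. The stub learner below merely stands in for a CRF (pyCrust itself is not shown), so everything here is an illustrative assumption rather than the project's code.

```python
import random
from collections import Counter

class MajorityLabelLearner:
    """Stub base learner standing in for a CRF: predicts its most frequent training label."""
    def fit(self, examples):
        self.label = Counter(y for _, y in examples).most_common(1)[0][0]
        return self

    def predict(self, x):
        return self.label

def bagged_predict(train, x, n_models=11, seed=0):
    """Fit n_models learners on bootstrap resamples of train, then majority-vote."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_models):
        boot = [rng.choice(train) for _ in train]  # bootstrap: sample with replacement
        votes.append(MajorityLabelLearner().fit(boot).predict(x))
    return Counter(votes).most_common(1)[0][0]
```

    The interesting point the report makes is that this recipe only pays off when the base learner is unstable; a stable learner (like a stock CRF) returns near-identical votes, which is why the project had to generalize the technique.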

  8. A reference model for database security proxy

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    How to protect the database, the kernel resource of information warfare, is becoming more and more important with the rapid development of computer and communication technology. As an application-level firewall, a database security proxy can successfully repulse attacks originating from outside the network and reduce to zero the damage from foreign DBMS products. We enhance the capability of COAST's firewall reference model by adding a transmission unit modification function and an attribute value mapping function, describe the schematic and semantic layer reference model, and finally form a reference model for a DBMS security proxy, which greatly helps in the design and implementation of database security proxies. This modeling process can clearly separate the system functionality into three layers, define the possible security functions for each layer, and estimate the computational cost for each layer.
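    The layered separation the model proposes can be caricatured in a few lines of Python. The table policy, regex check, and masking rules below are hypothetical examples of a schematic-layer check plus an attribute value mapping function, not the paper's actual design.

```python
import re

# Hypothetical attribute value mapping: sensitive columns are masked in results.
ATTRIBUTE_MAP = {"salary": lambda v: "***", "ssn": lambda v: "***"}

class DatabaseSecurityProxy:
    """Toy application-level proxy: a schematic-layer check plus a semantic-layer mapping."""

    def __init__(self, allowed_tables):
        self.allowed_tables = {t.lower() for t in allowed_tables}

    def check(self, query):
        """Schematic layer: only queries against tables in the policy may pass."""
        m = re.search(r"\bfrom\s+(\w+)", query, re.IGNORECASE)
        return bool(m) and m.group(1).lower() in self.allowed_tables

    def mask_row(self, row):
        """Semantic layer: apply the attribute value mapping before returning a row."""
        return {k: (ATTRIBUTE_MAP[k](v) if k in ATTRIBUTE_MAP else v)
                for k, v in row.items()}
```

    Splitting the policy check from the result transform mirrors the paper's point that each layer can be assigned its own security functions and costed separately.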

  9. A reference model for database security proxy

    Institute of Scientific and Technical Information of China (English)

    蔡亮; 杨小虎; 董金祥

    2002-01-01

    How to protect the database, the kernel resource of information warfare, is becoming more and more important with the rapid development of computer and communication technology. As an application-level firewall, a database security proxy can successfully repulse attacks originating from outside the network and reduce to zero the damage from foreign DBMS products. We enhance the capability of COAST's firewall reference model by adding a transmission unit modification function and an attribute value mapping function, describe the schematic and semantic layer reference model, and finally form a reference model for a DBMS security proxy, which greatly helps in the design and implementation of database security proxies. This modeling process can clearly separate the system functionality into three layers, define the possible security functions for each layer, and estimate the computational cost for each layer.

  10. Governing PatientsLikeMe: information production and research through an open, distributed, and data-based social media network

    OpenAIRE

    2015-01-01

    Many organizations develop social media networks with the aim of engaging a wide range of social groups in the production of information that fuels their processes. This effort appears to crucially depend on complex data structures that allow the organization to connect and collect data from a myriad of local contexts and actors. One such organization, PatientsLikeMe, is developing a platform with the aim of connecting patients with one another while collecting self-reported medical data, whi...

  11. Weighted gene co-expression network analysis in identification of metastasis-related genes of lung squamous cell carcinoma based on the Cancer Genome Atlas database

    Science.gov (United States)

    Tian, Feng; Zhao, Jinlong; Kang, Zhenxing

    2017-01-01

    Background Lung squamous cell carcinoma (lung SCC) is a common type of malignancy, and the pathogenic mechanism of its tumor development is unclear. The aim of this study was to identify key genes as diagnostic biomarkers of lung SCC metastasis. Methods We searched and downloaded mRNA expression data and clinical data from The Cancer Genome Atlas (TCGA) database to identify differences in mRNA expression of primary tumor tissues from lung SCC with and without metastasis. Gene co-expression network analysis, protein-protein interaction (PPI) network analysis, Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analyses, and quantitative real-time polymerase chain reactions (qRT-PCR) were used to explore the biological functions of the identified dysregulated genes. Results Four hundred and eighty-two differentially expressed genes (DEGs) were identified between lung SCC with and without metastasis. Nineteen modules were identified in lung SCC through weighted gene co-expression network analysis (WGCNA). Twenty-three DEGs and 26 DEGs were significantly enriched in the pink and black modules, respectively. KEGG pathway analysis showed that the 26 DEGs in the black module were significantly enriched in the bile secretion pathway. The 49 DEGs in the two gene co-expression modules were used to construct the PPI network. CFTR in the black module was the hub protein, with connectivity to 182 genes. The qRT-PCR results showed that FIGF, SFTPD and DYNLRB2 were significantly down-regulated in the tumor samples of lung SCC with metastasis, and that CFTR, SCGB3A2, SSTR1, SCTR and ROPN1L tended to be down-regulated in lung SCC with metastasis compared to lung SCC without metastasis. Conclusions The dysregulated genes, including CFTR, SCTR and FIGF, might be involved in the pathology of lung SCC metastasis and could be used as potential diagnostic biomarkers or therapeutic targets for lung SCC.
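    The co-expression step underlying this analysis, linking genes whose expression profiles correlate and ranking hubs by connectivity, can be sketched as below. The expression vectors are made up, and real WGCNA soft-thresholds a weighted network rather than applying the hard correlation cutoff used here.

```python
import math

def pearson(a, b):
    """Pearson correlation of two equal-length expression vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def hub_gene(expr, threshold=0.9):
    """Gene with the most co-expression partners at |r| >= threshold (hard cutoff)."""
    genes = list(expr)
    degree = {g: 0 for g in genes}
    for i, g in enumerate(genes):
        for h in genes[i + 1:]:
            if abs(pearson(expr[g], expr[h])) >= threshold:
                degree[g] += 1
                degree[h] += 1
    return max(degree, key=degree.get)

# Made-up expression profiles across four samples; "g1" and "g2" are collinear.
expr = {"g1": [1, 2, 3, 4], "g2": [2, 4, 6, 8], "g3": [4, 1, 3, 2]}
```

    In the paper the same connectivity notion is what singles out CFTR: the module member with the most network partners is reported as the hub protein.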

  12. Weighted gene co-expression network analysis in identification of metastasis-related genes of lung squamous cell carcinoma based on the Cancer Genome Atlas database.

    Science.gov (United States)

    Tian, Feng; Zhao, Jinlong; Fan, Xinlei; Kang, Zhenxing

    2017-01-01

    Lung squamous cell carcinoma (lung SCC) is a common type of malignancy, and the pathogenic mechanism of its tumor development is unclear. The aim of this study was to identify key genes as diagnostic biomarkers of lung SCC metastasis. We searched and downloaded mRNA expression data and clinical data from The Cancer Genome Atlas (TCGA) database to identify differences in mRNA expression of primary tumor tissues from lung SCC with and without metastasis. Gene co-expression network analysis, protein-protein interaction (PPI) network analysis, Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analyses, and quantitative real-time polymerase chain reactions (qRT-PCR) were used to explore the biological functions of the identified dysregulated genes. Four hundred and eighty-two differentially expressed genes (DEGs) were identified between lung SCC with and without metastasis. Nineteen modules were identified in lung SCC through weighted gene co-expression network analysis (WGCNA). Twenty-three DEGs and 26 DEGs were significantly enriched in the pink and black modules, respectively. KEGG pathway analysis showed that the 26 DEGs in the black module were significantly enriched in the bile secretion pathway. The 49 DEGs in the two gene co-expression modules were used to construct the PPI network. CFTR in the black module was the hub protein, with connectivity to 182 genes. The qRT-PCR results showed that FIGF, SFTPD and DYNLRB2 were significantly down-regulated in the tumor samples of lung SCC with metastasis, and that CFTR, SCGB3A2, SSTR1, SCTR and ROPN1L tended to be down-regulated in lung SCC with metastasis compared to lung SCC without metastasis. The dysregulated genes, including CFTR, SCTR and FIGF, might be involved in the pathology of lung SCC metastasis and could be used as potential diagnostic biomarkers or therapeutic targets for lung SCC.

  13. Hazard Analysis Database Report

    CERN Document Server

    Grams, W H

    2000-01-01

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: Data from t...

  14. Structural evolution of trimesic acid (TMA)/Zn2+ ion network on Au(111) to final structure of (10√3 × 10√3)

    Science.gov (United States)

    Kim, Jandee; Lee, Jaesung; Rhee, Choong Kyun

    2016-02-01

    Presented is a scanning tunneling microscopy (STM) study of the structural evolution of the TMA/Zn2+ ion network on Au(111) to the final structure of (10√3 × 10√3) during solution-phase post-modification of the pristine trimesic acid (TMA) network of a (5√3 × 5√3) structure with Zn2+ ions. Coordination of Zn2+ ions into adsorbed TMA molecules transforms crown-like TMA hexamers in the pristine TMA network into chevron pairs in the TMA/Zn2+ ion network. Two ordered transient structures of the TMA/Zn2+ ion network were observed. One is a (5√7 × 5√7) structure consisting of Zn2+ ion-containing chevron pairs and Zn2+ ion-free TMA dimers. The other is a (5√39 × 5√21) structure made of chevron pairs and chevron-pair-missing sites. An STM image showing domains of different stages of crystallization of chevron pairs demonstrates that the TMA/Zn2+ network is quite dynamic before reaching the final structure. The observed structural evolution of the TMA/Zn2+ ion network is discussed in terms of the modification of the configurations of adsorbed TMA as it accommodates Zn2+ ions and the re-ordering of Zn2+ ion-containing chevron pairs.

  15. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  16. Artificial Neural Networks, and Evolutionary Algorithms as a systems biology approach to a data-base on fetal growth restriction.

    Science.gov (United States)

    Street, Maria E; Buscema, Massimo; Smerieri, Arianna; Montanini, Luisa; Grossi, Enzo

    2013-12-01

    One of the specific aims of systems biology is to model and discover properties of the functioning of cells, tissues and organisms. A systems biology approach was undertaken to investigate, as far as the available data allowed, the entire system of intra-uterine growth, to assess the variables of interest, discriminate those which were effectively related to appropriate or restricted intrauterine growth, and achieve an understanding of the system in these two conditions. Artificial Adaptive Systems, which include Artificial Neural Networks and Evolutionary Algorithms, led to the first analyses. These analyses identified the importance of the biochemical variables IL-6, IGF-II and IGFBP-2 protein concentrations in placental lysates, offered a new insight into placental markers of fetal growth within the IGF and cytokine systems, confirmed their interrelationships, and offered a critical assessment of studies previously performed.

  17. Automated classification of seismic sources in a large database: a comparison of Random Forests and Deep Neural Networks.

    Science.gov (United States)

    Hibert, Clement; Stumpf, André; Provost, Floriane; Malet, Jean-Philippe

    2017-04-01

    In the past decades, the increasing quality of seismic sensors and the capability to transfer large quantities of data remotely have led to a fast densification of local, regional and global seismic networks for near real-time monitoring of crustal and surface processes. This technological advance permits the use of seismology to document geological and natural/anthropogenic processes (volcanoes, ice-calving, landslides, snow and rock avalanches, geothermal fields), but has also led to an ever-growing quantity of seismic data. This wealth of seismic data makes the construction of complete seismicity catalogs, which include earthquakes but also other sources of seismic waves, more challenging and very time-consuming, as this critical pre-processing stage is classically done by human operators and hundreds of thousands of seismic signals have to be processed. To overcome this issue, the development of automatic methods for the processing of continuous seismic data appears to be a necessity. The classification algorithm should be robust, precise and versatile enough to be deployed to monitor the seismicity in very different contexts. In this study, we evaluate the ability of machine learning algorithms, namely Random Forest and Deep Neural Network classifiers, to analyze the seismic sources at the Piton de la Fournaise volcano. We gather a catalog of more than 20,000 events, belonging to 8 classes of seismic sources. We define 60 attributes, based on the waveform, the frequency content and the polarization of the seismic waves, to parameterize the recorded seismic signals. We show that both algorithms provide similar positive classification rates, exceeding 90% of the events. When trained with a sufficient number of events, the rate of positive identification can reach 99%. 
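    Attribute extraction of the kind described, summary features computed from each seismic waveform before classification, can be sketched as below. The five attributes and their names are illustrative stand-ins for the study's 60 waveform, spectral and polarization attributes.

```python
def waveform_attributes(signal, rate=100.0):
    """Compute a few illustrative per-event attributes from a sampled waveform."""
    n = len(signal)
    energy = sum(s * s for s in signal)
    peak = max(abs(s) for s in signal)
    # Sign changes between consecutive samples: a rough proxy for frequency content.
    zero_crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    return {
        "duration_s": n / rate,
        "peak": peak,
        "energy": energy,
        "zcr": zero_crossings / (n - 1),
        "impulsiveness": (peak ** 2) * n / energy if energy else 0.0,
    }

# A synthetic alternating-sign "signal": 100 samples at 100 Hz.
attrs = waveform_attributes([0.5 * (-1) ** t for t in range(100)])
```

    Feature vectors like `attrs` (one per event, with class labels from the catalog) are what a Random Forest or neural network classifier would then be trained on.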
These very high rates of positive identification open the perspective of an operational implementation of these algorithms for near-real time monitoring of

  18. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  19. Onzekere databases [Uncertain databases]

    NARCIS (Netherlands)

    van Keulen, Maurice

    A recent development in database research concerns so-called 'uncertain databases'. This article describes what uncertain databases are, how they can be used, and which applications in particular could benefit from this technology.

  20. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases such...

  1. Development of a relational database for rated values of metallic materials. Final report; Entwicklung einer relationalen Datenbank fuer Sollwerte metallischer Werkstoffe; Schlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Kowalski, P.; Hartung, T.; Kirchheiner, R.

    1993-03-01

    The "Rated Value Material Database (SWDB)" was prepared within the R&D project "Corrosion Information System (CORIS)" to manage the contents of standards, specifications and general information referring to metallic materials which are of relevance for an engineer's practical work. The main function of this material database is to make available information on the materials whose systems are included in CORIS. Furthermore, the SWDB can also be used separately to find information on chemical compositions, mechanical-technological data, physical properties, and general information on application and processing. A mass data transformer, which was developed in addition, integrates the rated data of a neutral material control board (RW-TUeV, Essen) and therefore guarantees the error-free application of the SWDB on the basis of verified data. A relational database system, comprising the corresponding programming tools as well as a graphical user interface, was used for reasons of compatibility and in order to simplify interactive programming. All development stages, from the first design to the implementation, were realized in close cooperation with the project partners and the commissioned subcontractors and institutions. (orig.)

  2. Text mining and manual curation of chemical-gene-disease networks for the comparative toxicogenomics database (CTD).

    Science.gov (United States)

    Wiegers, Thomas C; Davis, Allan Peter; Cohen, K Bretonnel; Hirschman, Lynette; Mattingly, Carolyn J

    2009-10-08

    The Comparative Toxicogenomics Database (CTD) is a publicly available resource that promotes understanding about the etiology of environmental diseases. It provides manually curated chemical-gene/protein interactions and chemical- and gene-disease relationships from the peer-reviewed, published literature. The goals of the research reported here were to establish a baseline analysis of current CTD curation, develop a text-mining prototype from readily available open source components, and evaluate its potential value in augmenting curation efficiency and increasing data coverage. Prototype text-mining applications were developed and evaluated using a CTD data set consisting of manually curated molecular interactions and relationships from 1,600 documents. Preliminary results indicated that the prototype found 80% of the gene, chemical, and disease terms appearing in curated interactions. These terms were used to re-rank documents for curation, resulting in increases in mean average precision (63% for the baseline vs. 73% for a rule-based re-ranking), and in the correlation coefficient of rank vs. number of curatable interactions per document (baseline 0.14 vs. 0.38 for the rule-based re-ranking). This text-mining project is unique in its integration of existing tools into a single workflow with direct application to CTD. We performed a baseline assessment of the inter-curator consistency and coverage in CTD, which allowed us to measure the potential of these integrated tools to improve prioritization of journal articles for manual curation. Our study presents a feasible and cost-effective approach for developing a text mining solution to enhance manual curation throughput and efficiency.
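    The ranking metric reported above, mean average precision (MAP), can be computed as follows; the document IDs in the usage line are made up.

```python
def average_precision(ranked, relevant):
    """AP for one ranked list: mean precision@k over the ranks of relevant documents."""
    hits, total = 0, 0.0
    for k, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            total += hits / k
    return total / len(relevant) if relevant else 0.0

def mean_average_precision(queries):
    """queries: iterable of (ranked_docs, relevant_set) pairs, one per query."""
    queries = list(queries)
    return sum(average_precision(r, rel) for r, rel in queries) / len(queries)
```

    For example, `average_precision(["d1", "d2", "d3"], {"d1", "d3"})` is (1/1 + 2/3) / 2 = 5/6; a re-ranking that moves curatable documents toward the top raises this value, which is the sense in which the paper's 63% vs. 73% comparison should be read.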

  3. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  4. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  5. 地铁车辆信息网络数据库在车辆管理调度中的应用%The Application of Subway Vehicle Information Network Database in Vehicle Scheduling Management

    Institute of Scientific and Technical Information of China (English)

    顾申生

    2014-01-01

    This paper focuses on the key technologies of the IBM DB2 network database, with further study of database design and architecture and of the tools for automatic database management and configuration. Combining this with the use of the DB2 database's network features in the subway management information system, it further demonstrates the necessity of a network database in a metro vehicle operating system.

  6. Energy efficiency in Germany. Analysis based on the ODYSSEE database from the SAVE project 'Cross-country comparison on energy efficiency indicators'. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Eichhammer, W.; Mannsbart, W.; Schlomann, B.

    1998-03-01

    The most important change in the German data situation compared to the last national report in October 1995 was the adjustment of the national statistical system to the unified Germany including eastern Germany (in the following referred to as Germany). Therefore, the existing time series for western Germany (in the following referred to as ex-FRG) not only had to be updated, but German data had to be included in the ODYSSEE database, too. It is intended to improve the quality of the data further in the future and to extend the data through reasonable estimates as far as justified (improvement of the data for the ex-FRG up to 1994, completion and verification of the data for Germany in 1990/1991). It is clear that the re-unification (in combination with the change in 1995 of the national industrial energy consumption statistics to a classification which is compatible with the European NACE Rev. 1 classification) poses considerable difficulties for the Energy Efficiency Indicators (EEI) approach, in which longer time series improve the reliability of the results. The German re-unification therefore shows the limitations of the approach, which encounters difficulties in periods of rapid change. It is, however, not an argument in principle against the methodology. The difficulty stems rather from the fact that in periods of radical change, statistical systems simply may break down for some time, and cannot be reconstructed afterwards. Fortunately, the German re-unification is exceptional within the European Union. However, in the case of the inclusion of Eastern European accession countries (PHARE countries) in the EEI approach, the same type of difficulties will occur, though to a lesser degree, because the changes were less radical, and because there was not a complete break in statistics. (orig.)

  7. 經由校園網路存取圖書館光碟資料庫之研究 Studies on Multiuser Access Library CD-ROM Database via Campus Network

    Directory of Open Access Journals (Sweden)

    Ruey-shun Chen

    1992-06-01

    Full Text Available Library CD-ROM, with its enormous storage, retrieval capabilities and reasonable price, has been gradually replacing some of its printed counterparts. But one of the greatest limitations on the use of a stand-alone CD-ROM workstation is that only one user can access the CD-ROM database at a time. This paper proposes a new method to solve this problem: personal computers access the library CD-ROM database via the standard Ethernet network system, the FDDI high-speed fiber network, and the standard TCP/IP protocol, forming a practical campus CD-ROM network system. Its advantages are that it reduces redundant CD-ROM purchase fees, reduces damage from discs being handed in and out, and allows multiple users to access the same CD-ROM disc simultaneously.

  8. How Can the USA National Phenology Network's Data Resource Benefit You? Recent Applications of the Phenology Data and Information Housed in the National Phenology Database

    Science.gov (United States)

    Crimmins, T. M.

    2015-12-01

    The USA National Phenology Network (USA-NPN; www.usanpn.org) serves science and society by promoting a broad understanding of plant and animal phenology and the relationships among phenological patterns and all aspects of environmental change. The National Phenology Database, maintained by the USA-NPN, is experiencing steady growth in the number of data records it houses. Since 2009, over 5,500 participants in Nature's Notebook, the national-scale, multi-taxa phenology observation program coordinated by the USA-NPN, have contributed nearly 6 million observation records of plants and animals. The phenology data curated by the USA-NPN are being used in a rapidly growing number of applications for science, conservation and resource management. Data and data products generated by the USA-NPN have been used in 17 peer-reviewed publications to date. Additionally, phenology data collected via Nature's Notebook are actively informing decisions ranging from efficiently scheduling street-sweeping activities to keep dropped leaves from entering inland lakes, to timing herbicide application or other restoration activities to maximize their efficacy. We demonstrate several types of questions that can be addressed with this observing system and the resultant data, and highlight several ongoing local- to national-scale projects as well as some recently published studies. Additional data-mining and exploration by interested researchers and resource managers will undoubtedly continue to demonstrate the value of these data.

  9. The International Haemovigilance Network Database for the Surveillance of Adverse Reactions and Events in Donors and Recipients of Blood Components: technical issues and results.

    Science.gov (United States)

    Politis, C; Wiersum, J C; Richardson, C; Robillard, P; Jorgensen, J; Renaudier, P; Faber, J-C; Wood, E M

    2016-11-01

    The International Haemovigilance Network's ISTARE is an online database for surveillance of all adverse reactions (ARs) and adverse events (AEs) associated with donation of blood and transfusion of blood components, irrespective of severity or the harm caused. ISTARE aims to unify the collection and sharing of information with a view to harmonizing best practices for haemovigilance systems around the world. Adverse reactions and adverse events are recorded by blood component, type of reaction, severity and imputability to transfusion, using internationally agreed standard definitions. From 2006 to 2012, 125 national sets of annual aggregated data were received from 25 countries, covering 132.8 million blood components issued. The incidence of all ARs was 77.5 per 100 000 components issued, of which 25% were severe (19.1 per 100 000). Of 349 deaths (0.26 per 100 000), 58% were due to the three ARs related to the respiratory system: transfusion-associated circulatory overload (TACO, 27%), transfusion-associated acute lung injury (TRALI, 19%) and transfusion-associated dyspnoea (TAD, 12%). Cumulatively, 594 477 donor complications were reported (rate 660 per 100 000), of which 2.9% were severe. ISTARE is a well-established surveillance tool offering important contributions to international efforts to maximize transfusion safety. © 2016 International Society of Blood Transfusion.
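The surveillance rates quoted above are simple event counts per 100 000 components issued. The short sketch below recomputes them from the abstract's figures; the function name is our own invention, since ISTARE is an online reporting database, not a programming API.

```python
# Illustrative recomputation of haemovigilance rates per 100 000 components
# issued, using figures quoted in the ISTARE abstract. Function and variable
# names are our own, not part of any ISTARE interface.

def rate_per_100k(events: float, denominator: float) -> float:
    """Events per 100 000 units of the denominator."""
    return events / denominator * 100_000

components_issued = 132.8e6   # blood components issued, 2006-2012
all_ar_rate = 77.5            # adverse reactions per 100 000 issued
severe_share = 0.25           # 25% of ARs reported as severe

# Death rate implied by the abstract's 349 deaths:
print(rate_per_100k(349, components_issued))   # ~0.26 per 100 000

# Severe-AR rate implied by 77.5 * 0.25 = 19.4 per 100 000, close to the
# reported 19.1 (the gap reflects rounding of the 25% figure in the source).
print(all_ar_rate * severe_share)
```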

  10. Global profiling and rapid matching of natural products using diagnostic product ion network and in silico analogue database: Gastrodia elata as a case study.

    Science.gov (United States)

    Lai, Chang-Jiang-Sheng; Zha, Liangping; Liu, Da-Hui; Kang, Liping; Ma, Xiaojing; Zhan, Zhi-Lai; Nan, Tie-Gui; Yang, Jian; Li, Fajie; Yuan, Yuan; Huang, Lu-Qi

    2016-07-22

    Rapid discovery of novel compounds of a traditional herbal medicine is of vital significance for the pharmaceutical industry and for plant metabolic pathway analysis. However, discovery of unknown or trace natural products is an ongoing challenge. This study presents a universal targeted data-independent acquisition and mining strategy to globally profile and effectively match novel natural product analogues from an herbal extract. The famous medicinal plant Gastrodia elata was selected as an example. This strategy consists of three steps: (i) acquisition of accurate parent and adduct ions (PAIs) and the product ion data of all eluting compounds by untargeted full-scan MS(E) mode; (ii) rapid compound screening using a diagnostic product ion (DPI) network and an in silico analogue database with the SUMPRODUCT function to find novel candidates; and (iii) identification and isomerism discrimination of multiple types of compounds using ClogP and ion fragment behavior analyses. Using the above data-mining methods, a total of 152 compounds were characterized, and 70 were discovered for the first time, including series of phospholipids and novel gastroxyl derivatives. Furthermore, a number of gastronucleosides and phase II metabolites of gastrodin and parishins were discovered, including glutathionylated, cysteinylglycinated and cysteinated compounds, and phosphatidylserine analogues. This study extended the application of the classical DPI filter strategy and developed a structure-based screening approach with the potential to significantly increase the efficiency of discovery and identification of trace novel natural products. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Identification of promising Twin Hub networks: Report of Work Package 1 of the Intermodal rail freight Twin hub Network Northwest Europe - project (final report)

    NARCIS (Netherlands)

    Kreutzberger, E.D.; Konings, J.W.; Meijer, S.; Witteveen, C.; Meijers, B.M.; Pekin, E.; Macharis, C.; Kiel, J.; Kawabata, Y.; Vos, W.

    2014-01-01

    This report is the first deliverable of the project Intermodal Rail Freight Twin Hub Network Northwest Europe. We refer to its subject as the Twin hub network and to the organisational entity carrying out the actions as the Twin hub project. The project is funded by INTERREG NWE (programme IVb). Its work started in

  12. Database Manager

    Science.gov (United States)

    Martin, Andrew

    2010-01-01

    It is normal practice today for organizations to store large quantities of records of related information as computer-based files or databases. Purposeful information is retrieved by performing queries on the data sets. The purpose of DATABASE MANAGER is to communicate to students the method by which the computer performs these queries. This…

  13. The Global Terrestrial Network for Permafrost Database: metadata statistics and prospective analysis on future permafrost temperature and active layer depth monitoring site distribution

    Science.gov (United States)

    Biskaborn, B. K.; Lanckman, J.-P.; Lantuit, H.; Elger, K.; Streletskiy, D. A.; Cable, W. L.; Romanovsky, V. E.

    2015-03-01

    The Global Terrestrial Network for Permafrost (GTN-P) provides the first dynamic database associated with the Thermal State of Permafrost (TSP) and the Circumpolar Active Layer Monitoring (CALM) programs, which extensively collect permafrost temperature and active layer thickness data from Arctic, Antarctic and Mountain permafrost regions. The purpose of the database is to establish an "early warning system" for the consequences of climate change in permafrost regions and to provide standardized thermal permafrost data to global models. In this paper we perform statistical analysis of the GTN-P metadata aiming to identify the spatial gaps in the GTN-P site distribution in relation to climate-effective environmental parameters. We describe the concept and structure of the Data Management System in regard to user operability, data transfer and data policy. We outline data sources and data processing including quality control strategies. Assessment of the metadata and data quality reveals 63% metadata completeness at active layer sites and 50% metadata completeness for boreholes. Voronoi Tessellation Analysis on the spatial sample distribution of boreholes and active layer measurement sites quantifies the distribution inhomogeneity and provides potential locations of additional permafrost research sites to improve the representativeness of thermal monitoring across areas underlain by permafrost. The depth distribution of the boreholes reveals that 73% are shallower than 25 m and 27% are deeper, reaching a maximum of 1 km depth. Comparison of the GTN-P site distribution with permafrost zones, soil organic carbon contents and vegetation types exhibits different local to regional monitoring situations on maps. Preferential slope orientation at the sites most likely causes a bias in the temperature monitoring and should be taken into account when using the data for global models. The distribution of GTN-P sites within zones of projected temperature change show a high
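The spatial-gap analysis above uses Voronoi tessellation over the monitoring sites. As a much cruder, hypothetical stand-in for that idea, the sketch below ranks sites by distance to their nearest neighbour: sites with a distant nearest neighbour sit in sparsely monitored regions and mark candidate areas for new boreholes. Coordinates are invented, not GTN-P data.

```python
# Simplified nearest-neighbour isolation ranking: a crude stand-in for the
# Voronoi-based gap analysis described in the abstract. All coordinates are
# made up for illustration.
import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def isolation_ranking(sites):
    """Sort sites by distance to their nearest neighbour, most isolated first."""
    scored = []
    for i, s in enumerate(sites):
        nearest = min(haversine_km(s, t) for j, t in enumerate(sites) if j != i)
        scored.append((nearest, s))
    return sorted(scored, reverse=True)

# Three clustered Alaskan sites and one lone Svalbard site (invented):
boreholes = [(69.4, -148.7), (68.6, -149.6), (78.2, 15.8), (64.9, -147.8)]
for dist_km, site in isolation_ranking(boreholes):
    print(f"{site}: nearest neighbour {dist_km:.0f} km away")
```

The most isolated site surfaces first; in a real analysis one would instead tessellate the permafrost domain itself, so that empty regions (not just lonely stations) are found.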

  14. The Global Terrestrial Network for Permafrost Database: metadata statistics and prospective analysis on future permafrost temperature and active layer depth monitoring site distribution

    Directory of Open Access Journals (Sweden)

    B. K. Biskaborn

    2015-03-01

    Full Text Available The Global Terrestrial Network for Permafrost (GTN-P provides the first dynamic database associated with the Thermal State of Permafrost (TSP and the Circumpolar Active Layer Monitoring (CALM programs, which extensively collect permafrost temperature and active layer thickness data from Arctic, Antarctic and Mountain permafrost regions. The purpose of the database is to establish an "early warning system" for the consequences of climate change in permafrost regions and to provide standardized thermal permafrost data to global models. In this paper we perform statistical analysis of the GTN-P metadata aiming to identify the spatial gaps in the GTN-P site distribution in relation to climate-effective environmental parameters. We describe the concept and structure of the Data Management System in regard to user operability, data transfer and data policy. We outline data sources and data processing including quality control strategies. Assessment of the metadata and data quality reveals 63% metadata completeness at active layer sites and 50% metadata completeness for boreholes. Voronoi Tessellation Analysis on the spatial sample distribution of boreholes and active layer measurement sites quantifies the distribution inhomogeneity and provides potential locations of additional permafrost research sites to improve the representativeness of thermal monitoring across areas underlain by permafrost. The depth distribution of the boreholes reveals that 73% are shallower than 25 m and 27% are deeper, reaching a maximum of 1 km depth. Comparison of the GTN-P site distribution with permafrost zones, soil organic carbon contents and vegetation types exhibits different local to regional monitoring situations on maps. Preferential slope orientation at the sites most likely causes a bias in the temperature monitoring and should be taken into account when using the data for global models. 
The distribution of GTN-P sites within zones of projected temperature change

  15. Genome databases

    Energy Technology Data Exchange (ETDEWEB)

    Courteau, J.

    1991-10-11

    Since the Genome Project began several years ago, a plethora of databases have been developed or are in the works. They range from the massive Genome Data Base at Johns Hopkins University, the central repository of all gene mapping information, to small databases focusing on single chromosomes or organisms. Some are publicly available, others are essentially private electronic lab notebooks. Still others limit access to a consortium of researchers working on, say, a single human chromosome. An increasing number incorporate sophisticated search and analytical software, while others operate as little more than data lists. In consultation with numerous experts in the field, a list has been compiled of some key genome-related databases. The list was not limited to map and sequence databases but also included the tools investigators use to interpret and elucidate genetic data, such as protein sequence and protein structure databases. Because a major goal of the Genome Project is to map and sequence the genomes of several experimental animals, including E. coli, yeast, fruit fly, nematode, and mouse, the available databases for those organisms are listed as well. The author also includes several databases that are still under development - including some ambitious efforts that go beyond data compilation to create what are being called electronic research communities, enabling many users, rather than just one or a few curators, to add or edit the data and tag it as raw or confirmed.

  16. Final report of the 'Nordic thermal-hydraulic and safety network (NOTNET)' - Project

    Energy Technology Data Exchange (ETDEWEB)

    Tuunanen, J.; Tuomainen, M. [VTT Processes (Finland)

    2005-04-01

    A Nordic network for thermal-hydraulics and nuclear safety research was started. The idea of the network is to combine the resources of different research teams in order to carry out more ambitious and extensive research programs than would be possible for the individual teams. From the very beginning, the end users of the research results have been integrated into the network. The aim of the network is to benefit the partners involved in nuclear energy in the Nordic countries (power companies, reactor vendors, safety regulators, research units). The first task within the project was to describe the resources (personnel, know-how, simulation tools, test facilities) of the various teams. The next step was to discuss their research needs with the end users. Based on these steps, a few of the most important research topics with defined goals were selected, and coarse road maps were prepared for reaching the targets. These road maps will be used as a starting point for planning the actual research projects in the future. The organisation and work plan for the network were established. National coordinators were appointed, as well as contact persons in each participating organisation, whether research unit or end user. This organisation scheme is valid for the short-term operation of NOTNET, when only Nordic organisations take part in the work. Later on, it is possible to enlarge the network, e.g. within an EC framework programme. The network can now start preparing project proposals and searching for funding for the first common research projects. (au)

  17. Cloud Database Management System (CDBMS

    Directory of Open Access Journals (Sweden)

    Snehal B. Shende

    2015-10-01

    Full Text Available A cloud database management system is a distributed database that delivers computing as a service. It shares web infrastructure for resources, software and information over a network. The cloud is used as a storage location, and the database can be accessed and computed from anywhere. The large number of web applications makes use of distributed storage solutions in order to scale up, enabling users to outsource resources and services to third-party servers. This paper covers the recent trend in cloud services based on database management systems and the offering of the DBMS as one of the services in the cloud. The advantages and disadvantages of database as a service will let you decide whether or not to use it. The paper also highlights the architecture of a cloud database management system.

  18. Probabilistic Databases

    CERN Document Server

    Suciu, Dan; Koch, Christop

    2011-01-01

    Probabilistic databases are databases where the value of some attributes or the presence of some records are uncertain and known only with some probability. Applications in many areas such as information extraction, RFID and scientific data management, data cleaning, data integration, and financial risk assessment produce large volumes of uncertain data, which are best modeled and processed by a probabilistic database. This book presents the state of the art in representation formalisms and query processing techniques for probabilistic data. It starts by discussing the basic principles for rep
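A minimal sketch of the idea: in the simplest (tuple-independent) model, each record carries the probability that it is actually present, and the probability that a query has at least one answer is 1 minus the product of the complements over the matching tuples. The table contents below are invented, e.g. noisy information-extraction output.

```python
# Tuple-independent probabilistic table: each tuple is present independently
# with its own probability. P(at least one match) = 1 - prod(1 - p_i).
from math import prod

# (company, city, probability the tuple is correct) -- invented data
extracted = [
    ("ACME Corp", "Berlin", 0.9),
    ("ACME Corp", "Munich", 0.3),
    ("Widget Ltd", "Berlin", 0.6),
]

def p_exists(table, predicate):
    """Probability that at least one tuple satisfying the predicate is present."""
    probs = [row[2] for row in table if predicate(row)]
    return 1 - prod(1 - p for p in probs)

# P(some company is located in Berlin) = 1 - (1 - 0.9)(1 - 0.6) = 0.96
print(p_exists(extracted, lambda r: r[1] == "Berlin"))
```

For general queries this computation can be #P-hard, which is exactly why the query-processing techniques surveyed in the book are needed; the sketch covers only the easy single-table case.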

  19. Quantifying the spatio-temporal pattern of the ground impact of space weather events using dynamical networks formed from the SuperMAG database of ground based magnetometer stations.

    Science.gov (United States)

    Dods, Joe; Chapman, Sandra; Gjerloev, Jesper

    2016-04-01

    Quantitative understanding of the full spatial-temporal pattern of space weather is important in order to estimate the ground impact. Geomagnetic indices such as AE track the peak of a geomagnetic storm or substorm, but cannot capture the full spatial-temporal pattern. Observations by the ~100 ground-based magnetometers in the northern hemisphere have the potential to capture the detailed evolution of a given space weather event. We present the first analysis of the full available set of ground-based magnetometer observations of substorms using dynamical networks. SuperMAG offers a database containing ground-station magnetometer data at a cadence of 1 min from hundreds of stations situated across the globe. We use this data to form dynamic networks which capture spatial dynamics on timescales from the fast reconfiguration seen in the aurora, to that of the substorm cycle. Windowed linear cross-correlation between pairs of magnetometer time series, along with a threshold, is used to determine which stations are correlated and hence connected in the network. Variations in ground conductivity and differences in the response functions of magnetometers at individual stations are overcome by normalizing to long-term averages of the cross-correlation. These results are tested against surrogate data in which phases have been randomised. The network is then a collection of connected points (ground stations); the structure of the network and its variation as a function of time quantify the detailed dynamical processes of the substorm. The network properties can be captured quantitatively in time-dependent dimensionless network parameters and we will discuss their behaviour for examples of 'typical' substorms and storms. The network parameters provide a detailed benchmark to compare data with models of substorm dynamics, and can provide new insights on the similarities and differences between substorms and how they correlate with external driving and the internal state of the
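The network-construction step described above can be sketched as follows: within a time window, compute the linear (Pearson) cross-correlation between every pair of station time series and connect the pairs that exceed a threshold. Station names and data below are synthetic, and the real analysis additionally normalizes against long-term correlation averages and tests against phase-randomised surrogates, which this sketch omits.

```python
# Build a correlation network from windowed magnetometer time series:
# stations become nodes, and an edge connects any pair whose Pearson
# cross-correlation within the window exceeds a threshold. Synthetic data.
from itertools import combinations

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def correlation_network(series_by_station, threshold=0.7):
    """Station pairs whose windowed correlation magnitude passes the threshold."""
    edges = set()
    for (s1, x), (s2, y) in combinations(series_by_station.items(), 2):
        if abs(pearson(x, y)) >= threshold:
            edges.add((s1, s2))
    return edges

window = {
    "ABK": [1.0, 2.0, 3.0, 4.0, 5.0],
    "KIR": [1.1, 2.2, 2.9, 4.1, 5.2],   # tracks ABK closely -> connected
    "HON": [3.0, 1.0, 4.0, 1.0, 5.0],   # unrelated fluctuation -> isolated
}
print(correlation_network(window))      # {('ABK', 'KIR')}
```

Sliding the window through the event and recomputing the edge set yields the time-dependent network whose dimensionless parameters the abstract discusses.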

  20. Use of antidepressant serotoninergic medications and cardiac valvulopathy: a nested case–control study in the health improvement network (THIN) database

    Science.gov (United States)

    Lapi, Francesco; Nicotra, Federica; Scotti, Lorenza; Vannacci, Alfredo; Thompson, Mary; Pieri, Francesco; Mugelli, Niccolò; Zambon, Antonella; Corrao, Giovanni; Mugelli, Alessandro; Rubino, Annalisa

    2012-01-01

    AIMS To quantify the risk of cardiac valvulopathy (CV) associated with the use of antidepressant serotoninergic medications (SMs). METHODS We conducted a case–control study nested in a cohort of users of antidepressant SMs selected from The Health Improvement Network database. Patients who experienced a CV event during follow-up were cases; case status was ascertained in a random sample of them. Up to 10 controls were matched to each case by sex, age, and month and year of study entry. Use of antidepressant SMs during follow-up was defined as current (the last prescription for antidepressant SMs occurred in the 2 months before the CV event), recent (in the 2–12 months before the CV event) and past (>12 months before the CV event). We fitted a conditional regression model to estimate the association between use of antidepressant SMs and the risk of CV by means of odds ratios (ORs) and corresponding 95% confidence intervals (CIs). Sensitivity analyses were conducted to test the robustness of our results. RESULTS The study cohort included 752 945 subjects aged 18–89 years. Throughout follow-up, 1663 cases (incidence rate: 3.4 per 10 000 person-years) of CV were detected and matched to 16 566 controls. The adjusted ORs (95% CIs) for current and recent users compared with past users of antidepressant SMs were 1.16 (0.96–1.40) and 1.06 (0.93–1.22), respectively. Consistent effect estimates were obtained when considering cumulative exposure to antidepressant SMs during follow-up. CONCLUSIONS These results suggest that exposure to antidepressant SMs is not associated with an increased risk of CV. PMID:22356433
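The basic calculation behind an odds ratio and its 95% confidence interval can be sketched from a 2x2 table (Woolf method). The counts below are invented for illustration; the study itself fits a conditional logistic regression, which additionally accounts for the matching and covariates.

```python
# Odds ratio and Woolf 95% CI from a 2x2 exposure table. Counts are
# hypothetical, chosen only to resemble the scale of the study above.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b = exposed/unexposed cases; c,d = exposed/unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical: 120 of 1663 cases currently exposed vs 1030 of 16566 controls.
or_, lo, hi = odds_ratio_ci(a=120, b=1543, c=1030, d=15536)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With these made-up counts the interval straddles 1, the same qualitative picture as the abstract's adjusted estimate of 1.16 (0.96-1.40): no detectable increase in risk.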

  1. Dealer Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dealer reporting databases contain the primary data reported by federally permitted seafood dealers in the northeast. Electronic reporting was implemented May 1,...

  2. RDD Databases

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database was established to oversee documents issued in support of fishery research activities including experimental fishing permits (EFP), letters of...

  3. National database

    DEFF Research Database (Denmark)

    Kristensen, Helen Grundtvig; Stjernø, Henrik

    1995-01-01

    Article about the national database for nursing research established at the Danish Institute for Health and Nursing Research (Dansk Institut for Sundheds- og Sygeplejeforskning). The aim of the database is to gather knowledge about research and development activities within nursing.

  4. Towards Sensor Database Systems

    DEFF Research Database (Denmark)

    Bonnet, Philippe; Gehrke, Johannes; Seshadri, Praveen

    2001-01-01

    Sensor networks are being widely deployed for measurement, detection and surveillance applications. In these new applications, users issue long-running queries over a combination of stored data and sensor data. Most existing applications rely on a centralized system for collecting sensor data. These systems lack flexibility because data is extracted in a predefined way; also, they do not scale to a large number of devices because large volumes of raw data are transferred regardless of the queries that are submitted. In our new concept of sensor database system, queries dictate which data is extracted from the sensors. In this paper, we define the concept of sensor databases mixing stored data represented as relations and sensor data represented as time series. Each long-running query formulated over a sensor database defines a persistent view, which is maintained during a given time interval. We
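The persistent-view idea above can be sketched as a toy: a long-running query joins a stored relation (sensor metadata) with readings as they arrive, maintaining its answer incrementally instead of bulk-collecting raw data first. All names and data here are invented; real sensor database systems push such logic toward the devices themselves.

```python
# Toy persistent view over a sensor database: the stored relation holds
# per-sensor metadata, and each incoming reading incrementally updates the
# view "sensors currently above their threshold". Invented example.

stored_relation = {                 # sensor id -> (location, threshold)
    "s1": ("bridge-north", 20.0),
    "s2": ("bridge-south", 25.0),
}

class PersistentView:
    """Incrementally maintained answer to a long-running threshold query."""
    def __init__(self, metadata):
        self.metadata = metadata
        self.active = {}            # sensor id -> (location, latest value)

    def on_reading(self, sensor_id, value):
        location, threshold = self.metadata[sensor_id]
        if value > threshold:
            self.active[sensor_id] = (location, value)
        else:
            self.active.pop(sensor_id, None)

view = PersistentView(stored_relation)
for sid, value in [("s1", 22.5), ("s2", 24.0), ("s1", 18.0)]:
    view.on_reading(sid, value)     # the query dictates what is kept
print(view.active)                  # {} -- s1 dropped back below 20
```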

  5. Database challenges and solutions in neuroscientific applications.

    Science.gov (United States)

    Dashti, A E; Ghandeharizadeh, S; Stone, J; Swanson, L W; Thompson, R H

    1997-02-01

    In the scientific community, the quality and progress of various endeavors depend in part on the ability of researchers to share and exchange large quantities of heterogeneous data with one another efficiently. This requires controlled sharing and exchange of information among autonomous, distributed, and heterogeneous databases. In this paper, we focus on a neuroscience application, Neuroanatomical Rat Brain Viewer (NeuART Viewer) to demonstrate alternative database concepts that allow neuroscientists to manage and exchange data. Requirements for the NeuART application, in combination with an underlying network-aware database, are described at a conceptual level. Emphasis is placed on functionality from the user's perspective and on requirements that the database must fulfill. The most important functionality required by neuroscientists is the ability to construct brain models using information from different repositories. To accomplish such a task, users need to browse remote and local sources and summaries of data and capture relevant information to be used in building and extending the brain models. Other functionalities are also required, including posing queries related to brain models, augmenting and customizing brain models, and sharing brain models in a collaborative environment. An extensible object-oriented data model is presented to capture the many data types expected in this application. After presenting conceptual level design issues, we describe several known database solutions that support these requirements and discuss requirements that demand further research. Data integration for heterogeneous databases is discussed in terms of reducing or eliminating semantic heterogeneity when translations are made from one system to another. Performance enhancement mechanisms such as materialized views and spatial indexing for three-dimensional objects are explained and evaluated in the context of browsing, incorporating, and sharing. 
Policies for providing

  6. Final Report: The self Reliance Foundation and Hispanic Radio Network Collaborative, September 30, 1995 - January 31, 1998

    Energy Technology Data Exchange (ETDEWEB)

    Multedo, Molly

    1998-09-30

    The Self Reliance Foundation, through its production subcontractor, Hispanic Radio Network, produced daily 1-3 minute radio capsules on science, education, and the environment. The programs were broadcast on over 100 U.S. Spanish-language radio stations from 1995-1998, reaching 2 million weekly listeners.

  7. Biological Databases

    Directory of Open Access Journals (Sweden)

    Kaviena Baskaran

    2013-12-01

    Full Text Available Biology has entered a new era of distributing information through databases, and such collections have become a primary channel for publishing information. This data publishing is done over services such as Internet Gopher, where information resources are offered easily and affordably alongside powerful research tools. What matters most now is the development of high-quality, professionally operated electronic data publishing sites. To enhance the service, appropriate editorial policies for electronic data publishing have been established, and the editors of articles shoulder the responsibility.

  8. 76 FR 45689 - Financial Crimes Enforcement Network; Repeal of the Final Rule and Withdrawal of the Finding of...

    Science.gov (United States)

    2011-08-01

    ...; Repeal of the Final Rule and Withdrawal of the Finding of Primary Money Laundering Concern Against VEF... anti-money laundering provisions of the BSA, codified at 12 U.S.C. 1829b, 12 U.S.C. 1951-1959, and 31 U..., or type of account is of ``primary money laundering concern,'' to require domestic...

  9. Maintaining the Database for Information Object Analysis, Intent, Dissemination and Enhancement (IOAIDE) and the US Army Research Laboratory Campus Sensor Network (ARL CSN)

    Science.gov (United States)

    2017-01-04

    ARL-TR-7921 ● JAN 2017. US Army Research Laboratory. Technical Report: Maintaining the Database for Information Object Analysis, Intent, Dissemination and Enhancement (IOAIDE) and the US Army Research Laboratory Campus Sensor Network (ARL CSN). Report date: January 2017.

  10. Inspection Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — FDA is disclosing the final inspection classification for inspections related to currently marketed FDA-regulated products. The disclosure of this information is...

  11. Inspection Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — FDA is disclosing the final inspection classification for inspections related to currently marketed FDA-regulated products. The disclosure of this information is not...

  12. Case retrieval in medical databases by fusing heterogeneous information.

    Science.gov (United States)

    Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Roux, Christian; Cochener, Béatrice

    2011-01-01

    A novel content-based heterogeneous information retrieval framework, particularly well suited to browse medical databases and support new generation computer aided diagnosis (CADx) systems, is presented in this paper. It was designed to retrieve possibly incomplete documents, consisting of several images and semantic information, from a database; more complex data types such as videos can also be included in the framework. The proposed retrieval method relies on image processing, in order to characterize each individual image in a document by their digital content, and information fusion. Once the available images in a query document are characterized, a degree of match, between the query document and each reference document stored in the database, is defined for each attribute (an image feature or a metadata). A Bayesian network is used to recover missing information if need be. Finally, two novel information fusion methods are proposed to combine these degrees of match, in order to rank the reference documents by decreasing relevance for the query. In the first method, the degrees of match are fused by the Bayesian network itself. In the second method, they are fused by the Dezert-Smarandache theory: the second approach lets us model our confidence in each source of information (i.e., each attribute) and take it into account in the fusion process for a better retrieval performance. The proposed methods were applied to two heterogeneous medical databases, a diabetic retinopathy database and a mammography screening database, for computer aided diagnosis. Precisions at five of 0.809 ± 0.158 and 0.821 ± 0.177, respectively, were obtained for these two databases, which is very promising.
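The fusion step described above can be sketched simply: each attribute yields a degree of match between the query and a reference document, and the degrees are combined into one relevance score used for ranking. The sketch below uses a plain confidence-weighted average, not the paper's Bayesian-network or Dezert-Smarandache fusion; the attributes, confidences and scores are all invented.

```python
# Confidence-weighted fusion of per-attribute degrees of match, as a simple
# stand-in for the fusion methods in the paper. All values are invented.

def fuse(match_degrees, confidences):
    """Confidence-weighted mean of per-attribute degrees of match in [0, 1]."""
    total = sum(confidences.values())
    return sum(match_degrees[k] * confidences[k] for k in match_degrees) / total

# Our confidence in each information source (attribute):
confidence = {"image_texture": 0.6, "patient_age": 0.2, "lesion_count": 0.9}

# Degrees of match between the query document and each reference document:
references = {
    "doc_A": {"image_texture": 0.8, "patient_age": 0.5, "lesion_count": 0.9},
    "doc_B": {"image_texture": 0.4, "patient_age": 0.9, "lesion_count": 0.3},
}

ranked = sorted(references, key=lambda d: fuse(references[d], confidence),
                reverse=True)
print(ranked)   # ['doc_A', 'doc_B'] -- doc_A ranks first
```

Weighting by source confidence is the same intuition the second fusion method formalizes: attributes we trust more (here, lesion counts) should dominate the ranking.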

  13. First diagnosis and management of incontinence in older people with and without dementia in primary care: a cohort study using The Health Improvement Network primary care database.

    Directory of Open Access Journals (Sweden)

    Robert L Grant

    2013-08-01

    Full Text Available BACKGROUND: Dementia is one of the most disabling and burdensome diseases. Incontinence in people with dementia is distressing, adds to carer burden, and influences decisions to relocate people to care homes. Successful and safe management of incontinence in people with dementia presents additional challenges. The aim of this study was to investigate the rates of first diagnosis in primary care of urinary and faecal incontinence among people aged 60-89 with dementia, and the use of medication or indwelling catheters for urinary incontinence. METHODS AND FINDINGS: We extracted data on 54,816 people aged 60-89 with dementia and an age-gender stratified sample of 205,795 people without dementia from 2001 to 2010 from The Health Improvement Network (THIN), a United Kingdom primary care database. THIN includes data on patients and primary care consultations but does not identify care home residents. Rate ratios were adjusted for age, sex, and co-morbidity using multilevel Poisson regression. The rates of first diagnosis per 1,000 person-years at risk (95% confidence interval) for urinary incontinence in the dementia cohort, among men and women, respectively, were 42.3 (40.9-43.8) and 33.5 (32.6-34.5). In the non-dementia cohort, the rates were 19.8 (19.4-20.3) and 18.6 (18.2-18.9). The rates of first diagnosis for faecal incontinence in the dementia cohort were 11.1 (10.4-11.9) and 10.1 (9.6-10.6). In the non-dementia cohort, the rates were 3.1 (2.9-3.3) and 3.6 (3.5-3.8). The adjusted rate ratio for first diagnosis of urinary incontinence was 3.2 (2.7-3.7) in men and 2.7 (2.3-3.2) in women, and for faecal incontinence was 6.0 (5.1-7.0) in men and 4.5 (3.8-5.2) in women. The adjusted rate ratio for pharmacological treatment of urinary incontinence was 2.2 (1.4-3.7) for both genders, and for indwelling urinary catheters was 1.6 (1.3-1.9) in men and 2.3 (1.9-2.8) in women. CONCLUSIONS: Compared with those without a dementia diagnosis, those with a dementia diagnosis

  14. The magnet components database system

    Energy Technology Data Exchange (ETDEWEB)

    Baggett, M.J. (Brookhaven National Lab., Upton, NY (USA)); Leedy, R.; Saltmarsh, C.; Tompkins, J.C. (Superconducting Supercollider Lab., Dallas, TX (USA))

    1990-01-01

    The philosophy, structure, and usage of MagCom, the SSC magnet components database, are described. The database has been implemented in Sybase (a powerful relational database management system) on a UNIX-based workstation at the Superconducting Super Collider Laboratory (SSCL); magnet project collaborators can access the database via network connections. The database was designed to contain the specifications and measured values of important properties for major materials, plus configuration information (specifying which individual items were used in each cable, coil, and magnet) and the test results on completed magnets. These data will facilitate the tracking and control of the production process as well as the correlation of magnet performance with the properties of its constituents. 3 refs., 10 figs.

  15. Physical database design using Oracle

    CERN Document Server

    Burleson, Donald K

    2004-01-01

    INTRODUCTION TO ORACLE PHYSICAL DESIGN: Preface; Relational Databases and Physical Design; Systems Analysis and Physical Database Design; Introduction to Logical Database Design; Entity/Relation Modeling; Bridging between Logical and Physical Models; Physical Design Requirements Validation. PHYSICAL ENTITY DESIGN FOR ORACLE: Data Relationships and Physical Design; Massive De-Normalization: STAR Schema Design; Designing Class Hierarchies; Materialized Views and De-Normalization; Referential Integrity; Conclusion. ORACLE HARDWARE DESIGN: Planning the Server Environment; Designing the Network Infrastructure for Oracle; Oracle Netw

  16. Fundamental Research of Distributed Database

    Directory of Open Access Journals (Sweden)

    Swati Gupta

    2011-08-01

    Full Text Available The purpose of this paper is to present an introduction to distributed databases, which are becoming very popular nowadays. Today's business environment has an increasing need for distributed database and client/server applications, as the desire for reliable, scalable and accessible information is steadily rising. Distributed database systems provide an improvement in communication and data processing due to data distribution throughout different network sites. Not only is data access faster, but a single point of failure is less likely to occur, and it provides local control of data for users.

  17. The biological coherence of human phenome databases.

    NARCIS (Netherlands)

    Oti, M.O.; Huynen, M.A.; Brunner, H.G.

    2009-01-01

    Disease networks are increasingly explored as a complement to networks centered around interactions between genes and proteins. The quality of disease networks is heavily dependent on the amount and quality of phenotype information in phenotype databases of human genetic diseases. We explored which

  18. Routing Protocols for Transmitting Large Databases or Multi-databases Systems

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Most knowledgeable people agree that networking and routing technologies have been around for about 25 years. Routing is simultaneously the most complicated function of a network and the most important. Moreover, more than 70% of computer application fields are MIS applications, so the challenge in building and using an MIS in the network is developing the means to find, access, and communicate large databases or multi-database systems. Because general databases are not time-continuous and cannot be streamed, we cannot obtain reliable and secure quality of service by deleting unimportant datagrams during database transmission. In this article, we will discuss which kind of routing protocol is best suited to transmitting large databases or multi-database systems over networks.

  19. Web interfaces to relational databases

    Science.gov (United States)

    Carlisle, W. H.

    1996-01-01

    This report describes a project to extend the capabilities of a Virtual Research Center (VRC) for NASA's Advanced Concepts Office. The work was performed as part of NASA's 1995 Summer Faculty Fellowship program and involved the development of a prototype component of the VRC: a database system that provides data creation and access services within a room of the VRC. In support of VRC development, NASA has assembled a laboratory containing the variety of equipment expected to be used by scientists within the VRC. This laboratory contains the major hardware platforms (SUN, Intel, and Motorola processors) and their most common operating systems (UNIX, Windows NT, Windows for Workgroups, and MacOS). The SPARC 20 runs SUN Solaris 2.4, an Intel Pentium runs Windows NT and is installed on a different network from the other machines in the laboratory, a Pentium PC runs Windows for Workgroups, two Intel 386 machines run Windows 3.1, and finally, a PowerMacintosh and a Macintosh IIsi run MacOS.

  20. High-resolution subsurface imaging and neural network recognition: Non-intrusive buried substance location. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Sternberg, B.K.; Poulton, M.M.

    1997-01-26

    A high-frequency, high-resolution electromagnetic (EM) imaging system has been developed for environmental geophysics surveys. Some key features of this system include: (1) rapid surveying to allow dense spatial sampling over a large area, (2) high-accuracy measurements which are used to produce a high-resolution image of the subsurface, (3) measurements which have excellent signal-to-noise ratio over a wide bandwidth (31 kHz to 32 MHz), (4) elimination of electric-field interference at high frequencies, (5) large-scale physical modeling to produce accurate theoretical responses over targets of interest in environmental geophysics surveys, (6) rapid neural network interpretation at the field site, and (7) visualization of complex structures during the survey. Four major experiments were conducted with the system: (1) Data were collected for several targets in our physical modeling facility. (2) The authors tested the system over targets buried in soil. (3) The authors conducted an extensive survey at the Idaho National Engineering Laboratory (INEL) Cold Test Pit (CTP). The location of the buried waste, category of waste, and thickness of the clay cap were successfully mapped. (4) The authors ran surveys over the acid pit at INEL. This was an operational survey over a hot site. The interpreted low-resistivity region correlated closely with the known extent of the acid pit.

  1. High-resolution subsurface imaging and neural network recognition: Non-intrusive buried substance location. Final report, January 26, 1997

    Energy Technology Data Exchange (ETDEWEB)

    Sternberg, B.K.; Poulton, M.M.

    1998-12-31

    A high-frequency, high-resolution electromagnetic (EM) imaging system has been developed for environmental geophysics surveys. Some key features of this system include: (1) rapid surveying to allow dense spatial sampling over a large area, (2) high-accuracy measurements which are used to produce a high-resolution image of the subsurface, (3) measurements which have excellent signal-to-noise ratio over a wide bandwidth (31 kHz to 32 MHz), (4) elimination of electric-field interference at high frequencies, (5) large-scale physical modeling to produce accurate theoretical responses over targets of interest in environmental geophysics surveys, (6) rapid neural network interpretation at the field site, and (7) visualization of complex structures during the survey. Four major experiments were conducted with the system: (1) Data were collected for several targets in our physical modeling facility. (2) We tested the system over targets buried in soil. (3) We conducted an extensive survey at the Idaho National Engineering Laboratory (INEL) Cold Test Pit (CTP). The location of the buried waste, category of waste, and thickness of the clay cap were successfully mapped. (4) We ran surveys over the acid pit at INEL. This was an operational survey over a hot site. The interpreted low-resistivity region correlated closely with the known extent of the acid pit.

  2. Railroad Lines - RAILROAD_100K_NTAD_IN: Railroad Network in Indiana (National Transportation Atlas Database, 1:100,000, Line Shapefile)

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — RAILROAD_NTAD_100K_SW is a 1:100,000 scale line shapefile which is a subset of the rail network. Bureau of Transportation Statistics metadata states - "The Rail...

  3. DMPD: Translational mini-review series on Toll-like receptors: networks regulated byToll-like receptors mediate innate and adaptive immunity. [Dynamic Macrophage Pathway CSML Database

    Lifescience Database Archive (English)

    Full Text Available Translational mini-review series on Toll-like receptors: networks regulated by Toll-like receptors mediate innate and adaptive immunity. Authors: Parker LC, Prince LR, Sabroe I. Clin Exp Immun... PubMed ID 17223959.

  4. The NASA Science Internet: An integrated approach to networking

    Science.gov (United States)

    Rounds, Fred

    1991-01-01

    An integrated approach to building a networking infrastructure is an absolute necessity for meeting the multidisciplinary science networking requirements of the Office of Space Science and Applications (OSSA) science community. These networking requirements include communication connectivity between computational resources, databases, and library systems, as well as to other scientists and researchers around the world. A consolidated networking approach allows strategic use of the existing science networking within the Federal government, and it provides networking capability that takes into consideration national and international trends towards multivendor and multiprotocol service. It also offers a practical vehicle for optimizing costs and maximizing performance. Finally, and perhaps most important to the development of high-speed computing, an integrated network constitutes a focus for phasing to the National Research and Education Network (NREN). The NASA Science Internet (NSI) program, established in mid-1988, is structured to provide just such an integrated network. A description of the NSI is presented.

  5. SmallSat Database

    Science.gov (United States)

    Petropulos, Dolores; Bittner, David; Murawski, Robert; Golden, Bert

    2015-01-01

    The SmallSat has unrealized potential in both private industry and the federal government. Currently over 70 companies, 50 universities and 17 governmental agencies are involved in SmallSat research and development. In 1994, the U.S. Army Missile and Defense mapped the moon using smallSat imagery. Since then smart phones have introduced this imagery to the people of the world as diverse industries watched this trend. The deployment cost of smallSats is also greatly reduced compared to traditional satellites because multiple units can be deployed in a single mission. Imaging payloads have become more sophisticated, smaller and lighter. In addition, the growth of small technology obtained from private industries has led to more widespread use of smallSats. This includes greater revisit rates in imagery, significantly lower costs, the ability to update technology more frequently and the ability to decrease vulnerability to enemy attacks. The popularity of smallSats shows a changing mentality in this fast-paced world of tomorrow. What impact has this created on the NASA communication networks now and in future years? In this project, we are developing the SmallSat Relational Database, which can support a simulation of smallSats within the NASA SCaN Compatibility Environment for Networks and Integrated Communications (SCENIC) Modeling and Simulation Lab. The NASA Space Communications and Navigation (SCaN) Program can use this modeling to project required network support needs in the next 10 to 15 years. The SmallSat Relational Database could model smallSats just as the other SCaN databases model the more traditional larger satellites, with a few exceptions, one being that the SmallSat database is designed to be built-to-order. The SmallSat database holds various hardware configurations that can be used to model a smallSat. It will require significant effort to develop, as the research material can only be populated by hand to obtain the unique data.
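    A "built-to-order" relational model of the kind described can be sketched with a couple of tables. The schema below is purely illustrative: the table and column names are invented for this sketch and are not taken from the SCaN/SCENIC databases.

```python
import sqlite3

# Invented minimal schema: one row per smallSat, one row per hardware item,
# so a satellite model is assembled "to order" from its hardware rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE smallsat (
        sat_id  INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        mass_kg REAL
    );
    CREATE TABLE hardware (
        hw_id   INTEGER PRIMARY KEY,
        sat_id  INTEGER REFERENCES smallsat(sat_id),
        kind    TEXT,    -- e.g. 'transceiver', 'antenna'
        band    TEXT     -- e.g. 'S', 'X', 'Ka'
    );
""")
conn.execute("INSERT INTO smallsat VALUES (1, 'DemoSat-1', 4.0)")
conn.execute("INSERT INTO hardware VALUES (1, 1, 'transceiver', 'S')")

# A simulation front end would join satellites to their configured hardware
rows = conn.execute(
    "SELECT s.name, h.band FROM smallsat s JOIN hardware h USING (sat_id)"
).fetchall()
```

    Each simulated smallSat is then just the set of hardware rows referencing its `sat_id`, which is what makes the model composable by hand.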

  6. District heating and cooling systems for communities through power plant retrofit distribution network. Final report, September 1, 1978-May 31, 1979

    Energy Technology Data Exchange (ETDEWEB)

    None

    1979-10-01

    This Final Report (Volume 2) of Phase 1 of District Heating for Communities Through Power Plant Retrofit Distribution Network contains 3 tasks: (1) Demonstration Team; (2) Identify Thermal Energy Sources and Potential Service Areas; and (3) Energy Market Analysis. Task 2 consists of estimating the thermal load within 5 and 10 miles of Public Service Electric and Gas Company steam power plants, Newark, New Jersey; estimating the costs of supplying thermal services to thermal loads of varying densities; a best case economic analysis of district heating for single-family homes; and some general comments on district-heating system design and development. Task 3 established the potential market for district heating that exists within a 5-mile radius of the selected generating stations; a sample of the questionnaire sent to customers is shown. (MCW)

  7. The DIPPR® databases

    Science.gov (United States)

    Thomson, G. H.

    1996-01-01

    The Design Institute for Physical Property Data® (DIPPR), one of the Sponsored Research groups of the American Institute of Chemical Engineers (AIChE), has been in existence for 15 years and has supported a total of 14 projects, some completed, some ongoing. Four of these projects are “database” projects for which the primary product is a database of carefully evaluated property data. These projects are Data Compilation; Evaluated Data on Mixtures; Environmental, Safety, and Health Data Compilation; and Diffusivities and Thermal Properties of Polymer Solutions. This paper lists the existing DIPPR projects; discusses DIPPR's structure and modes of dissemination of results; describes DIPPR's supporters and its unique characteristics; and finally, discusses the origin, nature, and content of the four database projects.

  8. Optimized method of building an underwater terrain navigation database based on a triangular irregular network

    Institute of Scientific and Technical Information of China (English)

    王立辉; 高贤志; 梁冰冰; 余乐; 祝雪芬

    2015-01-01

    Building a terrain navigation database from a regular grid model suffers from low accuracy and low efficiency. To optimize the construction method, an approach based on a triangular irregular network (TIN) is proposed. The source data points are partitioned by latitude and longitude coordinates using a divide-and-conquer algorithm, and a convex hull is computed for each block of data points. Non-convex-hull points are then added point by point, according to an improved convex hull algorithm, to form sub-block triangulations. Adjacent convex-hull blocks are combined with an improved triangulation merging algorithm, and the complete terrain navigation database is obtained by merging and optimizing the sub-triangulations. Simulation results show that the TIN-based construction method offers high efficiency, high accuracy, and adjustable resolution.
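    The block-wise construction rests on the identity hull(A ∪ B) = hull(hull(A) ∪ hull(B)): a hull computed per block can be merged with a neighbour by re-running the hull algorithm on the combined vertex sets. A minimal sketch using Andrew's monotone chain (a stand-in for the paper's improved convex hull and triangulation-merging algorithms, which are not reproduced here):

```python
def cross(o, a, b):
    # z-component of (a-o) x (b-o); positive means a counter-clockwise turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    # Andrew's monotone chain; returns hull vertices in CCW order
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def merge_hulls(hull_a, hull_b):
    # hull(A ∪ B) == hull(hull(A) ∪ hull(B)), so merging two blocks is
    # just another hull computation over the two (small) vertex sets
    return convex_hull(hull_a + hull_b)
```

    Working on hull vertices instead of raw block points is what makes the divide-and-conquer step cheap: each merge sees only the boundary of each block.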

  9. The AMMA database

    Science.gov (United States)

    Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim

    2010-05-01

    The AMMA project includes aircraft, ground-based and ocean measurements, an intensive use of satellite data and diverse modelling studies. Therefore, the AMMA database aims at storing a great amount and a large variety of data, and at providing the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: - AMMA field campaign datasets; - historical data in West Africa from 1850 (operational networks and previous scientific programs); - satellite products from past and future satellites, (re-)mapped on a regular latitude/longitude grid and stored in NetCDF format (CF Convention); - model outputs from atmosphere or ocean operational (re-)analysis and forecasts, and from research simulations. The outputs are processed in the same way as the satellite products. Before accessing the data, any user has to sign the AMMA data and publication policy. This chart only covers the use of data in the framework of scientific objectives and categorically excludes the redistribution of data to third parties and usage for commercial applications. Some collaboration between data producers and users, and mention of the AMMA project in any publication, is also required. The AMMA database and the associated on-line tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris and OMP, Toulouse). Users can access data of both data centres using a unique web portal. This website is composed of different modules: - Registration: forms to register, read and sign the data use chart when a user visits for the first time; - Data access interface: a friendly tool allowing users to build a data extraction request by selecting various criteria like location, time, parameters... The request can

  10. Secure Distributed Databases Using Cryptography

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2006-01-01

    Full Text Available Computational encryption is used intensively by different database management systems to ensure the privacy and integrity of information that is physically stored in files. The information is also sent over networks and replicated on different distributed systems. It is shown that a satisfying level of security is achieved if the rows and columns of tables are encrypted independently of the table or computer that holds the data. It is also very important that SQL (Structured Query Language) query requests and responses be encrypted over the network connection between the client and the database server. All these techniques and methods must be implemented by database administrators, designers and developers in a consistent security policy.
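    The idea of encrypting cells independently of any one table or server can be illustrated by deriving a separate key per cell from a master secret. The sketch below is illustrative only: it uses a toy SHA-256 keystream rather than production cryptography (a real system would use AES-GCM for storage and TLS for the client/server SQL traffic), and all names are invented.

```python
import hashlib
import hmac

MASTER_KEY = b"demo-master-key"   # assumption: provisioned out of band

def _keystream(key: bytes, n: int) -> bytes:
    # Toy keystream from counter-mode SHA-256 -- NOT production crypto
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def cell_key(table: str, row_id: str, column: str) -> bytes:
    # A separate key per cell: rows and columns stay independently
    # encrypted, tied to no single table or host
    label = f"{table}/{row_id}/{column}".encode()
    return hmac.new(MASTER_KEY, label, hashlib.sha256).digest()

def encrypt_cell(table, row_id, column, plaintext: bytes) -> bytes:
    ks = _keystream(cell_key(table, row_id, column), len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

decrypt_cell = encrypt_cell   # XOR with the same keystream inverts itself
```

    Because each cell key is derived rather than stored, compromising one replica's file reveals nothing about cells on other rows, columns, or hosts without the master secret.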

  11. Database design using entity-relationship diagrams

    CERN Document Server

    Bagui, Sikha

    2011-01-01

    Data, Databases, and the Software Engineering Process: Data; Building a Database; What is the Software Engineering Process?; Entity Relationship Diagrams and the Software Engineering Life Cycle; Phase 1: Get the Requirements for the Database; Phase 2: Specify the Database; Phase 3: Design the Database. Data and Data Models: Files, Records, and Data Items; Moving from 3 × 5 Cards to Computers; Database Models; The Hierarchical Model; The Network Model; The Relational Model; The Relational Model and Functional Dependencies; Fundamental Relational Database; Relational Database and Sets; Functional

  12. DMPD: The involvement of the interleukin-1 receptor-associated kinases (IRAKs) incellular signaling networks controlling inflammation. [Dynamic Macrophage Pathway CSML Database

    Lifescience Database Archive (English)

    Full Text Available The involvement of the interleukin-1 receptor-associated kinases (IRAKs) in cellular signaling networks controlling inflammation. PubMed ID 18249132. 2008 Jan 30.

  13. NATIONAL TRANSPORTATION ATLAS DATABASE: RAILROADS 2011

    Data.gov (United States)

    Kansas Data Access and Support Center — The Rail Network is a comprehensive database of the nation's railway system at the 1:100,000 scale or better. The data set covers all 50 States plus the District of...

  14. Developmental and Reproductive Toxicology Database (DART)

    Data.gov (United States)

    U.S. Department of Health & Human Services — A bibliographic database on the National Library of Medicine's (NLM) Toxicology Data Network (TOXNET) with references to developmental and reproductive toxicology...

  15. Establishing a Dynamic Database of Blue and Fin Whale Locations from Recordings at the IMS CTBTO hydro-acoustic network. The Baleakanta Project

    Science.gov (United States)

    Le Bras, R. J.; Kuzma, H.

    2013-12-01

    Falling as they do into the frequency range of continuously recording hydrophones (15-100Hz), blue and fin whale songs are a significant source of noise on the hydro-acoustic monitoring array of the International Monitoring System (IMS) of the Comprehensive Nuclear Test Ban Treaty Organization (CTBTO). One researcher's noise, however, can be a very interesting signal in another field of study. The aim of the Baleakanta Project (www.baleakanta.org) is to flag and catalogue these songs, using the azimuth and slowness of the signal measured at multiple hydrophones to solve for the approximate location of singing whales. Applying techniques borrowed from human speaker identification, it may even be possible to recognize the songs of particular individuals. The result will be a dynamic database of whale locations and songs with known individuals noted. This database will be of great value to marine biologists studying cetaceans, as there is no existing dataset which spans the globe over many years (more than 15 years of data have been collected by the IMS). Current whale song datasets from other sources are limited to detections made on small, temporary listening devices. The IMS song catalogue will make it possible to study at least some aspects of the global migration patterns of whales, changes in their songs over time, and the habits of individuals. It is believed that about 10 blue whale 'cultures' exist with distinct vocal patterns; the IMS song catalogue will test that number. Results and a subset of the database (delayed in time to mitigate worries over whaling and harassment of the animals) will be released over the web. A traveling museum exhibit is planned which will not only educate the public about whale songs, but will also make the CTBTO and its achievements more widely known. As a testament to the public's enduring fascination with whales, initial funding for this project has been crowd-sourced through an internet campaign.
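    The localization step, solving for a source position from azimuths measured at several hydrophones, can be sketched as a least-squares intersection of bearing lines. The sketch assumes a local flat-plane coordinate system with bearings measured counter-clockwise from the +x axis, a simplification of the geodesic geometry an IMS processing chain would actually use.

```python
import math

def locate(stations, azimuths_deg):
    # Least-squares intersection of bearing lines from several stations:
    # minimizes the sum of squared perpendicular distances from the
    # estimate to each line.  stations: [(x, y)]; azimuths_deg: one
    # bearing per station (assumed CCW from +x, see lead-in).
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (sx, sy), az in zip(stations, azimuths_deg):
        ux, uy = math.cos(math.radians(az)), math.sin(math.radians(az))
        # projector onto the direction perpendicular to the bearing;
        # it measures a candidate point's offset from the bearing line
        p11, p12, p22 = 1.0 - ux * ux, -ux * uy, 1.0 - uy * uy
        a11 += p11; a12 += p12; a22 += p22
        b1 += p11 * sx + p12 * sy
        b2 += p12 * sx + p22 * sy
    det = a11 * a22 - a12 * a12   # singular if all bearings are parallel
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

    Two crossing bearings give an exact fix; with more hydrophones the same normal equations average out bearing errors.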

  16. Hourly Comparison of GPM-IMERG-Final-Run and IMERG-Real-Time (V-03) over a Dense Surface Network in Northeastern Austria

    Science.gov (United States)

    Sharifi, Ehsan; Steinacker, Reinhold; Saghafian, Bahram

    2017-04-01

    Accurate quantitative daily precipitation estimation is key to meteorological and hydrological applications in hazard forecasting and management. In-situ observations over mountainous areas are mostly limited; however, currently available satellite precipitation products can potentially provide the precipitation estimates needed for meteorological and hydrological applications. Over the years, blended methods that use multiple satellites and multiple sensors have been developed for estimating global precipitation. One of the latest satellite precipitation products is GPM-IMERG (Global Precipitation Measurement, with 30-minute temporal and 0.1-degree spatial resolution), which consists of three products: Final-Run (aimed at research), Real-Time early run, and Real-Time late run. The Integrated Multisatellite Retrievals for GPM (IMERG) products, built upon the success of TRMM's Multisatellite Precipitation Analysis (TMPA) products, continue to improve spatial and temporal resolution and snowfall estimates. Recently, researchers who evaluated IMERG-Final-Run V-03 and other precipitation products reported better performance for IMERG-Final-Run than for similar products. In this study two GPM-IMERG products, namely Final-Run and Real-Time late run, were evaluated against a dense synoptic station network (62 stations) over northeastern Austria from mid-March 2015 to the end of January 2016 at an hourly time-scale. Both products were examined against the reference (station) data in capturing the occurrence of precipitation and the statistical characteristics of precipitation intensity. Both satellite precipitation products underestimated precipitation events of 0.1 mm/hr to 0.4 mm/hr in intensity. For precipitation of 0.4 mm/hr and greater, the trend was reversed and both satellite products overestimated relative to station-recorded data. 
IMERG-RT outperformed IMERG-FR for precipitation intensity in the range of 0.1 mm/hr to 0.4 mm/hr while in the range of 1.1 to 1.8 mm
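    Evaluating how well a satellite product captures the occurrence of precipitation against gauges is conventionally done with contingency-table scores. A minimal sketch; the 0.1 mm/hr wet/dry threshold is an assumption matching the lowest intensity class mentioned above, not a value stated by the authors.

```python
def detection_scores(gauge, satellite, wet=0.1):
    # 2x2 contingency scores for precipitation occurrence from paired
    # hourly values; `wet` is the mm/hr threshold (an assumption here)
    hits = misses = false_alarms = 0
    for g, s in zip(gauge, satellite):
        g_wet, s_wet = g >= wet, s >= wet
        if g_wet and s_wet:
            hits += 1            # both gauge and satellite saw rain
        elif g_wet:
            misses += 1          # gauge wet, satellite dry
        elif s_wet:
            false_alarms += 1    # satellite wet, gauge dry
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    csi = hits / (hits + misses + false_alarms) if hits + misses + false_alarms else float("nan")
    # probability of detection, false alarm ratio, critical success index
    return pod, far, csi
```

    POD near 1 and FAR near 0 indicate good occurrence detection; the over- and underestimation of intensity discussed above needs separate continuous statistics such as mean bias.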

  17. The RIKEN integrated database of mammals.

    Science.gov (United States)

    Masuya, Hiroshi; Makita, Yuko; Kobayashi, Norio; Nishikata, Koro; Yoshida, Yuko; Mochizuki, Yoshiki; Doi, Koji; Takatsuki, Terue; Waki, Kazunori; Tanaka, Nobuhiko; Ishii, Manabu; Matsushima, Akihiro; Takahashi, Satoshi; Hijikata, Atsushi; Kozaki, Kouji; Furuichi, Teiichi; Kawaji, Hideya; Wakana, Shigeharu; Nakamura, Yukio; Yoshiki, Atsushi; Murata, Takehide; Fukami-Kobayashi, Kaoru; Mohan, Sujatha; Ohara, Osamu; Hayashizaki, Yoshihide; Mizoguchi, Riichiro; Obata, Yuichi; Toyoda, Tetsuro

    2011-01-01

    The RIKEN integrated database of mammals (http://scinets.org/db/mammal) is the official undertaking to integrate its mammalian databases produced from multiple large-scale programs that have been promoted by the institute. The database integrates not only RIKEN's original databases, such as FANTOM, the ENU mutagenesis program, the RIKEN Cerebellar Development Transcriptome Database and the Bioresource Database, but also imported data from public databases, such as Ensembl, MGI and biomedical ontologies. Our integrated database has been implemented on the infrastructure of publication medium for databases, termed SciNetS/SciNeS, or the Scientists' Networking System, where the data and metadata are structured as a semantic web and are downloadable in various standardized formats. The top-level ontology-based implementation of mammal-related data directly integrates the representative knowledge and individual data records in existing databases to ensure advanced cross-database searches and reduced unevenness of the data management operations. Through the development of this database, we propose a novel methodology for the development of standardized comprehensive management of heterogeneous data sets in multiple databases to improve the sustainability, accessibility, utility and publicity of the data of biomedical information.

  18. Rule-and Dictionary-based Solution for Variations in Written Arabic Names in Social Networks, Big Data, Accounting Systems and Large Databases

    Directory of Open Access Journals (Sweden)

    Ahmad B.A. Hassanat

    2014-10-01

    Full Text Available This study investigates the problem that some Arabic names can be written in multiple ways. When someone searches for only one form of a name, neither exact nor approximate matching is appropriate for returning the multiple variants of the name. Exact matching requires the user to enter all forms of the name for the search, and approximate matching yields names not among the variations of the one being sought. In this study, we attempt to solve the problem with a dictionary of all Arabic names mapped to their different (alternative) writing forms. We generated alternatives based on rules we derived from reviewing the first names of 9.9 million citizens and former citizens of Jordan. This dictionary can be used both for standardizing the written form when inserting a new name into a database and for searching for the name and all its alternative written forms. Creating the dictionary automatically based on rules resulted in at least 7% erroneous acceptance errors and 7.9% erroneous rejection errors. We addressed the errors by manually editing the dictionary. The dictionary can be of help to real-world databases, with the qualification that manual editing does not guarantee 100% correctness.
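    The dictionary approach can be sketched by normalizing every written form to a canonical key and storing all observed spellings under that key: insertion standardizes, search returns every variant. The substitution rules below are invented romanized examples, not the rules the authors derived from the Jordanian name corpus.

```python
# Invented spelling-equivalence rules for romanized names (illustrative only)
RULES = [("oo", "u"), ("ou", "u"), ("ee", "i")]

def canonical(name: str) -> str:
    # Normalize a written form to a canonical key by applying each rule
    key = name.lower()
    for src, dst in RULES:
        key = key.replace(src, dst)
    return key

class NameDictionary:
    def __init__(self):
        self._forms = {}   # canonical key -> set of written forms seen

    def insert(self, name: str):
        # Standardizing on insertion: all spellings share one key
        self._forms.setdefault(canonical(name), set()).add(name)

    def search(self, name: str):
        # Return every stored variant of the name being sought
        return self._forms.get(canonical(name), set())
```

    Unlike approximate matching, a rule-based key only groups forms the rules declare equivalent, so unrelated but similar-looking names are not returned.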

  19. Database Transposition for Constrained (Closed) Pattern Mining

    CERN Document Server

    Jeudy, Baptiste

    2009-01-01

    Recently, different works proposed a new way to mine patterns in databases of pathological size. For example, experiments in genome biology usually provide databases with thousands of attributes (genes) but only tens of objects (experiments). In this case, mining the "transposed" database runs through a smaller search space, and the Galois connection allows one to infer the closed patterns of the original database. We focus here on constrained pattern mining for those unusual databases and give a theoretical framework for database and constraint transposition. We discuss the properties of constraint transposition and look into classical constraints. We then address the problem of generating the closed patterns of the original database satisfying the constraint, starting from those mined in the "transposed" database. Finally, we show how to generate all the patterns satisfying the constraint from the closed ones.
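    The transposition and the Galois closure it exploits can be sketched directly: transposing swaps objects and attributes, and the closure of an itemset is the set of items shared by every object supporting it. A minimal sketch on toy data; the paper's constraint-handling machinery is omitted.

```python
def transpose(db):
    # db maps object -> set of attributes (e.g. experiment -> genes);
    # the transposed database maps attribute -> set of objects
    tdb = {}
    for obj, attrs in db.items():
        for a in attrs:
            tdb.setdefault(a, set()).add(obj)
    return tdb

def closure(itemset, db):
    # Galois closure h(X) = f(g(X)): items common to every object whose
    # attribute set contains X; X is closed iff h(X) == X
    support = [obj for obj, attrs in db.items() if itemset <= attrs]
    if not support:   # empty support: every item trivially qualifies
        return set().union(*db.values()) if db else set()
    common = set(db[support[0]])
    for obj in support[1:]:
        common &= db[obj]
    return common
```

    Mining the transposed database explores the (much smaller) object side of the connection; applying `closure` in either orientation recovers the closed patterns of the other.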

  20. Database systems for knowledge-based discovery.

    Science.gov (United States)

    Jagarlapudi, Sarma A R P; Kishan, K V Radha

    2009-01-01

    Several database systems have been developed to provide valuable information, from the bench chemist to the biologist, and from the medical practitioner to the pharmaceutical scientist, in a structured format. The advent of information technology and computational power enhanced the ability to access large volumes of data in the form of a database where one could do compilation, searching, archiving, analysis, and finally knowledge derivation. Although data are of variable types, the tools used for database creation, searching and retrieval are similar. GVK BIO has been developing databases from publicly available scientific literature in specific areas like medicinal chemistry, clinical research, and mechanism-based toxicity so that the structured databases containing vast data could be used in several areas of research. These databases were classified as reference-centric or compound-centric depending on the way the database systems were designed. Integration of these databases with knowledge derivation tools would enhance the value of these systems toward better drug design and discovery.

  1. Database development and management

    CERN Document Server

    Chao, Lee

    2006-01-01

    Introduction to Database Systems: Functions of a Database; Database Management System; Database Components; Database Development Process. Conceptual Design and Data Modeling: Introduction to Database Design Process; Understanding Business Process; Entity-Relationship Data Model; Representing Business Process with Entity-Relationship Model; Table Structure and Normalization; Introduction to Tables; Table Normalization; Transforming Data Models to Relational Databases; DBMS Selection; Transforming Data Models to Relational Databases; Enforcing Constraints; Creating Database for Business Process; Physical Design and Database

  2. Neural Network Analysis in Higgs Search using $t\\overline{t}H,H \\to b\\overline{b}$ and TAG Database Development for ATLAS

    CERN Document Server

    McGlone, Helen Marie

    2009-01-01

    The Large Hadron Collider (LHC) at CERN (Conseil Européen pour la Recherche Nucléaire) in Geneva, Switzerland, is an international physics project of unprecedented scale. First proton beams were circulated in the LHC in 2008. The ATLAS Collaboration, an international group of 2000 analysts, scientists, software developers and hardware experts, seeks to push the boundaries of our current understanding of the Universe, and of our ability to undertake such studies. A central physics focus of the ATLAS experiment is the study of the Higgs boson, a theoretically predicted particle, as yet unobserved in nature. In this thesis, a Neural Network is adopted and developed as an analysis method in a study of a Standard Model Higgs boson in the low Higgs-mass range, using the physics channel ttH, H → bb with Higgs mass mH = 120 GeV. The Neural Network analysis shows that a neural network method can improve the sensitivity of the ttH, H → bb channel. A set of Event Characteristics, associated with a topology where the exis...

  3. Next generation network management technology

    Science.gov (United States)

    Baras, John S.; Atallah, George C.; Ball, Mike; Goli, Shravan; Karne, Ramesh K.; Kelley, Steve; Kumar, Harsha; Plaisant, Catherine; Roussopoulos, Nick; Schneiderman, Ben; Srinivasarao, Mulugu; Stathatos, Kosta; Teittinen, Marko; Whitefield, David

    1995-01-01

    Today's telecommunications networks are becoming increasingly large, complex, mission critical and heterogeneous in several dimensions. For example, the underlying physical transmission facilities of a given network may be ``mixed media'' (copper, fiber-optic, radio, and satellite); the subnetworks may be acquired from different vendors for economic, performance, or general availability reasons; the information being transmitted over the network may be ``multimedia'' (video, data, voice, and images); and, finally, varying performance criteria may be imposed, e.g., data transfer may require high throughput while other services, such as voice communications, may require a low call-blocking probability. For these reasons, future telecommunications networks are expected to be highly complex in their services and operations. Due to this growing complexity and the disparity among management systems for individual sub-networks, efficient network management systems have become critical to the current and future success of telecommunications companies. This paper addresses a research and development effort which focuses on prototyping configuration management, since that is the central process of network management and all other network management functions must be built upon it. Our prototype incorporates ergonomically designed graphical user interfaces tailored to the network configuration management subsystem and to the proposed advanced object-oriented database structure. The resulting design concept follows open standards such as Open Systems Interconnection (OSI) and incorporates object-oriented programming methodology to associate data with functions, permit customization, and provide an open architecture environment.

  4. An Alaska Soil Carbon Database

    Science.gov (United States)

    Johnson, Kristofer; Harden, Jennifer

    2009-05-01

    Database Collaborator's Meeting; Fairbanks, Alaska, 4 March 2009; Soil carbon pools in northern high-latitude regions and their response to climate changes are highly uncertain, and collaboration is required from field scientists and modelers to establish baseline data for carbon cycle studies. The Global Change Program at the U.S. Geological Survey has funded a 2-year effort to establish a soil carbon network and database for Alaska based on collaborations from numerous institutions. To initiate a community effort, a workshop for the development of an Alaska soil carbon database was held at the University of Alaska Fairbanks. The database will be a resource for spatial and biogeochemical models of Alaska ecosystems and will serve as a prototype for a nationwide community project: the National Soil Carbon Network (http://www.soilcarb.net). Studies will benefit from the combination of multiple academic and government data sets. This collaborative effort is expected to identify data gaps and uncertainties more comprehensively. Future applications of information contained in the database will identify specific vulnerabilities of soil carbon in Alaska to climate change, disturbance, and vegetation change.

  5. Construction of a peer-reviewer database network for academic periodicals based on the small-world network model

    Institute of Scientific and Technical Information of China (English)

    史朋亮; 吴晨

    2011-01-01

    We give an overview of international trends in building peer-reviewer databases for academic periodicals, summarize the sources of the reviewer databases currently used by journal editorial offices, and analyze the opportunities facing the Chinese periodical industry in building such databases. Based on the small-world network model, we propose a design for a peer-reviewer network system built on the author database of the Chinese National Knowledge Infrastructure (CNKI), and conclude with an outlook on the goals it could achieve.

  6. Status Quo and Prospect on Mobile Database Security

    OpenAIRE

    Tao Zhang; Shi Xing-jun

    2013-01-01

    Mobile database is a specialized class of distributed systems. There are security challenges because of the dispersed nature of mobile database applications and the constraints of hardware devices. Therefore, the security of mobile databases is analyzed in this paper. We examine security in terms of the mobile device, the operating system on the mobile device, the mobile network, and the mobile database itself. Moreover, various security vulnerabilities of mobile databases are recognized. Some appropriate t...

  7. Knowledge discovery from legal databases

    CERN Document Server

    Stranieri, Andrew; Schauer, Frederick

    2006-01-01

    Knowledge Discovery from Legal Databases is the first text to describe data mining techniques as they apply to law. Law students, legal academics and applied information technology specialists are guided through all phases of the knowledge discovery from databases process, with clear explanations of numerous data mining algorithms including rule induction, neural networks and association rules. Throughout the text, assumptions that make data mining in law quite different from mining other data are made explicit.  Issues such as the selection of commonplace cases, the use of discretion as a form

  8. Technical Network

    CERN Multimedia

    2007-01-01

    In order to optimise the management of the Technical Network (TN), to facilitate understanding of the purpose of devices connected to the TN and to improve security incident handling, the Technical Network Administrators and the CNIC WG have asked IT/CS to verify the "description" and "tag" fields of devices connected to the TN. Therefore, persons responsible for systems connected to the TN will receive e-mails from IT/CS asking them to add the corresponding information in the network database at "network-cern-ch". Thank you very much for your cooperation. The Technical Network Administrators & the CNIC WG

  9. The Development and Usage of the Network Test-Question Database for the Water Pollution Control Engineering Course

    Institute of Scientific and Technical Information of China (English)

    王忠全; 董军; 蒋裕平; 余建萍; 朱岸东; 冯超华; 汪帆; 师伟

    2016-01-01

    Based on the Blackboard digital teaching platform, a chapter-synchronized test-question database for the Water Pollution Control Engineering course was established online. The database contains single-choice and multiple-choice questions, 571 in total, and has been used by three cohorts of students. A survey shows that students consider the question database well matched to the textbook, helpful for independent study, and effective in improving academic performance. Compared with other online question databases for water pollution control engineering courses, this database is innovative in its larger question pool and its automatic scoring and feedback.

  10. Emotion Recognition from Persian Speech with Neural Network

    Directory of Open Access Journals (Sweden)

    Mina Hamidi

    2012-10-01

    Full Text Available In this paper, we report an effort towards automatic recognition of emotional states from continuous Persian speech. Due to the unavailability of an appropriate database in the Persian language for emotion recognition, we first built a database of emotional speech in Persian. This database consists of 2400 wave clips modulated with anger, disgust, fear, sadness, happiness and normal emotions. Then we extract prosodic features, including features related to the pitch, intensity and global characteristics of the speech signal. Finally, we applied neural networks for automatic recognition of emotion. The resulting average accuracy was about 78%.

  11. Database Reports Over the Internet

    Science.gov (United States)

    Smith, Dean Lance

    2002-01-01

    Most of the summer was spent developing software that would permit existing test report forms to be printed over the web on a printer supported by Adobe Acrobat Reader. The data are stored in a DBMS (Database Management System). The client requests the information from the database using an HTML (Hypertext Markup Language) form in a web browser. JavaScript is used with the forms to assist the user and verify the integrity of the entered data. Queries to the database are made in SQL (Structured Query Language), a widely supported standard for querying databases. Java servlets, programs written in the Java programming language running under the control of network server software, interrogate the database and complete a PDF form template kept in a file. The completed report is sent to the browser that requested it. Some errors are sent to the browser in an HTML web page; others are reported to the server. Access to the databases was restricted since the data were being migrated to new DBMS software running on new hardware. However, the SQL queries were made to Microsoft Access, a DBMS available on most PCs (personal computers). Access does support the SQL commands that were used, and a database was created with Access containing typical data for the report forms. Some of the problems and features are discussed below.
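
The request path described above (form → server program → parameterized SQL → filled report) can be sketched in Python, with the standard sqlite3 module standing in for both the DBMS and the servlet environment; the table, columns, and report fields below are hypothetical, not the project's actual schema.

```python
import sqlite3

# In-memory stand-in for the report database; schema is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_report (id INTEGER PRIMARY KEY, part TEXT, result TEXT)")
conn.execute("INSERT INTO test_report (part, result) VALUES ('valve-7', 'PASS')")
conn.commit()

def fetch_report(report_id):
    """Issue a parameterized query, as a servlet would do before
    filling a PDF form template with the returned values."""
    row = conn.execute(
        "SELECT part, result FROM test_report WHERE id = ?", (report_id,)
    ).fetchone()
    if row is None:
        return None
    return {"part": row[0], "result": row[1]}
```

Using a `?` placeholder rather than string concatenation is what keeps the query safe against malformed form input.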

  12. Databases and their application

    NARCIS (Netherlands)

    E.C. Grimm; R.H.W Bradshaw; S. Brewer; S. Flantua; T. Giesecke; A.M. Lézine; H. Takahara; J.W.,Jr Williams

    2013-01-01

    During the past 20 years, several pollen database cooperatives have been established. These databases are now constituent databases of the Neotoma Paleoecology Database, a public domain, multiproxy, relational database designed for Quaternary-Pliocene fossil data and modern surface samples. The poll

  13. Consensus and conflict cards for metabolic pathway databases

    Science.gov (United States)

    2013-01-01

    Background The metabolic network of H. sapiens and many other organisms is described in multiple pathway databases. The level of agreement between these descriptions, however, has proven to be low. We can use these different descriptions to our advantage by identifying conflicting information and combining their knowledge into a single, more accurate, and more complete description. This task is, however, far from trivial. Results We introduce the concept of Consensus and Conflict Cards (C2Cards) to provide concise overviews of what the databases do or do not agree on. Each card is centered at a single gene, EC number or reaction. These three complementary perspectives make it possible to distinguish disagreements on the underlying biology of a metabolic process from differences that can be explained by different decisions on how and in what detail to represent knowledge. As a proof-of-concept, we implemented C2CardsHuman, as a web application http://www.molgenis.org/c2cards, covering five human pathway databases. Conclusions C2Cards can contribute to ongoing reconciliation efforts by simplifying the identification of consensus and conflicts between pathway databases and lowering the threshold for experts to contribute. Several case studies illustrate the potential of the C2Cards in identifying disagreements on the underlying biology of a metabolic process. The overviews may also point out controversial biological knowledge that should be subject of further research. Finally, the examples provided emphasize the importance of manual curation and the need for a broad community involvement. PMID:23803311
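
A minimal sketch of the card idea: collect the reaction annotation each database gives for one EC number and flag consensus vs. conflict. The database names are real, but the annotations below are invented placeholders, not actual pathway content.

```python
# Hypothetical reaction annotations per database, keyed by EC number.
annotations = {
    "KEGG":     {"1.1.1.1": "ethanol -> acetaldehyde", "2.7.1.1": "glucose -> G6P"},
    "Reactome": {"1.1.1.1": "ethanol -> acetaldehyde", "2.7.1.1": "glucose -> glucose-6P"},
    "HumanCyc": {"1.1.1.1": "ethanol -> acetaldehyde"},
}

def c2card(ec):
    """Build a minimal consensus/conflict card centered on one EC number."""
    entries = {db: reactions[ec] for db, reactions in annotations.items() if ec in reactions}
    status = "consensus" if len(set(entries.values())) == 1 else "conflict"
    return {"ec": ec, "entries": entries, "status": status}
```

A real C2Card would also distinguish genuine biological disagreement from mere representation differences; that judgment is exactly what the manual curation mentioned in the conclusions supplies.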

  14. The 2010 Nucleic Acids Research Database Issue and online Database Collection: a community of data resources.

    Science.gov (United States)

    Cochrane, Guy R; Galperin, Michael Y

    2010-01-01

    The current issue of Nucleic Acids Research includes descriptions of 58 new and 73 updated data resources. The accompanying online Database Collection, available at http://www.oxfordjournals.org/nar/database/a/, now lists 1230 carefully selected databases covering various aspects of molecular and cell biology. While most data resource descriptions remain very brief, the issue includes several longer papers that highlight recent significant developments in such databases as Pfam, MetaCyc, UniProt, ELM and PDBe. The databases described in the Database Issue and Database Collection, however, are far more than a distinct set of resources; they form a network of connected data, concepts and shared technology. The full content of the Database Issue is available online at the Nucleic Acids Research web site (http://nar.oxfordjournals.org/).

  15. Data Conversion Among Heterogeneous Databases in an Integrated Network Environment

    Institute of Scientific and Technical Information of China (English)

    梁允荣; 高玮玲; 杨茜

    2001-01-01

    A method for data conversion among heterogeneous DBMSs in an integrated network environment is introduced. The technique can convert data between heterogeneous databases located on different nodes of the network, such as Oracle, Sybase, Informix, MS SQL Server, SQL Anywhere, and FoxPro. Using PowerBuilder's database connectivity and visual programming facilities, direct conversion between heterogeneous databases is achieved. The conversion system adopts a client/server architecture and provides a visual integrated interface for users.
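
The core of such a converter is a generic row copy with column renaming. A sketch, with two sqlite3 connections standing in for the heterogeneous source and target DBMSs (the PowerBuilder tooling itself is not reproduced; table and column names are hypothetical):

```python
import sqlite3

src = sqlite3.connect(":memory:")  # stands in for, e.g., an Oracle source
dst = sqlite3.connect(":memory:")  # stands in for, e.g., a Sybase target

src.execute("CREATE TABLE emp (emp_no INTEGER, emp_name TEXT)")
src.executemany("INSERT INTO emp VALUES (?, ?)", [(1, "Li"), (2, "Wang")])
dst.execute("CREATE TABLE employee (id INTEGER, name TEXT)")

def convert(src_conn, dst_conn, src_table, dst_table, column_map):
    """Copy all rows from src_table to dst_table, renaming columns
    according to column_map; type coercion is left to the target DBMS."""
    src_cols = ", ".join(column_map)
    dst_cols = ", ".join(column_map.values())
    holes = ", ".join("?" for _ in column_map)
    rows = src_conn.execute(f"SELECT {src_cols} FROM {src_table}").fetchall()
    dst_conn.executemany(
        f"INSERT INTO {dst_table} ({dst_cols}) VALUES ({holes})", rows
    )
    return len(rows)
```

In a real heterogeneous setting the two connections would come from different drivers, but the copy loop itself stays DBMS-agnostic.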

  16. Dietary Supplement Ingredient Database

    Science.gov (United States)

    ... and US Department of Agriculture Dietary Supplement Ingredient Database ... values can be saved to build a small database or add to an existing database for national, ...

  17. Enlightenment on Computer Network Reliability From Transportation Network Reliability

    OpenAIRE

    Hu Wenjun; Zhou Xizhao

    2011-01-01

    Referring to the transportation network reliability problem, five new computer network reliability definitions are proposed and discussed: computer network connectivity reliability, computer network time reliability, computer network capacity reliability, computer network behavior reliability, and computer network potential reliability. Finally, strategies are suggested to enhance network reliability.
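
Of the five definitions, connectivity reliability is the easiest to make concrete. The sketch below is one common way to compute it (an assumption, not the authors' formulation): evaluate two-terminal connectivity reliability exactly by enumerating every edge up/down state of a small network.

```python
from itertools import product

def connectivity_reliability(edges, s, t, p):
    """Exact two-terminal connectivity reliability of a small network:
    enumerate all edge up/down states (each edge is up with probability p,
    independently) and sum the probability of states where t is reachable
    from s. Exponential in the edge count, so only for small networks."""
    total = 0.0
    for states in product([True, False], repeat=len(edges)):
        prob = 1.0
        up = []
        for edge, alive in zip(edges, states):
            prob *= p if alive else 1.0 - p
            if alive:
                up.append(edge)
        reached, frontier = {s}, [s]
        while frontier:  # graph search over surviving edges
            node = frontier.pop()
            for a, b in up:
                for x, y in ((a, b), (b, a)):
                    if x == node and y not in reached:
                        reached.add(y)
                        frontier.append(y)
        if t in reached:
            total += prob
    return total
```

Two parallel links of reliability 0.9 give 1 - 0.1² = 0.99, while two links in series give 0.9² = 0.81, matching the usual redundancy intuition.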

  18. NoSQL Databases

    OpenAIRE

    2013-01-01

    This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the standardized SQL query language. Chapter Three explains the concept and history of NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...

  19. USAID Anticorruption Projects Database

    Data.gov (United States)

    US Agency for International Development — The Anticorruption Projects Database (Database) includes information about USAID projects with anticorruption interventions implemented worldwide between 2007 and...

  20. Collecting Taxes Database

    Data.gov (United States)

    US Agency for International Development — The Collecting Taxes Database contains performance and structural indicators about national tax systems. The database contains quantitative revenue performance...

  1. General database for ground water site information.

    Science.gov (United States)

    de Dreuzy, Jean-Raynald; Bodin, Jacques; Le Grand, Hervé; Davy, Philippe; Boulanger, Damien; Battais, Annick; Bour, Olivier; Gouze, Philippe; Porel, Gilles

    2006-01-01

    In most cases, analysis and modeling of flow and transport dynamics in ground water systems require long-term, high-quality, and multisource data sets. This paper discusses the structure of a multisite database (the H+ database) developed within the scope of the ERO program (French Environmental Research Observatory, http://www.ore.fr). The database provides an interface between field experimentalists and modelers, which can be used on a daily basis. The database structure enables the storage of a large number of data and data types collected from a given site or multiple-site network. The database is well suited to the integration, backup, and retrieval of data for flow and transport modeling in heterogeneous aquifers. It relies on the definition of standards and uses a templated structure, such that any type of geolocalized data obtained from wells, hydrological stations, and meteorological stations can be handled. New types of platforms other than wells, hydrological stations, and meteorological stations, and new types of experiments and/or parameters could easily be added without modifying the database structure. Thus, we propose that the database structure could be used as a template for designing databases for complex sites. An example application is the H+ database, which gathers data collected from a network of hydrogeological sites associated with the French Environmental Research Observatory.
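
The "templated" structure described above can be sketched as a small relational schema in which platforms and parameters are rows rather than tables, so a new platform type or measured quantity needs no schema change. Table and column names below are illustrative, not the actual H+ schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Platforms (wells, hydrological or meteorological stations, ...) and
# parameters are *rows*, so new kinds require only new rows.
conn.executescript("""
CREATE TABLE platform  (id INTEGER PRIMARY KEY, name TEXT, kind TEXT,
                        lat REAL, lon REAL);
CREATE TABLE parameter (id INTEGER PRIMARY KEY, name TEXT, unit TEXT);
CREATE TABLE measurement (platform_id INTEGER REFERENCES platform(id),
                          parameter_id INTEGER REFERENCES parameter(id),
                          t TEXT, value REAL);
""")
conn.execute("INSERT INTO platform VALUES (1, 'W-12', 'well', 48.1, -1.6)")
conn.execute("INSERT INTO parameter VALUES (1, 'head', 'm')")
conn.execute("INSERT INTO measurement VALUES (1, 1, '2006-01-01', 34.2)")

rows = conn.execute(
    "SELECT p.name, m.t, m.value FROM measurement m "
    "JOIN platform p ON p.id = m.platform_id"
).fetchall()
```

The trade-off of such a generic layout is that integrity rules (valid units, plausible ranges) move from the schema into standards and curation, which is exactly why the paper stresses the definition of standards.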

  2. Databases and tools for nuclear astrophysics applications BRUSsels Nuclear LIBrary (BRUSLIB), Nuclear Astrophysics Compilation of REactions II (NACRE II) and Nuclear NETwork GENerator (NETGEN)

    CERN Document Server

    Xu, Yi; Jorissen, Alain; Chen, Guangling; Arnould, Marcel; 10.1051/0004-6361/201220537

    2012-01-01

    An update of a previous description of the BRUSLIB+NACRE package of nuclear data for astrophysics and of the web-based nuclear network generator NETGEN is presented. The new version of BRUSLIB contains the latest predictions of a wide variety of nuclear data based on the most recent version of the Brussels-Montreal Skyrme-HFB model. The nuclear masses, radii, spin/parities, deformations, single-particle schemes, matter densities, nuclear level densities, E1 strength functions, fission properties, and partition functions are provided for all nuclei lying between the proton and neutron drip lines over the 8<=Z<=110 range, whose evaluation is based on a unique microscopic model that ensures a good compromise between accuracy, reliability, and feasibility. In addition, these various ingredients are used to calculate about 100000 Hauser-Feshbach n-, p-, a-, and gamma-induced reaction rates based on the reaction code TALYS. NACRE is superseded by the NACRE II compilation for 15 charged-particle transfer react...

  3. Technical Network

    CERN Multimedia

    2007-01-01

    In order to optimize the management of the Technical Network (TN), to ease the understanding and purpose of devices connected to the TN, and to improve security incident handling, the Technical Network Administrators and the CNIC WG have asked IT/CS to verify the "description" and "tag" fields of devices connected to the TN. Therefore, persons responsible for systems connected to the TN will receive email notifications from IT/CS asking them to add the corresponding information in the network database. Thank you very much for your cooperation. The Technical Network Administrators & the CNIC WG

  4. Searching NCBI Databases Using Entrez.

    Science.gov (United States)

    Gibney, Gretchen; Baxevanis, Andreas D

    2011-10-01

    One of the most widely used interfaces for the retrieval of information from biological databases is the NCBI Entrez system. Entrez capitalizes on the fact that there are pre-existing, logical relationships between the individual entries found in numerous public databases. The existence of such natural connections, mostly biological in nature, argued for the development of a method through which all the information about a particular biological entity could be found without having to sequentially visit and query disparate databases. Two basic protocols describe simple, text-based searches, illustrating the types of information that can be retrieved through the Entrez system. An alternate protocol builds upon the first basic protocol, using additional, built-in features of the Entrez system, and providing alternative ways to issue the initial query. The support protocol reviews how to save frequently issued queries. Finally, Cn3D, a structure visualization tool, is also discussed.
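
A text-based Entrez query like those in the basic protocols is issued against the E-utilities ESearch endpoint. The helper below only composes the query URL, without sending any request; the endpoint and the db/term/retmax parameters are part of the public E-utilities interface.

```python
from urllib.parse import urlencode, urlparse, parse_qs

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(db, term, retmax=20):
    """Compose an Entrez ESearch URL for a text query against one database.
    urlencode handles escaping of spaces and Entrez field qualifiers."""
    return f"{EUTILS}/esearch.fcgi?" + urlencode(
        {"db": db, "term": term, "retmax": retmax}
    )
```

For example, `esearch_url("pubmed", "BRCA1[gene]")` yields a URL that, when fetched, returns the matching PubMed identifiers in XML.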

  5. Logical database design principles

    CERN Document Server

    Garmany, John; Clark, Terry

    2005-01-01

    INTRODUCTION TO LOGICAL DATABASE DESIGN: Understanding a Database; Database Architectures; Relational Databases; Creating the Database; System Development Life Cycle (SDLC); Systems Planning: Assessment and Feasibility; System Analysis: Requirements; System Analysis: Requirements Checklist; Models Tracking and Schedules; Design Modeling; Functional Decomposition Diagram; Data Flow Diagrams; Data Dictionary; Logical Structures and Decision Trees; System Design: Logical. SYSTEM DESIGN AND IMPLEMENTATION: The ER Approach; Entities and Entity Types; Attribute Domains; Attributes; Set-Valued Attributes; Weak Entities; Constraint

  6. An Interoperable Cartographic Database

    OpenAIRE

    Slobodanka Ključanin; Zdravko Galić

    2007-01-01

    The concept of producing a prototype of an interoperable cartographic database is explored in this paper, including the possibilities of integrating different geospatial data into the database management system and visualizing them on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relational database, spatial analysis, and definition and visualization of the database content in the form of a map on t...

  7. Fully Connected Neural Networks Ensemble with Signal Strength Clustering for Indoor Localization in Wireless Sensor Networks

    OpenAIRE

    2015-01-01

    The paper introduces a method which improves the localization accuracy of the signal-strength fingerprinting approach. According to the proposed method, the entire localization area is divided into regions by clustering the fingerprint database. For each region, a prototype of the received signal strength is determined, and a dedicated artificial neural network (ANN) is trained using only those fingerprints that belong to this region (cluster). The final estimate of the location is obtained by fusion ...
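
The region-then-estimate pipeline can be sketched without a neural network: signal-strength prototypes select a region, and a nearest-fingerprint lookup stands in for the per-region ANN. All fingerprints, prototypes, and coordinates below are invented for illustration.

```python
def nearest(vec, candidates, key=lambda c: c):
    """Index of the candidate whose vector is closest in squared distance."""
    dists = [sum((a - b) ** 2 for a, b in zip(vec, key(c))) for c in candidates]
    return dists.index(min(dists))

# Hypothetical fingerprint database: (RSS vector, (x, y) position).
fingerprints = [
    ((-40, -70), (0.0, 0.0)), ((-42, -68), (1.0, 0.0)),  # region near AP1
    ((-75, -38), (9.0, 9.0)), ((-73, -41), (8.0, 9.0)),  # region near AP2
]
prototypes = [(-41, -69), (-74, -39.5)]  # mean RSS per region (precomputed)

def localize(rss):
    """Pick the region via its RSS prototype, then estimate the position
    from the nearest fingerprint in that region (a stand-in for the
    dedicated per-region ANN of the paper)."""
    region = nearest(rss, prototypes)
    members = [fp for fp in fingerprints if nearest(fp[0], prototypes) == region]
    return members[nearest(rss, members, key=lambda fp: fp[0])][1]
```

Clustering first keeps each estimator small and trained only on locally relevant fingerprints, which is the source of the accuracy gain the paper reports.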

  8. The network researchers' network

    DEFF Research Database (Denmark)

    Henneberg, Stephan C.; Jiang, Zhizhong; Naudé, Peter

    2009-01-01

    The Industrial Marketing and Purchasing (IMP) Group is a network of academic researchers working in the area of business-to-business marketing. The group meets every year to discuss and exchange ideas, with a conference having been held every year since 1984 (there was no meeting in 1987). In this paper, based upon the papers presented at the 22 conferences held to date, we undertake a Social Network Analysis in order to examine the degree of co-publishing that has taken place between this group of researchers. We identify the different components in this database, and examine the large main...

  9. Determination of fat, moisture, and protein in meat and meat products by using the FOSS FoodScan Near-Infrared Spectrophotometer with FOSS Artificial Neural Network Calibration Model and Associated Database: collaborative study.

    Science.gov (United States)

    Anderson, Shirley

    2007-01-01

    A collaborative study was conducted to evaluate the repeatability and reproducibility of the FOSS FoodScan near-infrared spectrophotometer with artificial neural network calibration model and database for the determination of fat, moisture, and protein in meat and meat products. Representative samples were homogenized by grinding according to AOAC Official Method 983.18. Approximately 180 g ground sample was placed in a 140 mm round sample dish, and the dish was placed in the FoodScan. The operator ID was entered, the meat product profile within the software was selected, and the scanning process was initiated by pressing the "start" button. Results were displayed for percent (g/100 g) fat, moisture, and protein. Ten blind duplicate samples were sent to 15 collaborators in the United States. The within-laboratory (repeatability) relative standard deviation (RSD(r)) ranged from 0.22 to 2.67% for fat, 0.23 to 0.92% for moisture, and 0.35 to 2.13% for protein. The between-laboratories (reproducibility) relative standard deviation (RSD(R)) ranged from 0.52 to 6.89% for fat, 0.39 to 1.55% for moisture, and 0.54 to 5.23% for protein. The method is recommended for Official First Action.
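
The repeatability and reproducibility figures quoted above are relative standard deviations. A minimal sketch of that computation, on invented duplicate readings rather than the study's data:

```python
import statistics

def rsd_percent(values):
    """Relative standard deviation: sample standard deviation expressed
    as a percentage of the mean (the RSD(r)/RSD(R) style figure)."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical duplicate fat readings (g/100 g) from one laboratory.
within_lab = [20.1, 20.3, 19.9, 20.2]
```

In the collaborative-study setting, RSD(r) is computed from replicates within one laboratory and RSD(R) from results pooled across laboratories, which is why the latter ranges are wider.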

  10. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information of database. Database name: RMOS. Alternative name: ... Research Unit: Shoshi Kikuchi. Database classification: Plant databases - Rice Microarray Data and other Gene Expression Databases. Organism: Oryza sativa (Taxonomy ID: 4530). Database description: The Rice Microarray Opening Site is a database of comprehensive information for Rice Mic...es and manner of utilization of the database. You can refer to the information of the
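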

  11. A logic for networks

    CERN Document Server

    Franceschet, Massimo

    2010-01-01

    Networks are pervasive in the real world. Nature, society, economy, and technology are supported by ostensibly different networks that in fact share an amazing number of interesting structural properties. Network thinking exploded in the last decade, boosted by the availability of large databases on the topology of various real networks, mainly the Web and biological networks, and converged to the new discipline of network analysis - the holistic analysis of complex systems through the study of the network that wires their components. Physicists mainly drove the investigation, studying the structure and function of networks using methods and tools of statistical mechanics. Here, we give an alternative perspective on network analysis, proposing a logic for specifying general properties of networks and a modular algorithm for checking these properties. The logic borrows from two intertwined computing fields: XML databases and model checking.

  12. Flood forecasting for River Mekong with data-based models

    Science.gov (United States)

    Shahzad, Khurram M.; Plate, Erich J.

    2014-09-01

    In many regions of the world, the task of flood forecasting is made difficult because only a limited database is available for generating a suitable forecast model. This paper demonstrates that in such cases parsimonious data-based hydrological models for flood forecasting can be developed if the special conditions of climate and topography are used to advantage. As an example, the middle reach of River Mekong in South East Asia is considered, where a database of discharges from seven gaging stations on the river and 31 rainfall stations on the subcatchments between gaging stations is available for model calibration. Special conditions existing for River Mekong are identified and used in developing first a network connecting all discharge gages and then models for forecasting discharge increments between gaging stations. Our final forecast model (Model 3) is a linear combination of two structurally different basic models: a model (Model 1) using linear regressions for forecasting discharge increments, and a model (Model 2) using rainfall-runoff models. Although the model based on linear regressions works reasonably well for short times, better results are obtained with rainfall-runoff modeling. However, forecast accuracy of Model 2 is limited by the quality of rainfall forecasts. For best results, both models are combined by taking weighted averages to form Model 3. Model quality is assessed by means of both persistence index PI and standard deviation of forecast error.
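
Two ingredients above, the weighted combination that forms Model 3 and the persistence index PI used to assess it, can be sketched directly. The PI definition assumed here is the customary one (1 minus the ratio of the forecast sum of squared errors to that of the persistence forecast); the paper's exact weighting scheme is not reproduced.

```python
def combine(f1, f2, w):
    """Model-3-style forecast: weighted average of two component forecasts
    (e.g. the regression model and the rainfall-runoff model)."""
    return [w * a + (1.0 - w) * b for a, b in zip(f1, f2)]

def persistence_index(observed, forecast):
    """PI = 1 - SSE(forecast) / SSE(persistence), where the persistence
    forecast simply repeats the previous observation. PI = 1 is a perfect
    forecast; PI <= 0 means no better than persistence."""
    sse_f = sum((o - f) ** 2 for o, f in zip(observed[1:], forecast[1:]))
    sse_p = sum((o - prev) ** 2 for o, prev in zip(observed[1:], observed[:-1]))
    return 1.0 - sse_f / sse_p
```

Because river discharge is strongly autocorrelated, beating persistence is the meaningful benchmark for short lead times, which is why PI rather than plain error variance is reported.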

  13. Holistic Security Model for Mobile Database in Nigeria

    Directory of Open Access Journals (Sweden)

    Fidelis C. Obodoeze

    2016-07-01

    Full Text Available Due to the proliferation of mobile computing devices such as mobile phones, smartphones, tablet PCs and Personal Digital Assistants (PDAs) in Nigeria and worldwide, it is expected that these light-weight, powerful, low-cost computing devices will pave the way for data-driven applications in mobile environments. These portable mobile devices can be connected to corporate database and application servers so that application processing can take place at any time and from anywhere. This throws up a number of security challenges. Hackers, malicious programs and rival firms can penetrate corporate servers through various security holes or vulnerabilities. This paper examines the security holes that can emanate from three major windows (the mobile device, the mobile network and the corporate database server) and critically x-rays various solutions that can ward them off in order to protect critical data from attack, eavesdropping, disruption, destruction and modification. The paper finally proposes a holistic security model to protect corporate mobile databases in Nigeria.

  14. Testing of high speed network components (RNET Project), with IBM and Bellsouth (91-0069), and testing of high speed network components (campus LAN project) with IBM (92-0124). Final CRADA report

    Energy Technology Data Exchange (ETDEWEB)

    Wing, W.R. [Lockheed Martin Energy Systems, Inc., Oak Ridge, TN (United States); Chen, M.S. [IBM Corp., Atlanta, GA (United States); Brackett, P. [BellSouth, Atlanta, GA (United States)

    1997-06-20

    The Gigabit network project was established to demonstrate ATM technology in a realistic metropolitan environment running realistic applications, which would stretch its capacity. Despite considerable obstacles, both technical and logistical, the Gigabit Network project succeeded in establishing a network infrastructure that has served the Oak Ridge complex well during the last two years, and will continue to serve it in the future. The project did not, however, succeed in demonstrating the showcase applications on an ATM network. Development and delivery of a working ATM switch ultimately became the pacing item in the project, and after a number of delays, the project was terminated without placing a switch in service.

  15. Effects of acute rejection vs new-onset diabetes after transplant on transplant outcomes in pediatric kidney recipients: analysis of the Organ Procurement and Transplant Network/United Network for Organ Sharing (OPTN/UNOS) database.

    Science.gov (United States)

    Mehrnia, Alireza; Le, Thuy X; Tamer, Tamer R; Bunnapradist, Suphamai

    2016-11-01

    Improving long-term transplant and patient survival remains an ongoing challenge in kidney transplant medicine. Our objective was to assess whether new-onset diabetes after transplant (NODAT) and acute rejection (AR) in the first year post-transplant predict subsequent mortality and transplant failure. A total of 4687 patients without preexisting diabetes (age 2-20 years, 2004-2010), surviving with a functioning transplant for longer than 1 year and with at least one follow-up report, were identified from the OPTN/UNOS database as of September 2014. The study population was stratified into four mutually exclusive groups: Group 1, patients with a history of AR; Group 2, NODAT+; Group 3, NODAT+ AR+; and Group 4, the reference group (neither). Multivariate regression was used to analyze the relative risks for the outcomes of transplant failure and mortality. The median follow-up time was 1827 days beyond 1 year post-transplant. NODAT and AR were identified in 3.5% and 14.5% of all study patients, respectively. AR was associated with an increased risk of adjusted graft failure (HR 2.87, CI 2.48-3.33, P < .001) and of death-censored graft failure (HR 2.11, CI 1.81-2.47, P < .001). AR in the first year post-transplant was a major risk factor for overall and death-censored graft failure, but not mortality. NODAT, however, was not a risk factor for graft survival or mortality.

  16. Quantifying the consistency of scientific databases

    CERN Document Server

    Šubelj, Lovro; Boshkoska, Biljana Mileva; Kastrin, Andrej; Levnajić, Zoran

    2015-01-01

    Science is a social process with far-reaching impact on our modern society. In recent years, for the first time, we are able to study science itself scientifically. This is enabled by the massive amounts of data on scientific publications that are increasingly becoming available. The data are contained in several databases, such as Web of Science or PubMed, maintained by various public and private entities. Unfortunately, these databases are not always consistent, which considerably hinders such study. Relying on the powerful framework of complex networks, we conduct a systematic analysis of the consistency among six major scientific databases. We found that identifying a single "best" database is far from easy. Nevertheless, our results indicate appreciable differences in the mutual consistency of different databases, which we interpret as recipes for future bibliometric studies.

  17. Annotation and retrieval in protein interaction databases

    Science.gov (United States)

    Cannataro, Mario; Hiram Guzzi, Pietro; Veltri, Pierangelo

    2014-06-01

    Biological databases have been developed with a special focus on the efficient retrieval of single records or the efficient computation of specialized bioinformatics algorithms against the overall database, such as in sequence alignment. The continuous production of biological knowledge, spread across several biological databases and ontologies such as Gene Ontology, and the availability of efficient techniques to handle such knowledge, such as annotation and semantic similarity measures, enable the development of novel bioinformatics applications that explicitly use and integrate such knowledge. After introducing the annotation process and the main semantic similarity measures, this paper shows how annotations and semantic similarity can be exploited to improve the extraction and analysis of biologically relevant data from protein interaction databases. As case studies, the paper presents two novel software tools, OntoPIN and CytoSeVis, both based on the use of Gene Ontology annotations, for the advanced querying of protein interaction databases and for the enhanced visualization of protein interaction networks.
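The annotation-based approach described above can be illustrated with a minimal sketch (not the OntoPIN implementation): score the similarity of two proteins as the Jaccard overlap of their Gene Ontology term sets. The protein names and GO identifiers below are placeholders chosen for illustration only.

```python
# Hedged sketch: Jaccard similarity over GO annotation sets.
# The protein names and GO term IDs are illustrative placeholders,
# not taken from OntoPIN or any real annotation file.

def annotation_similarity(terms_a, terms_b):
    """Jaccard overlap of two sets of ontology term identifiers."""
    a, b = set(terms_a), set(terms_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

annotations = {
    "protein_A": {"GO:0005515", "GO:0006468", "GO:0005524"},
    "protein_B": {"GO:0005515", "GO:0005524", "GO:0004672"},
}

score = annotation_similarity(annotations["protein_A"], annotations["protein_B"])
print(round(score, 2))  # 2 shared of 4 distinct terms -> 0.5
```

Real semantic similarity measures weight terms by their depth and information content in the ontology; Jaccard overlap is only the simplest member of that family.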

  18. A KINETIC DATABASE FOR ASTROCHEMISTRY (KIDA)

    Energy Technology Data Exchange (ETDEWEB)

    Wakelam, V.; Pavone, B.; Hebrard, E.; Hersant, F. [University of Bordeaux, LAB, UMR 5804, F-33270 Floirac (France); Herbst, E. [Departments of Physics, Astronomy, and Chemistry, The Ohio State University, Columbus, OH 43210 (United States); Loison, J.-C.; Chandrasekaran, V.; Bergeat, A. [University of Bordeaux, ISM, CNRS UMR 5255, F-33400 Talence (France); Smith, I. W. M. [University Chemical Laboratories, Lensfield Road, Cambridge CB2 1EW (United Kingdom); Adams, N. G. [Department of Chemistry, University of Georgia, Athens, GA 30602 (United States); Bacchus-Montabonel, M.-C. [LASIM, CNRS-UMR5579, Universite de Lyon (Lyon I), 43 Bvd. 11 Novembre 1918, F-69622 Villeurbanne Cedex (France); Beroff, K. [Institut des Sciences Moleculaires d' Orsay, CNRS and Universite Paris-Sud, F-91405 Orsay Cedex (France); Bierbaum, V. M. [Department of Chemistry and Biochemistry, Center for Astrophysics and Space Astronomy, University of Colorado, Boulder, CO 80309 (United States); Chabot, M. [Intitut de Physique Nucleaire d' Orsay, IN2P3-CNRS and Universite Paris-Sud, F-91406 Orsay Cedex (France); Dalgarno, A. [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); Van Dishoeck, E. F. [Leiden Observatory, Leiden University, P.O. Box 9513, NL-2300 RA Leiden (Netherlands); Faure, A. [UJF-Grenoble 1/CNRS, Institut de Planetologie et d' Astrophysique de Grenoble (IPAG) UMR 5274, F-38041 Grenoble (France); Geppert, W. D. [Department of Physics, University of Stockholm, Roslagstullbacken 21, S-10691 Stockholm (Sweden); Gerlich, D. [Technische Universitaet Chemnitz, Department of Physics, D-09107 Chemnitz (Germany); Galli, D. [INAF-Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, I-50125 Firenze (Italy); and others

    2012-03-01

    We present a novel chemical database for gas-phase astrochemistry. Named the KInetic Database for Astrochemistry (KIDA), this database consists of gas-phase reactions with rate coefficients and uncertainties that will be vetted to the greatest extent possible. Submissions of measured and calculated rate coefficients are welcome, and will be studied by experts before inclusion into the database. Besides providing kinetic information for the interstellar medium, KIDA is planned to contain such data for planetary atmospheres and for circumstellar envelopes. Each year, a subset of the reactions in the database (kida.uva) will be provided as a network for the simulation of the chemistry of dense interstellar clouds with temperatures between 10 K and 300 K. We also provide a code, named Nahoon, to study the time-dependent gas-phase chemistry of zero-dimensional and one-dimensional interstellar sources.
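A network of the kida.uva kind drives rate equations of the form dn_i/dt = sum of k n_r1 n_r2 terms over its reactions. The sketch below integrates a deliberately tiny two-reaction system with forward Euler; the species, reactions and rate coefficients are invented for illustration, and Nahoon itself is a far more complete solver.

```python
# Hedged sketch of zero-dimensional gas-phase kinetics of the kind a KIDA
# network feeds. Species, reactions, and k values are invented placeholders,
# not taken from kida.uva.

# reaction list: (reactant1, reactant2, product, k [cm^3 s^-1])
reactions = [
    ("A", "B", "C", 1.0e-9),
    ("C", "B", "A", 5.0e-10),
]

def step(n, dt):
    """One forward-Euler step of the rate equations (enough for a sketch)."""
    dn = {s: 0.0 for s in n}
    for r1, r2, prod, k in reactions:
        rate = k * n[r1] * n[r2]  # two-body reaction rate, cm^-3 s^-1
        dn[r1] -= rate
        dn[r2] -= rate
        dn[prod] += rate
    return {s: n[s] + dt * dn[s] for s in n}

n = {"A": 1.0e4, "B": 1.0e4, "C": 0.0}  # number densities, cm^-3
for _ in range(1000):
    n = step(n, dt=1.0e3)  # time step in seconds
print(n)
```

Note that the two reactions conserve the combined abundance of A and C, which is a handy sanity check on any integrator; production solvers use stiff ODE methods rather than forward Euler.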

  19. Concurrency control in distributed database systems

    CERN Document Server

    Cellary, W; Gelenbe, E

    1989-01-01

    Distributed Database Systems (DDBS) may be defined as integrated database systems composed of autonomous local databases, geographically distributed and interconnected by a computer network. The purpose of this monograph is to present DDBS concurrency control algorithms and their related performance issues. The most recent results have been taken into consideration. A detailed analysis and selection of these results has been made so as to include those which will promote applications and progress in the field. The application of the methods and algorithms presented is not limited to DDBSs but a

  20. Research on computer virus database management system

    Science.gov (United States)

    Qi, Guoquan

    2011-12-01

    The growing proliferation of computer viruses has become a lethal threat to, and a research focus of, network information security. New viruses keep emerging, the total number of viruses keeps growing, and virus classification is increasingly complex. Virus naming cannot be unified because agencies capture samples at different times. Although each agency maintains its own virus database, communication between agencies is lacking, virus information is often incomplete, and sample information may be scarce. This paper introduces the current state of virus database construction at home and abroad, analyzes how to standardize and complete the description of virus characteristics, and then gives a design scheme for a computer virus database that provides information integrity, storage security and manageability.

  1. E3 Staff Database

    Data.gov (United States)

    US Agency for International Development — The E3 Staff database is maintained by the E3 PDMS (Professional Development & Management Services) office. The database is MySQL. It is manually updated by E3 staff as...

  2. Native Health Research Database

    Science.gov (United States)

  3. Physiological Information Database (PID)

    Science.gov (United States)

    EPA has developed a physiological information database (created using Microsoft ACCESS) intended to be used in PBPK modeling. The database contains physiological parameter values for humans from early childhood through senescence as well as similar data for laboratory animal spec...

  4. Cell Centred Database (CCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Cell Centered Database (CCDB) is a web accessible database for high resolution 2D, 3D and 4D data from light and electron microscopy, including correlated imaging.

  5. Database Urban Europe

    NARCIS (Netherlands)

    Sleutjes, B.; de Valk, H.A.G.

    2016-01-01

    Database Urban Europe: ResSegr database on segregation in The Netherlands. Collaborative research on residential segregation in Europe 2014–2016 funded by JPI Urban Europe (Joint Programming Initiative Urban Europe).

  6. Scopus database: a review.

    Science.gov (United States)

    Burnham, Judy F

    2006-03-08

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can only afford one, the choice must be based on institutional needs.
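The forward/backward searching mentioned above can be pictured on a toy citation graph: backward search follows a paper's reference list, while forward search finds the papers that cite it. The paper identifiers and links below are invented for illustration; Scopus exposes this capability through its own indexes, not through code like this.

```python
# Hedged sketch of backward vs. forward citation search on an invented graph.

cites = {  # paper -> papers it references (its backward links)
    "P3": ["P1", "P2"],
    "P2": ["P1"],
    "P1": [],
}

def backward(paper):
    """References a paper cites (searching backward in time)."""
    return cites.get(paper, [])

def forward(paper):
    """Papers that cite the given paper (searching forward in time)."""
    return sorted(p for p, refs in cites.items() if paper in refs)

print(backward("P3"))  # ['P1', 'P2']
print(forward("P1"))   # ['P2', 'P3']
```

A forward index is just the transpose of the backward one; citation databases precompute both so either direction is a single lookup.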

  7. Future database machine architectures

    OpenAIRE

    Hsiao, David K.

    1984-01-01

    There are many software database management systems available on many general-purpose computers ranging from micros to super-mainframes. Database machines as backend computers can offload the database management work from the mainframe so that we can retain the same mainframe longer. However, the database backend must also demonstrate lower cost, higher performance, and newer functionality. Some of the fundamental architecture issues in the design of high-performance and great-capacity datab...

  8. MPlus Database system

    Energy Technology Data Exchange (ETDEWEB)

    1989-01-20

    The MPlus Database program was developed to keep track of mail received. This system was developed by TRESP for the Department of Energy/Oak Ridge Operations. The MPlus Database program is a PC application, written in "dBase III+" and compiled with "Clipper" into an executable file. The files you need to run the MPlus Database program can be installed on a Bernoulli or a hard drive. This paper discusses the use of this database.

  9. Mesh network simulation

    OpenAIRE

    Pei Ping; YURY N. PETRENKO

    2015-01-01

    A mesh network simulation framework which provides a powerful and concise modeling chain for a network structure is introduced in this report. Mesh networks have a special topological structure. The paper investigates message transfer in a wireless mesh network simulation and compares it with message transfer in a cellular network simulation. Finally, the experimental results showed that mesh networks follow a different transmission principle from cellular networks, and multi...

  10. CTD_DATABASE - Cascadia tsunami deposit database

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The Cascadia Tsunami Deposit Database contains data on the location and sedimentological properties of tsunami deposits found along the Cascadia margin. Data have...

  11. A database/knowledge structure for a robotics vision system

    Science.gov (United States)

    Dearholt, D. W.; Gonzales, N. N.

    1987-01-01

    Desirable properties of robotics vision database systems are given, and structures which possess properties appropriate for some aspects of such database systems are examined. Included in the structures discussed is a family of networks in which link membership is determined by measures of proximity between pairs of the entities stored in the database. This type of network is shown to have properties which guarantee that the search for a matching feature vector is monotonic. That is, the database can be searched with no backtracking, if there is a feature vector in the database which matches the feature vector of the external entity which is to be identified. The construction of the database is discussed, and the search procedure is presented. A section on the support provided by the database for description of the decision-making processes and the search path is also included.
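The monotonic, backtrack-free search described above can be sketched as a greedy walk over a proximity network: at each node, move to the neighbor whose feature vector is closest to the query, and stop when no neighbor improves. The feature vectors and links below are invented; in the paper it is the construction of the network that guarantees monotonicity, whereas this toy graph merely happens to satisfy it.

```python
# Hedged sketch of greedy, backtrack-free search in a proximity network.
# Feature vectors and links are invented placeholders.
import math

features = {
    "n1": (0.0, 0.0),
    "n2": (1.0, 0.0),
    "n3": (1.0, 1.0),
    "n4": (2.0, 1.0),
}
links = {  # neighbors chosen by proximity between stored entities
    "n1": ["n2"],
    "n2": ["n1", "n3"],
    "n3": ["n2", "n4"],
    "n4": ["n3"],
}

def greedy_search(start, query):
    """Walk to the neighbor nearest the query; stop when nothing improves."""
    current = start
    while True:
        best = min(links[current],
                   key=lambda nb: math.dist(features[nb], query))
        if math.dist(features[best], query) >= math.dist(features[current], query):
            return current  # the walk never revisits a node: no backtracking
        current = best

print(greedy_search("n1", (2.0, 1.0)))  # walks n1 -> n2 -> n3 -> n4
```

Because the distance to the query strictly decreases at every move, each walk terminates after at most one visit per node, which is the monotonic-search property claimed for the database.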

  12. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available [ Credits ] BLAST Search Image Search Home About Archive Update History Contact us Trypanosomes Database... Database Description General information of database Database name Trypanosomes Database...rmation and Systems Yata 1111, Mishima, Shizuoka 411-8540, JAPAN E mail: Database... classification Protein sequence databases Organism Taxonomy Name: Trypanosoma Taxonomy ID: 5690 Taxonomy Na...me: Homo sapiens Taxonomy ID: 9606 Database description The Trypanosomes database is a database providing th

  13. Database Description - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available [ Credits ] BLAST Search Image Search Home About Archive Update History Contact us PLACE Database... Description General information of database Database name A Database of Plant Cis-acting Regu...araki 305-8602, Japan National Institute of Agrobiological Sciences E-mail : Database classification Plant database...s Organism Taxonomy Name: Tracheophyta Taxonomy ID: 58023 Database description PLACE is a database of... motifs found in plant cis-acting regulatory DNA elements based on previously pub

  14. Keyword Search in Databases

    CERN Document Server

    Yu, Jeffrey Xu; Chang, Lijun

    2009-01-01

    It has become highly desirable to provide users with flexible ways to query/search information over databases as simple as keyword search like Google search. This book surveys the recent developments on keyword search over databases, and focuses on finding structural information among objects in a database using a set of keywords. Such structural information to be returned can be either trees or subgraphs representing how the objects, that contain the required keywords, are interconnected in a relational database or in an XML database. The structural keyword search is completely different from

  15. An Interoperable Cartographic Database

    Directory of Open Access Journals (Sweden)

    Slobodanka Ključanin

    2007-05-01

    Full Text Available The concept of producing a prototype of an interoperable cartographic database is explored in this paper, including the possibilities of integrating different geospatial data into the database management system and visualizing them on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relational database, spatial analysis, and definition and visualization of the database content in the form of a map on the Internet.

  16. Experience in running relational databases on clustered storage

    CERN Document Server

    Aparicio, Ruben Gaspar

    2015-01-01

    For the past eight years, the CERN IT Database group has based its backend storage on a NAS (Network-Attached Storage) architecture, providing database access via the NFS (Network File System) protocol. In the last two and a half years, our storage has evolved from a scale-up architecture to a scale-out one. This paper describes our setup and a set of functionalities providing key features to other services, such as Database on Demand [1] and the CERN Oracle backup and recovery service. It also outlines a possible trend of evolution that storage for databases could follow.

  18. The NCBI Taxonomy database.

    Science.gov (United States)

    Federhen, Scott

    2012-01-01

    The NCBI Taxonomy database (http://www.ncbi.nlm.nih.gov/taxonomy) is the standard nomenclature and classification repository for the International Nucleotide Sequence Database Collaboration (INSDC), comprising the GenBank, ENA (EMBL) and DDBJ databases. It includes organism names and taxonomic lineages for each of the sequences represented in the INSDC's nucleotide and protein sequence databases. The taxonomy database is manually curated by a small group of scientists at the NCBI who use the current taxonomic literature to maintain a phylogenetic taxonomy for the source organisms represented in the sequence databases. The taxonomy database is a central organizing hub for many of the resources at the NCBI; it provides a means for clustering elements within other domains of the NCBI web site, for internal linking between domains of the Entrez system, and for linking out to taxon-specific external resources on the web. Our primary purpose is to index the domain of sequences as conveniently as possible for our user community.
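Taxonomic lineages of the kind described above can be sketched as a walk over a parent-pointer table (taxid to parent taxid), which is the general shape of the data the taxonomy database distributes. The tiny table below is a truncated toy: the well-known identifier 9606 (Homo sapiens) is real, but the tree jumps straight to the root and should not be read as actual NCBI data.

```python
# Hedged sketch of lineage lookup over parent pointers (taxid -> parent).
# The table is an invented, heavily truncated toy, not an NCBI data dump.

parent = {
    9606: 9605,    # Homo sapiens -> genus (toy link)
    9605: 9604,    # genus -> family (toy link)
    9604: 1,       # toy shortcut straight to the root
    1: 1,          # the root points to itself
}

def lineage(taxid):
    """Walk parent pointers up to the root (taxid 1)."""
    path = [taxid]
    while path[-1] != 1:
        path.append(parent[path[-1]])
    return path

print(lineage(9606))  # [9606, 9605, 9604, 1]
```

This parent-pointer shape is what makes the database usable as an organizing hub: any record tagged with a taxid can be rolled up to any ancestor rank with a simple walk.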

  19. An audiovisual database of English speech sounds

    Science.gov (United States)

    Frisch, Stefan A.; Nikjeh, Dee Adams

    2003-10-01

    A preliminary audiovisual database of English speech sounds has been developed for teaching purposes. This database contains all Standard English speech sounds produced in isolated words in word-initial, word-medial, and word-final position, unless not allowed by English phonotactics. There is one example of each word spoken by a male and a female talker. The database consists of an audio recording, video of the face from a 45 deg angle off center, and ultrasound video of the tongue in the midsagittal plane. The files contained in the database are suitable for examination with the Wavesurfer freeware program in audio or video modes [Sjolander and Beskow, KTH Stockholm]. This database is intended as a multimedia reference for students in phonetics or speech science. A demonstration and plans for further development will be presented.

  20. Examining cross-database global training to evaluate five different methods for ventricular beat classification.

    Science.gov (United States)

    Chudácek, V; Georgoulas, G; Lhotská, L; Stylios, C; Petrík, M; Cepek, M

    2009-07-01

    The detection of ventricular beats in Holter recordings is a task of great importance, since it can direct clinicians toward the parts of the electrocardiogram record that might be crucial for determining the final diagnosis. Although a fair amount of research work already deals with ventricular beat detection in Holter recordings, the vast majority uses a local training approach, which is highly disputable from the point of view of any practical, real-life application. In this paper, we compare five well-known methods: a classical decision tree approach and its variant with fuzzy rules, a self-organizing map clustering method with template matching for classification, a back-propagation neural network, and a support vector machine classifier, all examined using the same global cross-database approach for training and testing. For this task two databases were used: the MIT-BIH database and the AHA database. Both databases are required for testing any newly developed algorithm for Holter beat classification that is going to be deployed on the EU market. Under cross-database global training, the classifier is trained with the beats from the records of one database, and the records from the other database are used for testing. The results of all the methods are compared and evaluated using the measures of sensitivity and specificity. The support vector machine classifier is the best of the five we tested, achieving an average sensitivity of 87.20% and an average specificity of 91.57%, which outperforms nearly all published algorithms when applied in the context of a similar global training approach.
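The two evaluation measures quoted above reduce to simple ratios over confusion-matrix counts. The sketch below uses invented counts (chosen only to echo the reported percentages), not the actual MIT-BIH/AHA results.

```python
# Hedged sketch of the two reported measures, computed from invented
# confusion-matrix counts of ventricular-beat detection.

def sensitivity(tp, fn):
    """Fraction of true ventricular beats the classifier detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of non-ventricular beats correctly left unflagged."""
    return tn / (tn + fp)

tp, fn, tn, fp = 872, 128, 9157, 843  # invented counts for illustration
print(f"sensitivity {sensitivity(tp, fn):.2%}, "
      f"specificity {specificity(tn, fp):.2%}")
# prints: sensitivity 87.20%, specificity 91.57%
```

In a cross-database protocol these counts would be accumulated only over beats from the held-out database, never from the training one.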

  1. The essentials of a database quality process

    Directory of Open Access Journals (Sweden)

    Dorothy M Blakeslee

    2003-01-01

    Full Text Available Many steps are involved in the process of turning an initial concept for a database into a finished product that meets the needs of its user community. In this paper, we describe those steps in the context of a four-phase process with particular emphasis on the quality-related issues that need to be addressed in each phase to ensure that the final product is a high quality database. The basic requirements for a successful database quality process are presented with specific examples drawn from experience gained in the Standard Reference Data Program at the National Institute of Standards and Technology.

  2. The Development and Usage of the Overseas Sinology Database

    Directory of Open Access Journals (Sweden)

    Ling Bao

    2007-12-01

    Full Text Available The Overseas Sinology Database is composed of three databases: scholar, organization, and journal. The thesis database is regarded as separate and is attached to the scholar database. The database information comes from major areas of the world, especially the countries adjacent to China, and updates are done continuously. The Sinology Database is in several different languages and should satisfy the differing needs of data collection and database application. Data quality is strictly controlled during the whole data life cycle, which includes data collection, processing, storage, and access. In addition, metadata conforming to the relevant standards and specifications are created to accompany the data, which facilitates cooperation among different databases. Finally, besides searching, statistical calculation, and sorting, the database is also used for data mining and knowledge discovery. Through these methods, conclusions about changes in Sinology can be drawn, which will aid us in understanding the world and China in particular.

  3. Food Composition Database Format and Structure: A User Focused Approach.

    Science.gov (United States)

    Clancy, Annabel K; Woods, Kaitlyn; McMahon, Anne; Probst, Yasmine

    2015-01-01

    This study aimed to investigate the needs of Australian food composition database users regarding database format, and to relate this to the format of databases available globally. Three semi-structured synchronous online focus groups (M = 3, F = 11) and n = 6 female key informant interviews were recorded. Beliefs surrounding the use, training, understanding, benefits and limitations of food composition data and databases were explored. Verbatim transcriptions underwent preliminary coding followed by thematic analysis with NVivo qualitative analysis software to extract the final themes. Schematic analysis was applied to the final themes related to database format. Desktop analysis also examined the format of six key globally available databases. 24 dominant themes were established, of which five related to format: database use, food classification, framework, accessibility and availability, and data derivation. Desktop analysis revealed that food classification systems varied considerably between databases. Microsoft Excel was a common file format used in all databases, and available software varied between countries. Users also recognised that food composition database format should ideally be designed specifically for the intended use, have a user-friendly food classification system, incorporate accurate data with a clear explanation of data derivation, and feature user input. However, such databases are limited by data availability and resources. Further exploration of data-sharing options should be considered. Furthermore, users' understanding of the limitations of food composition data and databases is essential to the correct application of non-specific databases. Therefore, further exploration of user FCDB training should also be considered.

  4. Food Composition Database Format and Structure: A User Focused Approach.

    Directory of Open Access Journals (Sweden)

    Annabel K Clancy

    Full Text Available This study aimed to investigate the needs of Australian food composition database users regarding database format, and to relate this to the format of databases available globally. Three semi-structured synchronous online focus groups (M = 3, F = 11) and n = 6 female key informant interviews were recorded. Beliefs surrounding the use, training, understanding, benefits and limitations of food composition data and databases were explored. Verbatim transcriptions underwent preliminary coding followed by thematic analysis with NVivo qualitative analysis software to extract the final themes. Schematic analysis was applied to the final themes related to database format. Desktop analysis also examined the format of six key globally available databases. 24 dominant themes were established, of which five related to format: database use, food classification, framework, accessibility and availability, and data derivation. Desktop analysis revealed that food classification systems varied considerably between databases. Microsoft Excel was a common file format used in all databases, and available software varied between countries. Users also recognised that food composition database format should ideally be designed specifically for the intended use, have a user-friendly food classification system, incorporate accurate data with a clear explanation of data derivation, and feature user input. However, such databases are limited by data availability and resources. Further exploration of data-sharing options should be considered. Furthermore, users' understanding of the limitations of food composition data and databases is essential to the correct application of non-specific databases. Therefore, further exploration of user FCDB training should also be considered.

  5. On-Line Databases in Mexico.

    Science.gov (United States)

    Molina, Enzo

    1986-01-01

    Use of online bibliographic databases in Mexico is provided through Servicio de Consulta a Bancos de Informacion, a public service that provides information retrieval, document delivery, translation, technical support, and training services. Technical infrastructure is based on a public packet-switching network and institutional users may receive…

  6. Copyright in Context: The OCLC Database.

    Science.gov (United States)

    Mason, Marilyn Gell

    1988-01-01

    Discusses topics related to OCLC adoption of guidelines for the use and transfer of OCLC-derived records, including the purpose of OCLC; the legal basis of copyrighting; technological change; compilation copyright; rationale for copyright of the OCLC database; impact on libraries; impact on networks; and relationships between OCLC and libraries. A…

  7. Statewide Transition Database: Update. Second Edition.

    Science.gov (United States)

    Repetto, Jeanne B.; And Others

    Project RETAIN (Retention in Education Technical Assistance and Information Network) is a Florida project that assists school districts through identification and dissemination of effective practices that keep students with mild disabilities in school. One part of the project was the development of a database of school district efforts in the area…

  8. Update History of This Database - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Arabidopsis Phenome Database Update History of This Database Date Update contents 2017/02/27... Arabidopsis Phenome Database English archive site is opened. - Arabidopsis Phenome Database (http://jphenom...e.info/?page_id=95) is opened. About This Database Database Description Download License Update History of This Database... Site Policy | Contact Us Update History of This Database - Arabidopsis Phenome Database | LSDB Archive ...

  9. Update History of This Database - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us SKIP Stemcell Database Update History of This Database Date Update contents 2017/03/13 SKIP Stemcell Database... English archive site is opened. 2013/03/29 SKIP Stemcell Database ( https://www.skip.med.k...eio.ac.jp/SKIPSearch/top?lang=en ) is opened. About This Database Database Description Download License Upda...te History of This Database Site Policy | Contact Us Update History of This Database - SKIP Stemcell Database | LSDB Archive ...

  10. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available [ Credits ] BLAST Search Image Search Home About Archive Update History Contact us RMG Database... Description General information of database Database name RMG Alternative name Rice Mitochondri...ational Institute of Agrobiological Sciences E-mail : Database classification Nucleotide Sequence Databases ...Organism Taxonomy Name: Oryza sativa Japonica Group Taxonomy ID: 39947 Database description This database co...e of rice mitochondrial genome and information on the analysis results. Features and manner of utilization of database

  11. 數據資料庫 Numeric Databases

    Directory of Open Access Journals (Sweden)

    Mei-ling Wang Chen

    1989-03-01

    Full Text Available In 1979, the International Communication Bureau of the R.O.C. connected to several U.S. information service centers through the international telecommunication network. Since then, Dialog, ORBIT and BRS have been introduced into the country. However, users are mainly interested in bibliographic databases and seldom know about non-bibliographic, or numeric, databases. This article mainly describes numeric databases: their definition and characteristics, comparison with bibliographic databases, their producers, service systems and users, data elements, a brief introduction by subject, their problems and future, the library's role, and their present use status in the R.O.C.

  12. Report on Final Workshop results

    DEFF Research Database (Denmark)

    Cavalli, Valentino; Dyer, John; Robertson, Dale;

    The SERENATE project held its Final Workshop in Bad Nauheim, Germany on 16-17 June 2003. More than ninety representatives of research and education networking organisations, national governments and funding bodies, network operators, equipment manufacturers and the scientific and education...

  13. Final Project Report: DOE Award FG02-04ER25606 Overlay Transit Networking for Scalable, High Performance Data Communication across Heterogeneous Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Beck, Micah; Moore, Terry

    2007-08-31

    As the flood of data associated with leading edge computational science continues to escalate, the challenge of supporting the distributed collaborations that are now characteristic of it becomes increasingly daunting. The chief obstacles to progress on this front lie less in the synchronous elements of collaboration, which have been reasonably well addressed by new global high performance networks, than in the asynchronous elements, where appropriate shared storage infrastructure seems to be lacking. The recent report from the Department of Energy on the emerging 'data management challenge' captures the multidimensional nature of this problem succinctly: Data inevitably needs to be buffered, for periods ranging from seconds to weeks, in order to be controlled as it moves through the distributed and collaborative research process. To meet the diverse and changing set of application needs that different research communities have, large amounts of non-archival storage are required for transitory buffering, and it needs to be widely dispersed, easily available, and configured to maximize flexibility of use. In today's grid fabric, however, massive storage is mostly concentrated in data centers, available only to those with user accounts and membership in the appropriate virtual organizations, allocated as if its usage were non-transitory, and encapsulated behind legacy interfaces that inhibit the flexibility of use and scheduling. This situation severely restricts the ability of application communities to access and schedule usable storage where and when they need to in order to make their workflow more productive. (p.69f) One possible strategy to deal with this problem lies in creating a storage infrastructure that can be universally shared because it provides only the most generic of asynchronous services. Different user communities then define higher level services as necessary to meet their needs. One model of such a service is a Storage Network

  14. National Database of Geriatrics

    DEFF Research Database (Denmark)

    Kannegaard, Pia Nimann; Vinding, Kirsten L; Hare-Bruun, Helle

    2016-01-01

    AIM OF DATABASE: The aim of the National Database of Geriatrics is to monitor the quality of interdisciplinary diagnostics and treatment of patients admitted to a geriatric hospital unit. STUDY POPULATION: The database population consists of patients who were admitted to a geriatric hospital unit....... Geriatric patients cannot be defined by specific diagnoses. A geriatric patient is typically a frail multimorbid elderly patient with decreasing functional ability and social challenges. The database includes 14-15,000 admissions per year, and the database completeness has been stable at 90% during the past......, percentage of discharges with a rehabilitation plan, and the part of cases where an interdisciplinary conference has taken place. Data are recorded by doctors, nurses, and therapists in a database and linked to the Danish National Patient Register. DESCRIPTIVE DATA: Descriptive patient-related data include...

  15. Conditioning Probabilistic Databases

    CERN Document Server

    Koch, Christoph

    2008-01-01

    Past research on probabilistic databases has studied the problem of answering queries on a static database. Application scenarios of probabilistic databases however often involve the conditioning of a database using additional information in the form of new evidence. The conditioning problem is thus to transform a probabilistic database of priors into a posterior probabilistic database which is materialized for subsequent query processing or further refinement. It turns out that the conditioning problem is closely related to the problem of computing exact tuple confidence values. It is known that exact confidence computation is an NP-hard problem. This has led researchers to consider approximation techniques for confidence computation. However, neither conditioning nor exact confidence computation can be solved using such techniques. In this paper we present efficient techniques for both problems. We study several problem decomposition methods and heuristics that are based on the most successful search techn...
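
    The relationship between tuple confidence and conditioning described above can be illustrated by brute-force enumeration of possible worlds for a tiny tuple-independent database. This is only an illustrative sketch (exponential in the number of tuples); it does not reflect the decomposition techniques of the paper, and the tuples and probabilities are invented.

```python
from itertools import product

# Tuple-independent probabilistic database: each tuple is present in a
# possible world independently, with the given marginal probability.
tuples = {"t1": 0.6, "t2": 0.5}

def worlds(tuples):
    """Enumerate (world, probability) pairs; a world is the set of present tuples."""
    names = list(tuples)
    for bits in product([0, 1], repeat=len(names)):
        w = {n for n, b in zip(names, bits) if b}
        p = 1.0
        for n, b in zip(names, bits):
            p *= tuples[n] if b else 1 - tuples[n]
        yield w, p

def confidence(tuples, name, evidence=lambda w: True):
    """P(tuple present | evidence): renormalize over the worlds satisfying
    the evidence, then sum the mass of worlds containing the tuple."""
    num = sum(p for w, p in worlds(tuples) if evidence(w) and name in w)
    den = sum(p for w, p in worlds(tuples) if evidence(w))
    return num / den

# Prior confidence of t1 equals its marginal probability: 0.6.
print(confidence(tuples, "t1"))
# Conditioning on the evidence "at least one tuple is present"
# raises it to 0.6 / 0.8 = 0.75.
print(confidence(tuples, "t1", evidence=lambda w: len(w) > 0))
```

    Conditioning here is exactly the renormalization step: worlds violating the evidence get probability zero, and the remaining mass is rescaled so the posterior sums to one.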

  16. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with creation of database design for a standard kindergarten, installation of the designed database into the database system Oracle Database 10g Express Edition and demonstration of the administration tasks in this database system. The verification of the database was proved by a developed access application.
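
    The thesis's actual schema is not reproduced here, but the flavor of such a kindergarten design can be sketched with two related tables. The table and column names below are invented for illustration, and SQLite stands in for the Oracle Database 10g Express Edition used in the thesis.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity

# Illustrative two-table design: each child belongs to one class group.
conn.executescript("""
CREATE TABLE class_group (
    group_id   INTEGER PRIMARY KEY,
    group_name TEXT NOT NULL
);
CREATE TABLE child (
    child_id   INTEGER PRIMARY KEY,
    full_name  TEXT NOT NULL,
    birth_date TEXT NOT NULL,
    group_id   INTEGER NOT NULL REFERENCES class_group(group_id)
);
""")
conn.execute("INSERT INTO class_group VALUES (1, 'Bumblebees')")
conn.execute("INSERT INTO child VALUES (1, 'Ada Novak', '2005-04-01', 1)")

# The foreign key now guards against orphaned rows:
try:
    conn.execute("INSERT INTO child VALUES (2, 'Jan Maly', '2005-09-12', 99)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```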

  17. ITS-90 Thermocouple Database

    Science.gov (United States)

    SRD 60 NIST ITS-90 Thermocouple Database (Web, free access)   Web version of Standard Reference Database 60 and NIST Monograph 175. The database gives temperature -- electromotive force (emf) reference functions and tables for the letter-designated thermocouple types B, E, J, K, N, R, S and T. These reference functions have been adopted as standards by the American Society for Testing and Materials (ASTM) and the International Electrotechnical Commission (IEC).

  18. Searching Databases with Keywords

    Institute of Scientific and Technical Information of China (English)

    Shan Wang; Kun-Long Zhang

    2005-01-01

    Traditionally, the SQL query language is used to search data in databases. However, it is inappropriate for end-users, since it is complex and hard to learn. What end-users need is to search databases with keywords, as in web search engines. This paper presents a survey of work on keyword search in databases. It also includes a brief introduction to the SEEKER system, which has been developed.
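
    The basic idea of keyword search over relational data can be sketched by matching a keyword against every text column of a relation. This toy version uses SQLite from Python; the table, columns, and rows are invented for illustration and are not taken from the SEEKER system, which uses far more sophisticated indexing and ranking.

```python
import sqlite3

# Toy relation to search over.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE papers (title TEXT, abstract TEXT)")
conn.executemany("INSERT INTO papers VALUES (?, ?)", [
    ("Keyword search in databases", "A survey of keyword search techniques."),
    ("Query optimization", "Cost-based optimization of SQL queries."),
])

def keyword_search(conn, keyword):
    """Return titles of rows whose title or abstract contains the keyword.
    SQLite's LIKE is case-insensitive for ASCII by default."""
    pattern = f"%{keyword}%"
    cur = conn.execute(
        "SELECT title FROM papers WHERE title LIKE ? OR abstract LIKE ?",
        (pattern, pattern),
    )
    return [row[0] for row in cur]

print(keyword_search(conn, "keyword"))  # matches the first paper only
```

    Real keyword-search systems go further: they join tuples from multiple tables into connected answer trees and rank them, rather than scanning single relations with LIKE.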

  19. Specialist Bibliographic Databases

    OpenAIRE

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A.; Trukhachev, Vladimir I.; Kostyukova, Elena I.; Gerasimov, Alexey N.; Kitas, George D.

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and d...

  20. Smart Location Database - Download

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census block...

  1. Database principles programming performance

    CERN Document Server

    O'Neil, Patrick

    2014-01-01

    Database: Principles Programming Performance provides an introduction to the fundamental principles of database systems. This book focuses on database programming and the relationships between principles, programming, and performance.Organized into 10 chapters, this book begins with an overview of database design principles and presents a comprehensive introduction to the concepts used by a DBA. This text then provides grounding in many abstract concepts of the relational model. Other chapters introduce SQL, describing its capabilities and covering the statements and functions of the programmi

  2. Smart Location Database - Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census block...

  3. Database Publication Practices

    DEFF Research Database (Denmark)

    Bernstein, P.A.; DeWitt, D.; Heuer, A.

    2005-01-01

    There has been a growing interest in improving the publication processes for database research papers. This panel reports on recent changes in those processes and presents an initial cut at historical data for the VLDB Journal and ACM Transactions on Database Systems.

  4. The Danish Melanoma Database

    DEFF Research Database (Denmark)

    Hölmich, Lisbet Rosenkrantz; Klausen, Siri; Spaun, Eva

    2016-01-01

    AIM OF DATABASE: The aim of the database is to monitor and improve the treatment and survival of melanoma patients. STUDY POPULATION: All Danish patients with cutaneous melanoma and in situ melanomas must be registered in the Danish Melanoma Database (DMD). In 2014, 2,525 patients with invasive......, nature, and treatment hereof is registered. In case of death, the cause and date are included. Currently, all data are entered manually; however, data catchment from the existing registries is planned to be included shortly. DESCRIPTIVE DATA: The DMD is an old research database, but new as a clinical...

  5. Danish Gynecological Cancer Database

    DEFF Research Database (Denmark)

    Sørensen, Sarah Mejer; Bjørn, Signe Frahm; Jochumsen, Kirsten Marie

    2016-01-01

    AIM OF DATABASE: The Danish Gynecological Cancer Database (DGCD) is a nationwide clinical cancer database and its aim is to monitor the treatment quality of Danish gynecological cancer patients, and to generate data for scientific purposes. DGCD also records detailed data on the diagnostic measures...... is the registration of oncological treatment data, which is incomplete for a large number of patients. CONCLUSION: The very complete collection of available data from more registries form one of the unique strengths of DGCD compared to many other clinical databases, and provides unique possibilities for validation...

  6. Transporter Classification Database (TCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  7. The Relational Database Dictionary

    CERN Document Server

    Date, C J

    2006-01-01

    Avoid misunderstandings that can affect the design, programming, and use of database systems. Whether you're using Oracle, DB2, SQL Server, MySQL, or PostgreSQL, The Relational Database Dictionary will prevent confusion about the precise meaning of database-related terms (e.g., attribute, 3NF, one-to-many correspondence, predicate, repeating group, join dependency), helping to ensure the success of your database projects. Carefully reviewed for clarity, accuracy, and completeness, this authoritative and comprehensive quick-reference contains more than 600 terms, many with examples, covering i

  8. IVR EFP Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database contains trip-level reports submitted by vessels participating in Exempted Fishery projects with IVR reporting requirements.

  9. Databases for Microbiologists

    Science.gov (United States)

    2015-01-01

    Databases play an increasingly important role in biology. They archive, store, maintain, and share information on genes, genomes, expression data, protein sequences and structures, metabolites and reactions, interactions, and pathways. All these data are critically important to microbiologists. Furthermore, microbiology has its own databases that deal with model microorganisms, microbial diversity, physiology, and pathogenesis. Thousands of biological databases are currently available, and it becomes increasingly difficult to keep up with their development. The purpose of this minireview is to provide a brief survey of current databases that are of interest to microbiologists. PMID:26013493

  10. Veterans Administration Databases

    Science.gov (United States)

    The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.

  11. Residency Allocation Database

    Data.gov (United States)

    Department of Veterans Affairs — The Residency Allocation Database is used to determine allocation of funds for residency programs offered by Veterans Affairs Medical Centers (VAMCs). Information...

  12. World Input-Output Network.

    Directory of Open Access Journals (Sweden)

    Federica Cerina

    Full Text Available Production systems, traditionally analyzed as almost independent national systems, are increasingly connected on a global scale. Only recently becoming available, the World Input-Output Database (WIOD) is one of the first efforts to construct the global multi-regional input-output (GMRIO) tables. By viewing the world input-output system as an interdependent network where the nodes are the individual industries in different economies and the edges are the monetary goods flows between industries, we analyze respectively the global, regional, and local network properties of the so-called world input-output network (WION) and document its evolution over time. At global level, we find that the industries are highly but asymmetrically connected, which implies that micro shocks can lead to macro fluctuations. At regional level, we find that the world production is still operated nationally or at most regionally as the communities detected are either individual economies or geographically well defined regions. Finally, at local level, for each industry we compare the network-based measures with the traditional methods of backward linkages. We find that the network-based measures such as PageRank centrality and community coreness measure can give valuable insights into identifying the key industries.

  13. World Input-Output Network

    Science.gov (United States)

    Cerina, Federica; Zhu, Zhen; Chessa, Alessandro; Riccaboni, Massimo

    2015-01-01

    Production systems, traditionally analyzed as almost independent national systems, are increasingly connected on a global scale. Only recently becoming available, the World Input-Output Database (WIOD) is one of the first efforts to construct the global multi-regional input-output (GMRIO) tables. By viewing the world input-output system as an interdependent network where the nodes are the individual industries in different economies and the edges are the monetary goods flows between industries, we analyze respectively the global, regional, and local network properties of the so-called world input-output network (WION) and document its evolution over time. At global level, we find that the industries are highly but asymmetrically connected, which implies that micro shocks can lead to macro fluctuations. At regional level, we find that the world production is still operated nationally or at most regionally as the communities detected are either individual economies or geographically well defined regions. Finally, at local level, for each industry we compare the network-based measures with the traditional methods of backward linkages. We find that the network-based measures such as PageRank centrality and community coreness measure can give valuable insights into identifying the key industries. PMID:26222389
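
    A network-based measure like the PageRank centrality used above to rank key industries can be sketched with plain power iteration on a small weighted directed graph. The three-industry "economy" and its flow weights below are invented for illustration and are not WIOD data.

```python
# Edges: monetary flows between hypothetical industries (weights invented).
edges = {("steel", "cars"): 5.0, ("energy", "steel"): 3.0,
         ("energy", "cars"): 2.0, ("cars", "energy"): 1.0}
nodes = sorted({n for e in edges for n in e})

def pagerank(nodes, edges, damping=0.85, iters=100):
    """Weighted PageRank by power iteration; rank of dangling nodes
    (no outgoing flow) is spread uniformly."""
    rank = {n: 1.0 / len(nodes) for n in nodes}
    out_weight = {n: sum(w for (u, v), w in edges.items() if u == n)
                  for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for (u, v), w in edges.items():
            new[v] += damping * rank[u] * w / out_weight[u]
        dangling = sum(rank[n] for n in nodes if out_weight[n] == 0)
        for n in nodes:
            new[n] += damping * dangling / len(nodes)
        rank = new
    return rank

rank = pagerank(nodes, edges)
print(sorted(rank, key=rank.get, reverse=True))  # most central industry first
```

    On a real GMRIO table the same iteration runs over thousands of industry nodes; centrality then flags industries whose disruption would propagate most widely through the flow network.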

  15. License - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Trypanosomes Database License License to Use This Database Last updated : 2014/02/04 You may use this database...pecifies the license terms regarding the use of this database and the requirements you must follow in using this database.... The license for this database is specified in the Creative Commons... Attribution-Share Alike 2.1 Japan . If you use data from this database, please be sure attribute this database...pan is found here . With regard to this database, you are licensed to: freely access part or whole of this database

  16. Neutrosophic Relational Database Decomposition

    OpenAIRE

    Meena Arora; Ranjit Biswas; Dr. U.S.Pandey

    2011-01-01

    In this paper we present a method of decomposing a neutrosophic database relation with neutrosophic attributes into basic relational form. Our approach is capable of manipulating incomplete as well as inconsistent information. Fuzzy relations or vague relations can only handle incomplete information. The authors take the neutrosophic relational database [8],[2] to show how imprecise data can be handled in a relational schema.

  17. HIV Structural Database

    Science.gov (United States)

    SRD 102 HIV Structural Database (Web, free access)   The HIV Protease Structural Database is an archive of experimentally determined 3-D structures of Human Immunodeficiency Virus 1 (HIV-1), Human Immunodeficiency Virus 2 (HIV-2) and Simian Immunodeficiency Virus (SIV) Proteases and their complexes with inhibitors or products of substrate cleavage.

  18. Structural Ceramics Database

    Science.gov (United States)

    SRD 30 NIST Structural Ceramics Database (Web, free access)   The NIST Structural Ceramics Database (WebSCD) provides evaluated materials property data for a wide range of advanced ceramics known variously as structural ceramics, engineering ceramics, and fine ceramics.

  19. Odense Pharmacoepidemiological Database (OPED)

    DEFF Research Database (Denmark)

    Hallas, Jesper; Poulsen, Maja Hellfritzsch; Hansen, Morten Rix

    2017-01-01

    The Odense University Pharmacoepidemiological Database (OPED) is a prescription database established in 1990 by the University of Southern Denmark, covering reimbursed prescriptions from the county of Funen in Denmark and the region of Southern Denmark (1.2 million inhabitants). It is still active...

  20. The Danish Anaesthesia Database

    DEFF Research Database (Denmark)

    Antonsen, Kristian; Rosenstock, Charlotte Vallentin; Lundstrøm, Lars Hyldborg

    2016-01-01

    AIM OF DATABASE: The aim of the Danish Anaesthesia Database (DAD) is the nationwide collection of data on all patients undergoing anesthesia. Collected data are used for quality assurance, quality development, and serve as a basis for research projects. STUDY POPULATION: The DAD was founded in 2004...

  1. World Database of Happiness

    NARCIS (Netherlands)

    R. Veenhoven (Ruut)

    1995-01-01

    The World Database of Happiness is an ongoing register of research on subjective appreciation of life. Its purpose is to make the wealth of scattered findings accessible, and to create a basis for further meta-analytic studies. The database involves four sections:
    1.

  2. Balkan Vegetation Database

    NARCIS (Netherlands)

    Vassilev, Kiril; Pedashenko, Hristo; Alexandrova, Alexandra; Tashev, Alexandar; Ganeva, Anna; Gavrilova, Anna; Gradevska, Asya; Assenov, Assen; Vitkova, Antonina; Grigorov, Borislav; Gussev, Chavdar; Filipova, Eva; Aneva, Ina; Knollová, Ilona; Nikolov, Ivaylo; Georgiev, Georgi; Gogushev, Georgi; Tinchev, Georgi; Pachedjieva, Kalina; Koev, Koycho; Lyubenova, Mariyana; Dimitrov, Marius; Apostolova-Stoyanova, Nadezhda; Velev, Nikolay; Zhelev, Petar; Glogov, Plamen; Natcheva, Rayna; Tzonev, Rossen; Boch, Steffen; Hennekens, Stephan M.; Georgiev, Stoyan; Stoyanov, Stoyan; Karakiev, Todor; Kalníková, Veronika; Shivarov, Veselin; Russakova, Veska; Vulchev, Vladimir

    2016-01-01

    The Balkan Vegetation Database (BVD; GIVD ID: EU-00-019; http://www.givd.info/ID/EU-00- 019) is a regional database that consists of phytosociological relevés from different vegetation types from six countries on the Balkan Peninsula (Albania, Bosnia and Herzegovina, Bulgaria, Kosovo, Montenegro

  4. Biological Macromolecule Crystallization Database

    Science.gov (United States)

    SRD 21 Biological Macromolecule Crystallization Database (Web, free access)   The Biological Macromolecule Crystallization Database and NASA Archive for Protein Crystal Growth Data (BMCD) contains the conditions reported for the crystallization of proteins and nucleic acids used in X-ray structure determinations and archives the results of microgravity macromolecule crystallization studies.

  6. A Quality System Database

    Science.gov (United States)

    Snell, William H.; Turner, Anne M.; Gifford, Luther; Stites, William

    2010-01-01

    A quality system database (QSD), and software to administer the database, were developed to support recording of administrative nonconformance activities that involve requirements for documentation of corrective and/or preventive actions, which can include ISO 9000 internal quality audits and customer complaints.

  7. An organic database system

    NARCIS (Netherlands)

    M.L. Kersten (Martin); A.P.J.M. Siebes (Arno)

    1999-01-01

    The pervasive penetration of database technology may suggest that we have reached the end of the database research era. The contrary is true. Emerging technology, in hardware, software, and connectivity, brings a wealth of opportunities to push technology to a new level of maturity.

  8. Atomic Spectra Database (ASD)

    Science.gov (United States)

    SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access)   This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.

  11. Database Description - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Yeast Interacting Proteins Database Database Description General information of database Database name Yeast... Interacting Proteins Database Alternative name - Creator Creator Name: Takashi Ito* Creator Affiliation: Di...-4-7136-3989 FAX: +81-4-7136-3979 E-mail : Database classification Metabolic and Signaling Pathways - Protei...n-protein interactions Organism Taxonomy Name: Saccharomyces cerevisiae Taxonomy ID: 4932 Database descripti...ive yeast two-hybrid analysis of budding yeast proteins. Features and manner of utilization of database Prot

  12. Constraint based modeling in R using metabolic reconstruction databases

    NARCIS (Netherlands)

    Gavai, A.K.; Hettinga, H.; Leunissen, J.A.M.

    2015-01-01

    This package provides an interface to simulate metabolic reconstruction from the BiGG database(http://bigg.ucsd.edu/) and other metabolic reconstruction databases. The package facilitates flux balance analysis (FBA) and the sampling of feasible flux distributions. Metabolic networks and estimated fl
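
    The flux balance analysis (FBA) that such packages perform amounts to maximizing an objective flux subject to steady-state mass balance S·v = 0 and capacity bounds on each reaction. The toy linear pathway below (invented, not from the BiGG database) is small enough to solve by scanning candidate flux vectors directly; a real solver would use linear programming.

```python
# Stoichiometric matrix for a linear toy pathway:  -> A -> B ->
# Rows are metabolites (A, B); columns are reactions (uptake, conversion, export).
S = [
    [1, -1, 0],   # A: produced by uptake, consumed by conversion
    [0, 1, -1],   # B: produced by conversion, consumed by export
]
bounds = [(0, 10), (0, 8), (0, 100)]  # flux capacity per reaction
objective = 2                          # index of the flux to maximize (export)

def is_steady_state(S, v, tol=1e-9):
    """Mass balance: S @ v == 0 for every metabolite."""
    return all(abs(sum(s * x for s, x in zip(row, v))) < tol for row in S)

def fba_brute(S, bounds, objective, steps=50):
    """Grid-search sketch of FBA: scan feasible flux vectors and keep the
    best steady-state one. In this linear pathway, steady state forces
    v0 == v1 == v2, so scanning a single flux value suffices."""
    best, best_obj = None, -1.0
    for i in range(steps + 1):
        f = bounds[1][1] * i / steps
        v = [f, f, f]
        if is_steady_state(S, v) and all(lo <= x <= hi
                                         for x, (lo, hi) in zip(v, bounds)):
            if v[objective] > best_obj:
                best, best_obj = v, v[objective]
    return best

print(fba_brute(S, bounds, objective))  # export flux capped by the 0..8 bound
```

    The optimum hits the tightest capacity in the pathway (the conversion step's bound of 8), which is exactly the kind of bottleneck FBA is used to identify in genome-scale metabolic reconstructions.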

  13. Common hyperspectral image database design

    Science.gov (United States)

    Tian, Lixun; Liao, Ningfang; Chai, Ali

    2009-11-01

    This paper introduces a common hyperspectral image database built with a demand-oriented database design method (CHIDB), which integrates ground-based spectra, standardized hyperspectral cubes, and spectral analysis to serve a range of applications. The paper presents an integrated approach to retrieving spectral and spatial patterns from remotely sensed imagery using state-of-the-art data mining and advanced database technologies; data mining ideas and functions were incorporated into CHIDB to make it better suited to agricultural, geological, and environmental applications. A broad range of data from multiple regions of the electromagnetic spectrum is supported, including ultraviolet, visible, near-infrared, thermal infrared, and fluorescence. CHIDB is based on the .NET framework and designed with an MVC architecture comprising five main functional modules: data importer/exporter, image/spectrum viewer, data processor, parameter extractor, and on-line analyzer. The original data are stored in SQL Server 2008 for efficient search, query, and update, and advanced spectral image processing techniques such as parallel processing in C# are used. Finally, an application case in agricultural disease detection is presented.

  14. Impact of the codec and various QoS methods on the final quality of the transferred voice in an IP network

    Science.gov (United States)

    Slavata, Oldřich; Holub, Jan

    2015-02-01

    This paper deals with an analysis of the relation between the codec that is used, the QoS method, and the final voice transmission quality. The Cisco 2811 router is used for adjusting QoS. VoIP client Linphone is used for adjusting the codec. The criterion for transmission quality is the MOS parameter investigated with the ITU-T P.862 PESQ and P.863 POLQA algorithms.

  15. Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Church, Bruce W

    2008-10-15

    Most prokaryotes of interest to DOE are poorly understood. Even when full genomic sequences are available, the functions of only a small number of gene products are clear. The critical question is how best to infer the most probable network architectures in cells that are poorly characterized. The project goal is to create a computational hypothesis testing (CHT) framework that combines large-scale dynamical simulation, a database of bioinformatics-derived probable interactions, and numerical parallel-architecture data-fitting routines to explore many “what if?” hypotheses about the functions of genes and proteins within pathways and their downstream effects on molecular concentration profiles and corresponding phenotypes. From this framework we expect to infer signal transduction pathways and gene expression networks in prokaryotes. Detailed mechanistic models of E. coli have been developed that directly incorporate DNA sequence information. The CHT framework is implemented in the NIEngine network inference software. NIEngine has been applied to recover gene regulatory networks in E. coli to assess performance. Application to Shewanella oneidensis and other organisms of interest to DOE will be conducted in partnership with Jim Collins' lab at Boston University and other academic partners. The CHT framework has also found broad application in the automated learning of biology for purposes of improving human health.

  16. Drug-target interaction prediction: databases, web servers and computational models.

    Science.gov (United States)

    Chen, Xing; Yan, Chenggang Clarence; Zhang, Xiaotian; Zhang, Xu; Dai, Feng; Yin, Jian; Zhang, Yongdong

    2016-07-01

    Identification of drug-target interactions is an important process in drug discovery. Although high-throughput screening and other biological assays are becoming available, experimental methods for drug-target interaction identification remain extremely costly, time-consuming and challenging even nowadays. Therefore, various computational models have been developed to predict potential drug-target associations on a large scale. In this review, databases and web servers involved in drug-target identification and drug discovery are summarized. In addition, we introduce some state-of-the-art computational models for drug-target interaction prediction, including network-based methods, machine learning-based methods and others. In particular, for the machine learning-based methods, much attention was paid to supervised and semi-supervised models, which differ essentially in their adoption of negative samples. Although significant improvements in drug-target interaction prediction have been obtained by many effective computational models, both network-based and machine learning-based methods have their respective disadvantages. Furthermore, we discuss the future directions of network-based drug discovery and network approaches for personalized drug discovery based on personalized medicine, genome sequencing, tumor clone-based networks and cancer hallmark-based networks. Finally, we discuss a new evaluation validation framework and the formulation of the drug-target interaction prediction problem as a more realistic regression problem based on quantitative bioactivity data.
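
    The network-based flavor of prediction mentioned above can be sketched very simply: score an unobserved drug-target pair by letting drugs similar to the query drug (here, Jaccard similarity of their known target sets) vote for the target. The drug names and interaction data are invented, and real methods use richer similarity measures and network propagation.

```python
# Known drug-target interactions (hypothetical data).
interactions = {
    "drugA": {"T1", "T2"},
    "drugB": {"T1", "T2", "T3"},
    "drugC": {"T4"},
}

def jaccard(a, b):
    """Similarity of two target sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def predict_score(interactions, drug, target):
    """Similarity-weighted vote: each other drug that is known to hit the
    target contributes its similarity to `drug`, normalized by the total
    similarity mass."""
    others = [d for d in interactions if d != drug]
    sims = [(d, jaccard(interactions[drug], interactions[d])) for d in others]
    total = sum(s for _, s in sims)
    if total == 0:
        return 0.0
    return sum(s for d, s in sims if target in interactions[d]) / total

# drugA shares two of drugB's three targets, so the unobserved pair
# (drugA, T3) receives a high score while (drugA, T4) receives none.
print(predict_score(interactions, "drugA", "T3"))
print(predict_score(interactions, "drugA", "T4"))
```

    The choice of negative samples discussed in the review matters here too: pairs with score zero are not necessarily true negatives, only pairs without supporting network evidence.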

  17. Reclamation research database

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2007-07-01

    A reclamation research database was compiled to help stakeholders search publications and research related to the reclamation of Alberta's oil sands region. New publications are added to the database by the Cumulative Environmental Management Association (CEMA), a nonprofit association whose mandate is to develop frameworks and guidelines for the management of cumulative environmental effects in the oil sands region. A total of 514 research papers have been compiled in the database to date. Topics include recent research on hydrology, aquatic and terrestrial ecosystems, laboratory studies on biodegradation, and the effects of oil sands processing on micro-organisms. The database includes a wide variety of studies related to reconstructed wetlands as well as the ecological effects of hydrocarbons on phytoplankton and other organisms. The database also records the format in which each study is available, as well as the authors' affiliations. Links to external abstracts and details of source information are provided where available.

  18. The LHCb configuration database

    CERN Document Server

    Abadie, L; Van Herwijnen, Eric; Jacobsson, R; Jost, B; Neufeld, N

    2005-01-01

    The aim of the LHCb configuration database is to store information about all the controllable devices of the detector. The experiment's control system (which uses PVSS) will configure, start up and monitor the detector from the information in the configuration database. The database will contain devices with their properties, connectivity and hierarchy. The ability to store and rapidly retrieve huge amounts of data, and navigability between devices, are important requirements. We have collected use cases to ensure the completeness of the design. Using the entity-relationship modelling technique, we describe the use cases as classes with attributes and links. We designed the schema for the tables using relational diagrams. This methodology has been applied to the TFC (switches) and DAQ system. Other parts of the detector will follow later. The database has been implemented using Oracle to benefit from central CERN database support. The project also foresees the creation of tools to populate, maintain, and co...

  19. Cascadia Tsunami Deposit Database

    Science.gov (United States)

    Peters, Robert; Jaffe, Bruce; Gelfenbaum, Guy; Peterson, Curt

    2003-01-01

    The Cascadia Tsunami Deposit Database contains data on the location and sedimentological properties of tsunami deposits found along the Cascadia margin. Data have been compiled from 52 studies, documenting 59 sites from northern California to Vancouver Island, British Columbia that contain known or potential tsunami deposits. Bibliographical references are provided for all sites included in the database. Cascadia tsunami deposits are usually seen as anomalous sand layers in coastal marsh or lake sediments. The studies cited in the database use numerous criteria based on sedimentary characteristics to distinguish tsunami deposits from sand layers deposited by other processes, such as river flooding and storm surges. Several studies cited in the database contain evidence for more than one tsunami at a site. Data categories include age, thickness, layering, grainsize, and other sedimentological characteristics of Cascadia tsunami deposits. The database documents the variability observed in tsunami deposits found along the Cascadia margin.

  20. Database Description - DGBY | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available [ Credits ] BLAST Search Image Search Home About Archive Update History Contact us DGBY Database... Description General information of database Database name DGBY Alternative name Database for G...-12 Kannondai, Tsukuba, Ibaraki 305-8642 Japan Akira Ando TEL: +81-29-838-8066 E-mail: Database classification Microarray Data and other Gene Expression Databases Organism Taxonomy Name: Saccharomyces cerevisiae Taxonomy ID: 4932 Database description Baker's yeast Saccharomyces cerevisiae is an e

  1. Database Description - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available [ Credits ] BLAST Search Image Search Home About Archive Update History Contact us RPSD Database... Description General information of database Database name RPSD Alternative name Summary inform...n National Institute of Agrobiological Sciences Toshimasa Yamazaki E-mail : Database classification Structure Database...idopsis thaliana Taxonomy ID: 3702 Taxonomy Name: Glycine max Taxonomy ID: 3847 Database description We have...nts such as rice, and have put together the result and related informations. This database contains the basi

  2. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Gurney, Kevin R

    2015-01-12

    This document constitutes the final report under DOE grant DE-FG-08ER64649. The organization of this document is as follows: first, I review the original scope of the proposed research. Second, I present the current draft of a paper nearing submission to Nature Climate Change on the initial results of this funded effort. Finally, I present the last phase of the research under this grant, which has supported a Ph.D. student. To that end, I present the graduate student's proposed research, a portion of which is completed and reflected in the paper nearing submission. This final work phase will be completed in the next 12 months and will likely result in 1-2 additional publications; we consider the results (as exemplified by the current paper) to be of high quality. The continuing results will acknowledge the funding provided by DOE grant DE-FG-08ER64649.

  3. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    DeTar, Carleton [P.I.

    2012-12-10

    This document constitutes the Final Report for award DE-FC02-06ER41446 as required by the Office of Science. It summarizes accomplishments and provides copies of scientific publications with significant contribution from this award.

  4. The Water Cycle Solutions Network

    Science.gov (United States)

    Houser, P.; Belvedere, D.; Imam, B.; Schiffer, R.; Schlosser, C.; Gupta, H.; Welty, C.; Vörösmarty, C.; Matthews, D.; Lawford, R.

    2006-12-01

    The goal of the Water Cycle Solutions Network is to improve and optimize the sustained ability of water cycle researchers, stakeholders, organizations and networks to interact, identify, harness, and extend research results to augment decision support tools and meet national needs. WaterNet will engage relevant NASA water cycle research resources and community-of-practice organizations to develop what we term an "actionable database" that can be used to communicate and connect water cycle research results (WCRs) towards the improvement of water-related Decision Support Tools (DSTs). An actionable database contains sufficient knowledge about its nodes and their heritage that the connections between those nodes are identifiable and robust. Recognizing the many existing, highly valuable water-related science and application networks, we will focus the balance of our efforts on enabling their interoperability in a solutions network context. We will initially focus on the identification, collection, and analysis of the two end points, namely the WCRs and the water-related DSTs. We will then develop strategies to connect these two end points via innovative communication, improved user access to NASA resources, improved appreciation within the water cycle research community for DST requirements, improved policymaker, management and stakeholder knowledge of NASA research and application products, and improved identification of pathways for progress. Finally, we will develop relevant benchmarking and metrics to understand the network's characteristics, optimize its performance, and establish sustainability. WaterNet will deliver numerous pre-evaluation reports that identify pathways for improving the collective ability of the water cycle community to routinely harness WCRs that address crosscutting water cycle challenges.

  5. Final report : groundwater monitoring at Morrill, Kansas, in September 2005 and March 2006, with expansion of the monitoring network in January 2006.

    Energy Technology Data Exchange (ETDEWEB)

    LaFreniere, L. M.; Environmental Science Division

    2007-06-30

    This document reports the results of groundwater monitoring in September 2005 and March 2006 at the grain storage facility formerly operated at Morrill, Kansas, by the Commodity Credit Corporation of the U.S. Department of Agriculture (CCC/USDA). These activities were the first and second twice yearly sampling events of the two-year monitoring program approved by the CCC/USDA and Kansas Department of Health and Environment (KDHE) project managers. The monitoring network sampled in September 2005 consisted of 9 monitoring wells (MW1S-MW5S and MW1D [installed in the mid 1990s] and MW6S-MW8S [installed in 2004]), plus 3 private wells (Isch, Rillinger, and Stone). The groundwater samples collected in this first event were analyzed for volatile organic compounds (VOCs), dissolved hydrogen, and additional groundwater parameters to aid in evaluating the potential for reductive dechlorination processes. After the monitoring in September 2005, Argonne recommended expansion of the initial monitoring network. Previous sampling (August 2004) had already suggested that the initial network was inadequate to delineate the extent of the carbon tetrachloride plume. With the approval of the CCC/USDA and KDHE project managers, the monitoring network was expanded in January 2006 through the installation of 3 additional monitoring wells (MW9S-MW11S). Details of the monitoring well installations are reported in this document. The expanded monitoring network of 12 monitoring wells (MW1S-MW11S and MW1D) and 3 private wells (Isch, Rillinger, and Stone) was sampled in March 2006, the second monitoring event in the planned two-year program. Results of analyses for VOCs showed minor increases or decreases in contaminant levels at various locations but indicated that the leading edge of the contaminant plume is approaching the intermittent stream leading to Terrapin Creek. The groundwater samples collected in March 2006 were also analyzed for additional groundwater parameters to aid in the

  6. Conceptual and logical level of database modeling

    Science.gov (United States)

    Hunka, Frantisek; Matula, Jiri

    2016-06-01

    Conceptual and logical levels form the topmost levels of database modeling. Usually, ORM (Object Role Modeling) and ER diagrams are utilized to capture the corresponding schema. The final aim of business process modeling is to store its results in the form of a database solution. For this reason, value-oriented business process modeling, which utilizes ER diagrams to express the modeled entities and the relationships between them, is used. However, ER diagrams form the logical level of a database schema. To extend the possibilities of different business process modeling methodologies, the conceptual level of database modeling is needed. The paper deals with the REA value modeling approach to business process modeling using ER diagrams, and derives a conceptual model utilizing the ORM modeling approach. The conceptual model extends the possibilities of value modeling to other business modeling approaches.
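    As an illustration of the conceptual-to-logical step the abstract discusses, here is a hedged sketch (the REA-flavoured entity names and schema are assumptions for illustration, not the paper's actual model): conceptual entities become relational tables and conceptual relationships become foreign keys.

```python
import sqlite3

# Hypothetical REA-style conceptual model (Resource, Agent, Event and the
# relationships between them) mapped down to a logical/relational schema:
# entities become tables, relationships become foreign-key columns.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE agent    (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE resource (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE event (
    id          INTEGER PRIMARY KEY,
    kind        TEXT NOT NULL,                    -- e.g. 'sale'
    agent_id    INTEGER REFERENCES agent(id),     -- who participated
    resource_id INTEGER REFERENCES resource(id)   -- what was exchanged
);
INSERT INTO agent    VALUES (1, 'customer');
INSERT INTO resource VALUES (1, 'goods');
INSERT INTO event    VALUES (1, 'sale', 1, 1);
""")
row = con.execute("""
    SELECT e.kind, a.name, r.name
    FROM event e JOIN agent a    ON a.id = e.agent_id
                 JOIN resource r ON r.id = e.resource_id
""").fetchone()
print(row)  # ('sale', 'customer', 'goods')
```

    The conceptual (ORM/REA) level would add constraints and role semantics that the plain relational schema above cannot express directly, which is the gap the paper addresses.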

  7. DATABASES DEVELOPED IN INDIA FOR BIOLOGICAL SCIENCES

    Directory of Open Access Journals (Sweden)

    Gitanjali Yadav

    2017-09-01

    Full Text Available The complexity of biological systems requires the use of a variety of experimental methods of ever-increasing sophistication to probe cellular processes at molecular and atomic resolution. The availability of technologies for determining the nucleic acid sequences of genes and the atomic-resolution structures of biomolecules prompted the development of major biological databases such as GenBank and PDB almost four decades ago. India was one of the few countries to realize early the utility of such databases for progress in modern biology/biotechnology. The Department of Biotechnology (DBT), India, established the Biotechnology Information System (BTIS) network in the late eighties. Starting with the genome sequencing revolution at the turn of the century, the application of high-throughput sequencing technologies in biology and medicine for the analysis of genomes, transcriptomes, epigenomes and microbiomes has generated massive volumes of sequence data. The BTIS network has not only provided state-of-the-art computational infrastructure to research institutes and universities for utilizing various biological databases developed abroad in their research, it has also actively promoted research and development (R&D) projects in bioinformatics to develop a variety of biological databases in diverse areas. It is encouraging to note that a large number of biological databases and data-driven software tools developed in India have been published in leading peer-reviewed international journals such as Nucleic Acids Research, Bioinformatics, Database, BMC, PLoS and NPG series publications. Some of these databases are not only unique but also highly accessed, as reflected in their number of citations. Apart from databases developed by individual research groups, BTIS has initiated consortium projects to develop major India-centric databases on Mycobacterium tuberculosis, Rice and Mango, which can potentially have practical applications in health and agriculture. Many of these biological

  8. PADB : Published Association Database

    Directory of Open Access Journals (Sweden)

    Lee Jin-Sung

    2007-09-01

    Full Text Available Abstract Background Although molecular pathway information and the International HapMap Project data can help biomedical researchers investigate the aetiology of complex diseases more effectively, such information is missing or insufficient in current genetic association databases. In addition, only a few of the environmental risk factors are included as gene-environment interactions, and the risk measures of associations are not indexed in any association database. Description We have developed the Published Association Database (PADB; http://www.medclue.com/padb, which includes both the genetic associations and the environmental risk factors available in the PubMed database. Each genetic risk factor is linked to a molecular pathway database and the HapMap database through human gene symbols identified in the abstracts, and risk measures such as odds ratios or hazard ratios are extracted automatically from the abstracts when available. Thus, users can review the association data sorted by risk measure, and genetic associations can be grouped by human gene or molecular pathway. The search results can also be saved as tab-delimited text files for further sorting or analysis. Currently, PADB indexes more than 1,500,000 PubMed abstracts covering 3442 human genes, 461 molecular pathways and about 190,000 risk measures ranging from 0.00001 to 4878.9. Conclusion PADB is a unique online database of published associations that will serve as a novel and powerful resource for reviewing and interpreting the huge volume of association data on complex human diseases.
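    The automatic extraction of risk measures described above can be imagined along these lines; this is a simplified sketch with an invented pattern and example sentence, not PADB's actual implementation:

```python
import re

# Simplified sketch of pulling risk measures (odds/hazard ratios) out of
# free-text abstracts.  The regular expression and the sample sentence are
# illustrative assumptions only.
RISK_RE = re.compile(
    r"\b(?:odds ratio|hazard ratio|OR|HR)\s*[=:]?\s*(\d+(?:\.\d+)?)",
    re.IGNORECASE,
)

def extract_risk_measures(abstract):
    """Return every numeric odds/hazard ratio mentioned in the text."""
    return [float(m) for m in RISK_RE.findall(abstract)]

text = ("Smoking was associated with disease X (odds ratio = 2.41, "
        "95% CI 1.3-4.5); mortality hazard ratio: 1.08.")
print(extract_risk_measures(text))  # [2.41, 1.08]
```

    A production pipeline would also have to capture confidence intervals and disambiguate gene symbols, which is where most of the real effort lies.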

  9. Database and Expert Systems Applications

    DEFF Research Database (Denmark)

    Viborg Andersen, Kim; Debenham, John; Wagner, Roland

    submissions. The papers are organized in topical sections on workflow automation, database queries, data classification and recommendation systems, information retrieval in multimedia databases, Web applications, implementational aspects of databases, multimedia databases, XML processing, security, XML...... schemata, query evaluation, semantic processing, information retrieval, temporal and spatial databases, querying XML, organisational aspects of databases, natural language processing, ontologies, Web data extraction, semantic Web, data stream management, data extraction, distributed database systems...

  10. IRIS Toxicological Review of Acrolein (2003 Final)

    Science.gov (United States)

    EPA announced the release of the final report, Toxicological Review of Acrolein: in support of the Integrated Risk Information System (IRIS). The updated Summary for Acrolein and accompanying toxicological review have been added to the IRIS Database.

  11. District heating and cooling systems for communities through power plant retrofit and distribution networks. Phase 1: identificaion and assessment. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1979-09-01

    Appendix A, Utility Plant Characteristics, contains information describing the characteristics of seven utility plants that were considered during the final site selection process. The plants are: Valley Electric Generating Plant, downtown Milwaukee; Manitowoc Electric Generating Plant, downtown Manitowoc; Blount Street Electric Generating Plant, downtown Madison; Pulliam Electric Generating Plant, downtown Green Bay; Edgewater Electric Generating Plant, downtown Sheboygan; Rock River Electric Generating Plant, near Janesville and Beloit; and Black Hawk Electric Generating Plant, downtown Beloit. Additional appendices are: Future Loads; hvac Inventory; Load Calculations; Factors to Induce Potential Users; Turbine Retrofit/Distribution System Data; and Detailed Economic Analysis Results/Data.

  12. Glycoproteomic and glycomic databases.

    Science.gov (United States)

    Baycin Hizal, Deniz; Wolozny, Daniel; Colao, Joseph; Jacobson, Elena; Tian, Yuan; Krag, Sharon S; Betenbaugh, Michael J; Zhang, Hui

    2014-01-01

    Protein glycosylation serves critical roles in the cellular and biological processes of many organisms. Aberrant glycosylation has been associated with hereditary and chronic diseases such as cancer, cardiovascular diseases, neurological disorders, and immunological disorders. Emerging mass spectrometry (MS) technologies that enable the high-throughput identification of glycoproteins and glycans have accelerated analysis and made possible the creation of dynamic and expanding databases. Although glycosylation-related databases have been established by many laboratories and institutions, they are not yet widely known in the community. Our study reviews 15 different publicly available databases and identifies their key elements so that users can identify the most applicable platform for their analytical needs. These databases include biological information on experimentally identified glycans and glycopeptides from various cells and organisms, such as human, rat, mouse, fly and zebrafish. The features of these databases (7 for glycoproteomic data, 6 for glycomic data, and 2 for glycan-binding proteins) are summarized, including the enrichment techniques used for glycoproteome and glycan identification. Furthermore, databases such as Unipep, GlycoFly, and GlycoFish, recently established by our group, are introduced. The unique features of each database, such as the analytical methods used and the bioinformatics tools available, are summarized. This information will be a valuable resource for the glycobiology community, as it presents the analytical methods and glycosylation-related databases together in one compendium. It also represents a step towards the long-term goal of integrating the different glycosylation databases in order to better characterize and categorize glycoproteins and glycans for biomedical research.

  13. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Trypanosomes Database Update History of This Database Date Update contents 2014/05/07 The contact information is corrected. The features and manner of utilization of the database are corrected. 2014/02/04 Trypanosomes Database English archive site is opened. 2011/04/04 Trypanosomes Database ( http://www.tanpaku.org/tdb/ ) is opened. About This Database Database Description Download License Update History of This Database Site Policy | Contact Us Update History of This Database - Trypanosomes Database | LSDB Archive ...

  14. A Brief Review of RNA-Protein Interaction Database Resources

    Directory of Open Access Journals (Sweden)

    Ying Yi

    2017-01-01

    Full Text Available RNA-protein interactions play critical roles in various biological processes. By collecting and analyzing RNA-protein interactions and binding sites from experiments and predictions, RNA-protein interaction databases have become an essential resource for exploring the transcriptional and post-transcriptional regulatory network. Here, we briefly review several widely used RNA-protein interaction database resources developed in recent years, to serve as a guide to these databases. The content and major functions of each database are presented. These brief descriptions help users quickly choose the database containing the information they are interested in. In short, these RNA-protein interaction database resources are continually updated, and their current state reflects ongoing efforts to identify and analyze the large number of RNA-protein interactions.

  15. Phase Equilibria Diagrams Database

    Science.gov (United States)

    SRD 31 NIST/ACerS Phase Equilibria Diagrams Database (PC database for purchase)   The Phase Equilibria Diagrams Database contains commentaries and more than 21,000 diagrams for non-organic systems, including those published in all 21 hard-copy volumes produced as part of the ACerS-NIST Phase Equilibria Diagrams Program (formerly titled Phase Diagrams for Ceramists): Volumes I through XIV (blue books); Annuals 91, 92, 93; High Tc Superconductors I & II; Zirconium & Zirconia Systems; and Electronic Ceramics I. Materials covered include oxides as well as non-oxide systems such as chalcogenides and pnictides, phosphates, salt systems, and mixed systems of these classes.

  16. LandIT Database

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Pedersen, Torben Bach

    2010-01-01

    and reporting purposes. This paper presents the LandIT database, the result of the LandIT project, an industrial collaboration that developed technologies for communication and data integration between farming devices and systems. The LandIT database is in principle based...... on the ISOBUS standard; however, the standard is extended with additional requirements, such as gradual data aggregation and flexible exchange of farming data. This paper describes the conceptual and logical schemas of the proposed database, based on a real-life farming case study....
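    The "gradual data aggregation" requirement mentioned above can be sketched as follows; the granularities, cutoff and data layout are assumptions for illustration, not the LandIT schema:

```python
from collections import defaultdict
from datetime import datetime

# Illustrative gradual aggregation: readings older than a cutoff are reduced
# to hourly averages, while recent readings are kept at full resolution.
def gradually_aggregate(readings, cutoff):
    """readings: list of (timestamp, value); returns (hourly_avgs, raw_recent)."""
    buckets = defaultdict(list)
    recent = []
    for ts, value in readings:
        if ts < cutoff:
            # Collapse old readings into their containing hour.
            buckets[ts.replace(minute=0, second=0, microsecond=0)].append(value)
        else:
            recent.append((ts, value))
    hourly = {hour: sum(vs) / len(vs) for hour, vs in sorted(buckets.items())}
    return hourly, recent

readings = [
    (datetime(2010, 6, 1, 9, 5), 10.0),
    (datetime(2010, 6, 1, 9, 50), 14.0),
    (datetime(2010, 6, 2, 8, 0), 12.5),
]
hourly, recent = gradually_aggregate(readings, cutoff=datetime(2010, 6, 2))
# The two 1 June readings collapse to a single 09:00 average of 12.0;
# the 2 June reading stays raw.
```

    Repeating the same idea at coarser levels (daily, monthly) gives the multi-step aggregation hierarchy that keeps old farming data cheap to store yet still queryable.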

  17. ALICE Geometry Database

    CERN Document Server

    Santo, J

    1999-01-01

    The ALICE Geometry Database project consists of the development of a set of data structures to store the geometrical information of the ALICE Detector. This Database will be used in Simulation, Reconstruction and Visualisation and will interface with existing CAD systems and Geometrical Modellers. At the present time, we are able to read a complete GEANT3 geometry, to store it in our database and to visualise it. On disk, we store different geometry files in hierarchical fashion, with all the nodes, materials, shapes, configurations and transformations distributed in this tree structure. The present status of the prototype and its future evolution will be presented.

  18. Database machine performance

    Energy Technology Data Exchange (ETDEWEB)

    Cesarini, F.; Salza, S.

    1987-01-01

    This book is devoted to the important problem of database machine performance evaluation. It presents several methodological proposals and case studies developed within an international project, supported by the European Economic Community, on Database Machine Evaluation Techniques and Tools in the Context of Real-Time Processing. The book gives an overall view of the modeling methodologies and evaluation strategies that can be adopted to analyze the performance of database machines. Moreover, it includes interesting case studies and an extensive bibliography.

  19. Product Licenses Database Application

    CERN Document Server

    Tonkovikj, Petar

    2016-01-01

    The goal of this project is to organize and centralize the data about software tools available to CERN employees, as well as provide a system that simplifies the license management process by providing information about the available licenses and their expiry dates. The project development process consists of two steps: modeling the products (software tools), product licenses, legal agreements and other data related to these entities in a relational database, and developing the front-end user interface through which the user interacts with the database. The result is an ASP.NET MVC web application with interactive views for displaying and managing the data in the underlying database.

  20. Plant Genome Duplication Database.

    Science.gov (United States)

    Lee, Tae-Ho; Kim, Junah; Robertson, Jon S; Paterson, Andrew H

    2017-01-01

    Genome duplication, widespread in flowering plants, is a driving force in evolution. Genome alignments between/within genomes facilitate the identification of homologous regions and individual genes, allowing investigation of the evolutionary consequences of genome duplication. PGDD (the Plant Genome Duplication Database), a public web service database, provides intra- and inter-plant genome alignment information. At present, PGDD contains information for 47 plants whose genome sequences have been released. Here, we describe methods for the identification and date estimation of genome duplication and speciation events using the functions of PGDD. The database is freely available at http://chibba.agtec.uga.edu/duplication/.
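    Date estimation for duplication events of this kind is commonly based on the synonymous substitution level Ks of a duplicated gene pair, via T = Ks / (2λ), where λ is an assumed clock-like synonymous substitution rate per site per year. A hedged sketch (the rate value is an assumption, and PGDD's own procedure may differ):

```python
# Toy estimate of a duplication date from Ks (synonymous substitutions per
# synonymous site).  LAMBDA is an assumed clock-like rate; real analyses use
# lineage-specific rates and model-based Ks estimation.
LAMBDA = 6.5e-9  # substitutions per synonymous site per year (assumption)

def duplication_age_years(ks, rate=LAMBDA):
    """T = Ks / (2 * rate): divergence accumulates on both gene copies."""
    return ks / (2.0 * rate)

# A gene pair with Ks = 0.13 dates to roughly 10 million years ago.
print(duplication_age_years(0.13))  # ~1.0e7 years
```

    The factor of 2 reflects that both copies of the duplicated gene accumulate substitutions independently after the duplication event.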

  1. LandIT Database

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Pedersen, Torben Bach

    2010-01-01

    and reporting purposes. This paper presents the LandIT database, the result of the LandIT project, an industrial collaboration that developed technologies for communication and data integration between farming devices and systems. The LandIT database is in principle based...... on the ISOBUS standard; however, the standard is extended with additional requirements, such as gradual data aggregation and flexible exchange of farming data. This paper describes the conceptual and logical schemas of the proposed database, based on a real-life farming case study....

  2. Danish Pancreatic Cancer Database

    DEFF Research Database (Denmark)

    Fristrup, Claus; Detlefsen, Sönke; Palnæs Hansen, Carsten

    2016-01-01

    AIM OF DATABASE: The Danish Pancreatic Cancer Database aims to prospectively register the epidemiology, diagnostic workup, diagnosis, treatment, and outcome of patients with pancreatic cancer in Denmark at an institutional and national level. STUDY POPULATION: Since May 1, 2011, all patients......, and survival. The results are published annually. CONCLUSION: The Danish Pancreatic Cancer Database has registered data on 2,217 patients with microscopically verified ductal adenocarcinoma of the pancreas. The data have been obtained nationwide over a period of 4 years and 2 months. The completeness...

  3. Selective network protection and economically efficient network operation in underground areas with high explosion hazard by automation of protection systems. Final report; Selektiver Netzschutz und wirtschaftlicher Netzbetrieb in schlagwettergefaehrdeten Bereichen durch Automatisierung der Schutztechnik. Abschlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Eickhoff, F. [Deutsche Montan Technologie GmbH, Bochum (Germany). Gas and Fire Division

    2002-07-01

    Electric power supply systems for coal mines in Germany must be adapted, for the years up to 2003 and beyond, to developments in mining technology and to changed standards. Developments are under way to restructure the power supply so that mining operations can be supplied with electric energy more efficiently, more compactly, and with lower network losses. Further automation of network protection at 5 kV and above in underground areas is feasible only to a limited extent owing to the economic situation. Based on the evaluation of earlier research projects, the achievable safety benefits, weighed against the effort involved, lie in particular in the earth-fault protection of the high-voltage networks and in the clarification of disturbances through detailed information from the network protection system. The economic constraints and the formal boundary conditions for approval with respect to explosion protection and electrotechnical safety requirements therefore permit only the automation of existing switchgear using corresponding existing protection devices. (orig.)

  4. In silico identification of anti-cancer compounds and plants from traditional Chinese medicine database

    Science.gov (United States)

    Dai, Shao-Xing; Li, Wen-Xing; Han, Fei-Fei; Guo, Yi-Cheng; Zheng, Jun-Juan; Liu, Jia-Qian; Wang, Qian; Gao, Yue-Dong; Li, Gong-Hua; Huang, Jing-Fei

    2016-05-01

    There is constant demand for new, effective, and affordable anti-cancer drugs. Traditional Chinese medicine (TCM) is a valuable alternative resource for identifying novel anti-cancer agents. In this study, we aim to identify anti-cancer compounds and plants from the TCM database using cheminformatics. We first predicted 5278 anti-cancer compounds from the TCM database. The top 346 compounds were highly potent in the 60-cell-line test. Similarity analysis revealed that 75% of the 5278 compounds are highly similar to approved anti-cancer drugs. Based on the predicted anti-cancer compounds, we identified 57 anti-cancer plants by activity enrichment. The identified plants are widely distributed across 46 genera and 28 families, which broadens the scope of anti-cancer drug screening. Finally, we constructed a network of the predicted anti-cancer plants and approved drugs based on the above results. The network highlighted the supportive role of the predicted plants in the development of anti-cancer drugs and suggested different molecular anti-cancer mechanisms of the plants. Our study suggests that the predicted compounds and plants from the TCM database offer an attractive starting point and a broader scope for mining potential anti-cancer agents.
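    Similarity analysis of this kind is typically computed as a Tanimoto coefficient over molecular fingerprints; here is a minimal sketch using plain bit-position sets (the fingerprints are invented, not derived from real structures):

```python
# Tanimoto (Jaccard) similarity over fingerprint bit sets.  Real pipelines
# derive the "on" bits from molecular structure (e.g. hashed substructure
# keys); the sets below are made up for illustration.
def tanimoto(fp_a, fp_b):
    """|A ∩ B| / |A ∪ B| for two sets of 'on' bit positions."""
    union = fp_a | fp_b
    return len(fp_a & fp_b) / len(union) if union else 0.0

tcm_compound = {1, 4, 9, 17, 23}
approved_drug = {1, 4, 9, 23, 31}
print(tanimoto(tcm_compound, approved_drug))  # 4 shared / 6 total = 0.666...
```

    A common convention is to call a pair "highly similar" above a Tanimoto threshold of roughly 0.85, though the appropriate cutoff depends on the fingerprint type.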

  5. Illinois coal reserve assessment and database development. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Treworgy, C.G.; Prussen, E.I.; Justice, M.A.; Chenoweth, C.A. [and others

    1997-11-01

    The new demonstrated reserve base estimate of coal for Illinois is 105 billion short tons. This estimate is an increase from the 78 billion tons in the Energy Information Administration's demonstrated reserve base of coal as of January 1, 1994. The new estimate arises from revised resource calculations based on recent mapping in a number of counties, as well as significant adjustments for depletion due to past mining. The new estimate for identified resources is 199 billion tons, a revision of the previous estimate of 181 billion tons. The new estimates incorporate the available analyses of sulfur, heat content, and rank group appropriate for characterizing the remaining coal resources in Illinois. Coal-quality data were examined in conjunction with coal resource mapping. Analyses of samples from exploration drill holes, channel samples from mines and outcrops, and geologic trends were compiled and mapped to allocate coal resource quantities to ranges of sulfur, heat content, and rank group. The new allocations place almost 1% of the demonstrated reserve base of Illinois in the two lowest sulfur categories, in contrast to none in the previous allocation used by the Energy Information Administration (EIA). The new allocations also place 89% of the demonstrated reserve base in the highest sulfur category, in contrast to the previous allocation of 69%.

  6. US geothermal database and Oregon cascade thermal studies: (Final report)

    Energy Technology Data Exchange (ETDEWEB)

    Blackwell, D.D.; Steele, J.L.; Carter, L.

    1988-05-01

    This report describes two tasks of a different nature. The first was the preparation of a database of heat flow and associated ancillary information for the United States. This database is being used as the basis for preparing the United States portion of a geothermal map of North America. The "Geothermal Map of North America" will be published as part of the Decade of North American Geology (DNAG) series of the Geological Society of America. The second task was a geothermal evaluation of holes drilled in the Cascade Range as part of a Department of Energy (DOE)/industry co-sponsored deep drilling project. This task involved field work, making temperature logs in the holes, and laboratory work, measuring thermal conductivity on an extensive set of samples from these holes. The task culminated in an interpretation of the heat flow values in terms of regional thermal conditions; implications for geothermal systems in the Cascade Range; an evaluation of the effect of groundwater flow on the depths that must be drilled for successful measurements in the Cascade Range; and an investigation of the nature of surface groundwater effects on the temperature-depth curves. 40 refs., 7 figs., 7 tabs.
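    The logging and laboratory work described above combine through Fourier's law, q = k · dT/dz: the geothermal gradient fitted from a temperature log, multiplied by the measured thermal conductivity, gives the heat flow. A minimal sketch with made-up log values, not data from the report:

```python
def geothermal_gradient(depths_m, temps_c):
    """Least-squares slope of temperature vs depth, in degC per metre."""
    n = len(depths_m)
    mean_z = sum(depths_m) / n
    mean_t = sum(temps_c) / n
    num = sum((z - mean_z) * (t - mean_t) for z, t in zip(depths_m, temps_c))
    den = sum((z - mean_z) ** 2 for z in depths_m)
    return num / den

def heat_flow(gradient_c_per_m, conductivity_w_mk):
    """Fourier's law q = k * dT/dz, returned in mW/m^2."""
    return conductivity_w_mk * gradient_c_per_m * 1000.0

# Hypothetical temperature log: 40 degC/km gradient, conductivity 2.0 W/(m*K).
grad = geothermal_gradient([100, 200, 300, 400], [10.0, 14.0, 18.0, 22.0])
q = heat_flow(grad, 2.0)  # 80 mW/m^2 for this synthetic log
```

    Groundwater flow disturbs the shallow part of such logs, which is why the report stresses how deep a hole must be before the gradient becomes meaningful.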

  7. FINAL DIGITAL FLOOD INSURANCE RATE MAP DATABASE, TEXAS COUNTY, OK

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Floodplain Mapping/Redelineation study deliverables depict and quantify the flood risks for the study area. The primary risk classifications used are the...

  8. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)]

    1994-05-27

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  9. Kansas Cartographic Database (KCD)

    Data.gov (United States)

    Kansas Data Access and Support Center — The Kansas Cartographic Database (KCD) is an exact digital representation of selected features from the USGS 7.5 minute topographic map series. Features that are...

  10. Records Management Database

    Data.gov (United States)

    US Agency for International Development — The Records Management Database is a tool created in Microsoft Access specifically for USAID use. It contains metadata in order to access and retrieve the information...

  11. OTI Activity Database

    Data.gov (United States)

    US Agency for International Development — OTI's worldwide activity database is a simple and effective information system that serves as a program management, tracking, and reporting tool. In each country,...

  12. Danish Urogynaecological Database

    DEFF Research Database (Denmark)

    Hansen, Ulla Darling; Gradel, Kim Oren; Larsen, Michael Due

    2016-01-01

    The Danish Urogynaecological Database was established in order to ensure high quality of treatment for patients undergoing urogynaecological surgery. The database contains details of all women in Denmark undergoing incontinence surgery or pelvic organ prolapse surgery, amounting to ~5,200 procedures … per year. The variables are collected along the course of treatment of the patient, from referral to a postoperative control. Main variables are prior obstetrical and gynecological history, symptoms, symptom-related quality of life, objective urogynaecological findings, type of operation …, complications if relevant, implants used if relevant, and a 3-6-month postoperative recording of symptoms, if any. A set of clinical quality indicators is maintained by the steering committee for the database and is published in an annual report, which also contains extensive descriptive statistics. The database …

  13. Fine Arts Database (FAD)

    Data.gov (United States)

    General Services Administration — The Fine Arts Database records information on federally owned art in the control of the GSA; this includes the location, current condition and information on artists.

  14. Rat Genome Database (RGD)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Rat Genome Database (RGD) is a collaborative effort between leading research institutions involved in rat genetic and genomic research to collect, consolidate,...

  15. The Exoplanet Orbit Database

    CERN Document Server

    Wright, Jason T; Marcy, Geoffrey W; Han, Eunkyu; Feng, Ying; Johnson, John Asher; Howard, Andrew W; Valenti, Jeff A; Anderson, Jay; Piskunov, Nikolai

    2010-01-01

    We present a database of well-determined orbital parameters of exoplanets. This database comprises spectroscopic orbital elements measured for 421 planets orbiting 357 stars from radial velocity and transit measurements as reported in the literature. We have also compiled fundamental transit parameters, stellar parameters, and the method used for each planet's discovery. This Exoplanet Orbit Database includes all planets with robust, well-measured orbital parameters reported in peer-reviewed articles. The database is available in a searchable, filterable, and sortable form on the Web at http://exoplanets.org through the Exoplanets Data Explorer Table, and the data can be plotted and explored through the Exoplanets Data Explorer Plotter. We use the Data Explorer to generate publication-ready plots giving three examples of the signatures of exoplanet migration and dynamical evolution: we illustrate the character of the apparent correlation between mass and period in exoplanet orbits, and the different selection biase...
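    The searchable, filterable, and sortable behavior of the Data Explorer Table can be illustrated with a toy in-memory table. The rows and column names below are invented for the example; they are not records from exoplanets.org.

```python
# Hypothetical mini-table: one row per planet, with orbital period (days)
# and minimum mass (Jupiter masses), in the spirit of the Data Explorer Table.
planets = [
    {"name": "b1", "period_days": 3.5,   "msini_mjup": 0.9},
    {"name": "b2", "period_days": 420.0, "msini_mjup": 2.3},
    {"name": "b3", "period_days": 11.2,  "msini_mjup": 0.05},
]

def filter_and_sort(rows, max_period_days, sort_key):
    """Keep rows under a period cutoff and sort them by the given column."""
    kept = [r for r in rows if r["period_days"] <= max_period_days]
    return sorted(kept, key=lambda r: r[sort_key])

# Short-period planets, lightest first.
short_period = filter_and_sort(planets, 50.0, "msini_mjup")
```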

  16. National Geochemical Database: Concentrate

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Geochemistry of concentrates from the National Geochemical Database. Primarily inorganic elemental concentrations, most samples are from the continental US and...

  17. National Geochemical Database: Soil

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Geochemical analysis of soil samples from the National Geochemical Database. Primarily inorganic elemental concentrations, most samples are from the continental US...

  18. National Geochemical Database: Sediment

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Geochemical analysis of sediment samples from the National Geochemical Database. Primarily inorganic elemental concentrations, most samples are of stream sediment in...

  19. The Danish Urogynaecological Database

    DEFF Research Database (Denmark)

    Guldberg, Rikke; Brostrøm, Søren; Hansen, Jesper Kjær

    2013-01-01

    INTRODUCTION AND HYPOTHESIS: The Danish Urogynaecological Database (DugaBase) is a nationwide clinical database established in 2006 to monitor, ensure and improve the quality of urogynaecological surgery. We aimed to describe its establishment and completeness and to validate selected variables. … This is the first study based on data from the DugaBase. METHODS: The database completeness was calculated as a comparison between urogynaecological procedures reported to the Danish National Patient Registry and to the DugaBase. Validity was assessed for selected variables from a random sample of 200 women … in the DugaBase from 1 January 2009 to 31 October 2010, using medical records as a reference. RESULTS: A total of 16,509 urogynaecological procedures were registered in the DugaBase by 31 December 2010. The database completeness has increased over calendar time, from 38.2 % in 2007 to 93.2 % in 2010 for public …
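    The completeness measure described in METHODS is the share of procedures in the Danish National Patient Registry that also appear in the DugaBase. A minimal sketch with illustrative counts, not the study's raw numbers:

```python
def completeness_pct(n_in_database, n_in_registry):
    """Percentage of registry-reported procedures also captured in the clinical database."""
    if n_in_registry == 0:
        raise ValueError("registry count must be positive")
    return 100.0 * n_in_database / n_in_registry

# Illustrative: 932 of 1000 registry procedures also found in the database -> 93.2 %.
pct = completeness_pct(932, 1000)
```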

  20. The Danish Depression Database

    DEFF Research Database (Denmark)

    Videbech, Poul Bror Hemming; Deleuran, Anette

    2016-01-01

    AIM OF DATABASE: The purpose of the Danish Depression Database (DDD) is to monitor and facilitate the improvement of the quality of the treatment of depression in Denmark. Furthermore, the DDD has been designed to facilitate research. STUDY POPULATION: Inpatients as well as outpatients … as an evaluation of the risk of suicide are measured before and after treatment. Whether psychiatric aftercare has been scheduled for inpatients and the rate of rehospitalization are also registered. DESCRIPTIVE DATA: The database was launched in 2011. Since then, ~5,500 inpatients and 7,500 outpatients … have been registered annually in the database. A total of 24,083 inpatients and 29,918 outpatients have been registered. The DDD produces an annual report published on the Internet. CONCLUSION: The DDD can become an important tool for quality improvement and research, when the reporting is more …

  1. Molecular marker databases.

    Science.gov (United States)

    Lai, Kaitao; Lorenc, Michał Tadeusz; Edwards, David

    2015-01-01

    The detection and analysis of genetic variation play an important role in plant breeding, and this role is growing with the continued development of genome sequencing technologies. Molecular genetic markers are important tools for characterizing genetic variation and assisting with genomic breeding. Processing and storing the growing abundance of molecular marker data being produced requires the development of specific bioinformatics tools and advanced databases. Molecular marker databases range from species-specific through to organism-wide, and often host a variety of additional related genetic, genomic, or phenotypic information. In this chapter, we present some of the features of plant molecular genetic marker databases, highlight the various types of marker resources, and predict the potential future direction of crop marker databases.

  2. Consumer Product Category Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use...

  3. Eldercare Locator Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Eldercare Locator is a searchable database that allows a user to search via zip code or city/ state for agencies at the State and local levels that provide...

  4. Drycleaner Database - Region 7

    Data.gov (United States)

    U.S. Environmental Protection Agency — THIS DATA ASSET NO LONGER ACTIVE: This is metadata documentation for the Region 7 Drycleaner Database (R7DryClnDB) which tracks all Region7 drycleaners who notify...

  5. Reach Address Database (RAD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Reach Address Database (RAD) stores the reach address of each Water Program feature that has been linked to the underlying surface water features (streams,...

  6. Toxicity Reference Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Toxicity Reference Database (ToxRefDB) contains approximately 30 years and $2 billion worth of animal studies. ToxRefDB allows scientists and the interested...

  7. 1988 Spitak Earthquake Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 1988 Spitak Earthquake database is an extensive collection of geophysical and geological data, maps, charts, images and descriptive text pertaining to the...

  8. Hawaii bibliographic database

    Science.gov (United States)

    Wright, Thomas L.; Takahashi, Taeko Jane

    The Hawaii bibliographic database has been created to contain all of the literature, from 1779 to the present, pertinent to the volcanological history of the Hawaiian-Emperor volcanic chain. References are entered in a PC- and Macintosh-compatible EndNote Plus bibliographic database with keywords and abstracts or (if no abstract) with annotations as to content. Keywords emphasize location, discipline, process, identification of new chemical data or age determinations, and type of publication. The database is updated approximately three times a year and is available for download from an ftp site. The bibliography contained 8460 references at the time this paper was submitted for publication. Use of the database greatly enhances the power and completeness of library searches for anyone interested in Hawaiian volcanism.

  9. Food Habits Database (FHDBS)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NEFSC Food Habits Database has two major sources of data. The first, and most extensive, is the standard NEFSC Bottom Trawl Surveys Program. During these...

  10. NLCD 2011 database

    Data.gov (United States)

    U.S. Environmental Protection Agency — National Land Cover Database 2011 (NLCD 2011) is the most recent national land cover product created by the Multi-Resolution Land Characteristics (MRLC) Consortium....

  11. Mouse Phenome Database (MPD)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Mouse Phenome Database (MPD) has characterizations of hundreds of strains of laboratory mice to facilitate translational discoveries and to assist in selection...

  12. Disaster Debris Recovery Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The US EPA Region 5 Disaster Debris Recovery Database includes public datasets of over 3,500 composting facilities, demolition contractors, haulers, transfer...

  13. National Geochemical Database: Sediment

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Geochemical analysis of sediment samples from the National Geochemical Database. Primarily inorganic elemental concentrations, most samples are of stream sediment...

  14. Uranium Location Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — A GIS-compiled locational database in Microsoft Access of ~15,000 mines with uranium occurrence or production, primarily in the western United States. The metadata...

  15. National Assessment Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The National Assessment Database stores and tracks state water quality assessment decisions, Total Maximum Daily Loads (TMDLs) and other watershed plans designed to...

  16. Household Products Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — This database links over 4,000 consumer brands to health effects from Material Safety Data Sheets (MSDS) provided by the manufacturers and allows scientists and...

  17. Dissolution Methods Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — For a drug product that does not have a dissolution test method in the United States Pharmacopeia (USP), the FDA Dissolution Methods Database provides information on...

  18. ATLAS DAQ Configuration Databases

    Institute of Scientific and Technical Information of China (English)

    I. Alexandrov; A. Amorim; et al.

    2001-01-01

    The configuration databases are an important part of the Trigger/DAQ system of the future ATLAS experiment. This paper describes their current status, giving details of the architecture, implementation, test results, and plans for future work.

  19. Venus Crater Database

    Data.gov (United States)

    National Aeronautics and Space Administration — This web page leads to a database of images and information about the roughly 900 impact craters on the surface of Venus, searchable by diameter, latitude, and name.

  20. Global Volcano Locations Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NGDC maintains a database of over 1,500 volcano locations obtained from the Smithsonian Institution Global Volcanism Program, Volcanoes of the World publication. The...