WorldWideScience

Sample records for network database final

  1. Wisconsin Inventors' Network Database final report

    Energy Technology Data Exchange (ETDEWEB)

    1991-12-04

    The Wisconsin Innovation Service Center at UW-Whitewater received a DOE grant to create an Inventors' Network Database to assist independent inventors and entrepreneurs with new product development. Since 1980, the Wisconsin Innovation Service Center (WISC) at the University of Wisconsin-Whitewater has assisted independent and small business inventors in estimating the marketability of their new product ideas and inventions. The purpose of the WISC as an economic development entity is to encourage inventors who appear to have commercially viable inventions, based on preliminary market research, to invest in the next stages of development, perhaps investigating prototype development, legal protection, or more in-depth market research. To address inventors' information needs, WISC developed an electronic database with search capabilities by geographic region and by product category/industry. It targets both public and private resources capable of, and interested in, working with individual and small business inventors. At present, the project includes resources in Wisconsin only.

  3. Network-based statistical comparison of citation topology of bibliographic databases

    Science.gov (United States)

    Šubelj, Lovro; Fiala, Dalibor; Bajec, Marko

    2014-01-01

    Modern bibliographic databases provide the basis for scientific research and its evaluation. While their content and structure differ substantially, there exist only informal notions on their reliability. Here we compare the topological consistency of citation networks extracted from six popular bibliographic databases including Web of Science, CiteSeer and arXiv.org. The networks are assessed through a rich set of local and global graph statistics. We first reveal statistically significant inconsistencies between some of the databases with respect to individual statistics. For example, the introduced field bow-tie decomposition of DBLP Computer Science Bibliography substantially differs from the rest due to the coverage of the database, while the citation information within arXiv.org is the most exhaustive. Finally, we compare the databases over multiple graph statistics using the critical difference diagram. The citation topology of DBLP Computer Science Bibliography is the least consistent with the rest, while, not surprisingly, Web of Science is significantly more reliable from the perspective of consistency. This work can serve either as a reference for scholars in bibliometrics and scientometrics or a scientific evaluation guideline for governments and research agencies. PMID:25263231

  4. Network-based Database Course

    DEFF Research Database (Denmark)

    Nielsen, J.N.; Knudsen, Morten; Nielsen, Jens Frederik Dalsgaard

    A course in database design and implementation has been designed, utilizing existing network facilities. The course is an elementary course for students of computer engineering. Its purpose is to give the students theoretical database knowledge as well as practical experience with design and implementation. A tutorial relational database and the students' self-designed databases are implemented on the UNIX system of Aalborg University, thus giving the teacher the possibility of live demonstrations in the lecture room, and the students the possibility of interactive learning in their working rooms...

  5. A distributed database view of network tracking systems

    Science.gov (United States)

    Yosinski, Jason; Paffenroth, Randy

    2008-04-01

    In distributed tracking systems, multiple non-collocated trackers cooperate to fuse local sensor data into a global track picture. Generating this global track picture at a central location is fairly straightforward, but the single point of failure and excessive bandwidth requirements introduced by centralized processing motivate the development of decentralized methods. In many decentralized tracking systems, trackers communicate with their peers via a lossy, bandwidth-limited network in which dropped, delayed, and out of order packets are typical. Oftentimes the decentralized tracking problem is viewed as a local tracking problem with a networking twist; we believe this view can underestimate the network complexities to be overcome. Indeed, a subsequent 'oversight' layer is often introduced to detect and handle track inconsistencies arising from a lack of robustness to network conditions. We instead pose the decentralized tracking problem as a distributed database problem, enabling us to draw inspiration from the vast extant literature on distributed databases. Using the two-phase commit algorithm, a well known technique for resolving transactions across a lossy network, we describe several ways in which one may build a distributed multiple hypothesis tracking system from the ground up to be robust to typical network intricacies. We pay particular attention to the dissimilar challenges presented by network track initiation vs. maintenance and suggest a hybrid system that balances speed and robustness by utilizing two-phase commit for only track initiation transactions. Finally, we present simulation results contrasting the performance of such a system with that of more traditional decentralized tracking implementations.
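
    The two-phase commit protocol named above can be made concrete with a short sketch. The following Python fragment is a simplified, hypothetical coordinator/participant exchange (names such as Tracker and the quality check are invented for illustration); it shows only the prepare/commit voting pattern, not a real networked tracking system.

```python
# Minimal two-phase commit sketch for a track-initiation transaction.
# All names (Tracker, quality check, ...) are illustrative, not from the paper.

class Tracker:
    """A participant that votes on whether a new track may be initiated."""
    def __init__(self, name):
        self.name = name
        self.pending = None

    def prepare(self, track):
        # Phase 1: vote yes only if the proposed track passes a local check
        # (here a trivial placeholder on track quality).
        self.pending = track
        return track.get("quality", 0) > 0.5

    def commit(self):
        # Phase 2a: make the pending track permanent.
        committed, self.pending = self.pending, None
        return committed

    def abort(self):
        # Phase 2b: discard the proposal.
        self.pending = None


def two_phase_commit(track, participants):
    """Return True if every participant voted yes and the track was committed."""
    votes = [p.prepare(track) for p in participants]
    if all(votes):
        for p in participants:
            p.commit()
        return True
    for p in participants:
        p.abort()
    return False


if __name__ == "__main__":
    trackers = [Tracker("north"), Tracker("south"), Tracker("east")]
    print(two_phase_commit({"id": 42, "quality": 0.9}, trackers))  # True
    print(two_phase_commit({"id": 43, "quality": 0.2}, trackers))  # False
```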

  6. Wireless Sensor Networks Database: Data Management and Implementation

    Directory of Open Access Journals (Sweden)

    Ping Liu

    2014-04-01

    Full Text Available As the core application of wireless sensor network technology, data management and processing have become a research hotspot for new database systems. This article studies data management in wireless sensor networks: in view of the characteristics of the data in wireless sensor networks, it discusses wireless sensor network data query and data integration technology in depth, proposes a mobile database structure based on a wireless sensor network, and carries out the overall design and implementation of the data management system. In order to implement the communication rules of the routing trees, the network manager uses a simple routing-tree maintenance algorithm. The design covers the ordinary node end, the server end of the mobile database at the gathering nodes, and the mobile client end, focusing on the query manager, the storage module and the synchronization module at the server end of the mobile database at the gathering nodes.

  7. Nuclear technology databases and information network systems

    International Nuclear Information System (INIS)

    Iwata, Shuichi; Kikuchi, Yasuyuki; Minakuchi, Satoshi

    1993-01-01

    This paper describes databases related to nuclear (science) technology and information networks. The following contents are collected in this paper: the database developed by JAERI, ENERGY NET, ATOM NET, the NUCLEN nuclear information database, INIS, the NUclear Code Information Service (NUCLIS), the Social Application of Nuclear Technology Accumulation project (SANTA), the Nuclear Information Database/Communication System (NICS), the reactor materials database, the radiation effects database, the NucNet European nuclear information database, and the reactor dismantling database. (J.P.N.)

  8. The Danish Collaborative Bacteraemia Network (DACOBAN) database

    DEFF Research Database (Denmark)

    Gradel, Kim Oren; Schønheyder, Henrik Carl; Arpi, Magnus

    2014-01-01

    The Danish Collaborative Bacteraemia Network (DACOBAN) research database includes microbiological data obtained from positive blood cultures from a geographically and demographically well-defined population serviced by three clinical microbiology departments (1.7 million residents, 32% of the Danish population). The database also includes data on comorbidity from the Danish National Patient Registry, vital status from the Danish Civil Registration System, and clinical data on 31% of nonselected records in the database. Use of the unique civil registration number given to all Danish residents enables linkage to additional registries for specific research projects. The DACOBAN database is continuously updated, and it currently comprises 39,292 patients with 49,951 bacteremic episodes from 2000 through 2011. The database is part of an international network of population-based bacteremia...

  9. Network and Database Security: Regulatory Compliance, Network, and Database Security - A Unified Process and Goal

    Directory of Open Access Journals (Sweden)

    Errol A. Blake

    2007-12-01

    Full Text Available Database security has evolved; data security professionals have developed numerous techniques and approaches to assure data confidentiality, integrity, and availability. This paper will show that traditional database security, which has focused primarily on creating user accounts and managing user privileges to database objects, is not enough to protect data confidentiality, integrity, and availability. This paper, a compilation of different journals, articles and classroom discussions, will focus on unifying the process of securing data or information whether it is in use, in storage or being transmitted. Promoting a change in database curriculum development trends may also play a role in helping secure databases. This paper takes the approach that a conscientious effort to unify the database security process, which includes the Database Management System (DBMS) selection process, following regulatory compliance, analyzing and learning from the mistakes of others, implementing networking security technologies, and securing the database, may prevent database breaches.

  10. Report on Approaches to Database Translation. Final Report.

    Science.gov (United States)

    Gallagher, Leonard; Salazar, Sandra

    This report describes approaches to database translation (i.e., transferring data and data definitions from a source, either a database management system (DBMS) or a batch file, to a target DBMS), and recommends a method for representing the data structures of newly-proposed network and relational data models in a form suitable for database…

  11. Rett networked database

    DEFF Research Database (Denmark)

    Grillo, Elisa; Villard, Laurent; Clarke, Angus

    2012-01-01

    Rett syndrome (RTT) is a neurodevelopmental disorder with one principal phenotype and several distinct, atypical variants (Zappella, early seizure onset and congenital variants). Mutations in MECP2 are found in most cases of classic RTT but at least two additional genes, CDKL5 and FOXG1, can underlie some (usually variant) cases. There is only limited correlation between genotype and phenotype. The Rett Networked Database (http://www.rettdatabasenetwork.org/) has been established to share clinical and genetic information. Through an "adaptor" process of data harmonization, a set of 293... can expand indefinitely. Data are entered by a clinician in each center who supervises accuracy. This network was constructed to make available pooled international data for the study of RTT natural history and genotype-phenotype correlation and to indicate the proportion of patients with specific clinical features and mutations. We expect that the network will serve for the recruitment of patients into clinical trials and for developing quality measures to drive up standards of medical management.

  12. The ESID Online Database network.

    Science.gov (United States)

    Guzman, D; Veit, D; Knerr, V; Kindle, G; Gathmann, B; Eades-Perner, A M; Grimbacher, B

    2007-03-01

    Primary immunodeficiencies (PIDs) belong to the group of rare diseases. The European Society for Immunodeficiencies (ESID) is establishing an innovative European patient and research database network for continuous long-term documentation of patients, in order to improve the diagnosis, classification, prognosis and therapy of PIDs. The ESID Online Database is a web-based system aimed at data storage, data entry, reporting and the import of pre-existing data sources in an enterprise business-to-business integration (B2B). The online database is based on Java 2 Enterprise System (J2EE) with high-standard security features, which comply with data protection laws and the demands of a modern research platform. The ESID Online Database is accessible via the official website (http://www.esid.org/). Supplementary data are available at Bioinformatics online.

  13. Insertion algorithms for network model database management systems

    Science.gov (United States)

    Mamadolimov, Abdurashid; Khikmat, Saburov

    2017-12-01

    The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and a query comparison is expensive, the efficiency requirement for managing algorithms is to minimize the number of query comparisons. We consider the updating operation for network model database management systems. We develop a new sequential algorithm for the updating operation. We also suggest a distributed version of the algorithm.

  14. Extracting reaction networks from databases-opening Pandora's box.

    Science.gov (United States)

    Fearnley, Liam G; Davis, Melissa J; Ragan, Mark A; Nielsen, Lars K

    2014-11-01

    Large quantities of information describing the mechanisms of biological pathways continue to be collected in publicly available databases. At the same time, experiments have increased in scale, and biologists increasingly use pathways defined in online databases to interpret the results of experiments and generate hypotheses. Emerging computational techniques that exploit the rich biological information captured in reaction systems require formal standardized descriptions of pathways to extract these reaction networks and avoid the alternative: time-consuming and largely manual literature-based network reconstruction. Here, we systematically evaluate the effects of commonly used knowledge representations on the seemingly simple task of extracting a reaction network describing signal transduction from a pathway database. We show that this process is in fact surprisingly difficult, and the pathway representations adopted by various knowledge bases have dramatic consequences for reaction network extraction, connectivity, capture of pathway crosstalk and in the modelling of cell-cell interactions. Researchers constructing computational models built from automatically extracted reaction networks must therefore consider the issues we outline in this review to maximize the value of existing pathway knowledge. © The Author 2013. Published by Oxford University Press.

  15. SolCyc: a database hub at the Sol Genomics Network (SGN) for the manual curation of metabolic networks in Solanum and Nicotiana specific databases

    Science.gov (United States)

    Foerster, Hartmut; Bombarely, Aureliano; Battey, James N D; Sierro, Nicolas; Ivanov, Nikolai V; Mueller, Lukas A

    2018-01-01

    Abstract SolCyc is the entry portal to pathway/genome databases (PGDBs) for major species of the Solanaceae family hosted at the Sol Genomics Network. Currently, SolCyc comprises six organism-specific PGDBs for tomato, potato, pepper, petunia, tobacco and one Rubiaceae, coffee. The metabolic networks of those PGDBs have been computationally predicted by the PathoLogic component of the Pathway Tools software using the manually curated multi-domain database MetaCyc (http://www.metacyc.org/) as reference. SolCyc has been recently extended by taxon-specific databases, i.e. the family-specific SolanaCyc database, containing only curated data pertinent to species of the nightshade family, and NicotianaCyc, a genus-specific database that stores all relevant metabolic data of the Nicotiana genus. Through manual curation of the published literature, new metabolic pathways have been created in those databases, which are complemented by the continuously updated, relevant species-specific pathways from MetaCyc. At present, SolanaCyc comprises 199 pathways and 29 superpathways and NicotianaCyc accounts for 72 pathways and 13 superpathways. Curator-maintained, taxon-specific databases such as SolanaCyc and NicotianaCyc are characterized by an enrichment of data specific to these taxa and free of falsely predicted pathways. Both databases have been used to update recently created Nicotiana-specific databases for Nicotiana tabacum, Nicotiana benthamiana, Nicotiana sylvestris and Nicotiana tomentosiformis by propagating verifiable data into those PGDBs. In addition, in-depth curation of the pathways in N.tabacum has been carried out which resulted in the elimination of 156 pathways from the 569 pathways predicted by Pathway Tools. Together, in-depth curation of the predicted pathway network and the supplementation with curated data from taxon-specific databases has substantially improved the curation status of the species-specific N.tabacum PGDB. The implementation of this

  16. SoyFN: a knowledge database of soybean functional networks.

    Science.gov (United States)

    Xu, Yungang; Guo, Maozu; Liu, Xiaoyan; Wang, Chunyu; Liu, Yang

    2014-01-01

    Many databases for soybean genomic analysis have been built and made publicly available, but few of them contain knowledge specifically targeting the omics-level gene-gene, gene-microRNA (miRNA) and miRNA-miRNA interactions. Here, we present SoyFN, a knowledge database of soybean functional gene networks and miRNA functional networks. SoyFN provides user-friendly interfaces to retrieve, visualize, analyze and download the functional networks of soybean genes and miRNAs. In addition, it incorporates much information about KEGG pathways, gene ontology annotations and 3'-UTR sequences as well as many useful tools including SoySearch, ID mapping, Genome Browser, eFP Browser and promoter motif scan. SoyFN is a schema-free database that can be accessed as a Web service from any modern programming language using a simple Hypertext Transfer Protocol call. The Web site is implemented in Java, JavaScript, PHP, HTML and Apache, with all major browsers supported. We anticipate that this database will be useful for members of research communities both in soybean experimental science and bioinformatics. Database URL: http://nclab.hit.edu.cn/SoyFN.

  17. Research on Construction of Road Network Database Based on Video Retrieval Technology

    Directory of Open Access Journals (Sweden)

    Wang Fengling

    2017-01-01

    Full Text Available Based on the characteristics of video databases, the basic structure of the video database and several typical video data models, a segmentation-based multi-level data model is used to describe the landscape information video database, the network database model and the road network management database system. The detailed design and implementation of the landscape information management system are also described.

  18. The NASA Fireball Network Database

    Science.gov (United States)

    Moser, Danielle E.

    2011-01-01

    The NASA Meteoroid Environment Office (MEO) has been operating an automated video fireball network since late-2008. Since that time, over 1,700 multi-station fireballs have been observed. A database containing orbital data and trajectory information on all these events has recently been compiled and is currently being mined for information. Preliminary results are presented here.

  19. The research of network database security technology based on web service

    Science.gov (United States)

    Meng, Fanxing; Wen, Xiumei; Gao, Liting; Pang, Hui; Wang, Qinglin

    2013-03-01

    Database technology is one of the most widely applied computer technologies, and its security is becoming more and more important. This paper introduces database security and network database security levels, studies the security technology of network databases, analyzes the sub-key encryption algorithm in particular, and applies this algorithm successfully to a campus one-card system. The realization process of the encryption algorithm is discussed; this method is widely used as a reference in many fields, particularly in management information system security and e-commerce.

  20. Flexible network reconstruction from relational databases with Cytoscape and CytoSQL.

    Science.gov (United States)

    Laukens, Kris; Hollunder, Jens; Dang, Thanh Hai; De Jaeger, Geert; Kuiper, Martin; Witters, Erwin; Verschoren, Alain; Van Leemput, Koenraad

    2010-07-01

    Molecular interaction networks can be efficiently studied using network visualization software such as Cytoscape. The relevant nodes, edges and their attributes can be imported in Cytoscape in various file formats, or directly from external databases through specialized third party plugins. However, molecular data are often stored in relational databases with their own specific structure, for which dedicated plugins do not exist. Therefore, a more generic solution is presented. A new Cytoscape plugin 'CytoSQL' is developed to connect Cytoscape to any relational database. It allows to launch SQL ('Structured Query Language') queries from within Cytoscape, with the option to inject node or edge features of an existing network as SQL arguments, and to convert the retrieved data to Cytoscape network components. Supported by a set of case studies we demonstrate the flexibility and the power of the CytoSQL plugin in converting specific data subsets into meaningful network representations. CytoSQL offers a unified approach to let Cytoscape interact with relational databases. Thanks to the power of the SQL syntax, this tool can rapidly generate and enrich networks according to very complex criteria. The plugin is available at http://www.ptools.ua.ac.be/CytoSQL.
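
    The core idea described above, injecting node or edge features of an existing network as SQL arguments, can be sketched in a few lines. The example below uses Python's built-in sqlite3 module and an invented interactions table purely for illustration; it is not the CytoSQL plugin itself, only the parameter-injection pattern it describes.

```python
# Sketch: fetch edges for a set of network nodes by injecting their IDs
# into a parameterized SQL query (table and column names are hypothetical).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE interactions (source TEXT, target TEXT, score REAL)")
conn.executemany(
    "INSERT INTO interactions VALUES (?, ?, ?)",
    [("P53", "MDM2", 0.9), ("P53", "BAX", 0.7), ("AKT1", "MTOR", 0.8)],
)

selected_nodes = ["P53", "AKT1"]          # e.g. nodes currently in the canvas
placeholders = ",".join("?" * len(selected_nodes))
query = (
    f"SELECT source, target, score FROM interactions "
    f"WHERE source IN ({placeholders})"
)
for source, target, score in conn.execute(query, selected_nodes):
    print(source, "->", target, score)    # rows become new edges/attributes
```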

  1. Five years database of landslides and floods affecting Swiss transportation networks

    Science.gov (United States)

    Voumard, Jérémie; Derron, Marc-Henri; Jaboyedoff, Michel

    2017-04-01

    Switzerland is a country threatened by many natural hazards. Many events occur in the built environment, affecting infrastructure, buildings or transportation networks and occasionally producing expensive damage. This is the reason why large landslides are generally well studied and monitored in Switzerland to reduce financial and human risks. However, we have noticed a lack of data on the small events that have impacted roads and railways in recent years. We have therefore collected in a database all reported natural hazard events that have affected the Swiss transportation networks since 2012. More than 800 road and railway closures have been recorded in the five years from 2012 to 2016. These events are classified into six classes: earth flow, debris flow, rockfall, flood, avalanche and others. Data come from Swiss online press articles sorted by Google Alerts. The search is based on more than thirty keywords in three languages (Italian, French, German). After verifying that an article indeed relates to an event that affected a road or railway track, the event is studied in detail. We finally obtain information on about sixty attributes per event, covering the event date, event type, event localization and meteorological conditions, as well as impacts and damage to the track and human damage. From this database, many trends over the five years of data collection can be outlined, in particular the spatial and temporal distributions of the events, as well as their consequences in terms of traffic (closure duration, deviation, etc.). Even if the database is imperfect (because of the way it was built and the short time period considered), it highlights the non-negligible impact of small natural hazard events on roads and railways in Switzerland at a national level. This database helps to better understand and quantify these events and to better integrate them into risk assessment.

  2. PL/SQL and Bind Variable: the two ways to increase the efficiency of Network Databases

    Directory of Open Access Journals (Sweden)

    Hitesh KUMAR SHARMA

    2011-12-01

    Full Text Available Modern data analysis applications are driven by network databases. They are pushing traditional database and data warehousing technologies beyond their limits due to massively increasing data volumes and demands for low latency. There are three major challenges in working with network databases: interoperability, due to heterogeneous data repositories; proactivity, due to the autonomy of data sources; and high efficiency, to meet application demand. This paper provides two ways to meet the third challenge. This goal can be achieved by the network database administrator with the use of PL/SQL blocks and bind variables. The paper explains the effect of PL/SQL blocks and bind variables on network database efficiency in meeting the demands of modern data analysis applications.
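
    To make the bind-variable argument concrete, here is a minimal sketch using Python's sqlite3 module (any DB-API driver would do); the table name and data are invented. Binding values as parameters lets the database reuse one prepared statement instead of parsing a new SQL text for every value, which is the efficiency effect the paper attributes to bind variables.

```python
# Sketch: literal SQL vs. bind variables (illustrative table and data).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i * 1.5) for i in range(1000)])

# Hard-coded literals: every statement has a different SQL text,
# so the engine must parse and plan each one separately.
for i in range(100):
    conn.execute(f"SELECT amount FROM orders WHERE id = {i}").fetchone()

# Bind variable: one SQL text executed many times, so the statement can be
# parsed once and reused, reducing parse overhead and plan-cache churn.
for i in range(100):
    conn.execute("SELECT amount FROM orders WHERE id = ?", (i,)).fetchone()
```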

  3. A Bayesian Network Schema for Lessening Database Inference

    National Research Council Canada - National Science Library

    Chang, LiWu; Moskowitz, Ira S

    2001-01-01

    .... The authors introduce a formal schema for database inference analysis, based upon a Bayesian network structure, which identifies critical parameters involved in the inference problem and represents...

  4. Database Software Selection for the Egyptian National STI Network.

    Science.gov (United States)

    Slamecka, Vladimir

    The evaluation and selection of information/data management system software for the Egyptian National Scientific and Technical (STI) Network are described. An overview of the state-of-the-art of database technology elaborates on the differences between information retrieval and database management systems (DBMS). The desirable characteristics of…

  5. The Network Configuration of an Object Relational Database Management System

    Science.gov (United States)

    Diaz, Philip; Harris, W. C.

    2000-01-01

    The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.

  6. Computer application for database management and networking of service radio physics

    International Nuclear Information System (INIS)

    Ferrando Sanchez, A.; Cabello Murillo, E.; Diaz Fuentes, R.; Castro Novais, J.; Clemente Gutierrez, F.; Casa de Juan, M. A. de la; Adaimi Hernandez, P.

    2011-01-01

    Databases in quality control prove to be a powerful tool for recording, management and statistical process control. Developed in a Windows environment under Access (Microsoft Office), our service implements this philosophy on the centre's computer network. A computer acting as the server provides the database to the treatment units to record daily quality control measurements and incidents. To avoid common problems such as shortcuts that stop working after data migration, the possible use of duplicates, and erroneous data loss caused by errors in network connections, we proceeded to manage the connections and access to the databases, easing maintenance and use for all service personnel.

  7. Handling of network and database instabilities in CORAL

    International Nuclear Information System (INIS)

    Trentadue, R; Valassi, A; Kalkhof, A

    2012-01-01

    The CORAL software is widely used by the LHC experiments for storing and accessing data using relational database technologies. CORAL provides a C++ abstraction layer that supports data persistency for several back-ends and deployment models, direct client access to Oracle servers being one of the most important use cases. Since 2010, several problems have been reported by the LHC experiments in their use of Oracle through CORAL, involving application errors, hangs or crashes after the network or the database servers became temporarily unavailable. CORAL already provided some level of handling of these instabilities, which are due to external causes and cannot be avoided, but this proved to be insufficient in some cases and to be itself the cause of other problems, such as the hangs and crashes mentioned before, in other cases. As a consequence, a major redesign of the CORAL plugins has been implemented, with the aim of making the software more robust against these database and network glitches. The new implementation ensures that CORAL automatically reconnects to Oracle databases in a transparent way whenever possible and gently terminates the application when this is not possible. Internally, this is done by resetting all relevant parameters of the underlying back-end technology (OCI, the Oracle Call Interface). This presentation reports on the status of this work at the time of the CHEP2012 conference, covering the design and implementation of these new features and the outlook for future developments in this area.

  8. A Database of Tornado Events as Perceived by the USArray Transportable Array Network

    Science.gov (United States)

    Tytell, J. E.; Vernon, F.; Reyes, J. C.

    2015-12-01

    Over the course of the deployment of EarthScope's USArray Transportable Array (TA) network, there have been numerous tornado events within the changing footprint of the network. The Array Network Facility, based in San Diego, California, has compiled a database of these tornado events based on data provided by the NOAA Storm Prediction Center (SPC). The SPC data itself consists of parameters such as start-end point track data for each event, maximum EF intensities, and maximum track widths. Our database is Antelope-driven and combines these data from the SPC with detailed station information from the TA network. We are now able to list all available TA stations during any specific tornado event date and also provide a single calculated "nearest" TA station per individual tornado event. We aim to provide this database as a starting resource for those with an interest in investigating tornado signatures within surface pressure and seismic response data. On a larger scale, the database may be of particular interest to the infrasound research community.
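
    The "nearest TA station per tornado event" attribute mentioned above amounts to a great-circle distance search. The sketch below (made-up coordinates and station codes, plain haversine formula) shows one way such a field could be computed; it is not the Array Network Facility's actual code.

```python
# Sketch: pick the nearest station to a tornado track start point
# using the haversine great-circle distance (coordinates are made up).
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical station coordinates (lat, lon) and tornado start point.
stations = {"TA.T25A": (35.2, -97.4), "TA.U25A": (34.8, -98.1), "TA.V26A": (36.0, -96.9)}
event_start = (35.1, -97.6)

nearest = min(stations, key=lambda s: haversine_km(*event_start, *stations[s]))
print(nearest, round(haversine_km(*event_start, *stations[nearest]), 1), "km")
```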

  9. U.S. LCI Database Project--Final Phase I Report

    Energy Technology Data Exchange (ETDEWEB)

    2003-08-01

    This Phase I final report reviews the process and provides a plan for the execution of subsequent phases of the database project, including recommended data development priorities and a preliminary cost estimate. The ultimate goal of the project is to develop publicly available LCI Data modules for commonly used materials, products, and processes.

  10. Content-based organization of the information space in multi-database networks

    NARCIS (Netherlands)

    Papazoglou, M.; Milliner, S.

    1998-01-01

    Abstract. Rapid growth in the volume of network-available data, complexity, diversity and terminological fluctuations, at different data sources, render network-accessible information increasingly difficult to achieve. The situation is particularly cumbersome for users of multi-database systems who

  11. CMRegNet-An interspecies reference database for corynebacterial and mycobacterial regulatory networks

    DEFF Research Database (Denmark)

    Abreu, Vinicius A C; Almeida, Sintia; Tiwari, Sandeep

    2015-01-01

    ...gene regulatory network can lead to various practical applications, creating a greater understanding of how organisms control their cellular behavior. DESCRIPTION: In this work, we present a new database, CMRegNet, for the gene regulatory networks of Corynebacterium glutamicum ATCC 13032... This makes CMRegNet to date the most comprehensive database of regulatory interactions of CMNR bacteria. The content of CMRegNet is publicly available online via a web interface found at http://lgcm.icb.ufmg.br/cmregnet.

  12. Research on Big Data Attribute Selection Method in Submarine Optical Fiber Network Fault Diagnosis Database

    Directory of Open Access Journals (Sweden)

    Chen Ganlang

    2017-11-01

    Full Text Available At present, in the fault diagnosis database of submarine optical fiber networks, the attribute selection of big data is completed by detecting the attributes of the data, so the accuracy of big data attribute selection cannot be guaranteed. In this paper, a big data attribute selection method based on support vector machines (SVM) for the fault diagnosis database of submarine optical fiber networks is proposed. Big data in the optical fiber network fault diagnosis database are mined and their attribute weights calculated; attribute classification is completed according to attribute weight, so as to complete the attribute selection of big data. Experimental results prove that the proposed method can improve the accuracy of big data attribute selection in the fault diagnosis database of submarine optical fiber networks and has high practical value.
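
    As a rough illustration of selecting attributes by SVM-derived weights, the sketch below uses scikit-learn on synthetic data; the feature count, threshold and weight-ranking procedure are assumptions for illustration, not the paper's exact method.

```python
# Sketch: rank attributes by the magnitude of linear-SVM weights
# and keep the strongest ones (synthetic data, arbitrary cut-off).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                  # 6 candidate attributes
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # only attributes 0 and 2 matter

clf = LinearSVC(C=1.0, dual=False).fit(X, y)
weights = np.abs(clf.coef_).ravel()            # one weight per attribute
ranking = np.argsort(weights)[::-1]

selected = ranking[:2]                         # keep the two highest-weight attributes
print("attribute weights:", np.round(weights, 2))
print("selected attribute indices:", selected)
```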

  13. A Bayesian network approach to the database search problem in criminal proceedings

    Science.gov (United States)

    2012-01-01

    Background The ‘database search problem’, that is, the strengthening of a case - in terms of probative value - against an individual who is found as a result of a database search, has been approached during the last two decades with substantial mathematical analyses, accompanied by lively debate and centrally opposing conclusions. This represents a challenging obstacle in teaching but also hinders a balanced and coherent discussion of the topic within the wider scientific and legal community. This paper revisits and tracks the associated mathematical analyses in terms of Bayesian networks. Their derivation and discussion for capturing probabilistic arguments that explain the database search problem are outlined in detail. The resulting Bayesian networks offer a distinct view on the main debated issues, along with further clarity. Methods As a general framework for representing and analyzing formal arguments in probabilistic reasoning about uncertain target propositions (that is, whether or not a given individual is the source of a crime stain), this paper relies on graphical probability models, in particular, Bayesian networks. This graphical probability modeling approach is used to capture, within a single model, a series of key variables, such as the number of individuals in a database, the size of the population of potential crime stain sources, and the rarity of the corresponding analytical characteristics in a relevant population. Results This paper demonstrates the feasibility of deriving Bayesian network structures for analyzing, representing, and tracking the database search problem. The output of the proposed models can be shown to agree with existing but exclusively formulaic approaches. Conclusions The proposed Bayesian networks allow one to capture and analyze the currently most well-supported but reputedly counter-intuitive and difficult solution to the database search problem in a way that goes beyond the traditional, purely formulaic expressions
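
    A stripped-down numerical version of the database search problem can make the probabilistic argument tangible. The sketch below encodes, under simplifying textbook assumptions (uniform prior over a population of size N, random match probability gamma, a database of size n containing one match with all other database members excluded), the posterior probability that the matching individual is the source; it is a plain Bayes calculation rather than a full Bayesian network, and the numbers are illustrative only.

```python
# Sketch: posterior that the single database match is the crime-stain source,
# under simplifying assumptions (uniform prior, independent match events).
def posterior_source_given_unique_match(N, n, gamma):
    """
    N     : size of the population of potential sources
    n     : size of the searched database (one match, n - 1 exclusions)
    gamma : random match probability of the observed characteristics
    """
    # If the matching person IS the source, the evidence is certain.
    like_source = 1.0
    # If the true source is one of the N - n people outside the database,
    # the matching person matches only by chance (probability gamma).
    like_not_source = gamma
    prior_source = 1.0 / N
    prior_outside = (N - n) / N
    num = like_source * prior_source
    return num / (num + like_not_source * prior_outside)

# Example: population of 1,000,000, database of 10,000, match probability 1e-6.
print(posterior_source_given_unique_match(1_000_000, 10_000, 1e-6))  # ~0.50
```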

  14. STITCH 2: an interaction network database for small molecules and proteins

    DEFF Research Database (Denmark)

    Kuhn, Michael; Szklarczyk, Damian; Franceschini, Andrea

    2010-01-01

    Over the last years, the publicly available knowledge on interactions between small molecules and proteins has been steadily increasing. To create a network of interactions, STITCH aims to integrate the data dispersed over the literature and various databases of biological pathways, drug-target relationships and binding affinities. In STITCH 2, the number of relevant interactions is increased by incorporation of BindingDB, PharmGKB and the Comparative Toxicogenomics Database. The resulting network can be explored interactively or used as the basis for large-scale analyses. To facilitate links to other chemical databases, we adopt InChIKeys that allow identification of chemicals with a short, checksum-like string. STITCH 2.0 connects proteins from 630 organisms to over 74,000 different chemicals, including 2200 drugs. STITCH can be accessed at http://stitch.embl.de/.

  15. A database on electric vehicle use in Sweden. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Fridstrand, Niklas [Lund Univ. (Sweden). Dept. of Industrial Electrical Engineering and Automation

    2000-05-01

    The Department of Industrial Electrical Engineering and Automation (IEA) at the Lund Institute of Technology (LTH), has taken responsibility for developing and maintaining a database on electric and hybrid road vehicles in Sweden. The Swedish Transport and Communications Research Board, (KFB) initiated the development of this database. Information is collected from three major cities in Sweden: Malmoe, Gothenburg and Stockholm, as well as smaller cities such as Skellefteaa and Haernoesand in northern Sweden. This final report summarises the experience gained during the development and maintenance of the database from February 1996 to December 1999. Our aim was to construct a well-functioning database for the evaluation of electric and hybrid road vehicles in Sweden. The database contains detailed information on several years' use of electric vehicles (EVs) in Sweden (for example, 220 million driving records). Two data acquisition systems were used, one less and one more complex with respect to the number of quantities logged. Unfortunately, data collection was not complete, due to malfunctioning of the more complex system, and due to human factors for the less complex system.

  16. Exploring the Ligand-Protein Networks in Traditional Chinese Medicine: Current Databases, Methods, and Applications

    Directory of Open Access Journals (Sweden)

    Mingzhu Zhao

    2013-01-01

    Full Text Available Traditional Chinese medicine (TCM), which has thousands of years of clinical application in China and other Asian countries, is the pioneer of "multicomponent-multitarget" and network pharmacology. Although there is no doubt about its efficacy, it is difficult to elucidate a convincing underlying mechanism of TCM due to its complex composition and unclear pharmacology. The use of ligand-protein networks has been gaining significant value in the history of drug discovery, while its application in TCM is still in its early stage. This paper first surveys TCM databases for virtual screening that have been greatly expanded in size and data diversity in recent years. On that basis, different screening methods and strategies for identifying active ingredients and targets of TCM are outlined based on the amount of network information available, both on the side of ligand bioactivity and that of protein structures. Furthermore, applications of successful in silico target identification attempts are discussed in detail, along with experiments in exploring the ligand-protein networks of TCM. Finally, it is concluded that the prospective application of ligand-protein networks can be used not only to predict the protein targets of a small molecule, but also to explore the mode of action of TCM.

  17. New model for distributed multimedia databases and its application to networking of museums

    Science.gov (United States)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1998-02-01

    This paper proposes a new distributed multimedia database system in which databases storing MPEG-2 videos and/or super high definition images are connected together through B-ISDNs, and also refers to an example of the networking of museums on the basis of the proposed database system. The proposed database system introduces a new concept, the 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of image databases as one logical database. A user terminal issues a content retrieval request to the retrieval manager located nearest to that terminal on the network. The retrieved contents are then sent directly through the B-ISDNs to the user terminal from the server that stores the designated contents. In this case, the designated logical database dynamically generates the best combination of retrieval parameters, such as the data transfer path, on the basis of the system environment, either directly or from stored data. The generated retrieval parameters are then applied to select the most suitable data transfer path on the network, so that the best combination of these parameters fits the distributed multimedia database system.

  18. An online database for informing ecological network models: http://kelpforest.ucsc.edu.

    Science.gov (United States)

    Beas-Luna, Rodrigo; Novak, Mark; Carr, Mark H; Tinker, Martin T; Black, August; Caselle, Jennifer E; Hoban, Michael; Malone, Dan; Iles, Alison

    2014-01-01

    Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel, yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available and located at the following link (https://github.com/kelpforest-cameo/databaseui).

  19. An XML-Based Networking Method for Connecting Distributed Anthropometric Databases

    Directory of Open Access Journals (Sweden)

    H Cheng

    2007-03-01

    Full Text Available Anthropometric data are used by numerous types of organizations for health evaluation, ergonomics, apparel sizing, fitness training, and many other applications. Data have been collected and stored in electronic databases since at least the 1940s. These databases are owned by many organizations around the world. In addition, the anthropometric studies stored in these databases often employ different standards, terminology, procedures, or measurement sets. To promote the use and sharing of these databases, the World Engineering Anthropometry Resources (WEAR) group was formed and tasked with the integration and publishing of member resources. It is easy to see that organizing worldwide anthropometric data into a single database architecture could be a daunting and expensive undertaking. The challenges of WEAR integration lie mainly in the areas of distributed and disparate data, different standards and formats, independent memberships, and limited development resources. Fortunately, XML schema and web services provide an alternative method for networking databases, referred to as the Loosely Coupled WEAR Integration. A standard XML schema can be defined and used as a type of Rosetta stone to translate the anthropometric data into a universal format, and a web services system can be set up to link the databases to one another. In this way, the originators of the data can keep their data locally along with their own data management system and user interface, but their data can be searched and accessed as part of the larger data network, and even combined with the data of others. This paper will identify requirements for WEAR integration, review XML as the universal format, review different integration approaches, and propose a hybrid web services/data mart solution.

  20. Experience and Lessons learnt from running High Availability Databases on Network Attached Storage

    CERN Document Server

    Guijarro, Manuel

    2008-01-01

    The Database and Engineering Services Group of CERN's Information Technology Department supplies the Oracle Central Database services used in many activities at CERN. In order to provide High Availability and ease management for those services, a NAS (Network Attached Storage) based infrastructure has been setup. It runs several instances of the Oracle RAC (Real Application Cluster) using NFS (Network File System) as shared disk space for RAC purposes and Data hosting. It is composed of two private LANs (Local Area Network), one to provide access to the NAS filers and a second to implement the Oracle RAC private interconnect, both using Network Bonding. NAS filers are configured in partnership to prevent having single points of failure and to provide automatic NAS filer fail-over.

  1. Experience and lessons learnt from running high availability databases on network attached storage

    International Nuclear Information System (INIS)

    Guijarro, M; Gaspar, R

    2008-01-01

    The Database and Engineering Services Group of CERN's Information Technology Department supplies the Oracle Central Database services used in many activities at CERN. In order to provide High Availability and ease management for those services, a NAS (Network Attached Storage) based infrastructure has been setup. It runs several instances of the Oracle RAC (Real Application Cluster) using NFS (Network File System) as shared disk space for RAC purposes and Data hosting. It is composed of two private LANs (Local Area Network), one to provide access to the NAS filers and a second to implement the Oracle RAC private interconnect, both using Network Bonding. NAS filers are configured in partnership to prevent having single points of failure and to provide automatic NAS filer fail-over

  2. ODIN. Online Database Information Network: ODIN Policy & Procedure Manual.

    Science.gov (United States)

    Townley, Charles T.; And Others

    Policies and procedures are outlined for the Online Database Information Network (ODIN), a cooperative of libraries in south-central Pennsylvania, which was organized to improve library services through technology. The first section covers organization and goals, members, and responsibilities of the administrative council and libraries. Patrons…

  3. The shortest path algorithm performance comparison in graph and relational database on a transportation network

    Directory of Open Access Journals (Sweden)

    Mario Miler

    2014-02-01

    Full Text Available In the field of geoinformation and transportation science, the shortest path is calculated on graph data mostly found in road and transportation networks. This data is often stored in various database systems. Many applications dealing with transportation networks require calculation of the shortest path. The objective of this research is to compare the performance of Dijkstra shortest path calculation in PostgreSQL (with pgRouting) and the Neo4j graph database, for the purpose of determining whether there is any difference regarding the speed of the calculation. Benchmarking was done on commodity hardware using the OpenStreetMap road network. The first assumption is that the Neo4j graph database would be well suited for shortest path calculation on transportation networks, but this does not come without cost. Memory proved to be an issue in the Neo4j setup when dealing with larger transportation networks.
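
    For readers unfamiliar with the algorithm being benchmarked, a compact textbook Dijkstra implementation is sketched below in plain Python (the adjacency dictionary and weights are invented); both database setups compared above carry out essentially this computation over their own storage layers.

```python
# Sketch: textbook Dijkstra over a small, made-up road graph.
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbour, weight), ...]}; returns shortest distances."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip it
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

roads = {
    "A": [("B", 4.0), ("C", 2.0)],
    "B": [("D", 5.0)],
    "C": [("B", 1.0), ("D", 8.0)],
    "D": [],
}
print(dijkstra(roads, "A"))   # {'A': 0.0, 'B': 3.0, 'C': 2.0, 'D': 8.0}
```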

  4. PharmDB-K: Integrated Bio-Pharmacological Network Database for Traditional Korean Medicine.

    Directory of Open Access Journals (Sweden)

    Ji-Hyun Lee

    Full Text Available Despite the growing attention given to Traditional Medicine (TM) worldwide, there is no well-known, publicly available, integrated bio-pharmacological Traditional Korean Medicine (TKM) database for researchers in drug discovery. In this study, we have constructed PharmDB-K, which offers comprehensive information relating to TKM-associated drugs (compound, disease indication, and protein relationships). To explore the underlying molecular interactions of TKM, we integrated fourteen different databases, six pharmacopoeias, and the literature, established a massive bio-pharmacological network for TKM, and experimentally validated some cases predicted from the PharmDB-K analyses. Currently, PharmDB-K contains information about 262 TKMs, 7,815 drugs, 3,721 diseases, 32,373 proteins, and 1,887 side effects. One of the unique sets of information in PharmDB-K includes 400 indicator compounds used for standardization of herbal medicine. Furthermore, we are operating PharmDB-K via phExplorer (a network visualization software) and BioMart (a data federation framework) for convenient search and analysis of the TKM network. Database URL: http://pharmdb-k.org, http://biomart.i-pharm.org.

  5. [Study of algorithms to identify schizophrenia in the SNIIRAM database conducted by the REDSIAM network].

    Science.gov (United States)

    Quantin, C; Collin, C; Frérot, M; Besson, J; Cottenet, J; Corneloup, M; Soudry-Faure, A; Mariet, A-S; Roussot, A

    2017-10-01

    The aim of the REDSIAM network is to foster communication between users of French medico-administrative databases and to validate and promote analysis methods suitable for the data. Within this network, the working group "Mental and behavioral disorders" took an interest in algorithms to identify adult schizophrenia in the SNIIRAM database and inventoried identification criteria for patients with schizophrenia in these databases. The methodology was based on interviews with nine experts in schizophrenia concerning the procedures they use to identify patients with schizophrenia disorders in databases. The interviews were based on a questionnaire and conducted by telephone. The synthesis of the interviews showed that the SNIIRAM contains various tables which allow coders to identify patients suffering from schizophrenia: chronic disease status, drugs and hospitalizations. Taken separately, these criteria were not sufficient to recognize patients with schizophrenia, an algorithm should be based on all of them. Apparently, only one-third of people living with schizophrenia benefit from the longstanding disease status. Not all patients are hospitalized, and coding for diagnoses at the hospitalization, notably for short stays in medicine, surgery or obstetrics departments, is not exhaustive. As for treatment with antipsychotics, it is not specific enough as such treatments are also prescribed to patients with bipolar disorders, or even other disorders. It seems appropriate to combine these complementary criteria, while keeping in mind out-patient care (every year 80,000 patients are seen exclusively in an outpatient setting), even if these data are difficult to link with other information. Finally, the experts made three propositions for selection algorithms of patients with schizophrenia. Patients with schizophrenia can be relatively accurately identified using SNIIRAM data. Different combinations of the selected criteria must be used depending on the objectives and

  6. Virtualized Network Control. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Ghani, Nasir [Univ. of New Mexico, Albuquerque, NM (United States)

    2013-02-01

    This document is the final report for the Virtualized Network Control (VNC) project, which was funded by the United States Department of Energy (DOE) Office of Science. This project was also informally referred to as Advanced Resource Computation for Hybrid Service and TOpology NEtworks (ARCHSTONE). This report provides a summary of the project's activities, tasks, deliverables, and accomplishments. It also provides a summary of the documents, software, and presentations generated as part of the project's activities. Namely, the Appendix contains an archive of the deliverables, documents, and presentations generated as part of this project.

  7. ProteoLens: a visual analytic tool for multi-scale database-driven biological network data mining.

    Science.gov (United States)

    Huan, Tianxiao; Sivachenko, Andrey Y; Harrison, Scott H; Chen, Jake Y

    2008-08-12

    New systems biology studies require researchers to understand how interplay among myriads of biomolecular entities is orchestrated in order to achieve high-level cellular and physiological functions. Many software tools have been developed in the past decade to help researchers visually navigate large networks of biomolecular interactions with built-in template-based query capabilities. To further advance researchers' ability to interrogate global physiological states of cells through multi-scale visual network explorations, new visualization software tools still need to be developed to empower the analysis. A robust visual data analysis platform driven by database management systems to perform bi-directional data processing-to-visualizations with declarative querying capabilities is needed. We developed ProteoLens as a JAVA-based visual analytic software tool for creating, annotating and exploring multi-scale biological networks. It supports direct database connectivity to either Oracle or PostgreSQL database tables/views, on which SQL statements using both Data Definition Languages (DDL) and Data Manipulation languages (DML) may be specified. The robust query languages embedded directly within the visualization software help users to bring their network data into a visualization context for annotation and exploration. ProteoLens supports graph/network represented data in standard Graph Modeling Language (GML) formats, and this enables interoperation with a wide range of other visual layout tools. The architectural design of ProteoLens enables the de-coupling of complex network data visualization tasks into two distinct phases: 1) creating network data association rules, which are mapping rules between network node IDs or edge IDs and data attributes such as functional annotations, expression levels, scores, synonyms, descriptions etc; 2) applying network data association rules to build the network and perform the visual annotation of graph nodes and edges

  8. Construction and evaluation of yeast expression networks by database-guided predictions

    Directory of Open Access Journals (Sweden)

    Katharina Papsdorf

    2016-05-01

    Full Text Available DNA-Microarrays are powerful tools to obtain expression data on the genome-wide scale. We performed microarray experiments to elucidate the transcriptional networks, which are up- or down-regulated in response to the expression of toxic polyglutamine proteins in yeast. Such experiments initially generate hit lists containing differentially expressed genes. To look into transcriptional responses, we constructed networks from these genes. We therefore developed an algorithm, which is capable of dealing with very small numbers of microarrays by clustering the hits based on co-regulatory relationships obtained from the SPELL database. Here, we evaluate this algorithm according to several criteria and further develop its statistical capabilities. Initially, we define how the number of SPELL-derived co-regulated genes and the number of input hits influences the quality of the networks. We then show the ability of our networks to accurately predict further differentially expressed genes. Including these predicted genes into the networks improves the network quality and allows quantifying the predictive strength of the networks based on a newly implemented scoring method. We find that this approach is useful for our own experimental data sets and also for many other data sets which we tested from the SPELL microarray database. Furthermore, the clusters obtained by the described algorithm greatly improve the assignment to biological processes and transcription factors for the individual clusters. Thus, the described clustering approach, which will be available through the ClusterEx web interface, and the evaluation parameters derived from it represent valuable tools for the fast and informative analysis of yeast microarray data.

  9. Causal biological network database: a comprehensive platform of causal biological network models focused on the pulmonary and vascular systems.

    Science.gov (United States)

    Boué, Stéphanie; Talikka, Marja; Westra, Jurjen Willem; Hayes, William; Di Fabio, Anselmo; Park, Jennifer; Schlage, Walter K; Sewer, Alain; Fields, Brett; Ansari, Sam; Martin, Florian; Veljkovic, Emilija; Kenney, Renee; Peitsch, Manuel C; Hoeng, Julia

    2015-01-01

    With the wealth of publications and data available, powerful and transparent computational approaches are required to represent measured data and scientific knowledge in a computable and searchable format. We developed a set of biological network models, scripted in the Biological Expression Language, that reflect causal signaling pathways across a wide range of biological processes, including cell fate, cell stress, cell proliferation, inflammation, tissue repair and angiogenesis in the pulmonary and cardiovascular context. This comprehensive collection of networks is now freely available to the scientific community in a centralized web-based repository, the Causal Biological Network database, which is composed of over 120 manually curated and well annotated biological network models and can be accessed at http://causalbionet.com. The website accesses a MongoDB, which stores all versions of the networks as JSON objects and allows users to search for genes, proteins, biological processes, small molecules and keywords in the network descriptions to retrieve biological networks of interest. The content of the networks can be visualized and browsed. Nodes and edges can be filtered and all supporting evidence for the edges can be browsed and is linked to the original articles in PubMed. Moreover, networks may be downloaded for further visualization and evaluation. Database URL: http://causalbionet.com © The Author(s) 2015. Published by Oxford University Press.
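
    Because each network is stored as a JSON document, a gene-centric lookup of edges and their supporting PubMed evidence can be sketched as follows; the document fields used here are assumptions rather than the actual causalbionet.com schema, and a plain Python scan of one toy document stands in for the corresponding MongoDB query.

        import json

        # A toy network document in the spirit of the JSON objects described above;
        # the field names ("nodes", "edges", "evidence", "pmid") are assumptions.
        doc = json.loads("""
        {
          "name": "toy inflammation network",
          "nodes": [{"id": "TNF"}, {"id": "NFKB1"}, {"id": "IL6"}],
          "edges": [
            {"source": "TNF",   "relation": "increases", "target": "NFKB1",
             "evidence": [{"pmid": "000001"}]},
            {"source": "NFKB1", "relation": "increases", "target": "IL6",
             "evidence": [{"pmid": "000002"}]}
          ]
        }
        """)

        def edges_for(network, gene):
            """Return every edge touching a gene, with its supporting PubMed IDs."""
            return [(e["source"], e["relation"], e["target"],
                     [ev["pmid"] for ev in e["evidence"]])
                    for e in network["edges"]
                    if gene in (e["source"], e["target"])]

        for edge in edges_for(doc, "NFKB1"):
            print(edge)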

  10. Distributed Database Semantic Integration of Wireless Sensor Network to Access the Environmental Monitoring System

    Directory of Open Access Journals (Sweden)

    Ubaidillah Umar

    2018-06-01

    Full Text Available A wireless sensor network (WSN works continuously to gather information from sensors that generate large volumes of data to be handled and processed by applications. Current efforts in sensor networks focus more on networking and development services for a variety of applications and less on processing and integrating data from heterogeneous sensors. There is an increased need for information to become shareable across different sensors, database platforms, and applications that are not easily implemented in traditional database systems. To solve the issue of these large amounts of data from different servers and database platforms (including sensor data, a semantic sensor web service platform is needed to enable a machine to extract meaningful information from the sensor’s raw data. This additionally helps to minimize and simplify data processing and to deduce new information from existing data. This paper implements a semantic web data platform (SWDP to manage the distribution of data sensors based on the semantic database system. SWDP uses sensors for temperature, humidity, carbon monoxide, carbon dioxide, luminosity, and noise. The system uses the Sesame semantic web database for data processing and a WSN to distribute, minimize, and simplify information processing. The sensor nodes are distributed in different places to collect sensor data. The SWDP generates context information in the form of a resource description framework. The experiment results demonstrate that the SWDP is more efficient than the traditional database system in terms of memory usage and processing time.
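
    The role of the semantic layer can be sketched by expressing two sensor readings as RDF triples and querying them with SPARQL; the namespace and property names are illustrative assumptions, and the rdflib library stands in here for the Sesame store used by the SWDP.

        from rdflib import Graph, Namespace, Literal
        from rdflib.namespace import XSD

        # Illustrative vocabulary; not the SWDP ontology.
        SEN = Namespace("http://example.org/sensor#")
        g = Graph()
        g.bind("sen", SEN)

        # One temperature reading and one carbon monoxide reading as RDF triples.
        g.add((SEN.node1, SEN.measures, SEN.temperature))
        g.add((SEN.node1, SEN.value, Literal(28.5, datatype=XSD.float)))
        g.add((SEN.node2, SEN.measures, SEN.carbonMonoxide))
        g.add((SEN.node2, SEN.value, Literal(3.1, datatype=XSD.float)))

        # SPARQL makes the heterogeneous sensor data queryable in one place.
        results = g.query("""
            SELECT ?node ?value WHERE {
                ?node sen:measures sen:temperature .
                ?node sen:value ?value .
            }""", initNs={"sen": SEN})

        for node, value in results:
            print(node, float(value))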

  11. The Fragment Network: A Chemistry Recommendation Engine Built Using a Graph Database.

    Science.gov (United States)

    Hall, Richard J; Murray, Christopher W; Verdonk, Marcel L

    2017-07-27

    The hit validation stage of a fragment-based drug discovery campaign involves probing the SAR around one or more fragment hits. This often requires a search for similar compounds in a corporate collection or from commercial suppliers. The Fragment Network is a graph database that allows a user to efficiently search chemical space around a compound of interest. The result set is chemically intuitive, naturally grouped by substitution pattern and meaningfully sorted according to the number of observations of each transformation in medicinal chemistry databases. This paper describes the algorithms used to construct and search the Fragment Network and provides examples of how it may be used in a drug discovery context.

  12. ISOE: an international occupational exposure database and communications network for dose optimisation

    International Nuclear Information System (INIS)

    Robinson, I.F.; Lazo, E.

    1995-01-01

    The Information System on Occupational Exposure (ISOE) was launched by the Nuclear Energy Agency (NEA) of the Organisation for Economic Co-operation and Development (OECD) on 1 January 1992 to facilitate the communication of dosimetry and ALARA implementation data among nuclear facilities around the world. Members of ISOE include 51 utilities from 17 countries and regulators from 11 countries, with four regional technical centres administering the system and a Steering Group which manages the work. ISOE includes three databases and a communications network at several levels. The three databases NEA1, NEA2 and NEA3 include varying levels of details, with NEA3 being the most detailed giving task and site specific ALARA practices and experiences. Utility membership of ISOE gives full access to the databases whereas regulators have more limited access. This paper reviews the current status of participation, describes the three databases and the communications network. Some dose data showing trends in particular countries are presented as well as dose data relating to operation cycle length and outage length. The advantages of membership are described, and it is concluded that ISOE holds the potential for both dose and cost savings. (author)

  13. LmSmdB: an integrated database for metabolic and gene regulatory network in Leishmania major and Schistosoma mansoni

    Directory of Open Access Journals (Sweden)

    Priyanka Patel

    2016-03-01

    Full Text Available A database that integrates all the information required for biological processing needs to be stored on one platform. We have attempted to create one such integrated database that can be a one-stop shop for the essential features required to fetch valuable results. LmSmdB (L. major and S. mansoni database) is an integrated database that accounts for the biological networks and regulatory pathways computationally determined by integrating knowledge of the genome sequences of the mentioned organisms. It is the first database of its kind to show, together with the network design, the simulation pattern of the products. This database intends to create a comprehensive canopy for the regulation of lipid metabolism reactions in the parasite by integrating the transcription factors, regulatory genes and the protein products controlled by the transcription factors, and hence operating the metabolism at the genetic level. Keywords: L. major, S. mansoni, Regulatory networks, Transcription factors, Database

  14. Final status of the salt repository project waste package program experimental database

    International Nuclear Information System (INIS)

    Thornton, B.M.; Reimus, P.W.

    1988-03-01

    This report describes the final status of the Salt Repository Project Waste Package Program Experimental Database. The data base serves as a clearinghouse for all data collected within the Waste Package Program (WPP) and its predecessor programs at Pacific Northwest Laboratory (PNL). The database was maintained using RS/1 database management software. Documented assurance that the entries in the database were consistent with experimental records was provided by having each experimentalist inspect the entries and signify that they were in agreement with the records. The inspection and signoff were done per PNL technical procedures. Data for which it was impossible to obtain the experimentalist's inspection and signature were segregated from the rest of the database, although they could still be accessed by WPP staff. The WPPED contains two groups of subdirectories. One group contains data taken prior to the installation of quality assurance procedures at PNL. The other group of subdirectories contains data taken under the NQA-1 procedures since their installation in April 1985. As part of closeout activities in the Salt Repository Project, the WPP database has been archived onto magnetic media. The data in the database are available by request on magnetic media or in hardcopy form. 2 refs

  15. CoryneRegNet 4.0 – A reference database for corynebacterial gene regulatory networks

    Directory of Open Access Journals (Sweden)

    Baumbach Jan

    2007-11-01

    Full Text Available Abstract Background Detailed information on DNA-binding transcription factors (the key players in the regulation of gene expression) and on transcriptional regulatory interactions of microorganisms, deduced from literature-derived knowledge, computer predictions and global DNA microarray hybridization experiments, has opened the way for the genome-wide analysis of transcriptional regulatory networks. The large-scale reconstruction of these networks allows the in silico analysis of cell behavior in response to changing environmental conditions. We previously published CoryneRegNet, an ontology-based data warehouse of corynebacterial transcription factors and regulatory networks. Initially, it was designed to provide methods for the analysis and visualization of the gene regulatory network of Corynebacterium glutamicum. Results Now we introduce CoryneRegNet release 4.0, which integrates data on the gene regulatory networks of 4 corynebacteria, 2 mycobacteria and the model organism Escherichia coli K12. As in the previous versions, CoryneRegNet provides a web-based user interface to access the database content, to allow various queries, and to support the reconstruction, analysis and visualization of regulatory networks at different hierarchical levels. In this article, we present the further improved database content of CoryneRegNet along with novel analysis features. The network visualization feature GraphVis now allows inter-species comparisons of reconstructed gene regulatory networks and the projection of gene expression levels onto these networks. Therefore, we added stimulon data directly into the database, but also provide Web Service access to the DNA microarray analysis platform EMMA. Additionally, CoryneRegNet now provides a SOAP-based Web Service server, which can easily be consumed by other bioinformatics software systems. Stimulons (imported from the database, or uploaded by the user) can be analyzed in the context of known

  16. Development of a neoclassical transport database by neural network fitting in LHD

    International Nuclear Information System (INIS)

    Wakasa, Arimitsu; Oikawa, Shun-ichi; Murakami, Sadayoshi; Yamada, Hiroshi; Yokoyama, Masayuki; Watanabe, Kiyomasa; Maassberg, Hening; Beidler, Craig D.

    2004-01-01

    A database of neoclassical transport coefficients for the Large Helical Device is developed using normalized mono-energetic diffusion coefficients evaluated by the Monte Carlo simulation code DCOM. A neural network fitting method is applied to take energy convolutions with a given distribution function, e.g. a Maxwellian. The database gives the diffusion coefficients as functions of the collision frequency, the radial electric field and the minor radius position. (author)
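
    The fitting idea, replacing table look-up with a smooth learned function of collision frequency, radial electric field and minor radius, can be sketched with a small multilayer perceptron; the training data below are synthetic stand-ins with an invented functional form, not DCOM output.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Synthetic stand-in for the database: diffusion coefficient D as a function
        # of collision frequency nu, radial electric field Er and minor radius r.
        X = rng.uniform([1e-4, -1.0, 0.1], [1e-1, 1.0, 1.0], size=(2000, 3))
        nu, er, r = X.T
        D = 0.1 * np.log10(nu) + 0.5 * er**2 + r + rng.normal(0, 0.01, nu.size)

        # A small MLP gives a smooth surrogate that can later be convolved with a
        # given distribution function such as a Maxwellian.
        net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
        net.fit(X, D)

        print("fit score:", round(net.score(X, D), 3))
        print("D(nu=1e-2, Er=0.3, r=0.5) ~", net.predict([[1e-2, 0.3, 0.5]])[0])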

  17. Study on managing EPICS database using ORACLE

    International Nuclear Information System (INIS)

    Liu Shu; Wang Chunhong; Zhao Jijiu

    2007-01-01

    EPICS is used as the development toolkit of the BEPCII control system. The core of EPICS is a distributed database residing in front-end machines. The distributed database is usually created with tools such as VDCT or a text editor on the host, then loaded to the front-end target IOCs through the network. In the BEPCII control system there are about 20,000 signals, which are distributed over more than 20 IOCs. All the databases are developed by device control engineers using VDCT or a text editor; there is no uniform tool providing transparent management. The paper first presents the current status of EPICS database management in many labs. Second, it studies the EPICS database and the interface between ORACLE and the EPICS database. Finally, it introduces the software development and its application in the BEPCII control system. (authors)

  18. Interim evaluation report of the mutually operable database systems by different computers; Denshi keisanki sogo un'yo database system chukan hyoka hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1989-03-01

    This is the interim report on evaluation of the mutually operable database systems by different computers. The techniques for these systems fall into four categories of those related to (1) dispersed data systems, (2) multimedia, (3) high reliability, and (4) elementary techniques for mutually operable network systems. The techniques for the category (1) include those for vertically dispersed databases, database systems for multiple addresses in a wide area, and open type combined database systems, which have been in progress generally as planned. Those for the category (2) include the techniques for color document inputting and information retrieval, meaning compiling, understanding highly overlapping data, and controlling data centered by drawings, which have been in progress generally as planned. Those for the category (3) include the techniques for improving resistance of the networks to obstruction, and security of the data in the networks, which have been in progress generally as planned. Those for the category (4) include the techniques for rule processing for development of protocols, protocols for mutually connecting the systems, and high-speed, high-function networks, which have been in progress generally as planned. It is expected that the original objectives are finally achieved, because the development programs for these categories have been in progress generally as planned. (NEDO)

  19. A Quality-Control-Oriented Database for a Mesoscale Meteorological Observation Network

    Science.gov (United States)

    Lussana, C.; Ranci, M.; Uboldi, F.

    2012-04-01

    In the operational context of a local weather service, data accessibility and quality related issues must be managed by taking into account a wide set of user needs. This work describes the structure and the operational choices made for the operational implementation of a database system storing data from highly automated observing stations, metadata and information on data quality. Lombardy's environmental protection agency, ARPA Lombardia, manages a highly automated mesoscale meteorological network. A Quality Assurance System (QAS) ensures that reliable observational information is collected and disseminated to the users. The weather unit in ARPA Lombardia, at the same time an important QAS component and an intensive data user, has developed a database specifically aimed to: 1) providing quick access to data for operational activities and 2) ensuring data quality for real-time applications, by means of an Automatic Data Quality Control (ADQC) procedure. Quantities stored in the archive include hourly aggregated observations of: precipitation amount, temperature, wind, relative humidity, pressure, global and net solar radiation. The ADQC performs several independent tests on raw data and compares their results in a decision-making procedure. An important ADQC component is the Spatial Consistency Test based on Optimal Interpolation. Interpolated and Cross-Validation analysis values are also stored in the database, providing further information to human operators and useful estimates in case of missing data. The technical solution adopted is based on a LAMP (Linux, Apache, MySQL and Php) system, constituting an open source environment suitable for both development and operational practice. The ADQC procedure itself is performed by R scripts directly interacting with the MySQL database. Users and network managers can access the database by using a set of web-based Php applications.
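
    The spatial consistency idea, flagging an observation that deviates too far from a value estimated from neighbouring stations, can be sketched as follows; simple inverse-distance weighting and a fixed threshold stand in for the Optimal Interpolation based test actually used by the ADQC, and the station data are invented.

        import numpy as np

        def spatial_consistency_flag(stations, obs, target_idx, threshold=3.0):
            """Flag a station whose value deviates too much from an
            inverse-distance-weighted estimate built from the other stations."""
            x = np.delete(stations, target_idx, axis=0)
            y = np.delete(obs, target_idx)
            d = np.linalg.norm(x - stations[target_idx], axis=1)
            w = 1.0 / np.maximum(d, 1e-6) ** 2
            estimate = np.sum(w * y) / np.sum(w)
            residual = obs[target_idx] - estimate
            return abs(residual) > threshold, estimate, residual

        # Toy example: 2-D station coordinates (km) and hourly temperatures (deg C).
        stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [5.0, 5.0]])
        temps = np.array([12.1, 11.8, 12.4, 20.0])   # the last value is suspicious

        flag, est, res = spatial_consistency_flag(stations, temps, target_idx=3)
        print(f"flagged={flag}, neighbour estimate={est:.1f}, residual={res:.1f}")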

  1. Databases as policy instruments. About extending networks as evidence-based policy

    Directory of Open Access Journals (Sweden)

    Stoevelaar Herman

    2007-12-01

    Full Text Available Abstract Background This article seeks to identify the role of databases in health policy. Access to information and communication technologies has changed traditional relationships between the state and professionals, creating new systems of surveillance and control. As a result, databases may have a profound effect on controlling clinical practice. Methods We conducted three case studies to reconstruct the development and use of databases as policy instruments. Each database was intended to be employed to control the use of one particular pharmaceutical in the Netherlands (growth hormone, antiretroviral drugs for HIV and Taxol, respectively). We studied the archives of the Dutch Health Insurance Board, conducted in-depth interviews with key informants and organized two focus groups, all focused on the use of databases both in policy circles and in clinical practice. Results Our results demonstrate that policy makers hardly used the databases, neither for cost control nor for quality assurance. Further analysis revealed that these databases facilitated self-regulation and quality assurance by (national) bodies of professionals, resulting in restrictive prescription behavior amongst physicians. Conclusion The databases fulfill control functions that were formerly located within the policy realm. The databases facilitate collaboration between policy makers and physicians, since they enable quality assurance by professionals. Delegating regulatory authority downwards into a network of physicians who control the use of pharmaceuticals seems to be a good alternative to centralized control on the basis of monitoring data.

  2. The European Narcolepsy Network (EU-NN) database

    DEFF Research Database (Denmark)

    Khatami, Ramin; Luca, Gianina; Baumann, Christian R

    2016-01-01

    Narcolepsy with cataplexy is a rare disease with an estimated prevalence of 0.02% in European populations. Narcolepsy shares many features of rare disorders, in particular the lack of awareness of the disease, with serious consequences for healthcare supply. Similar to other rare diseases, only a ... Narcolepsy Network is introduced. The database structure, standardization of data acquisition and quality control procedures are described, and an overview is provided of the first 1079 patients from 18 European specialized centres. Due to its standardization, this continuously increasing data pool is most ..., identification of post-marketing medication side-effects, and will contribute to improving clinical trial designs and provide facilities to further develop phase III trials.

  3. Design of special purpose database for credit cooperation bank business processing network system

    Science.gov (United States)

    Yu, Yongling; Zong, Sisheng; Shi, Jinfa

    2011-12-01

    With the popularization of e-finance in cities, its construction is now transferring to the vast rural market and developing rapidly in depth. Developing a business processing network system suitable for rural credit cooperative banks makes business processing convenient and has good application prospects. In this paper, we analyse the necessity of adopting a special purpose distributed database in a credit cooperation bank system, give the corresponding distributed database system structure, and design the special purpose database and interface technology. The application in the Tongbai Rural Credit Cooperatives has shown that the system has better performance and higher efficiency.

  4. Coordinating Mobile Databases: A System Demonstration

    OpenAIRE

    Zaihrayeu, Ilya; Giunchiglia, Fausto

    2004-01-01

    In this paper we present the Peer Database Management System (PDBMS). This system runs on top of a standard database management system and allows it to connect its database with other (peer) databases on the network. A particularity of our solution is that PDBMS allows conventional database technology to be effectively operational in mobile settings. We think of database mobility as a database network, where databases appear and disappear spontaneously and their network access point...

  5. A central database for the Global Terrestrial Network for Permafrost (GTN-P)

    Science.gov (United States)

    Elger, Kirsten; Lanckman, Jean-Pierre; Lantuit, Hugues; Karlsson, Ævar Karl; Johannsson, Halldór

    2013-04-01

    The Global Terrestrial Network for Permafrost (GTN-P) is the primary international observing network for permafrost, sponsored by the Global Climate Observing System (GCOS) and the Global Terrestrial Observing System (GTOS), and managed by the International Permafrost Association (IPA). It monitors the Essential Climate Variable (ECV) permafrost, which consists of permafrost temperature and active-layer thickness, with the long-term goal of obtaining a comprehensive view of the spatial structure, trends, and variability of changes in the active layer and permafrost. The network's two international monitoring components are (1) CALM (Circumpolar Active Layer Monitoring) and (2) the Thermal State of Permafrost (TSP), which is made up of an extensive borehole network covering all permafrost regions. Both programs were thoroughly overhauled during the International Polar Year 2007-2008 and extended their coverage to provide a true circumpolar network stretching over both hemispheres. GTN-P has gained considerable visibility in the science community by providing the baseline against which models are globally validated and incorporated in climate assessments. Yet it was until now operated on a voluntary basis, and is now being redesigned to meet the increasing expectations of the science community. To update the network's objectives and deliver the best possible products to the community, the IPA organized a workshop to define users' needs and requirements for the production, archival, storage and dissemination of the permafrost data products it manages. From the beginning, GTN-P data has been provided under an open data policy, with free data access via the World Wide Web. The existing data, however, are far from homogeneous: they are not yet optimized for databases, there is no framework for data reporting or archival, and data documentation is incomplete. As a result, and despite the utmost relevance of permafrost in the Earth's climate system, the data has not been

  6. Final Report on Atomic Database Project

    International Nuclear Information System (INIS)

    Yuan, J.; Gui, Z.; Moses, G.A.

    2006-01-01

    Atomic physics in hot dense plasmas is essential for understanding the radiative properties of plasmas either produced terrestrially such as in fusion energy research or in space such as the study of the core of the sun. Various kinds of atomic data are needed for spectrum analysis or for radiation hydrodynamics simulations. There are many atomic databases accessible publicly through the web, such as CHIANTI (an atomic database for spectroscopic diagnostics for astrophysical plasmas) from Naval Research Laboratory [1], collaborative development of TOPbase (The Opacity Project for astrophysically abundant elements) [2], NIST atomic spectra database from NIST [3], TOPS Opacities from Los Alamos National Laboratory [4], etc. Most of these databases are specific to astrophysics, which provide energy levels, oscillator strength f and photoionization cross sections for astrophysical elements ( Z=1-26). There are abundant spectrum data sources for spectral analysis of low Z elements. For opacities used for radiation transport, TOPS Opacities from LANL is the most valuable source. The database provides mixed opacities from element for H (Z=1) to Zn (Z=30) The data in TOPS Opacities is calculated by the code LEDCOP. In the Fusion Technology Institute, we also have developed several different models to calculate atomic data and opacities, such as the detailed term accounting model (DTA) and the unresolved transition array (UTA) model. We use the DTA model for low-Z materials since an enormous number of transitions need to be computed for medium or high-Z materials. For medium and high Z materials, we use the UTA model which simulates the enormous number of transitions by using a single line profile to represent a collection of transition arrays. These models have been implemented in our computing code JATBASE and RSSUTA. For plasma populations, two models are used in JATBASE, one is the local thermodynamic equilibrium (LTE) model and the second is the non-LTE model. For the

  7. Data Linkage Graph: computation, querying and knowledge discovery of life science database networks

    Directory of Open Access Journals (Sweden)

    Lange Matthias

    2007-12-01

    Full Text Available To support the interpretation of measured molecular facts, such as gene expression experiments or EST sequencing, the functional or systems biological context has to be considered. In doing so, the relationship to existing biological knowledge has to be discovered. In general, biological knowledge is represented worldwide in a network of databases. In this paper we present a method for knowledge extraction in life science databases, which spares scientists from screen-scraping and web-clicking approaches.

  8. Genes2Networks: connecting lists of gene symbols using mammalian protein interactions databases

    Directory of Open Access Journals (Sweden)

    Ma'ayan Avi

    2007-10-01

    Full Text Available Abstract Background In recent years, mammalian protein-protein interaction network databases have been developed. The interactions in these databases are either extracted manually from low-throughput experimental biomedical research literature, extracted automatically from literature using techniques such as natural language processing (NLP, generated experimentally using high-throughput methods such as yeast-2-hybrid screens, or interactions are predicted using an assortment of computational approaches. Genes or proteins identified as significantly changing in proteomic experiments, or identified as susceptibility disease genes in genomic studies, can be placed in the context of protein interaction networks in order to assign these genes and proteins to pathways and protein complexes. Results Genes2Networks is a software system that integrates the content of ten mammalian interaction network datasets. Filtering techniques to prune low-confidence interactions were implemented. Genes2Networks is delivered as a web-based service using AJAX. The system can be used to extract relevant subnetworks created from "seed" lists of human Entrez gene symbols. The output includes a dynamic linkable three color web-based network map, with a statistical analysis report that identifies significant intermediate nodes used to connect the seed list. Conclusion Genes2Networks is powerful web-based software that can help experimental biologists to interpret lists of genes and proteins such as those commonly produced through genomic and proteomic experiments, as well as lists of genes and proteins associated with disease processes. This system can be used to find relationships between genes and proteins from seed lists, and predict additional genes or proteins that may play key roles in common pathways or protein complexes.
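
    A minimal version of the seed-list idea, connecting seed genes through intermediate nodes found in a background interaction network, can be sketched with networkx; the background edges are a toy example rather than one of the ten integrated datasets, and plain shortest paths stand in for the system's filtered path search and significance statistics.

        from itertools import combinations
        import networkx as nx

        # Toy background interaction network (stand-in for the integrated datasets).
        background = nx.Graph([
            ("EGFR", "GRB2"), ("GRB2", "SOS1"), ("SOS1", "KRAS"),
            ("KRAS", "RAF1"), ("RAF1", "MAP2K1"), ("TP53", "MDM2"),
        ])

        seeds = ["EGFR", "KRAS", "MAP2K1"]

        # Keep the seeds plus any node lying on a short path between two seeds.
        nodes = set(seeds)
        for a, b in combinations(seeds, 2):
            if nx.has_path(background, a, b):
                nodes.update(nx.shortest_path(background, a, b))

        subnetwork = background.subgraph(nodes)
        print("subnetwork edges:", list(subnetwork.edges()))
        print("intermediate nodes connecting the seed list:", nodes - set(seeds))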

  9. Final Report - Cloud-Based Management Platform for Distributed, Multi-Domain Networks

    Energy Technology Data Exchange (ETDEWEB)

    Chowdhury, Pulak [Ennetix Inc.; Mukherjee, Biswanath [Ennetix Inc.

    2017-11-03

    In this Department of Energy (DOE) Small Business Innovation Research (SBIR) Phase II project final report, Ennetix presents the development of a solution for end-to-end monitoring, analysis, and visualization of network performance for distributed networks. This solution benefits enterprises of all sizes, operators of distributed and federated networks, and service providers.

  10. Application and Network-Cognizant Proxies - Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Antonio Ortega; Daniel C. Lee

    2003-03-24

    OAK B264 Application and Network-Cognizant Proxies - Final Report. Current networks show increasing heterogeneity both in terms of their bandwidths/delays and the applications they are required to support. This is a trend that is likely to intensify in the future, as real-time services, such as video, become more widely available and networking access over wireless links becomes more widespread. For this reason they propose that application-specific proxies, intermediate network nodes that broker the interactions between server and client, will become an increasingly important network element. These proxies will allow adaptation to changes in network characteristics without requiring a direct intervention of either server or client. Moreover, it will be possible to locate these proxies strategically at those points where a mismatch occurs between subdomains (for example, a proxy could be placed so as to act as a bridge between a reliable network domain and an unreliable one). This design philosophy favors scalability in the sense that the basic network infrastructure can remain unchanged while new functionality can be added to proxies, as required by the applications. While proxies can perform numerous generic functions, such as caching or security, they concentrate here on media-specific, and in particular video-specific, tasks. The goal of this project was to demonstrate that application- and network-specific knowledge at a proxy can improve overall performance especially under changing network conditions. They summarize below the work performed to address these issues. Particular effort was spent in studying caching techniques and on video classification to enable DiffServ delivery. other work included analysis of traffic characteristics, optimized media scheduling, coding techniques based on multiple description coding, and use of proxies to reduce computation costs. This work covered much of what was originally proposed but with a necessarily reduced scope.

  11. East-China Geochemistry Database (ECGD):A New Networking Database for North China Craton

    Science.gov (United States)

    Wang, X.; Ma, W.

    2010-12-01

    The North China Craton is one of the best natural laboratories for researching questions of Earth dynamics[1]. Scientists have made much progress in this area and have gathered vast amounts of geochemical data, which are essential for answering many fundamental questions about the age, composition, structure, and evolution of the East China area. However, the geochemical data have long been accessible only through the scientific literature and theses, where they are widely dispersed, making it difficult for the broad geosciences community to find, access and efficiently use the full range of available data[2]. How can the existing geochemical data for the North China Craton area be stored, managed, shared and reused effectively? The East-China Geochemistry Database (ECGD) is a networked geochemical scientific database system designed on the basis of WebGIS and a relational database for the structured storage and retrieval of geochemical data and geological map information. It integrates the functions of data retrieval, spatial visualization and online analysis. ECGD focuses on three areas: 1. Storage and retrieval of geochemical data and geological map information. Based on the characteristics of geochemical data, including how they are composed and interconnected, we designed a relational database, built on a geochemical relational data model, to store a variety of geological sample information such as sampling locality, age, sample characteristics, reference, major elements, rare earth elements, trace elements and isotope systems, and a web-based user-friendly interface is provided for constructing queries. 2. Data view. ECGD is committed to online data visualization in different ways, especially viewing data on a digital map in a dynamic way. Because ECGD integrates WebGIS technology, query results can be mapped on a digital map, which supports zooming, panning and point selection. Besides viewing and exporting query results in html, txt or xls formats, researchers also can
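
    The sample-centred relational design can be sketched with an in-memory SQLite table; the columns shown are only a small, invented subset of the attributes listed above (locality, age, major elements, rare earth elements and so on), and the query mimics the kind of request the web interface would build.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""
        CREATE TABLE samples (
            sample_id TEXT PRIMARY KEY,
            locality  TEXT,   -- sampling locality
            age_ma    REAL,   -- age in million years
            sio2_wt   REAL,   -- major element, wt%
            la_ppm    REAL    -- rare earth element, ppm
        )""")
        con.executemany(
            "INSERT INTO samples VALUES (?, ?, ?, ?, ?)",
            [("NCC-001", "Taihang Mountains", 125.0, 61.2, 32.5),
             ("NCC-002", "Luxi", 130.0, 48.7, 18.9)],
        )

        # All Cretaceous samples with SiO2 above 55 wt%.
        for row in con.execute(
                "SELECT sample_id, locality, sio2_wt FROM samples "
                "WHERE age_ma BETWEEN 66 AND 145 AND sio2_wt > 55"):
            print(row)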

  12. Long-Term Collaboration Network Based on ClinicalTrials.gov Database in the Pharmaceutical Industry

    Directory of Open Access Journals (Sweden)

    Heyoung Yang

    2018-01-01

    Full Text Available Increasing costs, risks, and productivity problems in the pharmaceutical industry are important recent issues in the biomedical field. Open innovation is proposed as a solution to these issues. However, little statistical analysis related to collaboration in the pharmaceutical industry has been conducted so far. Meanwhile, not many cases have analyzed the clinical trials database, even though it is the information source with the widest coverage for the pharmaceutical industry. The purpose of this study is to test the clinical trials information as a probe for observing the status of the collaboration network and open innovation in the pharmaceutical industry. This study applied the social network analysis method to clinical trials data from 1980 to 2016 in ClinicalTrials.gov. Data were divided into four time periods—1980s, 1990s, 2000s, and 2010s—and the collaboration network was constructed for each time period. The characteristic of each network was investigated. The types of agencies participating in the clinical trials were classified as a university, national institute, company, or other, and the major players in the collaboration networks were identified. This study showed some phenomena related to the pharmaceutical industry that could provide clues to policymakers about open innovation. If follow-up studies were conducted, the utilization of the clinical trial database could be further expanded, which is expected to help open innovation in the pharmaceutical industry.
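
    The network construction itself can be sketched with networkx: every pair of agencies appearing on the same trial is joined by an edge whose weight counts their shared trials, and simple statistics then characterize each time period. The trial records below are invented placeholders, not ClinicalTrials.gov entries.

        from itertools import combinations
        import networkx as nx

        # Invented stand-ins for registry records: each trial lists the agencies
        # (sponsor plus collaborators) that took part.
        trials = [
            {"nct_id": "NCT0000001", "agencies": ["Univ A", "Pharma X"]},
            {"nct_id": "NCT0000002", "agencies": ["Univ A", "Pharma X", "Institute B"]},
            {"nct_id": "NCT0000003", "agencies": ["Institute B", "Pharma Y"]},
        ]

        g = nx.Graph()
        for trial in trials:
            for a, b in combinations(sorted(set(trial["agencies"])), 2):
                if g.has_edge(a, b):
                    g[a][b]["weight"] += 1      # number of co-run trials
                else:
                    g.add_edge(a, b, weight=1)

        # Simple network statistics of the kind used to compare the time periods.
        print("degree:", dict(g.degree()))
        print("density:", round(nx.density(g), 3))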

  13. Network and Database Security: Regulatory Compliance, Network, and Database Security - A Unified Process and Goal

    OpenAIRE

    Errol A. Blake

    2007-01-01

    Database security has evolved; data security professionals have developed numerous techniques and approaches to assure data confidentiality, integrity, and availability. This paper will show that the Traditional Database Security, which has focused primarily on creating user accounts and managing user privileges to database objects are not enough to protect data confidentiality, integrity, and availability. This paper is a compilation of different journals, articles and classroom discussions ...

  14. The River Network of Montenegro in the GIS Database

    Directory of Open Access Journals (Sweden)

    Goran Barović

    2017-06-01

    Full Text Available The subject of this paper is the systematization and precise identification of the structure of river networks in Montenegro in both planimetric and hypsometric dimensions, using cartometry. This includes the precise determination of the morphometric parameters of river flows, their numerical display, graphical display, and documentation. This allows for a number of analyses, for example, of individual catchments, the mutual relations of individual watercourses within a higher order catchment, and the classification of flows according to river and sea basins and their relationship to the environment. In addition, there is the potential for expanding the database further, with a view to continuous, systematic, scientific and practical follow-up in all or part of the geographic space. The cartometric analysis of the river network in Montenegro has a special scientific, and also a social value. In the geographical structure of all countries, including Montenegro, rivers occupy a central place as individual elements and integral parts of the whole. There is almost no human activity which is not related to river flows, or related phenomena and processes. The river network as part of a geographic space continues to gain in importance, and therefore studying it must connect with the other structural elements within which it functions. These are the basic relief characteristics, climate, and certain hydrographic characteristics. A complete theoretical and methodological approach to this problem forms the basis for a scientific understanding of the significance of the river network of Montenegro.

  15. Application of new type of distributed multimedia databases to networked electronic museum

    Science.gov (United States)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1999-01-01

    Recently, various kinds of multimedia application systems have actively been developed based on the achievement of advanced high sped communication networks, computer processing technologies, and digital contents-handling technologies. Under this background, this paper proposed a new distributed multimedia database system which can effectively perform a new function of cooperative retrieval among distributed databases. The proposed system introduces a new concept of 'Retrieval manager' which functions as an intelligent controller so that the user can recognize a set of distributed databases as one logical database. The logical database dynamically generates and performs a preferred combination of retrieving parameters on the basis of both directory data and the system environment. Moreover, a concept of 'domain' is defined in the system as a managing unit of retrieval. The retrieval can effectively be performed by cooperation of processing among multiple domains. Communication language and protocols are also defined in the system. These are used in every action for communications in the system. A language interpreter in each machine translates a communication language into an internal language used in each machine. Using the language interpreter, internal processing, such internal modules as DBMS and user interface modules can freely be selected. A concept of 'content-set' is also introduced. A content-set is defined as a package of contents. Contents in the content-set are related to each other. The system handles a content-set as one object. The user terminal can effectively control the displaying of retrieved contents, referring to data indicating the relation of the contents in the content- set. In order to verify the function of the proposed system, a networked electronic museum was experimentally built. The results of this experiment indicate that the proposed system can effectively retrieve the objective contents under the control to a number of distributed

  16. National information network and database system of hazardous waste management in China

    Energy Technology Data Exchange (ETDEWEB)

    Ma Hongchang [National Environmental Protection Agency, Beijing (China)

    1996-12-31

    Industries in China generate large volumes of hazardous waste, which makes it essential for the nation to pay more attention to hazardous waste management. National laws and regulations, waste surveys, and manifest tracking and permission systems have been initiated. Some centralized hazardous waste disposal facilities are under construction. China`s National Environmental Protection Agency (NEPA) has also obtained valuable information on hazardous waste management from developed countries. To effectively share this information with local environmental protection bureaus, NEPA developed a national information network and database system for hazardous waste management. This information network will have such functions as information collection, inquiry, and connection. The long-term objective is to establish and develop a national and local hazardous waste management information network. This network will significantly help decision makers and researchers because it will be easy to obtain information (e.g., experiences of developed countries in hazardous waste management) to enhance hazardous waste management in China. The information network consists of five parts: technology consulting, import-export management, regulation inquiry, waste survey, and literature inquiry.

  17. Phospho.ELM: a database of phosphorylation sites--update 2011

    DEFF Research Database (Denmark)

    Dinkel, Holger; Chica, Claudia; Via, Allegra

    2011-01-01

    The Phospho.ELM resource (http://phospho.elm.eu.org) is a relational database designed to store in vivo and in vitro phosphorylation data extracted from the scientific literature and phosphoproteomic analyses. The resource has been actively developed for more than 7 years and currently comprises 42 ... sequence alignment used for the score calculation. Finally, special emphasis has been put on linking to external resources such as interaction networks and other databases.

  18. A systems biology approach to construct the gene regulatory network of systemic inflammation via microarray and databases mining

    Directory of Open Access Journals (Sweden)

    Lan Chung-Yu

    2008-09-01

    Full Text Available Abstract Background Inflammation is a hallmark of many human diseases. Elucidating the mechanisms underlying systemic inflammation has long been an important topic in basic and clinical research. When the primary pathogenetic events remain unclear due to their immense complexity, construction and analysis of the gene regulatory network of inflammation at times becomes the best way to understand the detrimental effects of disease. However, it is difficult to recognize and evaluate relevant biological processes from the huge quantities of experimental data. It is hence appealing to find an algorithm which can generate a gene regulatory network of systemic inflammation from high-throughput genomic studies of human diseases. Such a network will be essential for us to extract valuable information from the complex and chaotic network under diseased conditions. Results In this study, we construct a gene regulatory network of inflammation using data extracted from the Ensembl and JASPAR databases. We also integrate and apply a number of systematic algorithms, such as a cross-correlation threshold, the maximum likelihood estimation method and the Akaike Information Criterion (AIC), on time-lapsed microarray data to refine the genome-wide transcriptional regulatory network in response to bacterial endotoxins in the context of dynamically activated genes, which are regulated by transcription factors (TFs) such as NF-κB. This systematic approach is used to investigate the stochastic interaction represented by the dynamic leukocyte gene expression profiles of human subjects exposed to an inflammatory stimulus (bacterial endotoxin). Based on the kinetic parameters of the dynamic gene regulatory network, we identify important properties (such as susceptibility to infection) of the immune system, which may be useful for translational research. Finally, robustness of the inflammatory gene network is also inferred by analyzing the hubs and "weak ties" structures of the gene network
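
    Two of the ingredients named above, a cross-correlation threshold for proposing TF-target links and the Akaike Information Criterion for comparing candidate models, can be sketched on synthetic expression profiles; the time series, the lag range and the cutoff are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(1)
        t = np.arange(20)                      # time-lapsed sampling points

        # Synthetic profiles: the target roughly follows the TF with a one-step lag.
        tf = np.sin(t / 3.0) + rng.normal(0, 0.1, t.size)
        target = 0.8 * np.roll(tf, 1) + rng.normal(0, 0.1, t.size)

        # Cross-correlation threshold: keep the candidate edge TF -> target only if
        # the peak correlation over small lags exceeds a cutoff.
        lags = range(0, 4)
        corrs = [np.corrcoef(tf[:t.size - lag], target[lag:])[0, 1] for lag in lags]
        best = int(np.argmax(np.abs(corrs)))
        print(f"best lag={best}, corr={corrs[best]:.2f}, keep edge={abs(corrs[best]) > 0.7}")

        # AIC for a least-squares fit: AIC = 2k + n * ln(RSS / n); lower is better.
        def aic(y, y_hat, k):
            rss = np.sum((y - y_hat) ** 2)
            return 2 * k + y.size * np.log(rss / y.size)

        x, y = tf[:t.size - best], target[best:]
        slope, intercept = np.polyfit(x, y, 1)
        print("AIC of the lagged linear model:", round(aic(y, slope * x + intercept, k=2), 2))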

  19. Investigating the Potential Impacts of Energy Production in the Marcellus Shale Region Using the Shale Network Database

    Science.gov (United States)

    Brantley, S.; Brazil, L.

    2017-12-01

    The Shale Network's extensive database of water quality observations enables educational experiences about the potential impacts of resource extraction with real data. Through tools that are open source and free to use, researchers, educators, and citizens can access and analyze the very same data that the Shale Network team has used in peer-reviewed publications about the potential impacts of hydraulic fracturing on water. The development of the Shale Network database has been made possible through efforts led by an academic team and involving numerous individuals from government agencies, citizen science organizations, and private industry. Thus far, these tools and data have been used to engage high school students, university undergraduate and graduate students, as well as citizens so that all can discover how energy production impacts the Marcellus Shale region, which includes Pennsylvania and other nearby states. This presentation will describe these data tools, how the Shale Network has used them in developing lesson plans, and the resources available to learn more.

  20. Updates on drug-target network; facilitating polypharmacology and data integration by growth of DrugBank database.

    Science.gov (United States)

    Barneh, Farnaz; Jafari, Mohieddin; Mirzaie, Mehdi

    2016-11-01

    Network pharmacology elucidates the relationship between drugs and targets. As the identified targets for each drug increases, the corresponding drug-target network (DTN) evolves from solely reflection of the pharmaceutical industry trend to a portrait of polypharmacology. The aim of this study was to evaluate the potentials of DrugBank database in advancing systems pharmacology. We constructed and analyzed DTN from drugs and targets associations in the DrugBank 4.0 database. Our results showed that in bipartite DTN, increased ratio of identified targets for drugs augmented density and connectivity of drugs and targets and decreased modular structure. To clear up the details in the network structure, the DTNs were projected into two networks namely, drug similarity network (DSN) and target similarity network (TSN). In DSN, various classes of Food and Drug Administration-approved drugs with distinct therapeutic categories were linked together based on shared targets. Projected TSN also showed complexity because of promiscuity of the drugs. By including investigational drugs that are currently being tested in clinical trials, the networks manifested more connectivity and pictured the upcoming pharmacological space in the future years. Diverse biological processes and protein-protein interactions were manipulated by new drugs, which can extend possible target combinations. We conclude that network-based organization of DrugBank 4.0 data not only reveals the potential for repurposing of existing drugs, also allows generating novel predictions about drugs off-targets, drug-drug interactions and their side effects. Our results also encourage further effort for high-throughput identification of targets to build networks that can be integrated into disease networks. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
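
    The projection of the bipartite DTN into a drug similarity network (drugs linked when they share at least one target) can be sketched with networkx; the drug-target pairs below are a toy subset, not DrugBank content, and the same call on the target side yields the corresponding target similarity network.

        import networkx as nx
        from networkx.algorithms import bipartite

        # Toy drug-target associations (not actual DrugBank records).
        pairs = [
            ("aspirin", "PTGS1"), ("aspirin", "PTGS2"),
            ("ibuprofen", "PTGS1"), ("ibuprofen", "PTGS2"),
            ("warfarin", "VKORC1"),
        ]

        dtn = nx.Graph()
        drugs = {d for d, _ in pairs}
        targets = {t for _, t in pairs}
        dtn.add_nodes_from(drugs, bipartite="drug")
        dtn.add_nodes_from(targets, bipartite="target")
        dtn.add_edges_from(pairs)

        # Drug similarity network: edge weight counts the targets two drugs share.
        dsn = bipartite.weighted_projected_graph(dtn, drugs)
        print("DSN edges:", list(dsn.edges(data="weight")))

        # bipartite.weighted_projected_graph(dtn, targets) would give the TSN.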

  1. A meta-database comparison from various European Research and Monitoring Networks dedicated to forest sites

    Czech Academy of Sciences Publication Activity Database

    Danielewska, A.; Clarke, N.; Olejnik, Janusz; Hansen, K.; de Vries, W.; Lundin, L.; Tuovinen, J-P.; Fischer, R.; Urbaniak, M.; Paoletti, E.

    2013-01-01

    Roč. 6, JAN 2013 (2013), s. 1-9 ISSN 1971-7458 Institutional support: RVO:67179843 Keywords: Research and Monitoring Network * Meta-database * Forest * Monitoring Subject RIV: EH - Ecology, Behaviour Impact factor: 1.150, year: 2013

  2. THEREDA. Thermodynamic reference database. Summary of final report

    Energy Technology Data Exchange (ETDEWEB)

    Altmaier, Marcus; Bube, Christiane; Marquardt, Christian [Karlsruher Institut fuer Technologie (KIT), Eggenstein-Leopoldshafen (Germany). Institut fuer Nukleare Entsorgung; Brendler, Vinzenz; Richter, Anke [Helmholtz-Zentrum Dresden-Rossendorf (Germany). Inst. fuer Radiochemie; Moog, Helge C.; Scharge, Tina [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Koeln (Germany); Voigt, Wolfgang [TU Bergakademie Freiburg (Germany). Inst. fuer Anorganische Chemie; Wilhelm, Stefan [AF-Colenco AG, Baden (Switzerland)

    2011-03-15

    A long term safety assessment of a repository for radioactive waste requires evidence, that all relevant processes are known and understood, which might have a significant positive or negative impact on its safety. In 2002, a working group of five institutions was established to create a common thermodynamic database for nuclear waste disposal in deep geological formations. The common database was named THEREDA: Thermodynamic Reference Database. The following institutions are members of the working group: Helmholtz-Zentrum Dresden-Rossendorf, Institute of Radiochemistry - Karlsruhe Institute of Technology, Institute for Nuclear Waste Disposal - Technische Universitaet Bergakademie Freiberg, Institute of Inorganic Chemistry - AF-Colenco AG, Baden, Switzerland, Department of Groundwater Protection and Waste Disposal - Gesellschaft fur Anlagen- und Reaktorsicherheit, Braunschweig. For the future it is intended that its usage becomes mandatory for geochemical model calculations for nuclear waste disposal in Germany. Furthermore, it was agreed that the new database should be established in accordance with the following guidelines: Long-term usability: The disposal of radioactive waste is a task encompassing decades. The database is projected to operate on a long-term basis. This has influenced the choice of software (which is open source), the documentation and the data structure. THEREDA is adapted to the present-day necessities and computational codes but also leaves many degrees of freedom for varying demands in the future. Easy access: The database is accessible via the World Wide Web for free. Applicability: To promote the usage of the database in a wide community, THEREDA is providing ready-to-use parameter files for the most common codes. These are at present: PHREEQC, EQ3/6, Geochemist's Workbench, and CHEMAPP. Internal consistency: It is distinguished between dependent and independent data. To ensure the required internal consistency of THEREDA, the

  3. THEREDA. Thermodynamic reference database. Summary of final report

    International Nuclear Information System (INIS)

    Altmaier, Marcus; Bube, Christiane; Marquardt, Christian; Voigt, Wolfgang

    2011-03-01

    A long term safety assessment of a repository for radioactive waste requires evidence, that all relevant processes are known and understood, which might have a significant positive or negative impact on its safety. In 2002, a working group of five institutions was established to create a common thermodynamic database for nuclear waste disposal in deep geological formations. The common database was named THEREDA: Thermodynamic Reference Database. The following institutions are members of the working group: Helmholtz-Zentrum Dresden-Rossendorf, Institute of Radiochemistry - Karlsruhe Institute of Technology, Institute for Nuclear Waste Disposal - Technische Universitaet Bergakademie Freiberg, Institute of Inorganic Chemistry - AF-Colenco AG, Baden, Switzerland, Department of Groundwater Protection and Waste Disposal - Gesellschaft fur Anlagen- und Reaktorsicherheit, Braunschweig. For the future it is intended that its usage becomes mandatory for geochemical model calculations for nuclear waste disposal in Germany. Furthermore, it was agreed that the new database should be established in accordance with the following guidelines: Long-term usability: The disposal of radioactive waste is a task encompassing decades. The database is projected to operate on a long-term basis. This has influenced the choice of software (which is open source), the documentation and the data structure. THEREDA is adapted to the present-day necessities and computational codes but also leaves many degrees of freedom for varying demands in the future. Easy access: The database is accessible via the World Wide Web for free. Applicability: To promote the usage of the database in a wide community, THEREDA is providing ready-to-use parameter files for the most common codes. These are at present: PHREEQC, EQ3/6, Geochemist's Workbench, and CHEMAPP. Internal consistency: It is distinguished between dependent and independent data. To ensure the required internal consistency of THEREDA, the

  4. Functional Interaction Network Construction and Analysis for Disease Discovery.

    Science.gov (United States)

    Wu, Guanming; Haw, Robin

    2017-01-01

    Network-based approaches project seemingly unrelated genes or proteins onto a large-scale network context, therefore providing a holistic visualization and analysis platform for genomic data generated from high-throughput experiments, reducing the dimensionality of data via using network modules and increasing the statistic analysis power. Based on the Reactome database, the most popular and comprehensive open-source biological pathway knowledgebase, we have developed a highly reliable protein functional interaction network covering around 60 % of total human genes and an app called ReactomeFIViz for Cytoscape, the most popular biological network visualization and analysis platform. In this chapter, we describe the detailed procedures on how this functional interaction network is constructed by integrating multiple external data sources, extracting functional interactions from human curated pathway databases, building a machine learning classifier called a Naïve Bayesian Classifier, predicting interactions based on the trained Naïve Bayesian Classifier, and finally constructing the functional interaction database. We also provide an example on how to use ReactomeFIViz for performing network-based data analysis for a list of genes.
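
    The classifier step can be sketched with scikit-learn: each protein pair is described by binary evidence features drawn from the external data sources, and a Naive Bayes model trained on pairs from curated pathways scores new candidate pairs. The features, training pairs and probability cutoff implied below are invented for illustration.

        import numpy as np
        from sklearn.naive_bayes import BernoulliNB

        # Each row describes a protein pair with binary evidence features, e.g.
        # [co-expressed, shares a GO term, physical interaction reported].
        # Labels: 1 = pair from curated pathways, 0 = random (non-interacting) pair.
        X_train = np.array([
            [1, 1, 1], [1, 1, 0], [0, 1, 1], [1, 0, 1],   # curated positives
            [0, 0, 0], [1, 0, 0], [0, 0, 1], [0, 1, 0],   # random negatives
        ])
        y_train = np.array([1, 1, 1, 1, 0, 0, 0, 0])

        clf = BernoulliNB()
        clf.fit(X_train, y_train)

        # Score an unseen candidate pair; pairs above a chosen probability cutoff
        # would be added as predicted functional interactions.
        candidate = np.array([[1, 1, 0]])
        print("P(functional interaction) =", round(clf.predict_proba(candidate)[0, 1], 3))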

  5. Discerning molecular interactions: A comprehensive review on biomolecular interaction databases and network analysis tools.

    Science.gov (United States)

    Miryala, Sravan Kumar; Anbarasu, Anand; Ramaiah, Sudha

    2018-02-05

    Computational analysis of biomolecular interaction networks is gaining importance for understanding the functions of novel genes and proteins. Gene interaction (GI) network analysis and protein-protein interaction (PPI) network analysis play a major role in predicting the functionality of interacting genes or proteins and give insight into the functional relationships and evolutionary conservation of interactions among genes. An interaction network is a graphical representation of a gene/protein interactome, where each gene or protein is a node and each interaction between them is an edge. In this review, we discuss popular open-source databases that serve as repositories for searching and collecting protein/gene interaction data, as well as tools available for generating, visualising and analysing interaction networks. Various network analysis approaches, such as topological and clustering approaches for studying network properties, and functional enrichment servers that illustrate the functions and pathways of genes and proteins, are also discussed. The distinctive aim of this review is therefore not only to provide an overview of tools and web servers for gene and protein-protein interaction (PPI) network analysis, but also to show how to extract useful and meaningful information from the interaction networks. Copyright © 2017 Elsevier B.V. All rights reserved.
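
    The node-and-edge representation described above can be made concrete with a few lines of code; the sketch below builds a tiny PPI graph with the networkx library and computes two common topological measures. The protein names and interactions are placeholders.

```python
# Minimal sketch: represent a protein-protein interaction network as a graph
# and compute simple topological properties. Protein names are placeholders.
import networkx as nx

interactions = [("TP53", "MDM2"), ("TP53", "BRCA1"),
                ("BRCA1", "RAD51"), ("MDM2", "UBE3A")]

G = nx.Graph()
G.add_edges_from(interactions)  # each protein is a node, each interaction an edge

# Topological analysis: node degree (hub detection) and clustering coefficient
print(sorted(G.degree, key=lambda kv: kv[1], reverse=True))
print(nx.clustering(G))
```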

  6. Children's Culture Database (CCD)

    DEFF Research Database (Denmark)

    Wanting, Birgit

    A Dialogue-inspired database with documentation, network (individual and institutional profiles) and current news; paper presented at the research seminar: Electronic access to fiction, Copenhagen, November 11-13, 1996.

  7. FINAL Database, LAKE COUNTY, FLORIDA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  8. Numerical databases in marine biology

    Digital Repository Service at National Institute of Oceanography (India)

    Sarupria, J.S.; Bhargava, R.M.S.


  9. Final Technical Report on the Genome Sequence DataBase (GSDB): DE-FG03 95 ER 62062 September 1997-September 1999; FINAL

    International Nuclear Information System (INIS)

    Harger, Carol A.

    1999-01-01

    Since September 1997 NCGR has produced two web-based tools for researchers to use to access and analyze data in the Genome Sequence DataBase (GSDB). These tools are: Sequence Viewer, a nucleotide sequence and annotation visualization tool, and MAR-Finder, a tool that predicts, based upon statistical inference, the location of matrix attachment regions (MARs) within a nucleotide sequence. [The annual report for June 1996 to August 1997 is included as an attachment to this final report.]

  10. Final Results of Shuttle MMOD Impact Database

    Science.gov (United States)

    Hyde, J. L.; Christiansen, E. L.; Lear, D. M.

    2015-01-01

    The Shuttle Hypervelocity Impact Database documents damage features on each Orbiter thought to be from micrometeoroids (MM) or orbital debris (OD). Data is divided into tables for crew module windows, payload bay door radiators and thermal protection systems along with other miscellaneous regions. The combined number of records in the database is nearly 3000. Each database record provides impact feature dimensions, location on the vehicle and relevant mission information. Additional detail on the type and size of particle that produced the damage site is provided when sampling data and definitive spectroscopic analysis results are available. Guidelines are described which were used in determining whether impact damage is from micrometeoroid or orbital debris impact based on the findings from scanning electron microscopy chemical analysis. Relationships assumed when converting from observed feature sizes in different shuttle materials to particle sizes will be presented. A small number of significant impacts on the windows, radiators and wing leading edge will be highlighted and discussed in detail, including the hypervelocity impact testing performed to estimate particle sizes that produced the damage.

  11. Towards future electricity networks - Final report

    Energy Technology Data Exchange (ETDEWEB)

    Papaemmanouil, A.

    2008-07-01

    This comprehensive final report for the Swiss Federal Office of Energy (SFOE) reviews work done on the development of new power transmission planning tools for restructured power networks. These are needed in order to face the challenges that arise due to economic, environmental and social issues. The integration of transmission, generation and energy policy planning in order to support a common strategy with respect to sustainable electricity networks is discussed. In the first phase of the project the main focus was placed on the definition of criteria and inputs that are most likely to affect sustainable transmission expansion plans. Models, concepts, and methods developed in order to study the impact of the internalisation of external costs in power production are examined. To consider external costs in the planning process, a concurrent software tool has been implemented that is capable of studying possible development scenarios. The report examines a concept that has been developed to identify congested transmission lines or corridors and evaluates the dependencies between the various market participants. The report includes a set of three appendices: a paper from the 28th USAEE North American conference, an abstract from Powertech 2009 and an SFOE report from July 2008.

  12. Towards future electricity networks - Final report

    International Nuclear Information System (INIS)

    Papaemmanouil, A.

    2008-01-01

    This comprehensive final report for the Swiss Federal Office of Energy (SFOE) reviews work done on the development of new power transmission planning tools for restructured power networks. These are needed in order to face the challenges that arise due to economic, environmental and social issues. The integration of transmission, generation and energy policy planning in order to support a common strategy with respect to sustainable electricity networks is discussed. In the first phase of the project the main focus was placed on the definition of criteria and inputs that are most likely to affect sustainable transmission expansion plans. Models, concepts, and methods developed in order to study the impact of the internalisation of external costs in power production are examined. To consider external costs in the planning process, a concurrent software tool has been implemented that is capable of studying possible development scenarios. The report examines a concept that has been developed to identify congested transmission lines or corridors and evaluates the dependencies between the various market participants. The report includes a set of three appendices: a paper from the 28th USAEE North American conference, an abstract from Powertech 2009 and an SFOE report from July 2008.

  13. Temporal Databases in Network Management

    National Research Council Canada - National Science Library

    Gupta, Ajay

    1998-01-01

    .... This thesis discusses issues involved in performing network management, specifically means of reducing and storing the large quantity of data that network management tools and systems generate...

  14. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Kwon, Young Min; Lee, Young Bum; Chang, Won Pyo; Hahn, Do Hee

    2001-10-01

    The KALIMER Database is an advanced database for the integrated management of Liquid Metal Reactor Design Technology Development using web applications. The KALIMER Design Database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation system, and Reserved Documents. The Results Database holds research results from phase II of Liquid Metal Reactor Design Technology Development under the mid-term and long-term nuclear R and D programme. IOC is a linkage control system between sub-projects, used to share and integrate the research results for KALIMER. The 3D CAD Database is a schematic design overview for KALIMER. The Team Cooperation System informs team members of research cooperation and meetings. Finally, KALIMER Reserved Documents was developed to manage collected data and several documents since project accomplishment. This report describes the features of the hardware and software and the database design methodology for KALIMER.

  15. FoodMicrobionet: A database for the visualisation and exploration of food bacterial communities based on network analysis.

    Science.gov (United States)

    Parente, Eugenio; Cocolin, Luca; De Filippis, Francesca; Zotta, Teresa; Ferrocino, Ilario; O'Sullivan, Orla; Neviani, Erasmo; De Angelis, Maria; Cotter, Paul D; Ercolini, Danilo

    2016-02-16

    Amplicon targeted high-throughput sequencing has become a popular tool for the culture-independent analysis of microbial communities. Although the data obtained with this approach are portable and the number of sequences available in public databases is increasing, no tool has been developed yet for the analysis and presentation of data obtained in different studies. This work describes an approach for the development of a database for the rapid exploration and analysis of data on food microbial communities. Data from seventeen studies investigating the structure of bacterial communities in dairy, meat, sourdough and fermented vegetable products, obtained by 16S rRNA gene targeted high-throughput sequencing, were collated and analysed using Gephi, a network analysis software. The resulting database, which we named FoodMicrobionet, was used to analyse nodes and network properties and to build an interactive web-based visualisation. The latter allows the visual exploration of the relationships between Operational Taxonomic Units (OTUs) and samples and the identification of core- and sample-specific bacterial communities. It also provides additional search tools and hyperlinks for the rapid selection of food groups and OTUs and for rapid access to external resources (NCBI taxonomy, digital versions of the original articles). Microbial interaction network analysis was carried out using CoNet on datasets extracted from FoodMicrobionet: the complexity of interaction networks was much lower than that found for other bacterial communities (human microbiome, soil and other environments). This may reflect both a bias in the dataset (which was dominated by fermented foods and starter cultures) and the lower complexity of food bacterial communities. Although some technical challenges exist, and are discussed here, the net result is a valuable tool for the exploration of food bacterial communities by the scientific community and food industry. Copyright © 2015. Published by
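
    The kind of OTU-sample network such a database encodes can be sketched as a bipartite graph; the example below uses networkx with invented samples, OTUs and abundances purely for illustration.

```python
# Hedged sketch: an OTU-sample bipartite network of the kind FoodMicrobionet
# visualises. Samples, OTUs and abundances below are purely illustrative.
import networkx as nx

B = nx.Graph()
B.add_nodes_from(["cheese_A", "salami_B"], bipartite="sample")
B.add_nodes_from(["Lactobacillus", "Staphylococcus", "Lactococcus"], bipartite="OTU")

# Edge weights stand in for relative abundances of an OTU in a sample
B.add_weighted_edges_from([
    ("cheese_A", "Lactococcus", 0.7),
    ("cheese_A", "Lactobacillus", 0.2),
    ("salami_B", "Staphylococcus", 0.5),
    ("salami_B", "Lactobacillus", 0.4),
])

# OTUs linked to several samples hint at core communities
shared = [n for n, d in B.degree if B.nodes[n]["bipartite"] == "OTU" and d > 1]
print(shared)
```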

  16. FunCoup 3.0: database of genome-wide functional coupling networks.

    Science.gov (United States)

    Schmitt, Thomas; Ogris, Christoph; Sonnhammer, Erik L L

    2014-01-01

    We present an update of the FunCoup database (http://FunCoup.sbc.su.se) of functional couplings, or functional associations, between genes and gene products. Identifying these functional couplings is an important step in the understanding of higher level mechanisms performed by complex cellular processes. FunCoup distinguishes between four classes of couplings: participation in the same signaling cascade, participation in the same metabolic process, co-membership in a protein complex and physical interaction. For each of these four classes, several types of experimental and statistical evidence are combined by Bayesian integration to predict genome-wide functional coupling networks. The FunCoup framework has been completely re-implemented to allow for more frequent future updates. It contains many improvements, such as a regularization procedure to automatically down-weight redundant evidence and a novel method to incorporate phylogenetic profile similarity. Several datasets have been updated and new data have been added in FunCoup 3.0. Furthermore, we have developed a new Web site, which provides powerful tools to explore the predicted networks and to retrieve detailed information about the data underlying each prediction.
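
    The Bayesian integration step can be illustrated with a small, self-contained sketch: under the naive independence assumption, each evidence type contributes a log-likelihood ratio and their sum updates the prior odds of a coupling. The probabilities, evidence types and prior below are invented for illustration and are not FunCoup values.

```python
# Hedged sketch of naive Bayesian evidence integration: independent evidence
# types each contribute a log-likelihood ratio, and their sum scores the link.
import math

def log_likelihood_ratio(p_given_coupled, p_given_not_coupled):
    return math.log(p_given_coupled / p_given_not_coupled)

evidence = {
    "co-expression":        log_likelihood_ratio(0.30, 0.05),
    "phylogenetic profile": log_likelihood_ratio(0.20, 0.10),
    "shared TF binding":    log_likelihood_ratio(0.15, 0.12),
}

total_llr = sum(evidence.values())   # naive assumption: evidence types independent
prior_odds = 1 / 1000                # assumed prior odds of functional coupling
posterior_odds = prior_odds * math.exp(total_llr)
print(posterior_odds / (1 + posterior_odds))  # posterior probability of coupling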

  17. FINAL DFIRM DATABASE, LIMESTONE COUNTY, TEXAS, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  18. FINAL DFIRM DATABASE, UNION PARISH, LOUISIANA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  19. FINAL DFIRM DATABASE, SHARP COUNTY, ARKANSAS, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  20. FINAL DFIRM DATABASE, BRYAN COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  1. rSNPBase 3.0: an updated database of SNP-related regulatory elements, element-gene pairs and SNP-based gene regulatory networks.

    Science.gov (United States)

    Guo, Liyuan; Wang, Jing

    2018-01-04

    Here, we present the updated rSNPBase 3.0 database (http://rsnp3.psych.ac.cn), which provides human SNP-related regulatory elements, element-gene pairs and SNP-based regulatory networks. This database is the updated version of the SNP regulatory annotation databases rSNPBase and rVarBase. In comparison to the last two versions, there are both structural and data adjustments in rSNPBase 3.0: (i) the most significant new feature is the expansion of the analysis scope from SNP-related regulatory elements to regulatory element-target gene pairs (E-G pairs), so that SNP-based gene regulatory networks can be provided; (ii) the web functions were modified according to the data content, and a new network search module is provided in rSNPBase 3.0 in addition to the previous regulatory SNP (rSNP) search module. The two search modules support queries for detailed information (related elements, element-gene pairs, and other extended annotations) on specific SNPs and for SNP-related graphic networks constructed by interacting transcription factors (TFs), miRNAs and genes. (iii) The types of regulatory elements were modified and enriched. To the best of our knowledge, the updated rSNPBase 3.0 is the first data tool that supports SNP functional analysis from a regulatory network perspective; it will provide both a comprehensive understanding and concrete guidance for SNP-related regulatory studies. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  2. National Transportation Atlas Databases : 2012.

    Science.gov (United States)

    2012-01-01

    The National Transportation Atlas Databases 2012 (NTAD2012) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...

  3. National Transportation Atlas Databases : 2011.

    Science.gov (United States)

    2011-01-01

    The National Transportation Atlas Databases 2011 (NTAD2011) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...

  4. National Transportation Atlas Databases : 2009.

    Science.gov (United States)

    2009-01-01

    The National Transportation Atlas Databases 2009 (NTAD2009) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...

  5. National Transportation Atlas Databases : 2010.

    Science.gov (United States)

    2010-01-01

    The National Transportation Atlas Databases 2010 (NTAD2010) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...

  6. Application of CAPEC Lipid Property Databases in the Synthesis and Design of Biorefinery Networks

    DEFF Research Database (Denmark)

    Bertran, Maria-Ona; Cunico, Larissa; Gani, Rafiqul

    Petroleum is currently the primary raw material for the production of fuels and chemicals. Consequently, our society is highly dependent on fossil non-renewable resources. However, renewable raw materials are recently receiving increasing interest for the production of chemicals and fuels [...]. The wide variety and complex nature of components in biorefineries poses a challenge with respect to the synthesis and design of these types of processes. Whereas physical and thermodynamic property data or models for petroleum-based processes are widely available, most data and models for biobased [...] The objective of this work is to show the application of databases of physical and thermodynamic properties of lipid components to the synthesis and design of biorefinery networks.

  7. Citation guidelines for nuclear data retrieved from databases resident at the Nuclear Data Centers Network

    International Nuclear Information System (INIS)

    McLane, V.

    1996-07-01

    The Nuclear Data Centers Network is a world-wide cooperation of nuclear data centers under the auspices of the International Atomic Energy Agency. The Network organizes the task of collecting, compiling, standardizing, storing, assessing, and distributing the nuclear data on an international scale. Information available at the Centers includes bibliographic, experimental, and evaluated databases for nuclear reaction data and for nuclear structure and radioactive decay data. The objective of the Network is to provide the information to users in a convenient, readily-available form. To this end, online data services have been established at three of the centers: the National Nuclear Data Center (NNDC), the Nuclear Data Section of the International Atomic Energy Agency (NDS), and the OECD Nuclear Energy Agency Data Bank (NEADB). Some information is also available at the NNDC and NEADB World Wide Web sites

  8. Tradeoffs in distributed databases

    OpenAIRE

    Juntunen, R. (Risto)

    2016-01-01

    Abstract In a distributed database, data is spread throughout the network across separate nodes, possibly running different DBMS systems (Date, 2000). According to the CAP theorem, the three database properties of consistency, availability and partition tolerance cannot all be achieved simultaneously in distributed database systems; any two of these properties can be achieved, but not all three at the same time (Brewer, 2000). Since this theorem there has b...

  9. The ING Seismic Network Databank (ISND : a friendly parameters and waveform database

    Directory of Open Access Journals (Sweden)

    G. Smriglio

    1995-06-01

    Full Text Available The Istituto Nazionale di Geofisica (ING) Seismic Network Database (ISND) includes over 300,000 arrival times of Italian, Mediterranean and teleseismic earthquakes from 1983 to date. This database is a useful tool for Italian and foreign seismologists (over 1000 data requests in the first 6 months of this year). Recently (1994) the ING began storing in the ISND the digital waveforms associated with arrival times and experimentally allowed users to retrieve waveforms recorded by the ING acquisition system. In this paper we describe the types of data stored and the interactive and batch procedures available to obtain arrival times and/or associated waveforms. The ISND is reachable via telephone line, P.S.I., Internet and DECnet. Users can read and send to their e-mail address all selected earthquake locations, parameters, arrival times and associated digital waveforms (in SAC, SUDS or ASCII format). For medium or large amounts of data, users can ask to receive the data by means of magnetic media (DAT, Video 8, floppy disk).

  10. Survey on utilization of database for research and development of global environmental industry technology; Chikyu kankyo sangyo gijutsu kenkyu kaihatsu no tame no database nado no riyo ni kansuru chosa

    Energy Technology Data Exchange (ETDEWEB)

    1993-03-01

    To optimize networks and database systems for the promotion of industry technology development contributing to the solution of the global environmental problem, studies are made on reusable information resources and their utilization methods. Reusable information resources include external databases and network systems for researchers' information exchange and for computer use. The external databases include commercial databases and academic databases. As commercial databases, 6 agents and 13 service systems are selected. As academic databases, there are NACSIS-IR and the databases connected to INTERNET in the U.S. These are used in connection with the UNIX academic research network called INTERNET. For connection with INTERNET, a commercial UNIX network service called IIJ, which starts service in April 1993, can be used. However, personal computer communication networks are used for the time being. 6 figs., 4 tabs.

  11. Teradata University Network: A No Cost Web-Portal for Teaching Database, Data Warehousing, and Data-Related Subjects

    Science.gov (United States)

    Jukic, Nenad; Gray, Paul

    2008-01-01

    This paper describes the value that information systems faculty and students in classes dealing with database management, data warehousing, decision support systems, and related topics, could derive from the use of the Teradata University Network (TUN), a free comprehensive web-portal. A detailed overview of TUN functionalities and content is…

  12. Artificial Neural Networks for differential diagnosis of breast lesions in MR-Mammography: A systematic approach addressing the influence of network architecture on diagnostic performance using a large clinical database

    International Nuclear Information System (INIS)

    Dietzel, Matthias; Baltzer, Pascal A.T.; Dietzel, Andreas; Zoubi, Ramy; Gröschel, Tobias; Burmeister, Hartmut P.; Bogdan, Martin; Kaiser, Werner A.

    2012-01-01

    Rationale and objectives: Differential diagnosis of lesions in MR-Mammography (MRM) remains a complex task. The aim of this MRM study was to design and to test robustness of Artificial Neural Network architectures to predict malignancy using a large clinical database. Materials and methods: For this IRB-approved investigation standardized protocols and study design were applied (T1w-FLASH; 0.1 mmol/kgBW Gd-DTPA; T2w-TSE; histological verification after MRM). All lesions were evaluated by two experienced (>500 MRM) radiologists in consensus. In every lesion, 18 previously published descriptors were assessed and documented in the database. An Artificial Neural Network (ANN) was developed to process this database (The-MathWorks/Inc., feed-forward-architecture/resilient back-propagation-algorithm). All 18 descriptors were set as input variables, whereas the histological result (malignant vs. benign) was defined as the classification variable. Initially, the ANN was optimized in terms of “Training Epochs” (TE), “Hidden Layers” (HL), “Learning Rate” (LR) and “Neurons” (N). Robustness of the ANN was addressed by repeated evaluation cycles (n: 9) with receiver operating characteristics (ROC) analysis of the results applying 4-fold Cross Validation. The best network architecture was identified by comparing the corresponding Area under the ROC curve (AUC). Results: Histopathology revealed 436 benign and 648 malignant lesions. Enhancing the level of complexity could not increase the diagnostic accuracy of the network (P: n.s.). The optimized ANN architecture (TE: 20, HL: 1, N: 5, LR: 1.2) was accurate (mean-AUC 0.888; P: <0.001) and robust (CI: 0.885–0.892; range: 0.880–0.898). Conclusion: The optimized neural network showed robust performance and high diagnostic accuracy for prediction of malignancy on unknown data.

  13. Artificial Neural Networks for differential diagnosis of breast lesions in MR-Mammography: a systematic approach addressing the influence of network architecture on diagnostic performance using a large clinical database.

    Science.gov (United States)

    Dietzel, Matthias; Baltzer, Pascal A T; Dietzel, Andreas; Zoubi, Ramy; Gröschel, Tobias; Burmeister, Hartmut P; Bogdan, Martin; Kaiser, Werner A

    2012-07-01

    Differential diagnosis of lesions in MR-Mammography (MRM) remains a complex task. The aim of this MRM study was to design and to test robustness of Artificial Neural Network architectures to predict malignancy using a large clinical database. For this IRB-approved investigation standardized protocols and study design were applied (T1w-FLASH; 0.1 mmol/kgBW Gd-DTPA; T2w-TSE; histological verification after MRM). All lesions were evaluated by two experienced (>500 MRM) radiologists in consensus. In every lesion, 18 previously published descriptors were assessed and documented in the database. An Artificial Neural Network (ANN) was developed to process this database (The-MathWorks/Inc., feed-forward-architecture/resilient back-propagation-algorithm). All 18 descriptors were set as input variables, whereas the histological result (malignant vs. benign) was defined as the classification variable. Initially, the ANN was optimized in terms of "Training Epochs" (TE), "Hidden Layers" (HL), "Learning Rate" (LR) and "Neurons" (N). Robustness of the ANN was addressed by repeated evaluation cycles (n: 9) with receiver operating characteristics (ROC) analysis of the results applying 4-fold Cross Validation. The best network architecture was identified by comparing the corresponding Area under the ROC curve (AUC). Histopathology revealed 436 benign and 648 malignant lesions. Enhancing the level of complexity could not increase the diagnostic accuracy of the network (P: n.s.). The optimized ANN architecture (TE: 20, HL: 1, N: 5, LR: 1.2) was accurate (mean-AUC 0.888; P: <0.001) and robust (CI: 0.885-0.892; range: 0.880-0.898). The optimized neural network showed robust performance and high diagnostic accuracy for prediction of malignancy on unknown data. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
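
    A rough sketch of the described setup (18 descriptors feeding a feed-forward network with a single hidden layer of five neurons, scored by cross-validated AUC) is given below. It uses scikit-learn with synthetic data, so the optimizer differs from the resilient back-propagation used by the authors and the numbers carry no clinical meaning.

```python
# Hedged sketch: a small feed-forward network with one hidden layer of five
# neurons over 18 lesion descriptors, evaluated with 4-fold cross-validated AUC.
# Synthetic data stands in for the clinical database.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 18))                 # 18 descriptors per lesion
y = (X[:, 0] + X[:, 1] > 1).astype(int)   # stand-in for benign (0) vs malignant (1)

ann = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000, random_state=0)
auc = cross_val_score(ann, X, y, cv=4, scoring="roc_auc")  # 4-fold cross validation
print(auc.mean(), auc.std())
```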

  14. Secure Distributed Databases Using Cryptography

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2006-01-01

    Full Text Available Computational encryption is used intensively by database management systems to ensure the privacy and integrity of information that is physically stored in files. The information is also sent over the network and replicated on different distributed systems. A satisfactory level of security is achieved if the rows and columns of tables are encrypted independently of the table or computer that holds the data. It is also very important that SQL (Structured Query Language) query requests and responses be encrypted over the network connection between the client and the database server. All these techniques and methods must be implemented by database administrators, designers and developers as part of a consistent security policy.
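
    As a hedged illustration of the row/column-level encryption idea, the sketch below encrypts individual column values with a symmetric cipher before they ever reach the database or the network; it uses the third-party Python 'cryptography' package, and the table, columns and key handling are simplified placeholders.

```python
# Hedged sketch: encrypt individual column values before they are stored or
# sent over the network, so the database server never sees plaintext.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, managed by a key service, not the DB
cipher = Fernet(key)

row = {"patient_id": "P-1042", "diagnosis": "hypertension"}

# Encrypt each sensitive column value independently of the table that stores it
encrypted_row = {col: cipher.encrypt(val.encode()) for col, val in row.items()}

# Only clients holding the key can decrypt query results
decrypted = {col: cipher.decrypt(tok).decode() for col, tok in encrypted_row.items()}
assert decrypted == row
```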

  15. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working, as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even if clients are geographically distributed, when data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and

  16. SmallSat Database

    Science.gov (United States)

    Petropulos, Dolores; Bittner, David; Murawski, Robert; Golden, Bert

    2015-01-01

    The SmallSat has unrealized potential in both private industry and the federal government. Currently over 70 companies, 50 universities and 17 governmental agencies are involved in SmallSat research and development. In 1994, the U.S. Army Missile and Defense mapped the moon using smallSat imagery. Since then smart phones have introduced this imagery to the people of the world as diverse industries watched this trend. The deployment cost of smallSats is also greatly reduced compared to traditional satellites, because multiple units can be deployed in a single mission. Imaging payloads have become more sophisticated, smaller and lighter. In addition, the growth of small technology obtained from private industries has led to the more widespread use of smallSats. This includes greater revisit rates in imagery, significantly lower costs, the ability to update technology more frequently and the ability to decrease vulnerability to enemy attacks. The popularity of smallSats shows a changing mentality in this fast-paced world of tomorrow. What impact has this created on the NASA communication networks now and in future years? In this project, we are developing the SmallSat Relational Database, which can support a simulation of smallSats within the NASA SCaN Compatibility Environment for Networks and Integrated Communications (SCENIC) Modeling and Simulation Lab. The NASA Space Communications and Networks (SCaN) Program can use this modeling to project required network support needs in the next 10 to 15 years. The SmallSat Relational Database could model smallSats just as the other SCaN databases model the more traditional larger satellites, with a few exceptions. One is that the smallSat database is designed to be built-to-order. The SmallSat database holds various hardware configurations that can be used to model a smallSat. It will require significant effort to develop, as the research material can only be populated by hand to obtain the unique data

  17. Experience in running relational databases on clustered storage

    CERN Document Server

    Aparicio, Ruben Gaspar

    2015-01-01

    For the past eight years, the CERN IT Database group has based its backend storage on a NAS (Network-Attached Storage) architecture, providing database access via the NFS (Network File System) protocol. In the last two and a half years, our storage has evolved from a scale-up architecture to a scale-out one. This paper describes our setup and a set of functionalities providing key features to other services like Database on Demand [1] or the CERN Oracle backup and recovery service. It also outlines a possible trend of evolution that storage for databases could follow.

  18. Experience and Lessons learnt from running high availability databases on Network Attached Storage

    CERN Document Server

    Guijarro, Juan Manuel; Segura Chinchilla, Nilo

    2008-01-01

    The Database and Engineering Services Group of CERN's Information Technology Department provides the Oracle-based central database services used in many activities at CERN. In order to provide high availability and ease of management for those services, a NAS (Network Attached Storage) based infrastructure has been set up. It runs several instances of Oracle RAC (Real Application Cluster) using NFS as shared disk space for RAC purposes and data hosting. It is composed of two private LANs that provide access to the NAS file servers and the Oracle RAC interconnect, both using network bonding. NAS nodes are configured in partnership to prevent single points of failure and to provide automatic NAS fail-over. This presentation describes that infrastructure and gives some advice on how to automate its management and setup using a fabric management framework such as Quattor. It also covers aspects related to NAS performance and monitoring as well as data backup and archiving of such a facility using already existing i...

  19. FINAL DFIRM DATABASE, PALO PINTO COUNTY, TEXAS, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  20. Investigating the Potential Impacts of Energy Production in the Marcellus Shale Region Using the Shale Network Database and CUAHSI-Supported Data Tools

    Science.gov (United States)

    Brazil, L.

    2017-12-01

    The Shale Network's extensive database of water quality observations enables educational experiences about the potential impacts of resource extraction with real data. Through open source tools that are developed and maintained by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI), researchers, educators, and citizens can access and analyze the very same data that the Shale Network team has used in peer-reviewed publications about the potential impacts of hydraulic fracturing on water. The development of the Shale Network database has been made possible through collection efforts led by an academic team and involving numerous individuals from government agencies, citizen science organizations, and private industry. Thus far, CUAHSI-supported data tools have been used to engage high school students, university undergraduate and graduate students, as well as citizens so that all can discover how energy production impacts the Marcellus Shale region, which includes Pennsylvania and other nearby states. This presentation will describe these data tools, how the Shale Network has used them in developing educational material, and the resources available to learn more.

  1. 3D multi-view convolutional neural networks for lung nodule classification

    Science.gov (United States)

    Kang, Guixia; Hou, Beibei; Zhang, Ningbo

    2017-01-01

    The 3D convolutional neural network (CNN) is able to make full use of the spatial 3D context information of lung nodules, and the multi-view strategy has been shown to be useful for improving the performance of 2D CNN in classifying lung nodules. In this paper, we explore the classification of lung nodules using the 3D multi-view convolutional neural networks (MV-CNN) with both chain architecture and directed acyclic graph architecture, including 3D Inception and 3D Inception-ResNet. All networks employ the multi-view-one-network strategy. We conduct a binary classification (benign and malignant) and a ternary classification (benign, primary malignant and metastatic malignant) on Computed Tomography (CT) images from Lung Image Database Consortium and Image Database Resource Initiative database (LIDC-IDRI). All results are obtained via 10-fold cross validation. As regards the MV-CNN with chain architecture, results show that the performance of 3D MV-CNN surpasses that of 2D MV-CNN by a significant margin. Finally, a 3D Inception network achieved an error rate of 4.59% for the binary classification and 7.70% for the ternary classification, both of which represent superior results for the corresponding task. We compare the multi-view-one-network strategy with the one-view-one-network strategy. The results reveal that the multi-view-one-network strategy can achieve a lower error rate than the one-view-one-network strategy. PMID:29145492
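
    A minimal, hypothetical sketch of a volumetric CNN for binary nodule classification is shown below to illustrate how 3D context enters the model; it is written in PyTorch, is far smaller than the 3D Inception and Inception-ResNet architectures evaluated in the paper, and the 32-cubed input size is an assumption.

```python
# Hedged sketch of a tiny 3D CNN for binary lung-nodule classification,
# illustrating the use of volumetric context; not the authors' architecture.
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.classifier = nn.Linear(16 * 8 * 8 * 8, n_classes)

    def forward(self, x):                    # x: (batch, 1, 32, 32, 32) CT patch
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = Tiny3DCNN()
logits = model(torch.randn(4, 1, 32, 32, 32))   # four random 32^3 nodule patches
print(logits.shape)                              # torch.Size([4, 2])
```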

  2. An Analysis of Database Replication Technologies with Regard to Deep Space Network Application Requirements

    Science.gov (United States)

    Connell, Andrea M.

    2011-01-01

    The Deep Space Network (DSN) has three communication facilities which handle telemetry, commands, and other data relating to spacecraft missions. The network requires these three sites to share data with each other and with the Jet Propulsion Laboratory for processing and distribution. Many database management systems have replication capabilities built in, which means that data updates made at one location will be automatically propagated to other locations. This project examines multiple replication solutions, looking for stability, automation, flexibility, performance, and cost. After comparing these features, Oracle Streams is chosen for closer analysis. Two Streams environments are configured - one with a Master/Slave architecture, in which a single server is the source for all data updates, and the second with a Multi-Master architecture, in which updates originating from any of the servers will be propagated to all of the others. These environments are tested for data type support, conflict resolution, performance, changes to the data structure, and behavior during and after network or server outages. Through this experimentation, it is determined which requirements of the DSN can be met by Oracle Streams and which cannot.

  3. ORACLE DATABASE SECURITY

    OpenAIRE

    Cristina-Maria Titrade

    2011-01-01

    This paper presents some security issues, namely database system-level security, data-level security, user-level security, user management, resource management and password management. Security is a constant concern in database design and development. Usually, the question is not whether security should exist, but rather how extensive it should be. A typical DBMS has several levels of security, in addition to those offered by the operating system or network. Typically, a DBMS has user a...

  4. Physical database design using Oracle

    CERN Document Server

    Burleson, Donald K

    2004-01-01

    INTRODUCTION TO ORACLE PHYSICAL DESIGN: Preface; Relational Databases and Physical Design; Systems Analysis and Physical Database Design; Introduction to Logical Database Design; Entity/Relation Modeling; Bridging between Logical and Physical Models; Physical Design Requirements Validation. PHYSICAL ENTITY DESIGN FOR ORACLE: Data Relationships and Physical Design; Massive De-Normalization: STAR Schema Design; Designing Class Hierarchies; Materialized Views and De-Normalization; Referential Integrity; Conclusion. ORACLE HARDWARE DESIGN: Planning the Server Environment; Designing the Network Infrastructure for Oracle; Oracle Netw...

  5. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective

    Directory of Open Access Journals (Sweden)

    Shuo Gu

    2017-01-01

    Full Text Available With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine with in-depth understanding towards pharmacognosy. This paper summarized these studies in the aspects of computational methods, traditional Chinese medicine (TCM compound databases, and TCM network pharmacology. Furthermore, we chose arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at network level. Finally, a computational workflow for the network-based TCM study, derived from our previous successful applications, was proposed.

  6. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective.

    Science.gov (United States)

    Gu, Shuo; Pei, Jianfeng

    2017-01-01

    With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine with in-depth understanding towards pharmacognosy. This paper summarized these studies in the aspects of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at network level. Finally, a computational workflow for the network-based TCM study, derived from our previous successful applications, was proposed.

  7. The IRPVM-DB database

    International Nuclear Information System (INIS)

    Davies, L.M.; Gillemot, F.; Yanko, L.; Lyssakov, V.

    1997-01-01

    The IRPVM-DB (International Reactor Pressure Vessel Material Database) initiated by the IAEA IWG LMNPP is going to collect the available surveillance and research data world-wide on RPV material ageing. This paper presents the purpose of the database; it summarizes the type and the relationship of data included; it gives information about the data access and protection; and finally, it summarizes the state of art of the database. (author). 1 ref., 2 figs

  8. The IRPVM-DB database

    Energy Technology Data Exchange (ETDEWEB)

    Davies, L M [Davies Consultants, Oxford (United Kingdom); Gillemot, F [Atomic Energy Research Inst., Budapest (Hungary); Yanko, L [Minatom (Russian Federation); Lyssakov, V [International Atomic Energy Agency, Vienna (Austria)

    1997-09-01

    The IRPVM-DB (International Reactor Pressure Vessel Material Database) initiated by the IAEA IWG LMNPP is going to collect the available surveillance and research data world-wide on RPV material ageing. This paper presents the purpose of the database; it summarizes the type and the relationship of data included; it gives information about the data access and protection; and finally, it summarizes the state of art of the database. (author). 1 ref., 2 figs.

  9. Database design using entity-relationship diagrams

    CERN Document Server

    Bagui, Sikha

    2011-01-01

    Data, Databases, and the Software Engineering Process: Data; Building a Database; What is the Software Engineering Process?; Entity Relationship Diagrams and the Software Engineering Life Cycle (Phase 1: Get the Requirements for the Database; Phase 2: Specify the Database; Phase 3: Design the Database). Data and Data Models: Files, Records, and Data Items; Moving from 3 × 5 Cards to Computers; Database Models (The Hierarchical Model; The Network Model; The Relational Model); The Relational Model and Functional Dependencies; Fundamental Relational Database; Relational Database and Sets; Functional...

  10. Combination of a geolocation database access with infrastructure sensing in TV bands

    OpenAIRE

    Dionísio, Rogério; Ribeiro, Jorge; Marques, Paulo; Rodriguez, Jonathan

    2014-01-01

    This paper describes the implementation and the technical specifications of a geolocation database assisted by an outdoor spectrum-monitoring network. The geolocation database is populated according to the Electronic Communications Committee (ECC) Report 186 methodology. The application programming interface (API) between the sensor network and the geolocation database implements an effective and secure connection to gather sensing data and send it to the geolocation database for ...

  11. Detection of material property errors in handbooks and databases using artificial neural networks with hidden correlations

    Science.gov (United States)

    Zhang, Y. M.; Evans, J. R. G.; Yang, S. F.

    2010-11-01

    The authors have discovered a systematic, intelligent and potentially automatic method to detect errors in handbooks and stop their transmission using unrecognised relationships between materials properties. The scientific community relies on the veracity of scientific data in handbooks and databases, some of which have a long pedigree covering several decades. Although various outlier-detection procedures are employed to detect and, where appropriate, remove contaminated data, errors, which had not been discovered by established methods, were easily detected by our artificial neural network in tables of properties of the elements. We started using neural networks to discover unrecognised relationships between materials properties and quickly found that they were very good at finding inconsistencies in groups of data. They reveal variations from 10 to 900% in tables of property data for the elements and point out those that are most probably correct. Compared with the statistical method adopted by Ashby and co-workers [Proc. R. Soc. Lond. Ser. A 454 (1998) p. 1301, 1323], this method locates more inconsistencies and could be embedded in database software for automatic self-checking. We anticipate that our suggestion will be a starting point to deal with this basic problem that affects researchers in every field. The authors believe it may eventually moderate the current expectation that data field error rates will persist at between 1 and 5%.
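
    As a rough illustration of the underlying idea (not the authors' neural-network implementation), one can learn the hidden correlation between properties and flag table entries whose values deviate strongly from the model's prediction; in the sketch below a random-forest regressor and a synthetic property table stand in for the real network and data.

```python
# Hedged sketch: learn the correlation between properties of the elements and
# flag entries that deviate strongly from the model's prediction as suspect.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
other_props = rng.random((60, 4))                             # e.g. density, radius, ...
target_prop = other_props @ np.array([2.0, -1.0, 0.5, 0.3])   # correlated property
target_prop[7] *= 10                                          # planted handbook error

model = RandomForestRegressor(n_estimators=200, oob_score=True, random_state=0)
model.fit(other_props, target_prop)

residuals = np.abs(model.oob_prediction_ - target_prop)
suspects = np.argsort(residuals)[::-1][:3]   # most inconsistent entries
print(suspects)                              # the planted error (index 7) should rank near the top
```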

  12. The magnet components database system

    International Nuclear Information System (INIS)

    Baggett, M.J.; Leedy, R.; Saltmarsh, C.; Tompkins, J.C.

    1990-01-01

    The philosophy, structure, and usage of MagCom, the SSC magnet components database, are described. The database has been implemented in Sybase (a powerful relational database management system) on a UNIX-based workstation at the Superconducting Super Collider Laboratory (SSCL); magnet project collaborators can access the database via network connections. The database was designed to contain the specifications and measured values of important properties for major materials, plus configuration information (specifying which individual items were used in each cable, coil, and magnet) and the test results on completed magnets. The data will facilitate the tracking and control of the production process as well as the correlation of magnet performance with the properties of its constituents. 3 refs., 9 figs

  13. The magnet components database system

    International Nuclear Information System (INIS)

    Baggett, M.J.; Leedy, R.; Saltmarsh, C.; Tompkins, J.C.

    1990-01-01

    The philosophy, structure, and usage of MagCom, the SSC magnet components database, are described. The database has been implemented in Sybase (a powerful relational database management system) on a UNIX-based workstation at the Superconducting Super Collider Laboratory (SSCL); magnet project collaborators can access the database via network connections. The database was designed to contain the specifications and measured values of important properties for major materials, plus configuration information (specifying which individual items were used in each cable, coil, and magnet) and the test results on completed magnets. These data will facilitate the tracking and control of the production process as well as the correlation of magnet performance with the properties of its constituents. 3 refs., 10 figs

  14. Recon2Neo4j: applying graph database technologies for managing comprehensive genome-scale networks.

    Science.gov (United States)

    Balaur, Irina; Mazein, Alexander; Saqi, Mansoor; Lysenko, Artem; Rawlings, Christopher J; Auffray, Charles

    2017-04-01

    The goal of this work is to offer a computational framework for exploring data from the Recon2 human metabolic reconstruction model. Advanced user access features have been developed using the Neo4j graph database technology and this paper describes key features such as efficient management of the network data, examples of the network querying for addressing particular tasks, and how query results are converted back to the Systems Biology Markup Language (SBML) standard format. The Neo4j-based metabolic framework facilitates exploration of highly connected and comprehensive human metabolic data and identification of metabolic subnetworks of interest. A Java-based parser component has been developed to convert query results (available in the JSON format) into SBML and SIF formats in order to facilitate further results exploration, enhancement or network sharing. The Neo4j-based metabolic framework is freely available from: https://diseaseknowledgebase.etriks.org/metabolic/browser/ . The java code files developed for this work are available from the following url: https://github.com/ibalaur/MetabolicFramework . ibalaur@eisbm.org. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
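
    As a hedged illustration of how such a Neo4j-backed framework is typically queried from code, the snippet below runs a Cypher pattern match through the official Python driver; the connection details, node labels, relationship type and property names are placeholders, not the actual Recon2Neo4j schema.

```python
# Hedged sketch: querying a metabolic graph in Neo4j from Python. Labels,
# relationship type, property names and credentials are assumptions.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH (m:Metabolite {name: $name})-[:PARTICIPATES_IN]->(r:Reaction)
RETURN r.id AS reaction, r.subsystem AS subsystem
"""

with driver.session() as session:
    for record in session.run(query, name="atp"):
        print(record["reaction"], record["subsystem"])

driver.close()
```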

  15. Establishment of database and network for research of steam generator and state of the art technology review

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jae Bong; Hur, Nam Su; Moon, Seong In; Seo, Hyeong Won; Park, Bo Kyu; Park, Sung Ho; Kim, Hyung Geun [Sungkyunkwan Univ., Seoul (Korea, Republic of)

    2004-02-15

    A significant number of steam generator tubes are defective and are removed from service or repaired worldwide. This widespread damage has been caused by diverse degradation mechanisms, some of which are difficult to detect and predict. For domestic nuclear power plants as well, the increase in the number of operating nuclear power plants and in operating periods may result in an increase in steam generator tube failures. It is therefore important to carry out the integrity evaluation process to prevent steam generator tube damage. This research has two objectives. The first is to build a database for steam generator research at domestic research institutions. It will increase the efficiency and capability of limited domestic research resources by sharing data and information through a network organization. It will also enhance the current standard of the integrity evaluation procedure, which is considerably conservative but can be made more reasonable. The second objective is to establish a standard integrity evaluation procedure for steam generator tubes by reviewing the state of the art technology. The research resources related to steam generator tubes are managed by the established web-based database system. The following topics are covered in this project: development of a web-based network for research on steam generator tubes; review of state-of-the-art technology.

  16. Establishment of database and network for research of steam generator and state of the art technology review

    International Nuclear Information System (INIS)

    Choi, Jae Bong; Hur, Nam Su; Moon, Seong In; Seo, Hyeong Won; Park, Bo Kyu; Park, Sung Ho; Kim, Hyung Geun

    2004-02-01

    A significant number of steam generator tubes are defective and are removed from service or repaired worldwide. This widespread damage has been caused by diverse degradation mechanisms, some of which are difficult to detect and predict. For domestic nuclear power plants as well, the increase in the number of operating nuclear power plants and in operating periods may result in an increase in steam generator tube failures. It is therefore important to carry out the integrity evaluation process to prevent steam generator tube damage. This research has two objectives. The first is to build a database for steam generator research at domestic research institutions. It will increase the efficiency and capability of limited domestic research resources by sharing data and information through a network organization. It will also enhance the current standard of the integrity evaluation procedure, which is considerably conservative but can be made more reasonable. The second objective is to establish a standard integrity evaluation procedure for steam generator tubes by reviewing the state of the art technology. The research resources related to steam generator tubes are managed by the established web-based database system. The following topics are covered in this project: development of a web-based network for research on steam generator tubes; review of state-of-the-art technology.

  17. An Adaptive Database Intrusion Detection System

    Science.gov (United States)

    Barrios, Rita M.

    2011-01-01

    Intrusion detection is difficult to accomplish with current methodologies when considering the database and the authorized entity. It is a common understanding that current methodologies focus on the network architecture rather than the database, which is not an adequate solution when considering the insider threat. Recent…

  18. 21SSD: a new public 21-cm EoR database

    Science.gov (United States)

    Eames, Evan; Semelin, Benoît

    2018-05-01

    With current efforts inching closer to detecting the 21-cm signal from the Epoch of Reionization (EoR), proper preparation will require publicly available simulated models of the various forms the signal could take. In this work we present a database of such models, available at 21ssd.obspm.fr. The models are created with a fully-coupled radiative hydrodynamic simulation (LICORICE) at high resolution (1024³). We also begin to analyse and explore the possible 21-cm EoR signals (with power spectra and pixel distribution functions), and study the effects of thermal noise on our ability to recover the signal out to high redshifts. Finally, we begin to explore the concept of 'distance' between different models, which represents a crucial step towards optimising parameter space sampling, training neural networks, and finally extracting parameter values from observations.

  19. In silico identification of anti-cancer compounds and plants from traditional Chinese medicine database

    Science.gov (United States)

    Dai, Shao-Xing; Li, Wen-Xing; Han, Fei-Fei; Guo, Yi-Cheng; Zheng, Jun-Juan; Liu, Jia-Qian; Wang, Qian; Gao, Yue-Dong; Li, Gong-Hua; Huang, Jing-Fei

    2016-05-01

    There is a constant demand to develop new, effective, and affordable anti-cancer drugs. Traditional Chinese medicine (TCM) is a valuable and alternative resource for identifying novel anti-cancer agents. In this study, we aim to identify anti-cancer compounds and plants from the TCM database using cheminformatics. We first predicted 5278 anti-cancer compounds from the TCM database. The top 346 compounds were highly potent and active in the 60-cell-line test. Similarity analysis revealed that 75% of the 5278 compounds are highly similar to approved anti-cancer drugs. Based on the predicted anti-cancer compounds, we identified 57 anti-cancer plants by activity enrichment. The identified plants are widely distributed across 46 genera and 28 families, which broadens the scope of anti-cancer drug screening. Finally, we constructed a network of predicted anti-cancer plants and approved drugs based on the above results. The network highlighted the supportive role of the predicted plants in the development of anti-cancer drugs and suggested different molecular anti-cancer mechanisms of the plants. Our study suggests that the predicted compounds and plants from the TCM database offer an attractive starting point and a broader scope to mine for potential anti-cancer agents.
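
    The similarity analysis mentioned above is commonly done by comparing molecular fingerprints; the sketch below shows one conventional way to do this with RDKit (Tanimoto similarity of Morgan fingerprints). The molecules and the choice of fingerprint are illustrative assumptions, not the study's exact protocol.

```python
# Hedged sketch: compare a candidate natural product with an approved drug by
# Tanimoto similarity of Morgan fingerprints. SMILES strings are illustrative.
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

candidate = Chem.MolFromSmiles("O=C(O)c1ccccc1O")        # salicylic acid (stand-in)
approved  = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin (stand-in)

fp1 = AllChem.GetMorganFingerprintAsBitVect(candidate, radius=2, nBits=2048)
fp2 = AllChem.GetMorganFingerprintAsBitVect(approved, radius=2, nBits=2048)

print(DataStructs.TanimotoSimilarity(fp1, fp2))  # 1.0 = identical fingerprints
```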

  20. Nuclear Physics Science Network Requirements Workshop, May 2008 - Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, Ed., Brian L; Dart, Ed., Eli; Carlson, Rich; Dattoria, Vince; Ernest, Michael; Hitchcock, Daniel; Johnston, William; Kowalski, Andy; Lauret, Jerome; Maguire, Charles; Olson, Douglas; Purschke, Martin; Rai, Gulshan; Watson, Chip; Vale, Carla

    2008-11-10

    to manage those transfers effectively. Network reliability is also becoming more important as there is often a narrow window between data collection and data archiving when transfer and analysis can be done. The instruments do not stop producing data, so extended network outages can result in data loss due to analysis pipeline stalls. Finally, as the scope of collaboration continues to increase, collaboration tools such as audio and video conferencing are becoming ever more critical to the productivity of scientific collaborations.

  1. Random Walks and Diffusions on Graphs and Databases An Introduction

    CERN Document Server

    Blanchard, Philippe

    2011-01-01

    Most networks and databases that humans have to deal with contain a large, albeit finite, number of units. Their structure, in order to maintain functional consistency of the components, is essentially not random and calls for a precise quantitative description of relations between nodes (or data units) and all network components. This book is an introduction, for both graduate students and newcomers to the field, to the theory of graphs and random walks on such graphs. The methods based on random walks and diffusions for exploring the structure of finite connected graphs and databases are reviewed (Markov chain analysis). This provides the necessary basis for consistently discussing a number of applications as diverse as electric resistance networks, estimation of land prices, urban planning, linguistic databases, music, and gene expression regulatory networks.

  2. Computer application for database management and networking of service radio physics; Aplicacion informatica para la gestion de bases de datos y conexiones en red de un servicio de radiofisica

    Energy Technology Data Exchange (ETDEWEB)

    Ferrando Sanchez, A.; Cabello Murillo, E.; Diaz Fuentes, R.; Castro Novais, J.; Clemente Gutierrez, F.; Casa de Juan, M. A. de la; Adaimi Hernandez, P.

    2011-07-01

    Databases for quality control prove to be a powerful tool for recording, management and statistical process control. Developed in a Windows environment under Access (Microsoft Office), our service implements this philosophy on the centre's computer network. A computer acting as the server provides the database to the treatment units so that daily quality control measurements and incidents can be recorded. To eliminate common problems such as shortcuts that stop working after data migration, possible use of duplicates, and erroneous data loss caused by failures in network connections, we proceeded to manage the connections and database access centrally, easing maintenance and use for all service personnel.

  3. Incremental View Maintenance for Deductive Graph Databases Using Generalized Discrimination Networks

    Directory of Open Access Journals (Sweden)

    Thomas Beyhl

    2016-12-01

    Full Text Available Nowadays, graph databases are employed when relationships between entities are in the scope of database queries to avoid performance-critical join operations of relational databases. Graph queries are used to query and modify graphs stored in graph databases. Graph queries employ graph pattern matching that is NP-complete for subgraph isomorphism. Graph database views can be employed that keep ready answers in terms of precalculated graph pattern matches for often stated and complex graph queries to increase query performance. However, such graph database views must be kept consistent with the graphs stored in the graph database. In this paper, we describe how to use incremental graph pattern matching as technique for maintaining graph database views. We present an incremental maintenance algorithm for graph database views, which works for imperatively and declaratively specified graph queries. The evaluation shows that our maintenance algorithm scales when the number of nodes and edges stored in the graph database increases. Furthermore, our evaluation shows that our approach can outperform existing approaches for the incremental maintenance of graph query results.

  4. World Input-Output Network.

    Directory of Open Access Journals (Sweden)

    Federica Cerina

    Full Text Available Production systems, traditionally analyzed as almost independent national systems, are increasingly connected on a global scale. Only recently becoming available, the World Input-Output Database (WIOD) is one of the first efforts to construct the global multi-regional input-output (GMRIO) tables. By viewing the world input-output system as an interdependent network where the nodes are the individual industries in different economies and the edges are the monetary goods flows between industries, we analyze respectively the global, regional, and local network properties of the so-called world input-output network (WION) and document its evolution over time. At the global level, we find that the industries are highly but asymmetrically connected, which implies that micro shocks can lead to macro fluctuations. At the regional level, we find that world production is still operated nationally or at most regionally, as the communities detected are either individual economies or geographically well-defined regions. Finally, at the local level, for each industry we compare the network-based measures with the traditional methods of backward linkages. We find that network-based measures such as PageRank centrality and the community coreness measure can give valuable insights into identifying the key industries.
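
    As an illustration of the network-based measures mentioned above, the sketch below builds a tiny weighted directed graph of monetary flows between (economy, industry) nodes and computes weighted PageRank with networkx; the edges are invented toy numbers, not WIOD data.

    ```python
    import networkx as nx

    # Toy world input-output network: nodes are (economy, industry) pairs,
    # edge weights are monetary flows between industries (invented numbers).
    G = nx.DiGraph()
    flows = [
        (("DEU", "autos"),   ("USA", "retail"),  120.0),
        (("CHN", "steel"),   ("DEU", "autos"),    80.0),
        (("USA", "finance"), ("CHN", "steel"),    40.0),
        (("CHN", "steel"),   ("USA", "retail"),   25.0),
        (("USA", "retail"),  ("USA", "finance"),  60.0),
    ]
    for src, dst, w in flows:
        G.add_edge(src, dst, weight=w)

    # Weighted PageRank as a proxy for an industry's systemic importance.
    pr = nx.pagerank(G, alpha=0.85, weight="weight")
    for node, score in sorted(pr.items(), key=lambda kv: -kv[1]):
        print(node, round(score, 3))
    ```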

  5. The effect of increasing levels of embedded generation on the distribution network. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Collinson, A; Earp, G K; Howson, D; Owen, R D; Wright, A J

    1999-10-01

    This report was commissioned as part of the EA Technology Strategic Technology Programme under guidance of the Module 5 (Embedded Generation) Steering Group. This report aims to provide information related to the distribution and supply of electricity in the context of increasing levels of embedded generation. There is a brief description of the operating environment within which electricity companies in the UK must operate. Technical issues related to the connection of generation to the existing distribution infrastructure are highlighted and the design philosophy adopted by network designers in accommodating applications for the connection of embedded generation to the network is discussed. The effects embedded generation has on the network and the issues raised are presented as many of them present barriers to the connection of embedded generators. The final chapters cover the forecast of required connection to 2010 and solutions to restrictions preventing the connection of more embedded generation to the network. (author)

  6. Tri-party agreement databases, access mechanism and procedures. Revision 2

    International Nuclear Information System (INIS)

    Brulotte, P.J.

    1996-01-01

    This document contains the information required for the Washington State Department of Ecology (Ecology) and the U.S. Environmental Protection Agency (EPA) to access databases related to the Hanford Federal Facility Agreement and Consent Order (Tri-Party Agreement). It identifies the procedure required to obtain access to the Hanford Site computer networks and the Tri-Party Agreement related databases. It addresses security requirements, access methods, database availability dates, database access procedures, and the minimum computer hardware and software configurations required to operate within the Hanford Site networks. This document supersedes any previous agreements, including the Administrative Agreement to Provide Computer Access to U.S. Environmental Protection Agency (EPA) and the Administrative Agreement to Provide Computer Access to Washington State Department of Ecology (Ecology), agreements that were signed by the U.S. Department of Energy (DOE), Richland Operations Office (RL) in June 1990. Access approval to EPA and Ecology is extended by RL to include all Tri-Party Agreement relevant databases named in this document via the documented access method and date. Access to databases and systems not listed in this document will be granted as determined necessary and negotiated among Ecology, EPA, and RL through the Tri-Party Agreement Project Managers. The Tri-Party Agreement Project Managers are the primary points of contact for all activities to be carried out under the Tri-Party Agreement Action Plan. Access to the Tri-Party Agreement related databases and systems does not provide or imply any ownership on behalf of Ecology or EPA, whether public or private, of either the database or the system. Access to identified systems and databases does not include access to network/system administrative control information, network maps, etc.

  7. SIMS: addressing the problem of heterogeneity in databases

    Science.gov (United States)

    Arens, Yigal

    1997-02-01

    The heterogeneity of remotely accessible databases -- with respect to contents, query language, semantics, organization, etc. -- presents serious obstacles to convenient querying. The SIMS (single interface to multiple sources) system addresses this global integration problem. It does so by defining a single language for describing the domain about which information is stored in the databases and using this language as the query language. Each database to which SIMS is to provide access is modeled using this language. The model describes a database's contents, organization, and other relevant features. SIMS uses these models, together with a planning system drawing on techniques from artificial intelligence, to decompose a given user's high-level query into a series of queries against the databases and other data manipulation steps. The retrieval plan is constructed so as to minimize data movement over the network and maximize parallelism to increase execution speed. SIMS can recover from network failures during plan execution by obtaining data from alternate sources, when possible. SIMS has been demonstrated in the domains of medical informatics and logistics, using real databases.

  8. SPAN: A Network Providing Integrated, End-to-End, Sensor-to-Database Solutions for Environmental Sciences

    Science.gov (United States)

    Benzel, T.; Cho, Y. H.; Deschon, A.; Gullapalli, S.; Silva, F.

    2009-12-01

    In recent years, advances in sensor network technology have shown great promise to revolutionize environmental data collection. Still, widespread adoption of these systems by domain experts has been lacking, and these have remained the purview of the engineers who design them. While there are many data logging options for basic data collection in the field currently, scientists are often required to visit the deployment sites to retrieve their data and manually import it into spreadsheets. Some advanced commercial software systems do allow scientists to collect data remotely, but most of these systems only allow point-to-point access, and require proprietary hardware. Furthermore, these commercial solutions preclude the use of sensors from other manufacturers or integration with internet-based database repositories and compute engines. Therefore, scientists often must download and manually reformat their data before uploading it to the repositories if they wish to share their data. We present an open-source, low-cost, extensible, turnkey solution called Sensor Processing and Acquisition Network (SPAN) which provides a robust and flexible sensor network service. At the deployment site, SPAN leverages low-power generic embedded processors to integrate a variety of commercially available sensor hardware into the network of environmental observation systems. By bringing intelligence close to the sensed phenomena, we can remotely control configuration and re-use, establish rules to trigger sensor activity, manage power requirements, and control the two-way flow of sensed data as well as control information to the sensors. Key features of our design include (1) adoption of a hardware-agnostic architecture: our solutions are compatible with several programmable platforms, sensor systems, communication devices and protocols. (2) information standardization: our system supports several popular communication protocols and data formats, and (3) extensible data support: our

  9. Face recognition based on improved BP neural network

    Directory of Open Access Journals (Sweden)

    Yue Gaili

    2017-01-01

    Full Text Available In order to improve the recognition rate of face recognition, a face recognition algorithm based on histogram equalization, PCA and a BP neural network is proposed. First, the face image is preprocessed by histogram equalization. Then, the classical PCA algorithm is used to extract features from the histogram-equalized image and obtain its principal components. The BP neural network is then trained on the training samples; an improved weight-adjustment method is used because the conventional BP algorithm suffers from slow convergence and easily falls into local minima during training. Finally, the trained BP neural network is fed the test samples to classify and identify the face images, and the recognition rate is obtained. Simulation experiments on face images from the ORL database show that the improved BP neural network face recognition method can effectively improve the recognition rate.
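
    A rough approximation of this pipeline (histogram equalization, PCA features, a backpropagation-trained classifier) can be put together with scikit-learn; note that MLPClassifier is a plain backpropagation network, not the paper's improved weight-adjustment scheme, and scikit-learn's Olivetti faces stand in for the ORL images.

    ```python
    import numpy as np
    from sklearn.datasets import fetch_olivetti_faces
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    def equalize(img_row, bins=256):
        """Simple histogram equalization for a flattened grayscale image in [0, 1]."""
        hist, edges = np.histogram(img_row, bins=bins, range=(0.0, 1.0))
        cdf = hist.cumsum() / hist.sum()
        return np.interp(img_row, edges[:-1], cdf)

    faces = fetch_olivetti_faces()          # ORL/AT&T face images, 40 subjects
    X = np.apply_along_axis(equalize, 1, faces.data)
    X_train, X_test, y_train, y_test = train_test_split(
        X, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

    model = make_pipeline(
        PCA(n_components=50, whiten=True, random_state=0),      # principal components
        MLPClassifier(hidden_layer_sizes=(128,), max_iter=1000, random_state=0))
    model.fit(X_train, y_train)
    print("recognition rate:", model.score(X_test, y_test))
    ```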

  10. Web interfaces to relational databases

    Science.gov (United States)

    Carlisle, W. H.

    1996-01-01

    This report describes a project to extend the capabilities of a Virtual Research Center (VRC) for NASA's Advanced Concepts Office. The work was performed as part of NASA's 1995 Summer Faculty Fellowship program and involved the development of a prototype component of the VRC: a database system that provides data creation and access services within a room of the VRC. In support of VRC development, NASA has assembled a laboratory containing the variety of equipment expected to be used by scientists within the VRC. This laboratory consists of the major hardware platforms (SUN, Intel, and Motorola processors) and their most common operating systems (UNIX, Windows NT, Windows for Workgroups, and Macintosh). The SPARC 20 runs SUN Solaris 2.4, an Intel Pentium runs Windows NT and is installed on a different network from the other machines in the laboratory, a Pentium PC runs Windows for Workgroups, two Intel 386 machines run Windows 3.1, and finally, a PowerMacintosh and a Macintosh IIsi run MacOS.

  11. Hazard Analysis Database Report

    Energy Technology Data Exchange (ETDEWEB)

    GAULT, G.W.

    1999-10-13

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for the Tank Waste Remediation System (TWRS) Final Safety Analysis Report (FSAR). The FSAR is part of the approved TWRS Authorization Basis (AB). This document describes, identifies, and defines the contents and structure of the TWRS FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The TWRS Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The database supports the preparation of Chapters 3, 4, and 5 of the TWRS FSAR and the USQ process and consists of two major, interrelated data sets: (1) Hazard Evaluation Database--Data from the results of the hazard evaluations; and (2) Hazard Topography Database--Data from the system familiarization and hazard identification.

  12. Databases for INDUS-1 and INDUS-2

    International Nuclear Information System (INIS)

    Merh, Bhavna N.; Fatnani, Pravin

    2003-01-01

    The databases for Indus are relational databases designed to store various categories of data related to the accelerator. The data archiving and retrieving system in Indus is based on a client/server model. A general purpose commercial database is used to store parameters and equipment data for the whole machine. The database manages configuration, on-line and historical databases. On-line and off-line applications distributed in several systems can store and retrieve the data from the database over the network. This paper describes the structure of databases for Indus-1 and Indus-2 and their integration within the software architecture. The data analysis, design, resulting data-schema and implementation issues are discussed. (author)

  13. National Transportation Atlas Databases : 2013.

    Science.gov (United States)

    2013-01-01

    The National Transportation Atlas Databases 2013 (NTAD2013) is a set of nationwide geographic datasets of transportation facilities, transportation networks, associated infrastructure, and other political and administrative entities. These datasets i...

  14. National Transportation Atlas Databases : 2015.

    Science.gov (United States)

    2015-01-01

    The National Transportation Atlas Databases 2015 (NTAD2015) is a set of nationwide geographic datasets of transportation facilities, transportation networks, associated infrastructure, and other political and administrative entities. These datasets i...

  15. National Transportation Atlas Databases : 2014.

    Science.gov (United States)

    2014-01-01

    The National Transportation Atlas Databases 2014 (NTAD2014) is a set of nationwide geographic datasets of transportation facilities, transportation networks, associated infrastructure, and other political and administrative entities. These datasets i...

  16. 數據資料庫 Numeric Databases

    Directory of Open Access Journals (Sweden)

    Mei-ling Wang Chen

    1989-03-01

    Full Text Available In 1979, the International Communication Bureau of the R.O.C. connected some U.S. information service centers through the international telecommunication network. Since then, Dialog, ORBIT and BRS have been introduced into this country. However, users are mostly interested in the bibliographic databases and seldom know about the non-bibliographic or numeric databases. This article mainly describes numeric databases: their definition and characteristics, comparison with bibliographic databases, producers, service systems and users, data elements, a brief introduction by subject, problems and future prospects, the library's role, and the present use status in the R.O.C.

  17. RODOS database adapter

    International Nuclear Information System (INIS)

    Xie Gang

    1995-11-01

    Integrated data management is an essential aspect of many automated information systems such as RODOS, a real-time on-line decision support system for nuclear emergency management. In particular, the application software must provide access management to different commercial database systems. This report presents the tools necessary for adapting embedded SQL applications to both HP-ALLBASE/SQL and CA-Ingres/SQL databases. The design of the database adapter and the concept of the RODOS embedded SQL syntax are discussed by considering some of the most important features of SQL functions and identifying significant differences between SQL implementations. Finally, the software developed and the administrator's and installation guides are described. (orig.) [de

  18. A Database Design and Development Case: NanoTEK Networks

    Science.gov (United States)

    Ballenger, Robert M.

    2010-01-01

    This case provides a real-world project-oriented case study for students enrolled in a management information systems, database management, or systems analysis and design course in which database design and development are taught. The case consists of a business scenario to provide background information and details of the unique operating…

  19. SIRSALE: integrated video database management tools

    Science.gov (United States)

    Brunie, Lionel; Favory, Loic; Gelas, J. P.; Lefevre, Laurent; Mostefaoui, Ahmed; Nait-Abdesselam, F.

    2002-07-01

    Video databases became an active field of research during the last decade. The main objective of such systems is to provide users with the ability to search, access and play back distributed stored video data in a user-friendly way, just as they do with traditional distributed databases. Hence, such systems need to deal with hard issues: (a) video documents generate huge volumes of data and are time sensitive (streams must be delivered at a specific bitrate), and (b) the content of video data is very hard to extract automatically and needs to be annotated by humans. To cope with these issues, many approaches have been proposed in the literature, including data models, query languages, video indexing, etc. In this paper, we present SIRSALE: a set of video database management tools that allow users to manipulate video documents and streams stored in large distributed repositories. All the proposed tools are based on generic models that can be customized for specific applications using ad-hoc adaptation modules. More precisely, SIRSALE allows users to: (a) browse video documents by structure (sequences, scenes, shots) and (b) query the video database content using a graphical tool adapted to the nature of the target video documents. This paper also presents an annotation interface which allows archivists to describe the content of video documents. All these tools are coupled to a video player integrating remote VCR functionalities and are based on active network technology. We show how dedicated active services allow optimized transport of video streams (with Tamanoir active nodes). We then describe experiments using SIRSALE on an archive of news videos and soccer matches. The system has been demonstrated to professionals with positive feedback. Finally, we discuss open issues and present some perspectives.

  20. Database of Information technology resources

    OpenAIRE

    Barzda, Erlandas

    2005-01-01

    The subject of this master's thesis is an internet information resource database. The work also addresses the problems of old information systems which do not meet contemporary requirements. The aim is to create an internet information system based on object-oriented technologies and tailored to computer users' needs. The internet information database system helps computer administrators to get all the needed information about computer network elements and to easily register all changes int...

  1. Guide to the Durham-Rutherford high energy physics databases

    International Nuclear Information System (INIS)

    Gault, F.D.; Lotts, A.P.; Read, B.J.; Crawford, R.L.; Roberts, R.G.

    1979-12-01

    New databases and graphics facilities are added in this edition of the guide. It explains, with examples, how to retrieve tabulated experimental scattering data from databases on the Rutherford Laboratory computer network. (author)

  2. Hazard Analysis Database Report

    CERN Document Server

    Grams, W H

    2000-01-01

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: Data from t...

  3. Network discovery, characterization, and prediction : a grand challenge LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Kegelmeyer, W. Philip, Jr.

    2010-11-01

    This report is the final summation of Sandia's Grand Challenge LDRD project No.119351, 'Network Discovery, Characterization and Prediction' (the 'NGC') which ran from FY08 to FY10. The aim of the NGC, in a nutshell, was to research, develop, and evaluate relevant analysis capabilities that address adversarial networks. Unlike some Grand Challenge efforts, that ambition created cultural subgoals, as well as technical and programmatic ones, as the insistence on 'relevancy' required that the Sandia informatics research communities and the analyst user communities come to appreciate each others needs and capabilities in a very deep and concrete way. The NGC generated a number of technical, programmatic, and cultural advances, detailed in this report. There were new algorithmic insights and research that resulted in fifty-three refereed publications and presentations; this report concludes with an abstract-annotated bibliography pointing to them all. The NGC generated three substantial prototypes that not only achieved their intended goals of testing our algorithmic integration, but which also served as vehicles for customer education and program development. The NGC, as intended, has catalyzed future work in this domain; by the end it had already brought in, in new funding, as much funding as had been invested in it. Finally, the NGC knit together previously disparate research staff and user expertise in a fashion that not only addressed our immediate research goals, but which promises to have created an enduring cultural legacy of mutual understanding, in service of Sandia's national security responsibilities in cybersecurity and counter proliferation.

  4. DenHunt - A Comprehensive Database of the Intricate Network of Dengue-Human Interactions.

    Directory of Open Access Journals (Sweden)

    Prashanthi Karyala

    2016-09-01

    Full Text Available Dengue virus (DENV) is a human pathogen and its etiology has been widely established. There are many interactions between DENV and human proteins that have been reported in literature. However, no publicly accessible resource for efficiently retrieving the information is yet available. In this study, we mined all publicly available dengue-human interactions that have been reported in the literature into a database called DenHunt. We retrieved 682 direct interactions of human proteins with dengue viral components, 382 indirect interactions and 4120 differentially expressed human genes in dengue infected cell lines and patients. We have illustrated the importance of DenHunt by mapping the dengue-human interactions on to the host interactome and observed that the virus targets multiple host functional complexes of important cellular processes such as metabolism, immune system and signaling pathways suggesting a potential role of these interactions in viral pathogenesis. We also observed that 7 percent of the dengue virus interacting human proteins are also associated with other infectious and non-infectious diseases. Finally, the understanding that comes from such analyses could be used to design better strategies to counteract the diseases caused by dengue virus. The whole dataset has been catalogued in a searchable database, called DenHunt (http://proline.biochem.iisc.ernet.in/DenHunt/).

  5. DenHunt - A Comprehensive Database of the Intricate Network of Dengue-Human Interactions.

    Science.gov (United States)

    Karyala, Prashanthi; Metri, Rahul; Bathula, Christopher; Yelamanchi, Syam K; Sahoo, Lipika; Arjunan, Selvam; Sastri, Narayan P; Chandra, Nagasuma

    2016-09-01

    Dengue virus (DENV) is a human pathogen and its etiology has been widely established. There are many interactions between DENV and human proteins that have been reported in literature. However, no publicly accessible resource for efficiently retrieving the information is yet available. In this study, we mined all publicly available dengue-human interactions that have been reported in the literature into a database called DenHunt. We retrieved 682 direct interactions of human proteins with dengue viral components, 382 indirect interactions and 4120 differentially expressed human genes in dengue infected cell lines and patients. We have illustrated the importance of DenHunt by mapping the dengue-human interactions on to the host interactome and observed that the virus targets multiple host functional complexes of important cellular processes such as metabolism, immune system and signaling pathways suggesting a potential role of these interactions in viral pathogenesis. We also observed that 7 percent of the dengue virus interacting human proteins are also associated with other infectious and non-infectious diseases. Finally, the understanding that comes from such analyses could be used to design better strategies to counteract the diseases caused by dengue virus. The whole dataset has been catalogued in a searchable database, called DenHunt (http://proline.biochem.iisc.ernet.in/DenHunt/).

  6. The magnet database system

    International Nuclear Information System (INIS)

    Baggett, P.; Delagi, N.; Leedy, R.; Marshall, W.; Robinson, S.L.; Tompkins, J.C.

    1991-01-01

    This paper describes the current status of MagCom, a central database of SSC magnet information that is available to all magnet scientists via network connections. The database has been designed to contain the specifications and measured values of important properties for major materials, plus configuration information (specifying which individual items were used in each cable, coil, and magnet) and the test results on completed magnets. These data will help magnet scientists to track and control the production process and to correlate the performance of magnets with the properties of their constituents

  7. The Design of Lexical Database for Indonesian Language

    Science.gov (United States)

    Gunawan, D.; Amalia, A.

    2017-03-01

    Kamus Besar Bahasa Indonesia (KBBI), the official dictionary of the Indonesian language, provides lists of words with their meanings. The online version can be accessed via the Internet. Another online dictionary is Kateglo. KBBI online and Kateglo only provide an interface for humans; a machine cannot easily retrieve data from the dictionary without using advanced techniques. However, lexical information about words is required in research and application development related to natural language processing, text mining, information retrieval and sentiment analysis. To address this requirement, we need to build a lexical database which provides well-defined, structured information about words. A well-known lexical database is WordNet, which provides the relations among words in English. This paper proposes the design of a lexical database for the Indonesian language based on the combination of the KBBI 4th edition, Kateglo and the WordNet structure. Knowledge representation using semantic networks depicts the relations among words and provides the new structure of the lexical database for the Indonesian language. The result of this design can be used as the foundation for building the lexical database for the Indonesian language.

  8. Concurrency control in distributed database systems

    CERN Document Server

    Cellary, W; Gelenbe, E

    1989-01-01

    Distributed Database Systems (DDBS) may be defined as integrated database systems composed of autonomous local databases, geographically distributed and interconnected by a computer network.The purpose of this monograph is to present DDBS concurrency control algorithms and their related performance issues. The most recent results have been taken into consideration. A detailed analysis and selection of these results has been made so as to include those which will promote applications and progress in the field. The application of the methods and algorithms presented is not limited to DDBSs but a

  9. Construction and Application of a National Data-Sharing Service Network of Material Environmental Corrosion

    Directory of Open Access Journals (Sweden)

    Xiaogang Li

    2007-12-01

    Full Text Available This article discusses the key features of a newly developed national data-sharing online network for material environmental corrosion. Written in the Java language and based on Oracle database technology, the central database in the network is supported with two unique series of corrosion failure data, both of which were accumulated during a long period of time. The first category of data, provided by national environment corrosion test sites, is corrosion failure data for different materials in typical environments (atmosphere, seawater and soil). The other category is corrosion data in production environments, provided by a variety of firms. This network system enables standardized management of environmental corrosion data, an effective data sharing process, and research and development support for new products and after-sale services. Moreover, this network system provides a firm base and data-service platform for the evaluation of project bids, safety, and service life. This article also discusses issues including data quality management and evaluation in the material corrosion data sharing process, access authority of different users, compensation for providers of shared historical data, and finally, the related policy and legal processes, which are required to protect the intellectual property rights of the database.

  10. PyPathway: Python Package for Biological Network Analysis and Visualization.

    Science.gov (United States)

    Xu, Yang; Luo, Xiao-Chun

    2018-05-01

    Life science studies represent one of the biggest generators of large data sets, mainly because of rapid sequencing technological advances. Biological networks, including interaction networks and human-curated pathways, are essential for understanding these high-throughput data sets. Biological network analysis offers a method to explore systematically not only the molecular complexity of a particular disease but also the molecular relationships among apparently distinct phenotypes. Currently, several packages for the Python community have been developed, such as BioPython and Goatools. However, tools to perform comprehensive network analysis and visualization are still needed. Here, we have developed PyPathway, an extensible, free and open-source Python package for functional enrichment analysis, network modeling, and network visualization. The network process module supports various interaction network and pathway databases such as Reactome, WikiPathway, STRING, and BioGRID. The network analysis module implements overrepresentation analysis, gene set enrichment analysis, network-based enrichment, and de novo network modeling. Finally, the visualization and data publishing modules enable users to share their analysis by using an easy web application. For package availability, see the first Reference.
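
    The overrepresentation analysis named above is, at its core, a hypergeometric test of a gene list against a pathway's gene set; the sketch below shows that calculation with SciPy rather than PyPathway's own API (gene names and set sizes are invented).

    ```python
    from scipy.stats import hypergeom

    def overrepresentation_p(study_genes, pathway_genes, background_size):
        """One-sided hypergeometric p-value that the overlap is at least as
        large as observed, given a background of `background_size` genes."""
        study = set(study_genes)
        pathway = set(pathway_genes)
        overlap = len(study & pathway)
        # Survival function at overlap-1 gives P(X >= overlap).
        return hypergeom.sf(overlap - 1, background_size, len(pathway), len(study))

    # Toy example: 3 of 8 study genes fall in a 50-gene pathway out of 20,000 genes.
    study = ["TP53", "BRCA1", "MYC", "EGFR", "KRAS", "PTEN", "RB1", "AKT1"]
    pathway = ["TP53", "MYC", "KRAS"] + [f"GENE{i}" for i in range(47)]
    print(overrepresentation_p(study, pathway, background_size=20000))
    ```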

  11. Network Connection Management

    CERN Document Server

    IT Department, Communication Systems and Network Group

    2005-01-01

    The CERN network database is a key element of the CERN network infrastructure. It is absolutely essential that its information is kept up-to-date for security reasons and to ensure a smooth running of the network infrastructure. Over the years, some of the information in the database has become obsolete. The database therefore needs to be cleaned up, for which we are requesting your help. In the coming weeks, you may receive an electronic mail from Netops.database@cern.ch relating to the clean-up. If you receive such a message, it will be for one of the following reasons: You are the person responsible for or the main user of a system for which a problem has been detected, or You have been the supervisor of a person who has now left CERN (according to the HR database), or The problem has been passed up to you because someone under your supervision has not taken the necessary action within four weeks of notification. Just open the link that will be included in the message and follow the instructions....

  12. Network Connection Management

    CERN Multimedia

    IT Department

    2005-01-01

    The CERN network database is a key element of the CERN network infrastructure. It is absolutely essential that its information is kept up-to-date for security reasons and to ensure smooth running of the network infrastructure. Over the years, some of the information in the database has become obsolete. The database therefore needs to be cleaned up, for which we are requesting your help. In the coming weeks, you may receive an electronic mail from Netops.database@cern.ch relating to the clean-up. If you receive such a message, it will be for one of the following reasons: You are the person responsible for or the main user of a system for which a problem has been detected, or You have been the supervisor of a person who has now left CERN (according to the HR database), or The problem has been passed up to you because someone under your supervision has not taken the necessary action within four weeks of notification. Just open the link that will be included in the message and follow the instructions. Thank ...

  13. Automated tools for cross-referencing large databases. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Clapp, N E; Green, P L; Bell, D [and others]

    1997-05-01

    A Cooperative Research and Development Agreement (CRADA) was funded with TRESP Associates, Inc., to develop a limited prototype software package operating on one platform (e.g., a personal computer, small workstation, or other selected device) to demonstrate the concepts of using an automated database application to improve the process of detecting fraud and abuse of the welfare system. An analysis was performed on Tennessee's welfare administration system. This analysis was undertaken to determine if the incidence of welfare waste, fraud, and abuse could be reduced and if the administrative process could be improved to reduce benefits overpayment errors. The analysis revealed a general inability to obtain timely data to support the verification of a welfare recipient's economic status and eligibility for benefits. It has been concluded that the provision of more modern computer-based tools and the establishment of electronic links to other state and federal data sources could increase staff efficiency, reduce the incidence of out-of-date information provided to welfare assistance staff, and make much of the new data required available in real time. Electronic data links have been proposed to allow near-real-time access to data residing in databases located in other states and at federal agency data repositories. The ability to provide these improvements to the local office staff would require the provision of additional computers, software, and electronic data links within each of the offices and the establishment of approved methods of accessing remote databases and transferring potentially sensitive data. In addition, investigations will be required to ascertain if existing laws would allow such data transfers, and if not, what changed or new laws would be required. The benefits, in both cost and efficiency, to the state of Tennessee of having electronically-enhanced welfare system administration and control are expected to result in a rapid return on investment.

  14. Spatial-temporal data model and fractal analysis of transportation network in GIS environment

    Science.gov (United States)

    Feng, Yongjiu; Tong, Xiaohua; Li, Yangdong

    2008-10-01

    How to organize transportation data characterized as multi-temporal, multi-scale, multi-resolution and multi-source is one of the fundamental problems of GIS-T development. A spatial-temporal data model for GIS-T is proposed based on the Spatial-Temporal Object Model. Transportation network data are systematically managed using dynamic segmentation technologies, and a spatial-temporal database is then built to integrally store multi-temporal geographical data for transportation. Based on the spatial-temporal database, the spatial analysis functions of GIS-T are substantially extended. A fractal module is developed to improve the analysis of the intensity, density, structure and connectivity of the transportation network, based on the validation and evaluation of topologic relations. Integrating fractal analysis with GIS-T strengthens the spatial analysis functions and enriches the approaches to data mining and knowledge discovery in transportation networks. Finally, the feasibility of the model and methods is tested through the Guangdong Geographical Information Platform for Highway Project.
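
    One standard fractal measure for a rasterized network layer is the box-counting dimension; the sketch below estimates it for a toy binary grid (the paper's actual fractal module and the Guangdong data are not reproduced here).

    ```python
    import numpy as np

    def box_counting_dimension(grid, box_sizes=(1, 2, 4, 8, 16, 32)):
        """Estimate the box-counting (fractal) dimension of a binary raster."""
        rows, cols = grid.shape
        counts = []
        for s in box_sizes:
            occupied = 0
            for i in range(0, rows, s):
                for j in range(0, cols, s):
                    if grid[i:i + s, j:j + s].any():
                        occupied += 1
            counts.append(occupied)
        # Slope of log(count) versus log(1/box_size) estimates the dimension.
        slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
        return slope

    # Toy raster: a cross of 'roads' on a 64x64 grid (stand-in for a real network layer).
    grid = np.zeros((64, 64), dtype=bool)
    grid[32, :] = True
    grid[:, 32] = True
    print(round(box_counting_dimension(grid), 2))   # close to 1 for a line-like pattern
    ```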

  15. Database management for an electrical distribution network of intermediate complexity CERN

    CERN Document Server

    De Ruschi, Daniele; Burdet, Georges

    2010-01-01

    This thesis is submitted as the final work for the degree of Master of Science in Information Engineering taken by the writer at the University of Bergamo, Italy. The report is based on work conducted by the writer from September 2009 through June 2010 on a project assignment given by the Engineering Electrical Control department at CERN, Geneva. The work performed is a contribution to CERN's GESMAR system, a complex platform made at CERN for the support and management of the electrical network. In this work an information system for an ETL process is developed. The report presents the design, implementation and evaluation carried out; prototypes of applications which take advantage of new information inserted into GESMAR are also presented.

  16. The hippocampal network model: A transdiagnostic metaconnectomic approach

    Directory of Open Access Journals (Sweden)

    Eithan Kotkowski

    Full Text Available Purpose: The hippocampus plays a central role in cognitive and affective processes and is commonly implicated in neurodegenerative diseases. Our study aimed to identify and describe a hippocampal network model (HNM) using trans-diagnostic MRI data from the BrainMap® database. We used meta-analysis to test the network degeneration hypothesis (NDH) (Seeley et al., 2009) by identifying structural and functional covariance in this hippocampal network. Methods: To generate our network model, we used BrainMap's VBM database to perform a region-to-whole-brain (RtWB) meta-analysis of 269 VBM experiments from 165 published studies across a range of 38 psychiatric and neurological diseases reporting hippocampal gray matter density alterations. This step identified 11 significant gray matter foci, or nodes. We subsequently used meta-analytic connectivity modeling (MACM) to define edges of structural covariance between nodes from VBM data as well as functional covariance using the functional task-activation database, also from BrainMap. Finally, we applied a correlation analysis using Pearson's r to assess the similarities and differences between the structural and functional covariance models. Key findings: Our hippocampal RtWB meta-analysis reported consistent and significant structural covariance in 11 key regions. The subsequent structural and functional MACMs showed a strong correlation between HNM nodes with a significant structural-functional covariance correlation of r = .377 (p = .000049). Significance: This novel method of studying network covariance using VBM and functional meta-analytic techniques allows for the identification of generalizable patterns of functional and structural abnormalities pertaining to the hippocampus. In accordance with the NDH, this framework could have major implications in studying and predicting spatial disease patterns using network-based assays. Keywords: Anatomic likelihood estimation, ALE, BrainMap, Functional

  17. Developmental and Reproductive Toxicology Database (DART)

    Data.gov (United States)

    U.S. Department of Health & Human Services — A bibliographic database on the National Library of Medicine's (NLM) Toxicology Data Network (TOXNET) with references to developmental and reproductive toxicology...

  18. Development of a computational database for application in Probabilistic Safety Analysis of nuclear research reactors

    International Nuclear Information System (INIS)

    Macedo, Vagner dos Santos

    2016-01-01

    The objective of this work is to present the computational database that was developed to store technical information and process data on component operation, failure and maintenance for the nuclear research reactors located at the Nuclear and Energy Research Institute (Instituto de Pesquisas Energéticas e Nucleares, IPEN), in São Paulo, Brazil. Data extracted from this database may be applied in the Probabilistic Safety Analysis of these research reactors or in less complex quantitative assessments related to safety, reliability, availability and maintainability of these facilities. This database may be accessed by users of the corporate network, named IPEN intranet. Professionals who require access to the database must be duly registered by the system administrator, so that they will be able to consult and handle the information. The logical model adopted to represent the database structure is an entity-relationship model, which is in accordance with the protocols installed on the IPEN intranet. The open-source relational database management system called MySQL, which is based on the Structured Query Language (SQL), was used in the development of this work. The PHP programming language was adopted to allow users to handle the database. Finally, the main result of this work was the creation of a web application for the component reliability database named PSADB, specifically developed for the research reactors of IPEN; furthermore, the database management system provides relevant information efficiently. (author)
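
    To make the entity-relationship idea concrete, the sketch below creates a hypothetical component/failure-event schema and a small aggregation query using Python's built-in sqlite3; the real PSADB runs on MySQL with a PHP front end, and all table and column names here are invented for illustration.

    ```python
    import sqlite3

    # Hypothetical reliability-database schema in the spirit of the abstract
    # (the actual PSADB schema is not published here; names are invented).
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE component (
        component_id   INTEGER PRIMARY KEY,
        reactor        TEXT NOT NULL,      -- research reactor identifier
        system_name    TEXT NOT NULL,
        description    TEXT
    );
    CREATE TABLE failure_event (
        event_id       INTEGER PRIMARY KEY,
        component_id   INTEGER NOT NULL REFERENCES component(component_id),
        event_date     TEXT NOT NULL,
        failure_mode   TEXT,
        downtime_hours REAL
    );
    """)
    conn.execute("INSERT INTO component VALUES (1, 'IEA-R1', 'primary cooling', 'pump P-101')")
    conn.execute("INSERT INTO failure_event VALUES (1, 1, '2015-03-04', 'seal leak', 6.5)")

    # Example query a PSA analyst might run: failure counts per system.
    for row in conn.execute("""
        SELECT c.system_name, COUNT(*) AS n_failures
        FROM failure_event f JOIN component c USING (component_id)
        GROUP BY c.system_name"""):
        print(row)
    ```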

  19. Database Independent Migration of Objects into an Object-Relational Database

    CERN Document Server

    Ali, A; Munir, K; Waseem-Hassan, M; Willers, I

    2002-01-01

    CERN's (European Organization for Nuclear Research) WISDOM project [1] deals with the replication of data between homogeneous sources in a Wide Area Network (WAN) using the Extensible Markup Language (XML). The last phase of the WISDOM (Wide-area, database Independent Serialization of Distributed Objects for data Migration) project [2] indicates that the future direction for this work is to incorporate heterogeneous sources, as compared to the homogeneous sources described by [3]. This work will become essential for the CERN community once the need arises to transfer their legacy data to some source other than Objectivity [4]. Oracle 9i, an Object-Relational Database (including support for abstract data types, ADTs), appears to be a potential candidate for the physics event store in the CERN CMS experiment, as suggested by [4] & [5]. Consequently, this database has been selected for study. As a result of this work the HEP community will get a tool for migrating their data from Objectivity to Oracle 9i.

  20. Final Technical Report on the Genome Sequence DataBase (GSDB): DE-FG03 95 ER 62062 September 1997-September 1999

    Energy Technology Data Exchange (ETDEWEB)

    Harger, Carol A.

    1999-10-28

    Since September 1997 NCGR has produced two web-based tools for researchers to access and analyze data in the Genome Sequence DataBase (GSDB). These tools are: Sequence Viewer, a nucleotide sequence and annotation visualization tool, and MAR-Finder, a tool that predicts, based upon statistical inference, the location of matrix attachment regions (MARs) within a nucleotide sequence. [The annual report for June 1996 to August 1997 is included as an attachment to this final report.]

  1. FINAL DIGITAL FLOOD INSURANCE RATE MAP DATABASE, GREENWOOD COUNTY, SC

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  2. Neurovascular Network Explorer 1.0: a database of 2-photon single-vessel diameter measurements with MATLAB(®) graphical user interface.

    Science.gov (United States)

    Sridhar, Vishnu B; Tian, Peifang; Dale, Anders M; Devor, Anna; Saisan, Payam A

    2014-01-01

    We present database client software, Neurovascular Network Explorer 1.0 (NNE 1.0), that uses a MATLAB(®)-based Graphical User Interface (GUI) for interaction with a database of 2-photon single-vessel diameter measurements from our previous publication (Tian et al., 2010). These data are of particular interest for modeling the hemodynamic response. NNE 1.0 is downloaded by the user and then runs either as a MATLAB script or as a standalone program on a Windows platform. The GUI allows browsing the database according to parameters specified by the user, simple manipulation and visualization of the retrieved records (such as averaging and peak normalization), and export of the results. Further, we provide the NNE 1.0 source code. With this source code, users can store their own experimental results in the database, given the appropriate data structure and naming conventions, and thus share their data in a user-friendly format with other investigators. NNE 1.0 provides an example of a seamless and low-cost solution for sharing experimental data by a regular-sized neuroscience laboratory and may serve as a general template, facilitating dissemination of biological results and accelerating data-driven modeling approaches.

  3. Comparative Study of Temporal Aspects of Data Collection Using Remote Database vs. Abstract Database in WSN

    Directory of Open Access Journals (Sweden)

    Abderrahmen BELFKIH

    2015-06-01

    Full Text Available The real-time aspect is an important factor for Wireless Sensor Network (WSN) applications, which demand timely data delivery to reflect the current state of the environment. However, most existing works in the field of sensor networks are only interested in data processing and energy consumption, in order to improve the lifetime of the WSN. Extensive research has been done on data processing in WSNs: some works have been interested in improving data collection methods, and others have focused on various query processing techniques. In recent years, query processing using abstract database technology like Cougar and TinyDB has shown efficient results in many studies. This paper presents a study of timing properties for two data processing techniques in WSNs. The first is based on a warehousing approach, in which data are collected and stored in a remote database. The second is based on query processing using an abstract database like TinyDB. This study has allowed us to identify some factors which improve compliance with temporal constraints.

  4. Design of a Network Model on a Non-Spatial Database Engine for Distribution-Sector Electric Network Maneuvering with PL SQL

    Directory of Open Access Journals (Sweden)

    I Made Sukarsa

    2009-06-01

    Full Text Available Many GIS applications have now been developed on top of non-spatial DBMS (Database Management System) engines, so that they can support a client-server data presentation model and handle large amounts of data. One such application has been developed to handle electric network data. In practice, however, DBMS engines are not yet equipped with the capability to perform network analyses such as network maneuvering, which is the basis for developing various other applications. Therefore, a network model for electric network maneuvering, with its various particularities, needs to be developed. Through several stages of research, a network model that can be used to handle network maneuvering has been developed. The model was built with attention to integration with the existing system, minimizing changes to existing applications. Choosing an implementation based on PL SQL (Programmable Language Structured Query Language) provides various benefits, including system performance. The model has been tested for outage simulation and for computing changes in the network loading structure, and it can be extended to power system analyses such as losses, load flow and so on, so that ultimately the GIS application will be able to substitute for, and overcome the weaknesses of, the power system analysis applications widely used today, such as EDSA (Electrical Design System Analysis).

  5. Antitumor Mechanisms of Curcumae Rhizoma Based on Network Pharmacology

    Directory of Open Access Journals (Sweden)

    Yan-Hua Bi

    2018-01-01

    Full Text Available Curcumae Rhizoma, a traditional Chinese medication, is commonly used in both traditional treatment and modern clinical care. Its anticancer effects have attracted a great deal of attention, but the mechanisms of action remain obscure. In this study, we screened for the active compounds of Curcumae Rhizoma using a drug-likeness approach. Candidate protein targets with functions related to cancer were predicted by reverse docking and then checked by manual search of the PubMed database. Potential target genes were uploaded to the GeneMANIA server and DAVID 6.8 database for analysis. Finally, compound-target, target-pathway, and compound-target-pathway networks were constructed using Cytoscape 3.3. The results revealed that the anticancer activity of Curcumae Rhizoma potentially involves 13 active compounds, 33 potential targets, and 31 signaling pathways, thus constituting a “multiple compounds, multiple targets, and multiple pathways” network corresponding to the concept of systematic actions in TCM. These findings provide an overview of the anticancer action of Curcumae Rhizoma from a network perspective, as well as setting an example for future studies of other materials used in TCM.
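
    The "multiple compounds, multiple targets, multiple pathways" structure can be represented as an ordinary graph; the sketch below builds a tiny compound-target-pathway network with networkx and ranks nodes by degree centrality (the study used Cytoscape 3.3, and the edges here are placeholders, not its data).

    ```python
    import networkx as nx

    # Invented compound-target-pathway edges illustrating the network structure
    # described in the abstract (names are placeholders, not the paper's data).
    G = nx.Graph()
    edges = [
        ("curcumol", "PTGS2"), ("curcumol", "AKT1"),
        ("curdione", "AKT1"),  ("curdione", "CASP3"),
        ("PTGS2", "NF-kB signaling"), ("AKT1", "PI3K-Akt signaling"),
        ("CASP3", "Apoptosis"),
    ]
    G.add_edges_from(edges)

    # Degree centrality highlights targets shared by several compounds.
    for node, c in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]):
        print(f"{node:22s} {c:.2f}")
    ```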

  6. Dealing with Big Data and Network Analysis Using Neo4j

    Directory of Open Access Journals (Sweden)

    Jon MacKay

    2018-02-01

    relationships between people and organizations can be. This tutorial will focus on the Neo4j graph database, and the Cypher query language that comes with it. - Neo4j is a free, open-source graph database written in Java that is available for all major computing platforms. - Cypher is the query language for the Neo4j database that is designed to insert and select information from the database. By the end of this lesson you will be able to construct, analyze and visualize networks based on big — or just inconveniently large — data. The final section of this lesson contains code and data to illustrate the key points of this lesson.
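
    As a minimal sketch of the workflow the lesson describes, the snippet below inserts two person-organization relationships and runs an aggregate Cypher query through the official neo4j Python driver; the connection URI, credentials, labels and data are assumptions for illustration, not taken from the lesson, and a running Neo4j instance is required.

    ```python
    from neo4j import GraphDatabase

    # Connection details are placeholders; adjust for your own Neo4j instance.
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    create_query = """
    MERGE (p:Person {name: $person})
    MERGE (o:Organization {name: $org})
    MERGE (p)-[:AFFILIATED_WITH]->(o)
    """

    count_query = """
    MATCH (p:Person)-[:AFFILIATED_WITH]->(o:Organization)
    RETURN o.name AS org, count(p) AS members
    ORDER BY members DESC
    """

    with driver.session() as session:
        session.run(create_query, person="Ada Lovelace", org="Analytical Society")
        session.run(create_query, person="Charles Babbage", org="Analytical Society")
        for record in session.run(count_query):
            print(record["org"], record["members"])

    driver.close()
    ```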

  7. Relational Database Extension Oriented, Self-adaptive Imagery Pyramid Model

    Directory of Open Access Journals (Sweden)

    HU Zhenghua

    2015-06-01

    Full Text Available With the development of remote sensing technology, and especially the improvement of sensor resolution, the amount of image data is increasing. This puts forward higher requirements for managing huge amounts of data efficiently and intelligently, and how to access massive remote sensing data with efficiency and smartness has become an increasingly popular topic. In this paper, considering the current state of spatial data management systems, we propose a self-adaptive strategy for image blocking and a method for LoD (level of detail) model construction that adapts to the combination of database storage, network transmission and the hardware of the client. Confirmed by experiments, this imagery management mechanism can achieve intelligent and efficient storage and access under a variety of database, network and client conditions. This study provides a feasible idea and method for efficient image data management, contributing to the efficient access and management of remote sensing image data based on database technology in a networked client/server architecture.
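
    A basic power-of-two tile pyramid illustrates the blocking idea: each level halves the resolution until the image fits in a single tile. The sketch below only computes the tile grid per level; the paper's self-adaptive tuning to database, network and client conditions is not modeled here, and the image and tile sizes are arbitrary examples.

    ```python
    import math

    def pyramid_levels(width, height, tile=256):
        """Tiles per level for a simple power-of-two image pyramid.

        Each level halves the image resolution until it fits in one tile.
        (The self-adaptive blocking described in the paper additionally tunes
        tile size to database, network and client; not reproduced here.)
        """
        levels = []
        w, h = width, height
        while True:
            cols = math.ceil(w / tile)
            rows = math.ceil(h / tile)
            levels.append((w, h, cols, rows, cols * rows))
            if cols == 1 and rows == 1:
                break
            w, h = max(1, w // 2), max(1, h // 2)
        return levels

    for level, (w, h, cols, rows, n) in enumerate(pyramid_levels(40000, 30000)):
        print(f"level {level}: {w}x{h} px, {cols}x{rows} tiles ({n})")
    ```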

  8. Network performance for graphical control systems

    International Nuclear Information System (INIS)

    Clout, P.; Geib, M.; Westervelt, R.

    1992-01-01

    Vsystem is a toolbox for building graphically-based control systems. The real-time database component, Vaccess, includes all the networking support necessary to build multi-computer control systems. Vaccess has two modes of database access, synchronous and asynchronous. Vdraw is another component of Vsystem that allows developers and users to develop control screens and windows by drawing rather than programming. Based on X-windows, Vsystem provides the possibility of running Vdraw either on the workstation with the graphics or on the computer with the database. We have made some measurements on the CPU loading, elapsed time and the network loading to give some guidance on system configuration and performance. It will be seen that asynchronous network access gives large performance increases and that the network database change notification protocol can be either more or less efficient than the X-window network protocol, depending on the graphical representation of the data. (author)

  9. Gigabit network technology. Final technical report

    Energy Technology Data Exchange (ETDEWEB)

    Davenport, C.M.C. [ed.

    1996-10-01

    Current digital networks are evolving toward distributed multimedia, supporting a wide variety of applications with individual data rates ranging from kb/sec to tens and hundreds of Mb/sec. Link speed requirements are pushing into the Gb/sec range and beyond the envelope of electronic networking capabilities. There is a vast amount of untapped bandwidth available in the low-attenuation communication bands of an optical fiber. The capacity in one fiber thread is enough to carry more than two thousand times as much information as all the current radio and microwave frequencies. And while fiber optics has replaced copper wire as the transmission medium of choice, the communication capacity of conventional fiber optic networks is ultimately limited by electronic processing speeds.

  10. European database of nuclear-seismic profiles. Final report

    International Nuclear Information System (INIS)

    Wenzel, F.; Fuchs, K.; Tittgemeyer, M.

    1997-01-01

    The project serves the purpose of conserving the nuclear-seismic data collections of the former USSR by suitable processing of data as well as digitization, and incorporation into the databases stored at Potsdam (Germany) and Moscow (Russia). In a joint activity assisted by the Russian State Committee for Geology, a complete set of nuclear-seismic data has been integrated into the data collections of an international computer center which is one of the centers participating in the EUROPROBE project. Furthermore, computer stations are being established across Russia so that Russian geoscientists will have at their disposal the required data processing facilities. (DG) [de

  11. KALIMER design database development and operation manual

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Hahn, Do Hee; Lee, Yong Bum; Chang, Won Pyo

    2000-12-01

    The KALIMER Design Database has been developed to support integrated management of Liquid Metal Reactor design technology development using web applications. The KALIMER Design Database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation System, and Reserved Documents. The Results Database holds the research results of mid-term and long-term nuclear R and D. IOC is a linkage control system between sub-projects for sharing and integrating research results for KALIMER. The 3D CAD Database provides a schematic design overview of KALIMER. The Team Cooperation System informs team members of research cooperation and meetings. Finally, KALIMER Reserved Documents manages the data and documents collected over the course of the project.

  12. KALIMER design database development and operation manual

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Hahn, Do Hee; Lee, Yong Bum; Chang, Won Pyo

    2000-12-01

    The KALIMER Design Database has been developed to support integrated management of Liquid Metal Reactor design technology development using web applications. The KALIMER Design Database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation System, and Reserved Documents. The Results Database holds the research results of mid-term and long-term nuclear R and D. IOC is a linkage control system between sub-projects for sharing and integrating research results for KALIMER. The 3D CAD Database provides a schematic design overview of KALIMER. The Team Cooperation System informs team members of research cooperation and meetings. Finally, KALIMER Reserved Documents manages the data and documents collected over the course of the project.

  13. Critical assessment of human metabolic pathway databases: a stepping stone for future integration

    Directory of Open Access Journals (Sweden)

    Stobbe Miranda D

    2011-10-01

    Full Text Available Abstract Background Multiple pathway databases are available that describe the human metabolic network and have proven their usefulness in many applications, ranging from the analysis and interpretation of high-throughput data to their use as a reference repository. However, so far the various human metabolic networks described by these databases have not been systematically compared and contrasted, nor has the extent to which they differ been quantified. For a researcher using these databases for particular analyses of human metabolism, it is crucial to know the extent of the differences in content and their underlying causes. Moreover, the outcomes of such a comparison are important for ongoing integration efforts. Results We compared the genes, EC numbers and reactions of five frequently used human metabolic pathway databases. The overlap is surprisingly low, especially on reaction level, where the databases agree on 3% of the 6968 reactions they have combined. Even for the well-established tricarboxylic acid cycle the databases agree on only 5 out of the 30 reactions in total. We identified the main causes for the lack of overlap. Importantly, the databases are partly complementary. Other explanations include the number of steps a conversion is described in and the number of possible alternative substrates listed. Missing metabolite identifiers and ambiguous names for metabolites also affect the comparison. Conclusions Our results show that each of the five networks compared provides us with a valuable piece of the puzzle of the complete reconstruction of the human metabolic network. To enable integration of the networks, next to a need for standardizing the metabolite names and identifiers, the conceptual differences between the databases should be resolved. Considerable manual intervention is required to reach the ultimate goal of a unified and biologically accurate model for studying the systems biology of human metabolism. Our comparison

  14. An event-oriented database for continuous data flows in the TJ-II environment

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, E. [Asociacion Euratom/CIEMAT para Fusion Madrid, 28040 Madrid (Spain)], E-mail: edi.sanchez@ciemat.es; Pena, A. de la; Portas, A.; Pereira, A.; Vega, J. [Asociacion Euratom/CIEMAT para Fusion Madrid, 28040 Madrid (Spain); Neto, A.; Fernandes, H. [Associacao Euratom/IST, Centro de Fusao Nuclear, Avenue Rovisco Pais P-1049-001 Lisboa (Portugal)

    2008-04-15

    A new database for storing data related to the TJ-II experiment has been designed and implemented. It allows the storage of raw data not acquired during plasma shots, i.e. data collected continuously or between plasma discharges while testing subsystems (e.g. during neutral beam test pulses). This new database complements already existing ones by permitting the storage of raw data that are not classified by shot number. Rather these data are indexed according to a more general entity entitled event. An event is defined as any occurrence relevant to the TJ-II environment. Such occurrences are registered thus allowing relationships to be established between data acquisition, TJ-II control-system and diagnostic control-system actions. In the new database, raw data are stored in files on the TJ-II UNIX central server disks while meta-data are stored in Oracle tables thereby permitting fast data searches according to different criteria. In addition, libraries for registering data/events in the database from different subsystems within the laboratory local area network have been developed. Finally, a Shared Data Access System has been implemented for external access to data. It permits both new event-indexed as well as old data (indexed by shot number) to be read from a common event perspective.
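
    A hypothetical sketch of what an event-indexed metadata store can look like; the real system keeps raw data in files on a UNIX server and metadata in Oracle tables, so the sqlite3 backend, table layout and column names here are stand-ins for illustration only.

      import sqlite3

      db = sqlite3.connect(":memory:")
      db.executescript("""
      CREATE TABLE event (
          event_id   INTEGER PRIMARY KEY,
          event_type TEXT NOT NULL,        -- e.g. 'NBI test pulse', 'plasma shot'
          t_start    REAL NOT NULL,        -- epoch seconds
          t_end      REAL NOT NULL
      );
      CREATE TABLE signal (
          signal_id  INTEGER PRIMARY KEY,
          event_id   INTEGER REFERENCES event(event_id),
          name       TEXT NOT NULL,
          file_path  TEXT NOT NULL         -- raw data stays in files on disk
      );
      """)

      db.execute("INSERT INTO event VALUES (1, 'NBI test pulse', 1000.0, 1002.5)")
      db.execute("INSERT INTO signal VALUES (1, 1, 'beam_current', '/data/ev1/bc.raw')")

      # Fast metadata search: all signals recorded during events of a given type.
      rows = db.execute("""
          SELECT s.name, s.file_path
          FROM signal s JOIN event e ON s.event_id = e.event_id
          WHERE e.event_type = 'NBI test pulse'
      """).fetchall()
      print(rows)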

  15. Fish Karyome: A karyological information network database of Indian Fishes.

    Science.gov (United States)

    Nagpure, Naresh Sahebrao; Pathak, Ajey Kumar; Pati, Rameshwar; Singh, Shri Prakash; Singh, Mahender; Sarkar, Uttam Kumar; Kushwaha, Basdeo; Kumar, Ravindra

    2012-01-01

    'Fish Karyome', a database of karyological information on Indian fishes, has been developed; it serves as a central source of karyotype data about Indian fishes compiled from the published literature. Fish Karyome is intended to serve as a liaison tool for researchers and contains karyological information about 171 of the 2438 finfish species reported in India, and it is publicly available via the World Wide Web. The database provides information on chromosome number, morphology, sex chromosomes, karyotype formula, cytogenetic markers, etc. Additionally, it provides phenotypic information that includes species name, classification, locality of sample collection, common name, local name, sex, geographical distribution, and IUCN Red List status. Besides fish and karyotype images, references for the 171 finfish species have been included in the database. Fish Karyome has been developed using SQL Server 2008, a relational database management system, Microsoft's ASP.NET 2008 and Macromedia's Flash technology under the Windows 7 operating environment. The system also enables users to input new information and images into the database, and to search and view the information and images of interest using various search options. Fish Karyome has a wide range of applications in species characterization and identification, sex determination, chromosomal mapping, karyo-evolution and systematics of fishes.

  16. Management system of instrument database

    International Nuclear Information System (INIS)

    Zhang Xin

    1997-01-01

    The author introduces a management system for an instrument database. The system has been developed with FoxPro on a network. It has such characteristics as a clear structure, easy operation, and flexible and convenient queries, as well as data safety and reliability.

  17. PEP725 Pan European Phenological Database

    Science.gov (United States)

    Koch, E.; Adler, S.; Lipa, W.; Ungersböck, M.; Zach-Hermann, S.

    2010-09-01

    Europe is in the fortunate situation that it has a long tradition of phenological networking: the history of collecting phenological data and using them in climatology has its starting point in 1751, when Carl von Linné outlined in his work Philosophia Botanica methods for compiling annual plant calendars of leaf opening, flowering, fruiting and leaf fall together with climatological observations "so as to show how areas differ". In most European countries, phenological observations have been carried out routinely for more than 50 years by different governmental and non-governmental organisations, following different observation guidelines and with the data stored at different places in different formats. This has hampered pan-European studies, as one has to approach many network operators for access to the data before one can even start to bring them into a uniform form. From 2004 to 2009 the COST action 725 established a Europe-wide data set of phenological observations. But the deliverables of this COST action were not only the common phenological database and common observation guidelines: COST725 helped to trigger a revival of some old networks and to establish new ones, for instance in Sweden. At the end of the COST action in 2009 the database comprised about 8 million records in total from 15 European countries plus the data from the International Phenological Gardens (IPG). In January 2010 PEP725 began its work as a follow-up project with funding from EUMETNET, the network of European meteorological services, and from ZAMG, the Austrian national meteorological service. PEP725 will not only maintain and update the COST725 database, but also bring in phenological data from before 1951, develop better quality-checking procedures and ensure open access to the database. An attractive webpage will make phenology and climate impacts on vegetation more visible to the public, enabling the monitoring of vegetation development.

  18. A Database Query Processing Model in Peer-To-Peer Network ...

    African Journals Online (AJOL)

    Peer-to-peer databases are becoming more prevalent on the internet for sharing and distributing applications, documents, files, and other digital media. The problem of answering large-scale ad hoc analysis queries, such as aggregation queries, on these databases poses unique challenges. This paper presents an ...

  19. A KINETIC DATABASE FOR ASTROCHEMISTRY (KIDA)

    International Nuclear Information System (INIS)

    Wakelam, V.; Pavone, B.; Hébrard, E.; Hersant, F.; Herbst, E.; Loison, J.-C.; Chandrasekaran, V.; Bergeat, A.; Smith, I. W. M.; Adams, N. G.; Bacchus-Montabonel, M.-C.; Béroff, K.; Bierbaum, V. M.; Chabot, M.; Dalgarno, A.; Van Dishoeck, E. F.; Faure, A.; Geppert, W. D.; Gerlich, D.; Galli, D.

    2012-01-01

    We present a novel chemical database for gas-phase astrochemistry. Named the KInetic Database for Astrochemistry (KIDA), this database consists of gas-phase reactions with rate coefficients and uncertainties that will be vetted to the greatest extent possible. Submissions of measured and calculated rate coefficients are welcome, and will be studied by experts before inclusion into the database. Besides providing kinetic information for the interstellar medium, KIDA is planned to contain such data for planetary atmospheres and for circumstellar envelopes. Each year, a subset of the reactions in the database (kida.uva) will be provided as a network for the simulation of the chemistry of dense interstellar clouds with temperatures between 10 K and 300 K. We also provide a code, named Nahoon, to study the time-dependent gas-phase chemistry of zero-dimensional and one-dimensional interstellar sources.
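
    As a hedged aside not drawn from the abstract itself: kinetic networks of this kind are typically consumed by rate-equation codes such as Nahoon, which integrate coupled abundance equations of the general form

      \frac{\mathrm{d}n_i}{\mathrm{d}t} = \sum_{j,k} k_{jk \to i}\, n_j n_k \;-\; n_i \sum_{j} k_{ij}\, n_j ,

    where n_i is the abundance of species i, and two-body rate coefficients are commonly parameterized in the modified Arrhenius form k(T) = \alpha\,(T/300)^{\beta}\exp(-\gamma/T) over a stated range of validity.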

  20. Database use and technology in Japan: JTEC panel report. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Wiederhold, G.; Beech, D.; Bourne, C.; Farmer, N.; Jajodia, Sushil; Kahaner, D.; Minoura, Toshi; Smith, D.; Smith, J.M.

    1992-04-01

    This report presents the findings of a group of database experts, sponsored by the Japanese Technology Evaluation Center (JTEC), based on an intensive study trip to Japan during March 1991. Academic, industrial, and governmental sites were visited. The primary findings are that Japan is supporting its academic research establishment poorly, that industry is making progress in key areas, and that both academic and industrial researchers are well aware of current domestic and foreign technology. Information sharing between industry and academia is effectively supported by governmental sponsorship of joint planning and review activities, and enhances technology transfer. In two key areas, multimedia and object-oriented databases, the authors can expect to see future export of Japanese database products, typically integrated into larger systems. Support for academic research is relatively modest. Nevertheless, the senior faculty are well-known and respected, and communicate frequently and in depth with each other, with government agencies, and with industry. In 1988 there were a total of 1,717 Ph.D.s in engineering and 881 in science. It appears that only about 30 of these were academic Ph.D.s in the basic computer sciences.

  1. HEP Science Network Requirements--Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Bakken, Jon; Barczyk, Artur; Blatecky, Alan; Boehnlein, Amber; Carlson, Rich; Chekanov, Sergei; Cotter, Steve; Cottrell, Les; Crawford, Glen; Crawford, Matt; Dart, Eli; Dattoria, Vince; Ernst, Michael; Fisk, Ian; Gardner, Rob; Johnston, Bill; Kent, Steve; Lammel, Stephan; Loken, Stewart; Metzger, Joe; Mount, Richard; Ndousse-Fetter, Thomas; Newman, Harvey; Schopf, Jennifer; Sekine, Yukiko; Stone, Alan; Tierney, Brian; Tull, Craig; Zurawski, Jason

    2010-04-27

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In August 2009 ESnet and the Office of High Energy Physics (HEP), of the DOE Office of Science, organized a workshop to characterize the networking requirements of the programs funded by HEP. The International HEP community has been a leader in data intensive science from the beginning. HEP data sets have historically been the largest of all scientific data sets, and the community of interest the most distributed. The HEP community was also the first to embrace Grid technologies. The requirements identified at the workshop are summarized below, and described in more detail in the case studies and the Findings section: (1) There will be more LHC Tier-3 sites than originally thought, and likely more Tier-2 to Tier-2 traffic than was envisioned. It is not yet known what the impact of this will be on ESnet, but we will need to keep an eye on this traffic. (2) The LHC Tier-1 sites (BNL and FNAL) predict the need for 40-50 Gbps of data movement capacity in 2-5 years, and 100-200 Gbps in 5-10 years for HEP program related traffic. Other key HEP sites include LHC Tier-2 and Tier-3 sites, many of which are located at universities. To support the LHC, ESnet must continue its collaborations with university and international networks. (3) While in all cases the deployed 'raw' network bandwidth must exceed the user requirements in order to meet the data transfer and reliability requirements, network engineering for trans-Atlantic connectivity

  2. HEP Science Network Requirements. Final Report

    International Nuclear Information System (INIS)

    Dart, Eli; Tierney, Brian

    2010-01-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In August 2009 ESnet and the Office of High Energy Physics (HEP), of the DOE Office of Science, organized a workshop to characterize the networking requirements of the programs funded by HEP. The International HEP community has been a leader in data intensive science from the beginning. HEP data sets have historically been the largest of all scientific data sets, and the community of interest the most distributed. The HEP community was also the first to embrace Grid technologies. The requirements identified at the workshop are summarized below, and described in more detail in the case studies and the Findings section: (1) There will be more LHC Tier-3 sites than originally thought, and likely more Tier-2 to Tier-2 traffic than was envisioned. It is not yet known what the impact of this will be on ESnet, but we will need to keep an eye on this traffic. (2) The LHC Tier-1 sites (BNL and FNAL) predict the need for 40-50 Gbps of data movement capacity in 2-5 years, and 100-200 Gbps in 5-10 years for HEP program related traffic. Other key HEP sites include LHC Tier-2 and Tier-3 sites, many of which are located at universities. To support the LHC, ESnet must continue its collaborations with university and international networks. (3) While in all cases the deployed 'raw' network bandwidth must exceed the user requirements in order to meet the data transfer and reliability requirements, network engineering for trans-Atlantic connectivity

  3. 經由校園網路存取圖書館光碟資料庫之研究 Studies on Multiuser Access Library CD-ROM Database via Campus Network

    Directory of Open Access Journals (Sweden)

    Ruey-shun Chen

    1992-06-01

    Full Text Available Library CD-ROM, with its enormous storage, retrieval capabilities and reasonable price, has been gradually replacing some of its printed counterparts. But one of the greatest limitations of a stand-alone CD-ROM workstation is that only one user can access the CD-ROM database at a time. This paper proposes a new method to solve this problem. The method uses personal computers, the standard Ethernet network, a high-speed FDDI fiber network and the standard TCP/IP protocol to access the library CD-ROM database, and implements a practical campus CD-ROM network system. Its advantages are reduced spending on redundant CD-ROM purchases, less damage to discs from being handed in and out, and simultaneous multiuser access to the same CD-ROM disc.

  4. Final Report: Efficient Databases for MPC Microdata

    Energy Technology Data Exchange (ETDEWEB)

    Michael A. Bender; Martin Farach-Colton; Bradley C. Kuszmaul

    2011-08-31

    The purpose of this grant was to develop the theory and practice of high-performance databases for massive streamed datasets. Over the last three years, we have developed fast indexing technology, that is, technology for rapidly ingesting data and storing that data so that it can be efficiently queried and analyzed. During this project we developed the technology so that high-bandwidth data streams can be indexed and queried efficiently. Our technology has been proven to work on data sets composed of tens of billions of rows when the data stream arrives at over 40,000 rows per second. We achieved these numbers even on a single disk driven by two cores. Our work comprised (1) new write-optimized data structures with better asymptotic complexity than traditional structures, (2) implementation, and (3) benchmarking. We furthermore developed a prototype of TokuFS, a middleware layer that can handle microdata I/O packaged up in an MPI-IO abstraction.
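
    As one hedged illustration of the write-optimized indexing idea (an LSM-style sketch under simplified assumptions, not the project's fractal-tree/TokuFS implementation): inserts are buffered in memory and flushed as large sorted runs, so ingest cost is amortized over sequential writes.

      import bisect

      class WriteOptimizedIndex:
          def __init__(self, buffer_limit=1000):
              self.buffer = {}        # in-memory writes: key -> value
              self.runs = []          # on-"disk" sorted runs, newest first
              self.buffer_limit = buffer_limit

          def put(self, key, value):
              self.buffer[key] = value
              if len(self.buffer) >= self.buffer_limit:
                  self._flush()

          def _flush(self):
              run = sorted(self.buffer.items())   # one large sequential write
              self.runs.insert(0, run)
              self.buffer = {}

          def get(self, key):
              if key in self.buffer:
                  return self.buffer[key]
              for run in self.runs:               # newest run wins
                  i = bisect.bisect_left(run, (key,))
                  if i < len(run) and run[i][0] == key:
                      return run[i][1]
              return None

      # Usage: ingest a small stream of rows, then query.
      idx = WriteOptimizedIndex(buffer_limit=3)
      for i, row in enumerate(["a", "b", "c", "d"]):
          idx.put(i, row)
      print(idx.get(2))   # -> "c"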

  5. A Generic Data Harmonization Process for Cross-linked Research and Network Interaction. Construction and Application for the Lung Cancer Phenotype Database of the German Center for Lung Research.

    Science.gov (United States)

    Firnkorn, D; Ganzinger, M; Muley, T; Thomas, M; Knaup, P

    2015-01-01

    Joint data analysis is a key requirement in medical research networks. Data are available in heterogeneous formats at each network partner and their harmonization is often rather complex. The objective of our paper is to provide a generic approach for the harmonization process in research networks. We applied the process when harmonizing data from three sites for the Lung Cancer Phenotype Database within the German Center for Lung Research. We developed a spreadsheet-based solution as a tool to support the harmonization process for lung cancer data and a data integration procedure based on Talend Open Studio. The harmonization process consists of eight steps describing a systematic approach for defining and reviewing source data elements and standardizing common data elements. The steps for defining common data elements and harmonizing them with local data definitions are repeated until consensus is reached. Application of this process for building the phenotype database led to a common basic data set on lung cancer with 285 structured parameters. The Lung Cancer Phenotype Database was realized as an i2b2 research data warehouse. Data harmonization is a challenging task requiring informatics skills as well as domain knowledge. Our approach facilitates data harmonization by providing guidance through a uniform process that can be applied in a wide range of projects.
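
    A hypothetical sketch of the core mapping step within such a harmonization process; the element names, local codes and mapping tables below are invented for illustration and are not taken from the Lung Cancer Phenotype Database.

      # Common data element definition agreed on by all sites (invented example).
      COMMON = {"smoking_status": {"never", "former", "current", "unknown"}}

      # Per-site mapping from local values to the common vocabulary.
      SITE_MAPPINGS = {
          "site_a": {"NIE": "never", "EX": "former", "AKTIV": "current"},
          "site_b": {"0": "never", "1": "current", "2": "former"},
      }

      def harmonize(site, element, local_value):
          """Translate a local value into the common data element vocabulary."""
          value = SITE_MAPPINGS[site].get(str(local_value), "unknown")
          assert value in COMMON[element]
          return value

      print(harmonize("site_a", "smoking_status", "EX"))   # -> former
      print(harmonize("site_b", "smoking_status", 1))      # -> current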

  6. A strong-motion database from the Central American subduction zone

    Science.gov (United States)

    Arango, Maria Cristina; Strasser, Fleur O.; Bommer, Julian J.; Hernández, Douglas A.; Cepeda, Jose M.

    2011-04-01

    Subduction earthquakes along the Pacific Coast of Central America generate considerable seismic risk in the region. The quantification of the hazard due to these events requires the development of appropriate ground-motion prediction equations, for which purpose a database of recordings from subduction events in the region is indispensable. This paper describes the compilation of a comprehensive database of strong ground-motion recordings obtained during subduction-zone events in Central America, focusing on the region from 8 to 14° N and 83 to 92° W, including Guatemala, El Salvador, Nicaragua and Costa Rica. More than 400 accelerograms recorded by the networks operating across Central America during the last decades have been added to data collected by NORSAR in two regional projects for the reduction of natural disasters. The final database consists of 554 triaxial ground-motion recordings from events of moment magnitudes between 5.0 and 7.7, including 22 interface and 58 intraslab-type events for the time period 1976-2006. Although the database presented in this study is not sufficiently complete in terms of magnitude-distance distribution to serve as a basis for the derivation of predictive equations for interface and intraslab events in Central America, it considerably expands the Central American subduction data compiled in previous studies and used in early ground-motion modelling studies for subduction events in this region. Additionally, the compiled database will allow the assessment of the existing predictive models for subduction-type events in terms of their applicability for the Central American region, which is essential for an adequate estimation of the hazard due to subduction earthquakes in this region.

  7. Networking and Information Technology Workforce Study: Final Report

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This report presents the results of a study of the global Networking and Information Technology NIT workforce undertaken for the Networking and Information...

  8. NATIONAL TRANSPORTATION ATLAS DATABASE: RAILROADS 2011

    Data.gov (United States)

    Kansas Data Access and Support Center — The Rail Network is a comprehensive database of the nation's railway system at the 1:100,000 scale or better. The data set covers all 50 States plus the District of...

  9. NABIC marker database: A molecular markers information network of agricultural crops.

    Science.gov (United States)

    Kim, Chang-Kug; Seol, Young-Joo; Lee, Dong-Jun; Jeong, In-Seon; Yoon, Ung-Han; Lee, Gang-Seob; Hahn, Jang-Ho; Park, Dong-Suk

    2013-01-01

    In 2013, the National Agricultural Biotechnology Information Center (NABIC) reconstructed a molecular marker database for useful genetic resources. The web-based marker database consists of three major functional categories: map viewer, RSN marker and gene annotation. It provides 7250 marker locations, 3301 RSN marker properties and 3280 molecular marker annotation entries for agricultural plants. Each molecular marker record provides information such as marker name, expressed sequence tag number, gene definition and general marker information. This updated marker-based database provides useful information through a user-friendly web interface that assists in tracing new structures of the chromosomes and gene positional functions using specific molecular markers. The database is freely available at http://nabic.rda.go.kr/gere/rice/molecularMarkers/

  10. ANZA Seismic Network- From Monitoring to Science

    Science.gov (United States)

    Vernon, F.; Eakin, J.; Martynov, V.; Newman, R.; Offield, G.; Hindley, A.; Astiz, L.

    2007-05-01

    The ANZA Seismic Network (http://eqinfo.ucsd.edu) utilizes broadband and strong motion sensors with 24-bit dataloggers combined with real-time telemetry to monitor local and regional seismicity in southernmost California. The ANZA network provides real-time data to the IRIS DMC, the California Integrated Seismic Network (CISN), other regional networks, and the Advanced National Seismic System (ANSS), in addition to providing near real-time information and monitoring to the greater San Diego community. Twelve high dynamic range broadband and strong motion sensors adjacent to the San Jacinto Fault zone contribute data for earthquake source studies and continue the monitoring of the seismic activity of the San Jacinto fault initiated 24 years ago. Five additional stations are located in the San Diego region, with one more station on San Clemente Island. The ANZA network uses the advanced wireless networking capabilities of the NSF High Performance Wireless Research and Education Network (http://hpwren.ucsd.edu) to provide the communication infrastructure for the real-time telemetry of ANZA seismic stations. The ANZA network uses the Antelope data acquisition software. The combination of high quality hardware, communications, and software allows for an annual network uptime in excess of 99.5% with a median annual station real-time data return rate of 99.3%. Approximately 90,000 events, dominantly local sources but including regional and teleseismic events, comprise the ANZA network waveform database. All waveform data and event data are managed using the Datascope relational database. The ANZA network data have been used in a variety of scientific research including detailed structure of the San Jacinto Fault Zone, earthquake source physics, spatial and temporal studies of aftershocks, array studies of teleseismic body waves, and array studies on the source of microseisms. To augment the location, detection, and high frequency observations of the seismic source spectrum from local

  11. The global transmission network of HIV-1.

    Science.gov (United States)

    Wertheim, Joel O; Leigh Brown, Andrew J; Hepler, N Lance; Mehta, Sanjay R; Richman, Douglas D; Smith, Davey M; Kosakovsky Pond, Sergei L

    2014-01-15

    Human immunodeficiency virus type 1 (HIV-1) is pandemic, but its contemporary global transmission network has not been characterized. A better understanding of the properties and dynamics of this network is essential for surveillance, prevention, and eventual eradication of HIV. Here, we apply a simple and computationally efficient network-based approach to all publicly available HIV polymerase sequences in the global database, revealing a contemporary picture of the spread of HIV-1 within and between countries. This approach automatically recovered well-characterized transmission clusters and extended other clusters thought to be contained within a single country across international borders. In addition, previously undescribed transmission clusters were discovered. Together, these clusters represent all known modes of HIV transmission. The extent of international linkage revealed by our comprehensive approach demonstrates the need to consider the global diversity of HIV, even when describing local epidemics. Finally, the speed of this method allows for near-real-time surveillance of the pandemic's progression.
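
    A hedged sketch of the general clustering idea described above: sequences become nodes, an edge links any pair whose genetic distance falls below a threshold, and connected components are read as putative transmission clusters. The toy distance function and threshold are simplifications, not the study's actual pipeline.

      from itertools import combinations

      def hamming_distance(a, b):
          """Fraction of differing sites between two aligned sequences."""
          return sum(x != y for x, y in zip(a, b)) / len(a)

      def transmission_clusters(sequences, threshold=0.015):
          """Group sequence IDs into clusters via distance-threshold edges."""
          ids = list(sequences)
          parent = {i: i for i in ids}

          def find(x):
              while parent[x] != x:
                  parent[x] = parent[parent[x]]
                  x = parent[x]
              return x

          for a, b in combinations(ids, 2):
              if hamming_distance(sequences[a], sequences[b]) <= threshold:
                  parent[find(a)] = find(b)

          clusters = {}
          for i in ids:
              clusters.setdefault(find(i), set()).add(i)
          return list(clusters.values())

      seqs = {"s1": "ACGTACGT", "s2": "ACGTACGA", "s3": "TTTTACGT"}
      print(transmission_clusters(seqs, threshold=0.2))   # -> [{'s1', 's2'}, {'s3'}]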

  12. IAU Meteor Data Center-the shower database: A status report

    Science.gov (United States)

    Jopek, Tadeusz Jan; Kaňuchová, Zuzana

    2017-09-01

    Currently, the meteor shower part of the Meteor Data Center database includes 112 established showers and 563 in the working list, among them 36 with pro tempore status. The list of shower complexes contains 25 groups, 3 with established status and 1 with pro tempore status. In the past three years, new meteor showers submitted to the MDC database were detected amongst the meteors observed by CAMS stations (Cameras for Allsky Meteor Surveillance), those included in EDMOND (the European viDeo MeteOr Network Database), those collected by the Japanese SonotaCo Network, those recorded in the IMO (International Meteor Organization) database, those observed by the Croatian Meteor Network, and those observed on the Southern Hemisphere by the SAAMER radar. At the XXIX General Assembly of the IAU in Honolulu, Hawaii in 2015, the names of 18 showers were officially accepted and moved to the list of established ones. Also, one shower already officially named (3/SIA, the Southern iota Aquariids) was moved back to the working list of meteor showers. At the XXIX GA IAU the basic shower nomenclature rule was modified; the new formulation states: "The general rule is that a meteor shower (and a meteoroid stream) should be named after the constellation that contains the nearest star to the radiant point, using the possessive Latin form". Over the last three years the MDC database was supplemented with earlier published original data on meteor showers, which permitted verification of the correctness of the MDC data and extension of the bibliographic information. Slowly but surely new database software options are being implemented, and software bugs are corrected.

  13. The Accounting Network: How Financial Institutions React to Systemic Crisis.

    Science.gov (United States)

    Puliga, Michelangelo; Flori, Andrea; Pappalardo, Giuseppe; Chessa, Alessandro; Pammolli, Fabio

    2016-01-01

    The role of network theory in the study of the financial crisis has been widely recognized in recent years. It has been shown how the network topology and the dynamics running on top of it can trigger the outbreak of large systemic crises. Following this methodological perspective we introduce here the Accounting Network, i.e. the network we can extract through vector similarity techniques from companies' financial statements. We build the Accounting Network on a large database of worldwide banks in the period 2001-2013, covering the onset of the global financial crisis of mid-2007. After a careful data cleaning, we apply a quality check in the construction of the network, introducing a parameter (the Quality Ratio) capable of trading off the size of the sample (coverage) and the representativeness of the financial statements (accuracy). We compute several basic network statistics and check, with the Louvain community detection algorithm, for emerging communities of banks. Remarkably enough, sensible regional aggregations show up, with the Japanese and the US clusters dominating the community structure, although the presence of a geographically mixed community points to a gradual convergence of banks toward similar supranational practices. Finally, a Principal Component Analysis procedure reveals the main economic components that influence communities' heterogeneity. Even using the most basic vector similarity hypotheses on the composition of the financial statements, the signature of the financial crisis clearly arises across the years around 2008. We finally discuss how Accounting Networks can be improved to reflect best practices in financial statement analysis.
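
    A hedged sketch of how an accounting network of this kind can be assembled: each bank's statement is treated as a numeric vector, cosine similarity links sufficiently similar banks, and community detection can then be run on the resulting graph. The feature composition, threshold and data below are invented; the paper's Quality Ratio filtering and Louvain step are not reproduced.

      import math
      from itertools import combinations

      def cosine(u, v):
          dot = sum(a * b for a, b in zip(u, v))
          norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
          return dot / norm if norm else 0.0

      # Toy statement vectors: (loans, deposits, trading assets) as balance-sheet shares.
      statements = {
          "bank_a": [0.60, 0.30, 0.10],
          "bank_b": [0.55, 0.35, 0.10],
          "bank_c": [0.10, 0.20, 0.70],
      }

      # Link banks whose statement composition is sufficiently similar.
      edges = [(a, b) for a, b in combinations(statements, 2)
               if cosine(statements[a], statements[b]) > 0.95]
      print(edges)   # -> [('bank_a', 'bank_b')]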

  14. The Accounting Network: How Financial Institutions React to Systemic Crisis

    Science.gov (United States)

    Puliga, Michelangelo; Flori, Andrea; Pappalardo, Giuseppe; Chessa, Alessandro; Pammolli, Fabio

    2016-01-01

    The role of network theory in the study of the financial crisis has been widely recognized in recent years. It has been shown how the network topology and the dynamics running on top of it can trigger the outbreak of large systemic crises. Following this methodological perspective we introduce here the Accounting Network, i.e. the network we can extract through vector similarity techniques from companies' financial statements. We build the Accounting Network on a large database of worldwide banks in the period 2001-2013, covering the onset of the global financial crisis of mid-2007. After a careful data cleaning, we apply a quality check in the construction of the network, introducing a parameter (the Quality Ratio) capable of trading off the size of the sample (coverage) and the representativeness of the financial statements (accuracy). We compute several basic network statistics and check, with the Louvain community detection algorithm, for emerging communities of banks. Remarkably enough, sensible regional aggregations show up, with the Japanese and the US clusters dominating the community structure, although the presence of a geographically mixed community points to a gradual convergence of banks toward similar supranational practices. Finally, a Principal Component Analysis procedure reveals the main economic components that influence communities' heterogeneity. Even using the most basic vector similarity hypotheses on the composition of the financial statements, the signature of the financial crisis clearly arises across the years around 2008. We finally discuss how Accounting Networks can be improved to reflect best practices in financial statement analysis. PMID:27736865

  15. The Accounting Network: How Financial Institutions React to Systemic Crisis.

    Directory of Open Access Journals (Sweden)

    Michelangelo Puliga

    Full Text Available The role of network theory in the study of the financial crisis has been widely recognized in recent years. It has been shown how the network topology and the dynamics running on top of it can trigger the outbreak of large systemic crises. Following this methodological perspective we introduce here the Accounting Network, i.e. the network we can extract through vector similarity techniques from companies' financial statements. We build the Accounting Network on a large database of worldwide banks in the period 2001-2013, covering the onset of the global financial crisis of mid-2007. After a careful data cleaning, we apply a quality check in the construction of the network, introducing a parameter (the Quality Ratio) capable of trading off the size of the sample (coverage) and the representativeness of the financial statements (accuracy). We compute several basic network statistics and check, with the Louvain community detection algorithm, for emerging communities of banks. Remarkably enough, sensible regional aggregations show up, with the Japanese and the US clusters dominating the community structure, although the presence of a geographically mixed community points to a gradual convergence of banks toward similar supranational practices. Finally, a Principal Component Analysis procedure reveals the main economic components that influence communities' heterogeneity. Even using the most basic vector similarity hypotheses on the composition of the financial statements, the signature of the financial crisis clearly arises across the years around 2008. We finally discuss how Accounting Networks can be improved to reflect best practices in financial statement analysis.

  16. Some Considerations about Modern Database Machines

    Directory of Open Access Journals (Sweden)

    Manole VELICANU

    2010-01-01

    Full Text Available Optimizing the two computing resources of any computing system - time and space - has always been one of the priority objectives of any database. A current and effective solution in this respect is the database machine. Optimizing computer applications by means of database machines has been a steady preoccupation of researchers since the late seventies. Several information technologies have revolutionized the present information framework. Out of these, those which have brought a major contribution to the optimization of databases are: efficient handling of large volumes of data (Data Warehouse, Data Mining, OLAP – On Line Analytical Processing), the improvement of DBMS – Database Management Systems – facilities through the integration of new technologies, the dramatic increase in computing power and the efficient use of it (computer networks, massively parallel computing, Grid Computing and so on). All these information technologies, and others, have favored the resumption of research on database machines and the obtaining, in the last few years, of some very good practical results as far as the optimization of computing resources is concerned.

  17. Final DIGITAL FLOOD INSURANCE RATE MAP DATABASE, RANDOLPH COUNTY, ILLINOIS USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  18. Final Digital Flood Insurance Rate Map Database, Lubbock County, TX, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  19. Danish Gynecological Cancer Database

    DEFF Research Database (Denmark)

    Sørensen, Sarah Mejer; Bjørn, Signe Frahm; Jochumsen, Kirsten Marie

    2016-01-01

    AIM OF DATABASE: The Danish Gynecological Cancer Database (DGCD) is a nationwide clinical cancer database and its aim is to monitor the treatment quality of Danish gynecological cancer patients, and to generate data for scientific purposes. DGCD also records detailed data on the diagnostic measures...... data forms as follows: clinical data, surgery, pathology, pre- and postoperative care, complications, follow-up visits, and final quality check. DGCD is linked with additional data from the Danish "Pathology Registry", the "National Patient Registry", and the "Cause of Death Registry" using the unique...... Danish personal identification number (CPR number). DESCRIPTIVE DATA: Data from DGCD and registers are available online in the Statistical Analysis Software portal. The DGCD forms cover almost all possible clinical variables used to describe gynecological cancer courses. The only limitation...

  20. DATABASES DEVELOPED IN INDIA FOR BIOLOGICAL SCIENCES

    Directory of Open Access Journals (Sweden)

    Gitanjali Yadav

    2017-09-01

    Full Text Available The complexity of biological systems requires the use of a variety of experimental methods of ever-increasing sophistication to probe various cellular processes at molecular and atomic resolution. The availability of technologies for determining nucleic acid sequences of genes and atomic resolution structures of biomolecules prompted the development of major biological databases like GenBank and PDB almost four decades ago. India was one of the few countries to realize early the utility of such databases for progress in modern biology/biotechnology. The Department of Biotechnology (DBT), India, established the Biotechnology Information System (BTIS) network in the late eighties. Starting with the genome sequencing revolution at the turn of the century, application of high-throughput sequencing technologies in biology and medicine for the analysis of genomes, transcriptomes, epigenomes and microbiomes has generated massive volumes of sequence data. The BTIS network has not only provided state-of-the-art computational infrastructure to research institutes and universities for utilizing various biological databases developed abroad in their research, it has also actively promoted research and development (R&D) projects in bioinformatics to develop a variety of biological databases in diverse areas. It is encouraging to note that a large number of biological databases and data-driven software tools developed in India have been published in leading peer-reviewed international journals like Nucleic Acids Research, Bioinformatics, Database, BMC, PLoS and NPG series publications. Some of these databases are not only unique, they are also highly accessed, as reflected in the number of citations. Apart from databases developed by individual research groups, BTIS has initiated consortium projects to develop major India-centric databases on Mycobacterium tuberculosis, Rice and Mango, which can potentially have practical applications in health and agriculture. Many of these biological

  1. Harvesting Covert Networks: The Case Study of the iMiner Database

    DEFF Research Database (Denmark)

    Memon, Nasrullah; Wiil, Uffe Kock; Alhajj, Reda

    2011-01-01

    ...... collected by intelligence agencies and government organisations is inaccessible to researchers. To counter the information scarcity, we designed and built a database of terrorist-related data and information by harvesting such data from publicly available authenticated websites. The database was incorporated in the iMiner prototype tool, which makes use of investigative data mining techniques to analyse data. This paper will present the developed framework along with the form and structure of the terrorist data in the database. Selected cases will be referenced to highlight the effectiveness of the i......

  2. Database of Renewable Energy and Energy Efficiency Incentives and Policies Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Lips, Brian

    2018-03-28

    The Database of State Incentives for Renewables and Efficiency (DSIRE) is an online resource that provides summaries of all financial incentives and regulatory policies that support the use of renewable energy and energy efficiency across all 50 states. This project involved making enhancements to the database and website, and the ongoing research and maintenance of the policy and incentive summaries.

  3. Prediction and characterization of protein-protein interaction networks in swine

    Directory of Open Access Journals (Sweden)

    Wang Fen

    2012-01-01

    Full Text Available Abstract Background Studying the large-scale protein-protein interaction (PPI) network is important in understanding biological processes. The current research presents the first PPI map of swine, which aims to give new insights into understanding their biological processes. Results We used three methods (interolog-based prediction of the porcine PPI network, domain-motif interactions from structural topology-based prediction, and motif-motif interactions from structural topology-based prediction) to predict porcine protein interactions among 25,767 porcine proteins. We predicted 20,213, 331,484, and 218,705 porcine PPIs respectively, merged the three results into 567,441 PPIs, constructed four PPI networks, and analyzed the topological properties of the porcine PPI networks. Our predictions were validated with Pfam domain annotations and GO annotations. Averages of 70, 10,495, and 863 interactions were related to Pfam domain-interacting pairs in the iPfam database. For comparison, randomized networks were generated, and averages of only 4.24, 66.79, and 44.26 interactions were associated with Pfam domain-interacting pairs in the iPfam database. In GO annotations, we found 52.68%, 75.54%, and 27.20% of the predicted PPIs sharing GO terms, respectively, whereas the number of the 10,000 randomized networks in which the proportion of PPI pairs sharing GO terms reached 52.68%, 75.54%, or 27.20% was 0. Finally, we determined the accuracy and precision of the methods. The methods yielded accuracies of 0.92, 0.53, and 0.50 at precisions of about 0.93, 0.74, and 0.75, respectively. Conclusion The results reveal that the predicted PPI networks are considerably reliable. The present research is an important pioneering work on protein function research. The porcine PPI data set, the confidence score of each interaction and a list of related data are available at http://pppid.biositemap.com/.
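
    A hypothetical sketch of the interolog idea, the first of the three methods listed: if two proteins interact in a reference species, their porcine orthologs are predicted to interact. The ortholog table and reference interactions are toy data, not the study's input set.

      # Known interactions in a reference species (e.g. human), as protein ID pairs.
      reference_ppis = {("H1", "H2"), ("H2", "H3")}

      # Mapping from reference proteins to their porcine orthologs.
      orthologs = {"H1": "SSC1", "H2": "SSC2", "H3": "SSC3"}

      def predict_interologs(ref_ppis, ortholog_map):
          """Transfer each reference interaction to the orthologous porcine pair."""
          predicted = set()
          for a, b in ref_ppis:
              if a in ortholog_map and b in ortholog_map:
                  pair = tuple(sorted((ortholog_map[a], ortholog_map[b])))
                  predicted.add(pair)
          return predicted

      print(predict_interologs(reference_ppis, orthologs))
      # -> {('SSC1', 'SSC2'), ('SSC2', 'SSC3')}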

  4. Complexities’ day-to-day dynamic evolution analysis and prediction for a Didi taxi trip network based on complex network theory

    Science.gov (United States)

    Zhang, Lin; Lu, Jian; Zhou, Jialin; Zhu, Jinqing; Li, Yunxuan; Wan, Qian

    2018-03-01

    Didi Dache is the most popular taxi-ordering mobile app in China, providing an online taxi-hailing service. The big database obtained from this app can be used to analyze the day-to-day dynamic evolution of the complexities of the Didi taxi trip network (DTTN) at the level of complex network dynamics. First, this paper proposes data cleaning and modeling methods for expressing Nanjing's DTTN as a complex network. Second, three consecutive weeks of data are cleaned to establish 21 DTTNs based on the proposed big data processing technology. Then, multiple topology measures that characterize the day-to-day dynamic evolution of the complexities of these networks are provided. Third, these measures of the 21 DTTNs are calculated and subsequently explained with their actual implications. They are used as a training set for modeling the BP neural network which is designed for predicting the evolution of DTTN complexities. Finally, the reliability of the designed BP neural network is verified by comparing its output with the actual data and with the results obtained from the ARIMA method. Because network complexities are the basis for modeling cascading failures and conducting link prediction in complex systems, this proposed research framework not only provides a novel perspective for analyzing the DTTN at the level of system aggregated behavior, but can also be used to improve the DTTN management level.

  5. BDFGEOTHERM - A Swiss geothermal fluids database; BDFGEOTHERM - Base de donnees des fluides geothermiques de la Suisse - Rapport final

    Energy Technology Data Exchange (ETDEWEB)

    Sonney, R.; Vuataz, F.-D.

    2007-07-01

    The motivation for building the BDFGeotherm database was to put at the disposal of the geothermal community a comprehensive set of data on the deep fluids of Switzerland and some neighbouring areas. Researchers, engineers and anyone wanting to know the type and properties of geothermal fluids existing in a given area or underground system can find in BDFGeotherm a wealth of information that is generally widely dispersed and often difficult to reach. The BDFGeotherm database has been built with Microsoft Access and consists of nine tables connected by a primary key, the field 'Code'. A selection of parameters has been chosen from the following fields: general and geographical description, geology, hydrogeology, hydraulics, hydrochemistry and isotopes, and finally geothermal parameters. Data implemented in BDFGeotherm are in numerical or text format. Moreover, in the field 'Lithological log', one can visualize and save bitmap images containing lithological logs of boreholes. A total of 203 thermal springs or deep boreholes from 82 geothermal sites are implemented in BDFGeotherm. Among the 68 Swiss sites, a large majority are located in the northern part of the Jura range and in the upper Rhone valley (Wallis). Some sites in Germany (5), France (3) and Italy (6) were selected for the following reasons: they are located near Swiss hot springs or deep boreholes, have similar geological features, or represent a significant geothermal potential. Many types of queries can be carried out using any field of the database, and the results can be put into tables and printed, or exported and saved in other files. (author)

  6. ChemProt: a disease chemical biology database

    DEFF Research Database (Denmark)

    Taboureau, Olivier; Nielsen, Sonny Kim; Audouze, Karine Marie Laure

    2011-01-01

    Systems pharmacology is an emergent area that studies drug action across multiple scales of complexity, from molecular and cellular to tissue and organism levels. There is a critical need to develop network-based approaches to integrate the growing body of chemical biology knowledge with network...... biology. Here, we report ChemProt, a disease chemical biology database, which is based on a compilation of multiple chemical-protein annotation resources, as well as disease-associated protein-protein interactions (PPIs). We assembled more than 700 000 unique chemicals with biological annotation for 30...... evaluation of environmental chemicals, natural products and approved drugs, as well as the selection of new compounds based on their activity profile against most known biological targets, including those related to adverse drug events. Results from the disease chemical biology database associate citalopram...

  7. An Examination of Job Skills Posted on Internet Databases: Implications for Information Systems Degree Programs.

    Science.gov (United States)

    Liu, Xia; Liu, Lai C.; Koong, Kai S.; Lu, June

    2003-01-01

    Analysis of 300 information technology job postings in two Internet databases identified the following skill categories: programming languages (Java, C/C++, and Visual Basic were most frequent); website development (57% sought SQL and HTML skills); databases (nearly 50% required Oracle); networks (only Windows NT or wide-area/local-area networks);…

  8. Aspects of the design of distributed databases

    OpenAIRE

    Burlacu Irina-Andreea

    2011-01-01

    Distributed data: data processed by a system can be distributed among several computers, yet remain accessible from any of them. A distributed database design problem is presented that involves the development of a global model, a fragmentation, and a data allocation. The student is given a conceptual entity-relationship model for the database, a description of the transactions, and a generic network environment. A stepwise solution approach to this problem is shown, based on mean value a...

  9. Advanced Scientific Computing Research Network Requirements: ASCR Network Requirements Review Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Bacon, Charles [Argonne National Lab. (ANL), Argonne, IL (United States); Bell, Greg [ESnet, Berkeley, CA (United States); Canon, Shane [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dart, Eli [ESnet, Berkeley, CA (United States); Dattoria, Vince [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Goodwin, Dave [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Lee, Jason [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hicks, Susan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Holohan, Ed [Argonne National Lab. (ANL), Argonne, IL (United States); Klasky, Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lauzon, Carolyn [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Rogers, Jim [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shipman, Galen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Skinner, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Tierney, Brian [ESnet, Berkeley, CA (United States)

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  10. Road network selection for small-scale maps using an improved centrality-based algorithm

    Directory of Open Access Journals (Sweden)

    Roy Weiss

    2014-12-01

    Full Text Available The road network is one of the key feature classes in topographic maps and databases. In the task of deriving road networks for products at smaller scales, road network selection forms a prerequisite for all other generalization operators, and is thus a fundamental operation in the overall process of topographic map and database production. The objective of this work was to develop an algorithm for automated road network selection from a large-scale (1:10,000) to a small-scale database (1:200,000). The project was pursued in collaboration with swisstopo, the national mapping agency of Switzerland, with generic mapping requirements in mind. Preliminary experiments suggested that a selection algorithm based on betweenness centrality performed best for this purpose, yet also exposed problems. The main contribution of this paper thus consists of four extensions that address deficiencies of the basic centrality-based algorithm and lead to a significant improvement of the results. The first two extensions improve the formation of strokes concatenating the road segments, which is crucial since strokes provide the foundation upon which the network centrality measure is computed. Thus, the first extension ensures that roundabouts are detected and collapsed, thus avoiding interruptions of strokes by roundabouts, while the second introduces additional semantics in the process of stroke formation, allowing longer and more plausible strokes to be built. The third extension detects areas of high road density (i.e., urban areas) using density-based clustering and then locally increases the threshold of the centrality measure used to select road segments, such that more thinning takes place in those areas. Finally, since the basic algorithm tends to create dead-ends—which however are not tolerated in small-scale maps—the fourth extension reconnects these dead-ends to the main network, searching for the best path in the main heading of the dead-end.
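
    A hedged sketch of the basic centrality-based selection idea that the four extensions build on (using the networkx package; the toy graph, weights and threshold are illustrative, not swisstopo data, and the paper computes centrality on strokes rather than raw segments):

      import networkx as nx

      # Toy road graph: nodes are junctions, edge weights are segment lengths.
      G = nx.Graph()
      G.add_weighted_edges_from([
          ("a", "b", 1.0), ("b", "c", 1.0), ("c", "d", 1.0),
          ("b", "e", 2.0), ("e", "d", 2.0), ("d", "f", 1.0),
      ])

      # Edge betweenness centrality, using length as the shortest-path weight.
      centrality = nx.edge_betweenness_centrality(G, weight="weight")

      # Keep the most "important" segments for the small-scale map.
      threshold = 0.3
      selected = [edge for edge, score in centrality.items() if score >= threshold]
      print(sorted(selected))   # keeps the main through-route, drops the detour via 'e'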

  11. A gene network bioinformatics analysis for pemphigoid autoimmune blistering diseases.

    Science.gov (United States)

    Barone, Antonio; Toti, Paolo; Giuca, Maria Rita; Derchi, Giacomo; Covani, Ugo

    2015-07-01

    In this theoretical study, a text mining search and clustering analysis of data related to genes potentially involved in human pemphigoid autoimmune blistering diseases (PAIBD) was performed using web tools to create a gene/protein interaction network. The Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) database was employed to identify a final set of PAIBD-involved genes and to calculate the overall significant interactions among genes: for each gene, the weighted number of links (WNL) was registered, and a clustering procedure was performed using the WNL analysis. Genes were ranked in classes (leader, B, C, D and so on, down to orphans). An ontological analysis was performed for the set of 'leader' genes. Using the above-mentioned data network, 115 genes represented the final set; leader genes numbered 7 (intercellular adhesion molecule 1 (ICAM-1), interferon gamma (IFNG), interleukin (IL)-2, IL-4, IL-6, IL-8 and tumour necrosis factor (TNF)), class B genes numbered 13, and orphans numbered 24. The ontological analysis showed that the molecular action was focused on the extracellular space and cell surface, whereas the activation and regulation of the immune system were widely involved. Despite the limited knowledge of the pathologic phenomenon, reflected by the presence of 24 genes showing no direct or indirect protein-protein interactions, the network revealed significant pathways gathered in several subgroups: cellular components, molecular functions, biological processes, and pathways obtained from the Kyoto Encyclopaedia of Genes and Genomes (KEGG) database. The molecular basis for PAIBD was summarised and expanded, which may give researchers promising directions for the identification of new therapeutic targets.
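
    A minimal sketch of the weighted-number-of-links (WNL) ranking described above, assuming interactions are available as scored gene pairs in the style of STRING exports; the gene names come from the abstract, but the scores and class cut-offs are illustrative, not values from the study.

        # Sketch (assumed data layout): rank genes by weighted number of links (WNL)
        # from a list of scored interactions, then bucket them into classes.
        from collections import defaultdict

        interactions = [                     # (gene_a, gene_b, combined_score) - illustrative values
            ("TNF", "IL6", 0.99), ("TNF", "IL8", 0.95), ("IL6", "IL8", 0.90),
            ("ICAM1", "TNF", 0.92), ("IFNG", "IL4", 0.60),
        ]

        wnl = defaultdict(float)
        for a, b, score in interactions:
            wnl[a] += score                  # each link contributes its confidence score
            wnl[b] += score

        ranking = sorted(wnl.items(), key=lambda kv: kv[1], reverse=True)
        for gene, value in ranking:
            # arbitrary illustrative cut-offs for "leader" / "B" / lower classes
            label = "leader" if value >= 2.0 else ("B" if value >= 1.0 else "C or lower")
            print(f"{gene}\tWNL={value:.2f}\t{label}")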

  12. Shapley ratings in brain networks

    Directory of Open Access Journals (Sweden)

    Rolf Kötter

    2007-11-01

    Full Text Available Recent applications of network theory to brain networks as well as the expanding empirical databases of brain architecture spawn an interest in novel techniques for analyzing connectivity patterns in the brain. Treating individual brain structures as nodes in a directed graph model permits the application of graph theoretical concepts to the analysis of these structures within their large-scale connectivity networks. In this paper, we explore the application of concepts from graph and game theory toward this end. Specifically, we utilize the Shapley value principle, which assigns a rank to players in a coalition based upon their individual contributions to the collective profit of that coalition, to assess the contributions of individual brain structures to the graph derived from the global connectivity network. We report Shapley values for variations of a prefrontal network, as well as for a visual cortical network, which had both been extensively investigated previously. This analysis highlights particular nodes as strong or weak contributors to global connectivity. To understand the nature of their contribution, we compare the Shapley values obtained from these networks and appropriate controls to other previously described nodal measures of structural connectivity. We find a strong correlation between Shapley values and both betweenness centrality and connection density. Moreover, a stepwise multiple linear regression analysis indicates that approximately 79% of the variance in Shapley values obtained from random networks can be explained by betweenness centrality alone. Finally, we investigate the effects of local lesions on the Shapley ratings, showing that the present networks have an immense structural resistance to degradation. We discuss our results highlighting the use of such measures for characterizing the organization and functional role of brain networks.
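
    The following Python sketch approximates Shapley ratings for nodes by Monte Carlo sampling of node orderings. The characteristic function used here (number of edges within the induced subgraph) is a simple stand-in for a connectivity game, not necessarily the one used by the authors.

        # Monte Carlo sketch of Shapley ratings for nodes in a directed graph.
        import random
        import networkx as nx

        def coalition_value(G, members):
            return G.subgraph(members).number_of_edges()

        def shapley_ratings(G, n_permutations=2000, seed=0):
            rng = random.Random(seed)
            nodes = list(G.nodes())
            shapley = {n: 0.0 for n in nodes}
            for _ in range(n_permutations):
                rng.shuffle(nodes)
                coalition, value = set(), 0.0
                for node in nodes:                       # marginal contribution along the permutation
                    coalition.add(node)
                    new_value = coalition_value(G, coalition)
                    shapley[node] += new_value - value
                    value = new_value
            return {n: s / n_permutations for n, s in shapley.items()}

        # Toy example: a hub node receives the highest rating.
        G = nx.DiGraph([("hub", "a"), ("a", "hub"), ("hub", "b"), ("b", "c")])
        print(sorted(shapley_ratings(G).items(), key=lambda kv: -kv[1]))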

  13. Nuclear Physics Science Network Requirements Workshop, May 6 and 7, 2008. Final Report

    International Nuclear Information System (INIS)

    Tierney, Ed. Brian L; Dart, Ed. Eli; Carlson, Rich; Dattoria, Vince; Ernest, Michael; Hitchcock, Daniel; Johnston, William; Kowalski, Andy; Lauret, Jerome; Maguire, Charles; Olson, Douglas; Purschke, Martin; Rai, Gulshan; Watson, Chip; Vale, Carla

    2008-01-01

    to manage those transfers effectively. Network reliability is also becoming more important as there is often a narrow window between data collection and data archiving when transfer and analysis can be done. The instruments do not stop producing data, so extended network outages can result in data loss due to analysis pipeline stalls. Finally, as the scope of collaboration continues to increase, collaboration tools such as audio and video conferencing are becoming ever more critical to the productivity of scientific collaborations

  14. Information retrieval system of nuclear power plant database (PPD) user's guide

    International Nuclear Information System (INIS)

    Izumi, Fumio; Horikami, Kunihiko; Kobayashi, Kensuke.

    1990-12-01

    A nuclear power plant database (PPD) and its retrieval system have been developed. The database contains a large amount of safety design data on nuclear power plants operating and planned in Japan. The information stored in the database can be retrieved at high speed, whenever it is needed, by use of the retrieval system. The report is a user's manual for the system, which accesses the database from a display unit of the JAERI computer network system. (author)

  15. [The future of clinical laboratory database management system].

    Science.gov (United States)

    Kambe, M; Imidy, D; Matsubara, A; Sugimoto, Y

    1999-09-01

    To assess the present status of the clinical laboratory database management system, the difference between the Clinical Laboratory Information System and Clinical Laboratory System was explained in this study. Although three kinds of database management systems (DBMS) were shown including the relational model, tree model and network model, the relational model was found to be the best DBMS for the clinical laboratory database based on our experience and developments of some clinical laboratory expert systems. As a future clinical laboratory database management system, the IC card system connected to an automatic chemical analyzer was proposed for personal health data management and a microscope/video system was proposed for dynamic data management of leukocytes or bacteria.

  16. The Knowledge-Integrated Network Biomarkers Discovery for Major Adverse Cardiac Events

    Science.gov (United States)

    Jin, Guangxu; Zhou, Xiaobo; Wang, Honghui; Zhao, Hong; Cui, Kemi; Zhang, Xiang-Sun; Chen, Luonan; Hazen, Stanley L.; Li, King; Wong, Stephen T. C.

    2010-01-01

    The mass spectrometry (MS) technology in clinical proteomics is very promising for the discovery of new biomarkers for disease management. To overcome the obstacle of data noise in MS analysis, we proposed a new approach of knowledge-integrated biomarker discovery using data from Major Adverse Cardiac Events (MACE) patients. We first built a cardiovascular-related network based on protein information coming from protein annotations in Uniprot, protein–protein interaction (PPI) data, and signal transduction databases. Distinct from previous machine learning methods in MS data processing, we then used statistical methods to discover biomarkers in the cardiovascular-related network. Through the tradeoff between known protein information and data noise in the mass spectrometry data, we could finally identify high-confidence biomarkers. Most importantly, aided by the protein–protein interaction network, that is, the cardiovascular-related network, we proposed a new type of biomarker, the network biomarker, composed of a set of proteins and the interactions among them. The candidate network biomarkers can classify the two groups of patients more accurately than current single-protein biomarkers that do not take biological molecular interactions into consideration. PMID:18665624
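
    A toy sketch of the network-biomarker idea: a patient-level feature is formed by aggregating MS intensities over a small set of interacting proteins, and this feature is then fed to a classifier. All protein names, intensities, and the subnetwork itself are invented for illustration, and scikit-learn's SVM is used only as a stand-in classifier.

        # Sketch: score each patient by aggregating MS intensities over a protein
        # subnetwork, then classify MACE vs. control. Values are invented.
        import numpy as np
        from sklearn.svm import SVC

        proteins = ["MPO", "APOA1", "FGA", "CRP"]
        subnetwork = ["MPO", "CRP"]                      # a hypothetical interacting pair

        # rows = patients, columns = protein intensities (toy values)
        X = np.array([
            [5.1, 2.0, 3.3, 6.2],   # MACE
            [4.8, 2.2, 3.1, 5.9],   # MACE
            [1.2, 2.1, 3.2, 1.5],   # control
            [1.0, 1.9, 3.4, 1.3],   # control
        ])
        y = np.array([1, 1, 0, 0])

        idx = [proteins.index(p) for p in subnetwork]
        network_feature = X[:, idx].mean(axis=1).reshape(-1, 1)   # one feature per patient

        clf = SVC(kernel="linear").fit(network_feature, y)
        print(clf.predict(network_feature))                        # recovers the two groups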

  17. A Sandia telephone database system

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, S.D.; Tolendino, L.F.

    1991-08-01

    Sandia National Laboratories, Albuquerque, may soon have more responsibility for the operation of its own telephone system. The processes that constitute providing telephone service can all be improved through the use of a central data information system. We studied these processes, determined the requirements for a database system, then designed the first stages of a system that meets our needs for work order handling, trouble reporting, and ISDN hardware assignments. The design was based on an extensive set of applications that have been used for five years to manage the Sandia secure data network. The system utilizes an Ingres database management system and is programmed using the Application-By-Forms tools.

  18. ISSUES IN MOBILE DISTRIBUTED REAL TIME DATABASES: PERFORMANCE AND REVIEW

    OpenAIRE

    VISHNU SWAROOP,; Gyanendra Kumar Gupta,; UDAI SHANKER

    2011-01-01

    The increase in small, handy electronic devices has made computing more popular and useful in business. Tremendous advances in wireless networks and portable computing devices have led to the development of mobile computing. Support for real-time database systems, depending upon timing constraints, the availability of distributed databases, and ubiquitous computing have pulled the mobile database concept into a new form of technology: mobile distributed ...

  19. Hidden neural networks: application to speech recognition

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1998-01-01

    We evaluate the hidden neural network HMM/NN hybrid on two speech recognition benchmark tasks; (1) task independent isolated word recognition on the Phonebook database, and (2) recognition of broad phoneme classes in continuous speech from the TIMIT database. It is shown how hidden neural networks...

  20. Evaluation report on research and development of a database system for mutual computer operation; Denshi keisanki sogo un'yo database system no kenkyu kaihatsu ni kansuru hyoka hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1992-03-01

    This paper describes an evaluation of the research and development of a database system for mutual computer operation, with respect to distributed database technology, multi-media technology, high-reliability technology, and mutual operation network system technology. A large number of forward-looking research results were derived, such as the issues of distribution and utilization patterns of distributed databases, structuring of data for multi-media information, retrieval systems, flexible and high-level utilization of the network, and the issues in database protection. These achievements have been disclosed widely to the public. The largest feature of this project is its aim of forming a network system that can be operated mutually in a multi-vendor environment. Therefore, the research and development was carried out in the spirit of openness to the public and international cooperation. These efforts are represented by the organization of the rule establishment committee, the execution of mutual interconnection experiments (including demonstration evaluation), and the development of implementation rules based on ISO's Open Systems Interconnection (OSI). The results have been compiled into the JIS as the basic reference model for open systems interconnection, and the targets shown in the basic plan have been achieved sufficiently. (NEDO)

  1. Construction of a bibliographic information database for the nuclear science and engineering

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Whan; Kim, Soon Ja; Choi, Kwang [Korea Atomic Energy Research Institute, Taejon (Korea)

    1997-12-01

    The major goal of this project is to construct a database of nuclear science and engineering information materials, to support the R and D activities of users in this field, and finally to support the KRISTAL (Korea Research Information for Science and Technology Access Line) DB through KREONet (Korea Research Environment Open Network), one of the five national information networks. The contents of this project are as follows: (1) materials selection and collection, (2) indexing and abstract preparation, (3) data input and transmission, and (4) document delivery service. In this seventh year, a total of 45,000 input records were added to the existing SATURN DB. These records cover articles from nuclear-related core journals, proceedings, seminars, research reports, etc. Through the Web, the project allows users to obtain the information they need and to receive requested materials online. It thus gives users the chance not only to promote the effectiveness of their R and D activities, but also to avoid duplicated research work. (author). 1 fig., 1 tab.

  2. Guide to QSPIRES and the particle physics databases on SLACVM

    International Nuclear Information System (INIS)

    Galic, H.

    1992-06-01

    SLAC, in collaboration with DESY, LBL, and several other institutions, maintains many databases of interest to the high energy physics community. You do not need to have a computer account at SLAC to search through some of these databases; they can be reached via the remote server QSPIRES, set up at the BITNET node SLACVM. This text describes, in great detail, how to search the popular HEP database via QSPIRES; HEP contains bibliographic summaries of more than 200,000 particle physics papers. Other databases available remotely are also reviewed, and the registration procedure for those who would like to use QSPIRES is explained. To utilize QSPIRES, you must have access to a large computer network. It is not necessary that the network be BITNET; it may be a different one. However, a gateway must exist between your network and BITNET. It should be mentioned that BITNET users have some advantages in searching, e.g., the possibility of interactive communication with QSPIRES. Therefore, if you have a choice, let a BITNET machine be your base for QSPIRES searches. You will also need authorization to use HEP and other databases, and should know the set of relevant commands and rules. The authorization is free, the commands are simple, and BITNET can be reached from all over the world. Join, therefore, the group of thousands of satisfied users, log on to your local computer, and from the comfort of your office or home find, e.g., the number of citations of your most famous high energy physics paper

  3. Guide to QSPIRES and the particle physics databases on SLACVM

    Energy Technology Data Exchange (ETDEWEB)

    Galic, H.

    1992-06-01

    SLAC, in collaboration with DESY, LBL, and several other institutions, maintains many databases of interest to the high energy physics community. You do not need to have a computer account at SLAC to search through some of these databases; they can be reached via the remote server QSPIRES, set up at the BITNET node SLACVM. This text describes, in great detail, how to search the popular HEP database via QSPIRES; HEP contains bibliographic summaries of more than 200,000 particle physics papers. Other databases available remotely are also reviewed, and the registration procedure for those who would like to use QSPIRES is explained. To utilize QSPIRES, you must have access to a large computer network. It is not necessary that the network be BITNET; it may be a different one. However, a gateway must exist between your network and BITNET. It should be mentioned that BITNET users have some advantages in searching, e.g., the possibility of interactive communication with QSPIRES. Therefore, if you have a choice, let a BITNET machine be your base for QSPIRES searches. You will also need authorization to use HEP and other databases, and should know the set of relevant commands and rules. The authorization is free, the commands are simple, and BITNET can be reached from all over the world. Join, therefore, the group of thousands of satisfied users, log on to your local computer, and from the comfort of your office or home find, e.g., the number of citations of your most famous high energy physics paper.

  4. The NAGRA/PSI thermochemical database: new developments

    International Nuclear Information System (INIS)

    Hummel, W.; Berner, U.; Thoenen, T.; Pearson, F.J.Jr.

    2000-01-01

    The development of a high quality thermochemical database for performance assessment is a scientifically fascinating and demanding task, and is not simply a matter of collecting and recording numbers. The final product can be visualised as a complex building with different storeys representing different levels of complexity. The present status report illustrates the various building blocks which we believe are integral to such a database structure. (authors)

  5. The NAGRA/PSI thermochemical database: new developments

    Energy Technology Data Exchange (ETDEWEB)

    Hummel, W.; Berner, U.; Thoenen, T. [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Pearson, F.J.Jr. [Ground-Water Geochemistry, New Bern, NC (United States)

    2000-07-01

    The development of a high quality thermochemical database for performance assessment is a scientifically fascinating and demanding task, and is not simply a matter of collecting and recording numbers. The final product can be visualised as a complex building with different storeys representing different levels of complexity. The present status report illustrates the various building blocks which we believe are integral to such a database structure. (authors)

  6. Requirements for the next generation of nuclear databases and services

    Energy Technology Data Exchange (ETDEWEB)

    Pronyaev, Vladimir; Zerkin, Viktor; Muir, Douglas [International Atomic Energy Agency, Nuclear Data Section, Vienna (Austria); Winchell, David; Arcilla, Ramon [Brookhaven National Laboratory, National Nuclear Data Center, Upton, NY (United States)

    2002-08-01

    The use of relational database technology and general requirements for the next generation of nuclear databases and services are discussed. These requirements take into account an increased number of co-operating data centres working on diverse hardware and software platforms and users with different data-access capabilities. It is argued that the introduction of programming standards will allow the development of nuclear databases and data retrieval tools in a heterogeneous hardware and software environment. The functionality of this approach was tested with full-scale nuclear databases installed on different platforms having different operating and database management systems. User access through local network, internet, or CD-ROM has been investigated. (author)

  7. NONATObase: a database for Polychaeta (Annelida) from the Southwestern Atlantic Ocean.

    Science.gov (United States)

    Pagliosa, Paulo R; Doria, João G; Misturini, Dairana; Otegui, Mariana B P; Oortman, Mariana S; Weis, Wilson A; Faroni-Perez, Larisse; Alves, Alexandre P; Camargo, Maurício G; Amaral, A Cecília Z; Marques, Antonio C; Lana, Paulo C

    2014-01-01

    Networks can greatly advance data sharing attitudes by providing organized and useful data sets on marine biodiversity in a friendly and shared scientific environment. NONATObase, the interactive database on polychaetes presented herein, will provide new macroecological and taxonomic insights into the Southwestern Atlantic region. The database was developed by the NONATO network, a team of South American researchers, who integrated available information on polychaetes from between 5°N and 80°S in the Atlantic Ocean and near the Antarctic. The guiding principle of the database is to keep free and open access to data based on partnerships. Its architecture consists of a relational database integrated within a MySQL and PHP framework. Its web application allows access to the data from three different directions: species (qualitative data), abundance (quantitative data) and data set (reference data). The database has built-in functionality, such as filtering data by user-defined taxonomic levels, site characteristics, sample, sampler, and mesh size used. Considering that there are still many taxonomic issues related to poorly known regional fauna, a scientific committee was created to work out consistent solutions to current misidentifications and the equivocal taxonomic status of some species. Expertise from this committee will be incorporated by NONATObase continually. The use of quantitative data was made possible by standardization of the sample unit. All data, maps of distribution and references from a data set or a specified query can be visualized and exported to a commonly used data format in statistical analysis or reference manager software. The NONATO network has launched NONATObase, a valuable resource for marine ecologists and taxonomists. The database is expected to grow in functionality as it proves useful, particularly regarding the challenges of dealing with molecular genetic data and tools to assess the effects of global environment change.
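
    A simplified, assumed relational layout illustrating the three access directions (species, abundance, and data set) using SQLite; the table and column names are hypothetical and do not reflect the actual NONATObase (MySQL/PHP) schema.

        # Assumed, simplified relational layout for species / abundance / data set access.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE dataset  (id INTEGER PRIMARY KEY, reference TEXT, sampler TEXT, mesh_mm REAL);
        CREATE TABLE site     (id INTEGER PRIMARY KEY, latitude REAL, longitude REAL, depth_m REAL);
        CREATE TABLE species  (id INTEGER PRIMARY KEY, family TEXT, name TEXT);
        CREATE TABLE abundance(dataset_id INTEGER REFERENCES dataset(id),
                               site_id    INTEGER REFERENCES site(id),
                               species_id INTEGER REFERENCES species(id),
                               individuals_per_m2 REAL);
        """)
        conn.execute("INSERT INTO species VALUES (1, 'Nereididae', 'Alitta succinea')")
        conn.execute("INSERT INTO site VALUES (1, -25.5, -48.3, 12.0)")
        conn.execute("INSERT INTO dataset VALUES (1, 'Example survey 2010', 'van Veen grab', 0.5)")
        conn.execute("INSERT INTO abundance VALUES (1, 1, 1, 37.5)")

        # Query by taxonomic level and mesh size, as the web application allows.
        rows = conn.execute("""
        SELECT s.name, a.individuals_per_m2
        FROM abundance a JOIN species s ON s.id = a.species_id
                         JOIN dataset d ON d.id = a.dataset_id
        WHERE s.family = 'Nereididae' AND d.mesh_mm <= 0.5
        """).fetchall()
        print(rows)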

  8. PubData: search engine for bioinformatics databases worldwide

    OpenAIRE

    Vand, Kasra; Wahlestedt, Thor; Khomtchouk, Kelly; Sayed, Mohammed; Wahlestedt, Claes; Khomtchouk, Bohdan

    2016-01-01

    We propose a search engine and file retrieval system for all bioinformatics databases worldwide. PubData searches biomedical data in a user-friendly fashion similar to how PubMed searches biomedical literature. PubData is built on novel network programming, natural language processing, and artificial intelligence algorithms that can patch into the file transfer protocol servers of any user-specified bioinformatics database, query its contents, retrieve files for download, and adapt to the use...
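
    A minimal sketch of the "connect to an FTP server and list its files" step using only the Python standard library; the host and path below are placeholders, not servers actually indexed by PubData.

        # Sketch: anonymous FTP listing with an optional filename filter.
        from ftplib import FTP

        def list_remote_files(host, path, pattern=""):
            files = []
            with FTP(host) as ftp:              # anonymous login
                ftp.login()
                ftp.cwd(path)
                for name in ftp.nlst():
                    if pattern.lower() in name.lower():
                        files.append(name)
            return files

        # Hypothetical usage (placeholder host and path):
        # for f in list_remote_files("ftp.example-bio-db.org", "/pub/releases", "fasta"):
        #     print(f)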

  9. Historical database for estimating flows in a water supply network; Base de datos historica para estimacion de caudales en una red de abastecimiento de agua

    Energy Technology Data Exchange (ETDEWEB)

    Menendez Martinez, A.; Ariel Gomez Gutierrez, A.; Alvarez Ramos, I.; Biscarri Trivino, F. [Universidad de Sevilla (Spain)

    2000-07-01

    Monitoring the flows managed by water supply companies involves processing huge amounts of data. These data also have to correspond to the topology of the network in a way that is consistent with the data collection time. The specific purpose database described in this article was developed to meet such requirements. (Author) 4 refs.

  10. Location in Mobile Networks Using Database Correlation (DCM

    Directory of Open Access Journals (Sweden)

    Michal Mada

    2008-01-01

    Full Text Available The article presents one of the methods of location (DCM), which is suitable for use in urban environments, where other methods are less accurate. The principle of this method is the comparison of measured signal samples with samples stored in a database. The article then deals with methods for processing the data and other possibilities for correcting the location estimate.
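
    A minimal sketch of the database-correlation principle: a measured vector of signal strengths is compared against stored fingerprints and the closest match is returned. All coordinates and signal values below are invented for illustration.

        # Sketch of database correlation: nearest fingerprint in signal space.
        import math

        fingerprints = {                       # location -> RSS from three base stations (dBm)
            (48.14, 17.10): [-71, -80, -95],
            (48.15, 17.11): [-65, -88, -90],
            (48.16, 17.12): [-83, -72, -85],
        }

        def locate(measured):
            def distance(stored):
                return math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, stored)))
            return min(fingerprints, key=lambda loc: distance(fingerprints[loc]))

        print(locate([-66, -87, -91]))         # -> the closest stored location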

  11. Migration from relational to NoSQL database

    Science.gov (United States)

    Ghotiya, Sunita; Mandal, Juhi; Kandasamy, Saravanakumar

    2017-11-01

    Data generated by various real-time applications, social networking sites and sensor devices is huge in volume and unstructured, which makes it difficult for relational database management systems to handle. Data is a very precious component of any application and needs to be analysed after arranging it in some structure. Relational databases are only able to deal with structured data, so there is a need for NoSQL database management systems, which can also deal with semi-structured data. Relational databases provide the easiest way to manage data, but as the use of NoSQL increases it is becoming necessary to migrate data from relational to NoSQL databases. Various frameworks have been proposed previously that provide mechanisms for migrating data stored in SQL warehouses, as well as middle-layer solutions that allow data to be stored in NoSQL databases to handle unstructured data. This paper provides a literature review of some of the recent approaches proposed by various researchers to migrate data from relational to NoSQL databases. Some researchers have proposed mechanisms for the co-existence of NoSQL and relational databases. This paper provides a summary of mechanisms which can be used for mapping data stored in relational databases to NoSQL databases. Various techniques for data transformation and middle-layer solutions are summarised in the paper.
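
    One common migration step, sketched under assumed table and column names: a parent row and its child rows are folded into a single nested document suitable for a document store.

        # Sketch: fold a relational parent row and its child rows into one document.
        import json

        customer = {"id": 7, "name": "Ada", "city": "Lovelace"}
        orders = [
            {"id": 101, "customer_id": 7, "total": 19.99},
            {"id": 102, "customer_id": 7, "total": 5.00},
        ]

        def to_document(parent, children, fk="customer_id"):
            doc = dict(parent)                                     # copy the parent row
            doc["orders"] = [
                {k: v for k, v in child.items() if k != fk}        # drop the foreign key
                for child in children if child[fk] == parent["id"]
            ]
            return doc

        print(json.dumps(to_document(customer, orders), indent=2))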

  12. Technical Network

    CERN Multimedia

    2007-01-01

    In order to optimise the management of the Technical Network (TN), to facilitate understanding of the purpose of devices connected to the TN and to improve security incident handling, the Technical Network Administrators and the CNIC WG have asked IT/CS to verify the "description" and "tag" fields of devices connected to the TN. Therefore, persons responsible for systems connected to the TN will receive e-mails from IT/CS asking them to add the corresponding information in the network database at "network.cern.ch". Thank you very much for your cooperation. The Technical Network Administrators & the CNIC WG

  13. PlantNATsDB: a comprehensive database of plant natural antisense transcripts.

    Science.gov (United States)

    Chen, Dijun; Yuan, Chunhui; Zhang, Jian; Zhang, Zhao; Bai, Lin; Meng, Yijun; Chen, Ling-Ling; Chen, Ming

    2012-01-01

    Natural antisense transcripts (NATs), as one type of regulatory RNAs, occur prevalently in plant genomes and play significant roles in physiological and pathological processes. Although their important biological functions have been reported widely, a comprehensive database has been lacking until now. Consequently, we constructed a plant NAT database (PlantNATsDB) involving approximately 2 million NAT pairs in 69 plant species. GO annotation and the high-throughput small RNA sequencing data currently available were integrated to investigate the biological function of NATs. PlantNATsDB provides various user-friendly web interfaces to facilitate the presentation of NATs and an integrated, graphical network browser to display the complex networks formed by different NATs. Moreover, a 'Gene Set Analysis' module based on GO annotation was designed to dig out the statistically significantly overrepresented GO categories from a specific NAT network. PlantNATsDB is currently the most comprehensive resource of NATs in the plant kingdom, which can serve as a reference database to investigate the regulatory function of NATs. The PlantNATsDB is freely available at http://bis.zju.edu.cn/pnatdb/.

  14. SU-E-T-51: Bayesian Network Models for Radiotherapy Error Detection

    International Nuclear Information System (INIS)

    Kalet, A; Phillips, M; Gennari, J

    2014-01-01

    Purpose: To develop a probabilistic model of radiotherapy plans using Bayesian networks that will detect potential errors in radiation delivery. Methods: Semi-structured interviews with medical physicists and other domain experts were employed to generate a set of layered nodes and arcs forming a Bayesian Network (BN) which encapsulates relevant radiotherapy concepts and their associated interdependencies. Concepts in the final network were limited to those whose parameters are represented in the institutional database at a level significant enough to develop mathematical distributions. The concept-relation knowledge base was constructed using the Web Ontology Language (OWL) and translated into Hugin Expert Bayes Network files via the RHugin package in the R statistical programming language. A subset of de-identified data derived from a Mosaiq relational database representing 1937 unique prescription cases was processed and pre-screened for errors and then used by the Hugin implementation of the Estimation-Maximization (EM) algorithm for machine learning of all parameter distributions. Individual networks were generated for each of several commonly treated anatomic regions identified by ICD-9 neoplasm categories including lung, brain, lymphoma, and female breast. Results: The resulting Bayesian networks represent a large part of the probabilistic knowledge inherent in treatment planning. By populating the networks entirely with data captured from a clinical oncology information management system over the course of several years of normal practice, we were able to create accurate probability tables with no additional time spent by experts or clinicians. These probabilistic descriptions of treatment planning allow one to check if a treatment plan is within the normal scope of practice, given some initial set of clinical evidence, and thereby detect potential outliers to be flagged for further investigation. Conclusion: The networks developed here support the

  15. SU-E-T-51: Bayesian Network Models for Radiotherapy Error Detection

    Energy Technology Data Exchange (ETDEWEB)

    Kalet, A; Phillips, M; Gennari, J [University of Washington, Seattle, WA (United States)

    2014-06-01

    Purpose: To develop a probabilistic model of radiotherapy plans using Bayesian networks that will detect potential errors in radiation delivery. Methods: Semi-structured interviews with medical physicists and other domain experts were employed to generate a set of layered nodes and arcs forming a Bayesian Network (BN) which encapsulates relevant radiotherapy concepts and their associated interdependencies. Concepts in the final network were limited to those whose parameters are represented in the institutional database at a level significant enough to develop mathematical distributions. The concept-relation knowledge base was constructed using the Web Ontology Language (OWL) and translated into Hugin Expert Bayes Network files via the RHugin package in the R statistical programming language. A subset of de-identified data derived from a Mosaiq relational database representing 1937 unique prescription cases was processed and pre-screened for errors and then used by the Hugin implementation of the Estimation-Maximization (EM) algorithm for machine learning of all parameter distributions. Individual networks were generated for each of several commonly treated anatomic regions identified by ICD-9 neoplasm categories including lung, brain, lymphoma, and female breast. Results: The resulting Bayesian networks represent a large part of the probabilistic knowledge inherent in treatment planning. By populating the networks entirely with data captured from a clinical oncology information management system over the course of several years of normal practice, we were able to create accurate probability tables with no additional time spent by experts or clinicians. These probabilistic descriptions of treatment planning allow one to check if a treatment plan is within the normal scope of practice, given some initial set of clinical evidence, and thereby detect potential outliers to be flagged for further investigation. Conclusion: The networks developed here support the
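
    A toy sketch of the error-detection idea described above: given conditional probabilities learned from historical plans, a new plan whose combination of evidence is improbable is flagged for review. The tables, dose bins, and threshold below are invented for illustration and are far simpler than the learned networks in the abstract.

        # Toy plausibility check against conditional probability tables (invented values).
        cpt_dose_given_site = {                      # P(dose bin | anatomic site)
            "lung":   {"low": 0.15, "standard": 0.80, "high": 0.05},
            "breast": {"low": 0.10, "standard": 0.85, "high": 0.05},
        }
        p_site = {"lung": 0.6, "breast": 0.4}        # P(anatomic site)

        def plan_probability(site, dose_bin):
            return p_site[site] * cpt_dose_given_site[site][dose_bin]

        def flag_outlier(site, dose_bin, threshold=0.03):
            p = plan_probability(site, dose_bin)
            return p < threshold, p

        print(flag_outlier("lung", "standard"))      # (False, 0.48) -> within normal practice
        print(flag_outlier("breast", "high"))        # (True, 0.02)  -> flagged for review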

  16. Transduction motif analysis of gastric cancer based on a human signaling network

    Energy Technology Data Exchange (ETDEWEB)

    Liu, G.; Li, D.Z.; Jiang, C.S.; Wang, W. [Fuzhou General Hospital of Nanjing Command, Department of Gastroenterology, Fuzhou (China)

    2014-04-04

    To investigate signal regulation models of gastric cancer, databases and literature were used to construct the signaling network in humans. Topological characteristics of the network were analyzed by CytoScape. After marking gastric cancer-related genes extracted from the CancerResource, GeneRIF, and COSMIC databases, the FANMOD software was used for the mining of gastric cancer-related motifs in a network with three vertices. The significant motif difference method was adopted to identify significantly different motifs in the normal and cancer states. Finally, we conducted a series of analyses of the significantly different motifs, including gene ontology, function annotation of genes, and model classification. A human signaling network was constructed, with 1643 nodes and 5089 regulating interactions. The network was configured to have the characteristics of other biological networks. There were 57,942 motifs marked with gastric cancer-related genes out of a total of 69,492 motifs, and 264 motifs were selected as significantly different motifs by calculating the significant motif difference (SMD) scores. Genes in significantly different motifs were mainly enriched in functions associated with cancer genesis, such as regulation of cell death, amino acid phosphorylation of proteins, and intracellular signaling cascades. The top five significantly different motifs were mainly cascade and positive feedback types. Almost all genes in the five motifs were cancer related, including EPOR, MAPK14, BCL2L1, KRT18, PTPN6, CASP3, TGFBR2, AR, and CASP7. The development of cancer might be curbed by inhibiting signal transductions upstream and downstream of the selected motifs.
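
    A sketch of a three-node motif census on a small directed network using networkx. The gene names are taken from the abstract, but the edges among them are illustrative, and the census shown here is only the counting step that would precede a significant-motif-difference comparison.

        # Sketch of a FANMOD-style three-node motif census on a directed network.
        import networkx as nx

        G = nx.DiGraph([
            ("EPOR", "MAPK14"), ("MAPK14", "CASP3"), ("EPOR", "CASP3"),   # feed-forward-like triad
            ("TGFBR2", "AR"), ("AR", "TGFBR2"),                           # mutual regulation
            ("PTPN6", "CASP7"),
        ])

        census = nx.triadic_census(G)          # counts of the 16 directed triad types
        for triad_type, count in sorted(census.items()):
            if count:
                print(triad_type, count)
        # Comparing such counts between normal and cancer states (e.g., via z-scores
        # against randomized networks) underlies a significant-motif-difference test.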

  17. Waste management and technologies analytical database project for Los Alamos National Laboratory/Department of Energy. Final report, June 7, 1993--June 15, 1994

    International Nuclear Information System (INIS)

    1995-01-01

    The Waste Management and Technologies Analytical Database System (WMTADS), supported by the Department of Energy's (DOE) Office of Environmental Management (EM), Office of Technology Development (EM-50), was developed and based at the Los Alamos National Laboratory (LANL), Los Alamos, New Mexico, to collect, identify, organize, track, update, and maintain information related to existing, available, developing, and planned technologies to characterize, treat, and handle mixed, hazardous, and radioactive waste for storage and disposal, in support of EM strategies and goals and of focus area projects. WMTADS was developed as a centralized source of on-line information regarding technologies for environmental management processes. It can be accessed with a computer, modem, phone line, and communications software through a Local Area Network (LAN) and server connectivity on the Internet, the world's largest computer network; with file transfer protocol (FTP) it can also be used to transfer files from the server to the user's computer over the Internet and the World Wide Web (WWW) using Mosaic

  18. Extended functions of the database machine FREND for interactive systems

    International Nuclear Information System (INIS)

    Hikita, S.; Kawakami, S.; Sano, K.

    1984-01-01

    Well-designed visual interfaces encourage non-expert users to use relational database systems. In systems such as office automation systems or engineering database systems, non-expert users interactively access the database from visual terminals. Depending on the situation, some users may want exclusive use of the database while other users may share it. Because those jobs take a long time to complete, concurrency control must be well designed to enhance concurrency. The extended method of concurrency control of FREND is presented in this paper. The authors assume that systems are composed of workstations, a local area network, and the database machine FREND. This paper also stresses that the workstations and FREND must cooperate to complete concurrency control for interactive applications

  19. The magnet database system

    International Nuclear Information System (INIS)

    Ball, M.J.; Delagi, N.; Horton, B.; Ivey, J.C.; Leedy, R.; Li, X.; Marshall, B.; Robinson, S.L.; Tompkins, J.C.

    1992-01-01

    The Test Department of the Magnet Systems Division of the Superconducting Super Collider Laboratory (SSCL) is developing a central database of SSC magnet information that will be available to all magnet scientists at the SSCL or elsewhere, via network connections. The database contains information on the magnets' major components, configuration information (specifying which individual items were used in each cable, coil, and magnet), measurements made at major fabrication stages, and the test results on completed magnets. These data will facilitate the correlation of magnet performance with the properties of its constituents. Recent efforts have focused on the development of procedures for user-friendly access to the data, including displays in the format of the production "traveler" data sheets, standard summary reports, and a graphical interface for ad hoc queries and plots

  20. Technical Network

    CERN Multimedia

    2007-01-01

    In order to optimize the management of the Technical Network (TN), to facilitate understanding of the purpose of devices connected to the TN, and to improve security incident handling, the Technical Network Administrators and the CNIC WG have asked IT/CS to verify the "description" and "tag" fields of devices connected to the TN. Therefore, persons responsible for systems connected to the TN will receive email notifications from IT/CS asking them to add the corresponding information in the network database. Thank you very much for your cooperation. The Technical Network Administrators & the CNIC WG

  1. Structure, function, and control of the human musculoskeletal network.

    Directory of Open Access Journals (Sweden)

    Andrew C Murphy

    2018-01-01

    Full Text Available The human body is a complex organism, the gross mechanical properties of which are enabled by an interconnected musculoskeletal network controlled by the nervous system. The nature of musculoskeletal interconnection facilitates stability, voluntary movement, and robustness to injury. However, a fundamental understanding of this network and its control by neural systems has remained elusive. Here we address this gap in knowledge by utilizing medical databases and mathematical modeling to reveal the organizational structure, predicted function, and neural control of the musculoskeletal system. We constructed a highly simplified whole-body musculoskeletal network in which single muscles connect to multiple bones via both origin and insertion points. We demonstrated that, using this simplified model, a muscle's role in this network could offer a theoretical prediction of the susceptibility of surrounding components to secondary injury. Finally, we illustrated that sets of muscles cluster into network communities that mimic the organization of control modules in primary motor cortex. This novel formalism for describing interactions between the muscular and skeletal systems serves as a foundation to develop and test therapeutic responses to injury, inspiring future advances in clinical treatments.
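
    A sketch of the muscle-bone representation described above as a bipartite graph, with a projection onto bones as one crude way to reason about how the effects of an injury might spread. The attachments listed are a tiny illustrative subset, not the paper's whole-body model.

        # Sketch: bipartite muscle-bone network and its projection onto bones.
        import networkx as nx
        from networkx.algorithms import bipartite

        attachments = {                          # muscle -> bones (origin and insertion), illustrative
            "biceps_brachii": ["scapula", "radius"],
            "brachialis":     ["humerus", "ulna"],
            "deltoid":        ["scapula", "clavicle", "humerus"],
        }

        B = nx.Graph()
        for muscle, bones in attachments.items():
            B.add_node(muscle, bipartite="muscle")
            for bone in bones:
                B.add_node(bone, bipartite="bone")
                B.add_edge(muscle, bone)

        # Projecting onto bones links bones that share a muscle; a bone's degree in this
        # projection is one crude proxy for how widely a perturbation could propagate.
        bones = {n for n, d in B.nodes(data=True) if d["bipartite"] == "bone"}
        bone_projection = bipartite.projected_graph(B, bones)
        print(sorted(bone_projection.degree(), key=lambda kv: -kv[1]))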

  2. Federated or cached searches: providing expected performance from multiple invasive species databases

    Science.gov (United States)

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-01-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search "deep" web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods and show that federated searches will not provide the performance and flexibility required by users, and that a central cache of the data is required to improve performance.

  3. Final DIGITAL FLOOD INSURANCE RATE MAP DATABASE, McLean County, ILLINOIS USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  4. Database setup insuring radiopharmaceuticals traceability

    International Nuclear Information System (INIS)

    Robert, N.; Salmon, F.; Clermont-Gallerande, H. de; Celerier, C.

    2002-01-01

    Having to organize a radiopharmacy and to ensure proper traceability of radiopharmaceutical medicines raises numerous problems, especially for departments that are not assisted by global management network systems. Our work has been to find a solution enabling the use of high street software to cover those needs. We have set up a PC database run by the Microsoft software Access 97. Its use consists in: saving data related to the reception and deletion of generators, isotopes and kits, as well as the results of quality control; and transferring data collected from the software that is connected to the activimeter (elution and preparation registers, prescription book). By relating all the saved data, Access makes it possible to combine the information in order to process queries. At this stage, it is possible to edit all regular registers (prescription book, generator and radionuclide follow-up, blood-derived medicines traceability) and to quickly retrieve the patients who have received a particular radiopharmaceutical, or the radiopharmaceutical that has been given to a particular patient. This user-friendly database provides considerable support to nuclear medicine departments that do not possess any network management system for their radiopharmaceutical activity. (author)

  5. Construction of a bibliographic information database for the nuclear science and engineering (X)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Whan; Lee, Ji Ho; Oh, Jeong Hun; Choi, Kwang; Chun, Young Chun; Yoo, Jae Bok; Yu, Anna [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-12-01

    The major goal of this project is to construct a database of nuclear science and engineering information materials, to support the R and D activities of users in this field, and finally to support the KRISTAL (Korea Research Information for Science and Technology Access Line) DB through KREONet (Korea Research Environment Open Network), one of the five national information networks. The contents of this project are as follows: 1) materials selection and collection, 2) indexing and abstract preparation, 3) data input and transmission, and 4) document delivery service. In this tenth year, a total of 30,500 input records were added to the existing SATURN DB. These records cover articles from nuclear-related core journals, proceedings, seminars, research reports, etc. Through the Web, the project allows users to obtain the information they need and to receive requested materials online. It thus gives users the chance not only to promote the effectiveness of their R and D activities, but also to avoid duplicated research work. 2 tabs. (Author)

  6. Development of Multimedia Databases (多媒体数据库开发)

    Institute of Scientific and Technical Information of China (English)

    李娟

    2001-01-01

    The paper starts with an introduction to the development process of database technologies. Then, it discusses the techniques and methods used to develop multimedia databases. Finally, taking the Information Management System of Graduate Students as an example, it describes the design process of a simple multimedia database.

  7. Access to DNA and protein databases on the Internet.

    Science.gov (United States)

    Harper, R

    1994-02-01

    During the past year, the number of biological databases that can be queried via Internet has dramatically increased. This increase has resulted from the introduction of networking tools, such as Gopher and WAIS, that make it easy for research workers to index databases and make them available for on-line browsing. Biocomputing in the nineties will see the advent of more client/server options for the solution of problems in bioinformatics.

  8. The GTN-P Data Management System: A central database for permafrost monitoring parameters of the Global Terrestrial Network for Permafrost (GTN-P) and beyond

    Science.gov (United States)

    Lanckman, Jean-Pierre; Elger, Kirsten; Karlsson, Ævar Karl; Johannsson, Halldór; Lantuit, Hugues

    2013-04-01

    Permafrost is a direct indicator of climate change and has been identified as an Essential Climate Variable (ECV) by the global observing community. The monitoring of permafrost temperatures, active-layer thicknesses and other parameters has been performed for several decades already, but it was brought together within the Global Terrestrial Network for Permafrost (GTN-P) only in the 1990s, including the development of measurement protocols to provide standardized data. GTN-P is the primary international observing network for permafrost sponsored by the Global Climate Observing System (GCOS) and the Global Terrestrial Observing System (GTOS), and managed by the International Permafrost Association (IPA). All GTN-P data is subject to an "open data policy" with free data access via the World Wide Web. The existing data, however, is far from being homogeneous: it is not yet optimized for databases, there is no framework for data reporting or archival, and data documentation is incomplete. As a result, and despite the utmost relevance of permafrost in the Earth's climate system, the data has not been used by as many researchers as intended by the initiators of the programs. While the monitoring of many other ECVs has been tackled by organized international networks (e.g. FLUXNET), there is still no central database for all permafrost-related parameters. The European Union project PAGE21 created opportunities to develop this central database for permafrost monitoring parameters of GTN-P for the duration of the project and beyond. The database aims to be the one location where the researcher can find data, metadata, and information on all relevant parameters for a specific site. Each component of the Data Management System (DMS), including parameters, data levels and metadata formats, was developed in cooperation with the GTN-P and the IPA. The general framework of the GTN-P DMS is based on an object oriented model (OOM), open for as many parameters as possible, and

  9. Establishing the ACORN National Practitioner Database: Strategies to Recruit Practitioners to a National Practice-Based Research Network.

    Science.gov (United States)

    Adams, Jon; Steel, Amie; Moore, Craig; Amorin-Woods, Lyndon; Sibbritt, David

    2016-10-01

    The purpose of this paper is to report on the recruitment and promotion strategies employed by the Australian Chiropractic Research Network (ACORN) project aimed at helping recruit a substantial national sample of participants and to describe the features of our practice-based research network (PBRN) design that may provide key insights to others looking to establish a similar network or draw on the ACORN project to conduct sub-studies. The ACORN project followed a multifaceted recruitment and promotion strategy drawing on distinct branding, a practitioner-focused promotion campaign, and a strategically designed questionnaire and distribution/recruitment approach to attract sufficient participation from the ranks of registered chiropractors across Australia. From the 4684 chiropractors registered at the time of recruitment, the project achieved a database response rate of 36% (n = 1680), resulting in a large, nationally representative sample across age, gender, and location. This sample constitutes the largest proportional coverage of participants from any voluntary national PBRN across any single health care profession. It does appear that a number of key promotional and recruitment features of the ACORN project may have helped establish the high response rate for the PBRN, which constitutes an important sustainable resource for future national and international efforts to grow the chiropractic evidence base and research capacity. Further rigorous enquiry is needed to help evaluate the direct contribution of specific promotional and recruitment strategies in attaining high response rates from practitioner populations who may be invited to participate in future PBRNs. Copyright © 2016. Published by Elsevier Inc.

  10. Foundations of database systems : an introductory tutorial

    NARCIS (Netherlands)

    Paredaens, J.; Paredaens, J.; Tenenbaum, L. A.

    1994-01-01

    A very short overview is given of the principles of databases. The entity relationship model is used to define the conceptual base. Furthermore, file management, the hierarchical model, the network model, the relational model and the object oriented model are discussed. During the second world war,

  11. Cloud networking understanding cloud-based data center networks

    CERN Document Server

    Lee, Gary

    2014-01-01

    Cloud Networking: Understanding Cloud-Based Data Center Networks explains the evolution of established networking technologies into distributed, cloud-based networks. Starting with an overview of cloud technologies, the book explains how cloud data center networks leverage distributed systems for network virtualization, storage networking, and software-defined networking. The author offers insider perspective to key components that make a cloud network possible such as switch fabric technology and data center networking standards. The final chapters look ahead to developments in architectures

  12. Database Perspectives on Blockchains

    OpenAIRE

    Cohen, Sara; Zohar, Aviv

    2018-01-01

    Modern blockchain systems are a fresh look at the paradigm of distributed computing, applied under assumptions of large-scale public networks. They can be used to store and share information without a trusted central party. There has been much effort to develop blockchain systems for a myriad of uses, ranging from cryptocurrencies to identity control, supply chain management, etc. None of this work has directly studied the fundamental database issues that arise when using blockchains as the u...

  13. SSC lattice database and graphical interface

    International Nuclear Information System (INIS)

    Trahern, C.G.; Zhou, J.

    1991-11-01

    When completed the Superconducting Super Collider will be the world's largest accelerator complex. In order to build this system on schedule, the use of database technologies will be essential. In this paper we discuss one of the database efforts underway at the SSC, the lattice database. The SSC lattice database provides a centralized source for the design of each major component of the accelerator complex. This includes the two collider rings, the High Energy Booster, Medium Energy Booster, Low Energy Booster, and the LINAC as well as transfer and test beam lines. These designs have been created using a menagerie of programs such as SYNCH, DIMAD, MAD, TRANSPORT, MAGIC, TRACE3D AND TEAPOT. However, once a design has been completed, it is entered into a uniform database schema in the database system. In this paper we discuss the reasons for creating the lattice database and its implementation via the commercial database system SYBASE. Each lattice in the lattice database is composed of a set of tables whose data structure can describe any of the SSC accelerator lattices. In order to allow the user community access to the databases, a programmatic interface known as dbsf (for database to several formats) has been written. Dbsf creates ascii input files appropriate to the above mentioned accelerator design programs. In addition it has a binary dataset output using the Self Describing Standard data discipline provided with the Integrated Scientific Tool Kit software tools. Finally we discuss the graphical interfaces to the lattice database. The primary interface, known as OZ, is a simulation environment as well as a database browser

  14. Characterization analysis database system (CADS). A system overview

    International Nuclear Information System (INIS)

    1997-12-01

    The CADS database is a standardized, quality-assured, and configuration-controlled data management system developed to assist in the task of characterizing the DOE surplus HEU material. Characterization of the surplus HEU inventory includes identifying the specific material; gathering existing data about the inventory; defining the processing steps that may be necessary to prepare the material for transfer to a blending site; and, ultimately, developing a range of the preliminary cost estimates for those processing steps. Characterization focuses on producing commercial reactor fuel as the final step in material disposition. Based on the project analysis results, the final determination will be made as to the viability of the disposition path for each particular item of HEU. The purpose of this document is to provide an informational overview of the CADS database, its evolution, and its current capabilities. This document describes the purpose of CADS, the system requirements it fulfills, the database structure, and the operational guidelines of the system

  15. Nuclear integrated database and design advancement system

    International Nuclear Information System (INIS)

    Ha, Jae Joo; Jeong, Kwang Sub; Kim, Seung Hwan; Choi, Sun Young.

    1997-01-01

    The objective of NuIDEAS is to computerize design processes through an integrated database, eliminating the current work style of delivering hardcopy documents and drawings. The major research contents of NuIDEAS are the advancement of design processes by computerization, the establishment of a design database, and 3-dimensional visualization of design data. KSNP (Korea Standard Nuclear Power Plant) is the target of the legacy database and the 3-dimensional model, so that they can be utilized in the next plant design. In the first year, the blueprint of NuIDEAS was proposed, and its prototype was developed by applying rapidly evolving computer technology. The major results of the first year of research were to establish the architecture of the integrated database ensuring data consistency, and to build a design database of the reactor coolant system and heavy components. Also, various software tools were developed to search, share and utilize the data through networks; detailed 3-dimensional CAD models of nuclear fuel and heavy components were constructed; and walk-through simulations using the models were developed. This report contains the major additions and modifications to the object oriented database and associated programs, using methods and Javascript. (author). 36 refs., 1 tab., 32 figs

  16. Database Constraints Applied to Metabolic Pathway Reconstruction Tools

    Directory of Open Access Journals (Sweden)

    Jordi Vilaplana

    2014-01-01

    Full Text Available Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database. Based on this study, we tried to adjust and tune the configurable parameters of the database server to reach the best performance of the communication data link to/from the database system. Different database technologies were analyzed. We started the study with a public relational SQL database, MySQL. Then, the same database was implemented by a MapReduce-based database named HBase. The results indicated that the standard configuration of MySQL gives an acceptable performance for low or medium size databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes.

  17. Database constraints applied to metabolic pathway reconstruction tools.

    Science.gov (United States)

    Vilaplana, Jordi; Solsona, Francesc; Teixido, Ivan; Usié, Anabel; Karathia, Hiren; Alves, Rui; Mateo, Jordi

    2014-01-01

    Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database. Based on this study, we tried to adjust and tune the configurable parameters of the database server to reach the best performance of the communication data link to/from the database system. Different database technologies were analyzed. We started the study with a public relational SQL database, MySQL. Then, the same database was implemented by a MapReduce-based database named HBase. The results indicated that the standard configuration of MySQL gives an acceptable performance for low or medium size databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes.
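
    The MySQL-versus-HBase comparison above ultimately comes down to measuring query latency under different server configurations. A minimal latency-benchmark sketch is shown below; SQLite is used purely as a stand-in backend so the example runs without a server, and the table, query and row counts are invented rather than taken from the Biblio-MetReS/Homol-MetReS setup.

```python
# Minimal latency-benchmark sketch (hypothetical stand-in for the paper's MySQL/HBase tests).
# SQLite is used only so the example runs without a database server.
import sqlite3
import statistics
import time

def setup_demo_db(path=":memory:", n_rows=10_000):
    """Create a toy 'genes' table standing in for the organisms/annotated-genes database."""
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE genes (id INTEGER PRIMARY KEY, organism TEXT, symbol TEXT)")
    con.executemany(
        "INSERT INTO genes (organism, symbol) VALUES (?, ?)",
        [(f"org{i % 50}", f"gene{i}") for i in range(n_rows)],
    )
    con.commit()
    return con

def benchmark(con, query, params, repeats=200):
    """Time a query repeatedly and report median and 95th-percentile latency in ms."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        con.execute(query, params).fetchall()
        samples.append((time.perf_counter() - t0) * 1000)
    samples.sort()
    return statistics.median(samples), samples[int(0.95 * len(samples)) - 1]

if __name__ == "__main__":
    con = setup_demo_db()
    median_ms, p95_ms = benchmark(
        con, "SELECT symbol FROM genes WHERE organism = ?", ("org7",)
    )
    print(f"median={median_ms:.3f} ms  p95={p95_ms:.3f} ms")
```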

  18. A novel deep learning algorithm for incomplete face recognition: Low-rank-recovery network.

    Science.gov (United States)

    Zhao, Jianwei; Lv, Yongbiao; Zhou, Zhenghua; Cao, Feilong

    2017-10-01

    There have been many methods addressing the recognition of complete face images. However, in real applications, the images to be recognized are usually incomplete, and such recognition is more difficult to realize. In this paper, a novel convolution neural network framework, named the low-rank-recovery network (LRRNet), is proposed to conquer this difficulty effectively, inspired by matrix completion and deep learning techniques. The proposed LRRNet first recovers the incomplete face images via an approach of matrix completion with the truncated nuclear norm regularization solution, and then extracts some low-rank parts of the recovered images as the filters. With these filters, some important features are obtained by means of binarization and histogram algorithms. Finally, these features are classified with classical support vector machines (SVMs). The proposed LRRNet achieves a high face recognition rate for heavily corrupted images, especially in the case of large databases. Extensive experiments on several benchmark databases demonstrate that the proposed LRRNet performs better than some other excellent robust face recognition methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
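
    The recovery stage described above rests on matrix completion with truncated nuclear norm regularization. The abstract does not give the solver, so the sketch below uses a simpler, generic singular-value-thresholding iteration to illustrate the low-rank-recovery idea; the threshold, iteration count and toy data are assumptions.

```python
# Generic low-rank matrix completion by iterative singular value thresholding.
# A simplified stand-in for the truncated-nuclear-norm solver used by LRRNet.
import numpy as np

def svt_complete(M, mask, tau=5.0, n_iters=200):
    """Fill missing entries of M (mask == False) with a low-rank estimate."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - tau, 0.0)           # soft-threshold singular values
        X = (U * s) @ Vt
        X[mask] = M[mask]                      # keep observed pixels fixed
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    low_rank = rng.normal(size=(64, 4)) @ rng.normal(size=(4, 64))  # toy rank-4 "image"
    mask = rng.random(low_rank.shape) > 0.4                         # 40% of pixels missing
    recovered = svt_complete(low_rank, mask)
    err = np.linalg.norm(recovered - low_rank) / np.linalg.norm(low_rank)
    print(f"relative recovery error: {err:.3f}")
```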

  19. Evaluation report on research and development of a database system for mutual computer operation; Denshi keisanki sogo un'yo database system no kenkyu kaihatsu ni kansuru hyoka hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1992-03-01

    This paper describes the evaluation of the research and development of a database system for mutual computer operation, with respect to distributed database technology, multimedia technology, high-reliability technology, and mutual-operation network system technology. A large number of forward-looking research results were derived, on such issues as the distribution and utilization patterns of the distributed database, the structuring of data for multimedia information, retrieval systems, flexible and high-level utilization of the network, and database protection. These achievements have been widely disclosed to the public. The most distinctive feature of this project is its aim of forming a network system that can be operated mutually in a multi-vendor environment. Therefore, the research and development was executed in the spirit of openness to the public and international cooperation. These efforts are represented by the organization of the rule establishment committee, the execution of mutual interconnection experiments (including demonstration evaluation), and the development of implementation rules based on the ISO's 'open system interconnection (OSI)'. These results are compiled in the JIS as the basic reference model for open system interconnection, and the targets shown in the basic plan have been sufficiently achieved. (NEDO)

  20. Construction of a bibliographic information database for the nuclear engineering

    International Nuclear Information System (INIS)

    Kim, Tae Whan; Lim, Yeon Soo; Kwac, Dong Chul

    1991-12-01

    The major goal of the project is to develop a nuclear science database of materials that have been published in Korea and to establish a network system that will give relevant information to people in the nuclear industry by linking this system with the proposed National Science Technical Information Network. This project aims to establish a database consisting of about 1,000 research reports that were prepared by KAERI from 1979 to 1990. The contents of the project are as follows: 1. Materials Selection and Collection 2. Index and Abstract Preparation 3. Data Input and Transmission. This project is intended to achieve the goal of maximum utilization of nuclear information in Korea. (Author)

  1. Distributed Database Access in the LHC Computing Grid with CORAL

    CERN Document Server

    Molnár, Z; Düllmann, D; Giacomo, G; Kalkhof, A; Valassi, A; CERN. Geneva. IT Department

    2009-01-01

    The CORAL package is the LCG Persistency Framework foundation for accessing relational databases. From the start CORAL has been designed to facilitate the deployment of the LHC experiment database applications in a distributed computing environment. In particular we cover improvements to database service scalability by client connection management; platform-independent, multi-tier scalable database access by connection multiplexing and caching; and a secure authentication and authorisation scheme integrated with existing grid services. We will summarize the deployment experience from several experiment productions using the distributed database infrastructure, which is now available in LCG. Finally, we present perspectives for future developments in this area.
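
    As a rough illustration of the connection-multiplexing and caching ideas mentioned above, here is a toy pool class; it is not CORAL code, and the pool size, cache policy and SQLite stand-in backend are all assumptions.

```python
# Toy illustration of connection multiplexing (a small shared pool) plus result caching.
# Not CORAL itself; SQLite stands in for the relational backend.
import sqlite3
from queue import Queue

class PooledDB:
    def __init__(self, uri, pool_size=4):
        self._pool = Queue(maxsize=pool_size)
        for _ in range(pool_size):
            self._pool.put(sqlite3.connect(uri, uri=True, check_same_thread=False))
        self._cache = {}  # (query, params) -> rows

    def execute(self, sql, params=()):
        """Write path: no caching, just borrow a pooled connection."""
        con = self._pool.get()
        try:
            con.execute(sql, params)
            con.commit()
        finally:
            self._pool.put(con)

    def query(self, sql, params=()):
        """Read path: serve repeated reads from the cache, multiplex the rest."""
        key = (sql, params)
        if key in self._cache:
            return self._cache[key]
        con = self._pool.get()
        try:
            rows = con.execute(sql, params).fetchall()
        finally:
            self._pool.put(con)
        self._cache[key] = rows
        return rows

if __name__ == "__main__":
    db = PooledDB("file:demo?mode=memory&cache=shared", pool_size=2)
    db.execute("CREATE TABLE runs (id INTEGER, tag TEXT)")
    db.execute("INSERT INTO runs VALUES (1, 'cosmics')")
    print(db.query("SELECT * FROM runs"))   # a second identical call would hit the cache
```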

  2. MIPS: a database for genomes and protein sequences.

    Science.gov (United States)

    Mewes, H W; Frishman, D; Güldener, U; Mannhaupt, G; Mayer, K; Mokrejs, M; Morgenstern, B; Münsterkötter, M; Rudd, S; Weil, B

    2002-01-01

    The Munich Information Center for Protein Sequences (MIPS-GSF, Neuherberg, Germany) continues to provide genome-related information in a systematic way. MIPS supports both national and European sequencing and functional analysis projects, develops and maintains automatically generated and manually annotated genome-specific databases, develops systematic classification schemes for the functional annotation of protein sequences, and provides tools for the comprehensive analysis of protein sequences. This report updates the information on the yeast genome (CYGD), the Neurospora crassa genome (MNCDB), the databases for the comprehensive set of genomes (PEDANT genomes), the database of annotated human EST clusters (HIB), the database of complete cDNAs from the DHGP (German Human Genome Project), as well as the project specific databases for the GABI (Genome Analysis in Plants) and HNB (Helmholtz-Netzwerk Bioinformatik) networks. The Arabidopsis thaliana database (MATDB), the database of mitochondrial proteins (MITOP) and our contribution to the PIR International Protein Sequence Database have been described elsewhere [Schoof et al. (2002) Nucleic Acids Res., 30, 91-93; Scharfe et al. (2000) Nucleic Acids Res., 28, 155-158; Barker et al. (2001) Nucleic Acids Res., 29, 29-32]. All databases described, the protein analysis tools provided and the detailed descriptions of our projects can be accessed through the MIPS World Wide Web server (http://mips.gsf.de).

  3. An approach to efficient mobility management in intelligent networks

    Science.gov (United States)

    Murthy, K. M. S.

    1995-01-01

    Providing personal communication systems supporting full mobility requires intelligent networks for tracking mobile users and facilitating outgoing and incoming calls over different physical and network environments. Databases play a major role in realizing the intelligent network functionalities. Currently proposed network architectures envision using the SS7-based signaling network for linking these DBs and also for interconnecting DBs with switches. If the network has to support ubiquitous, seamless mobile services, then it has to additionally support mobile application parts, viz., mobile origination calls, mobile destination calls, mobile location updates and inter-switch handovers. These functions will generate a significant amount of data and require it to be transferred between databases (HLR, VLR) and switches (MSCs) very efficiently. In the future, users (fixed or mobile) may use and communicate with sophisticated CPEs (e.g. multimedia, multipoint and multisession calls) which may require complex signaling functions. This will generate voluminous service-handling data and require efficient transfer of these messages between databases and switches. Consequently, the network providers would be able to add new services and capabilities to their networks incrementally, quickly and cost-effectively.
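
    To make the HLR/VLR interactions above concrete, here is a toy model of a location update followed by routing of an incoming call; the record layout and function names are invented for illustration and do not follow the MAP protocol.

```python
# Toy HLR/VLR model: a location update followed by routing of a mobile-terminated call.
# All structures are illustrative only, not the actual signaling procedures.

HLR = {"subscriber-42": {"current_vlr": None}}           # home register: one entry per subscriber
VLRS = {"vlr-A": set(), "vlr-B": set()}                   # visitor registers per serving area

def location_update(subscriber, new_vlr):
    """Move the subscriber to a new serving VLR and point the HLR at it."""
    old_vlr = HLR[subscriber]["current_vlr"]
    if old_vlr:
        VLRS[old_vlr].discard(subscriber)
    VLRS[new_vlr].add(subscriber)
    HLR[subscriber]["current_vlr"] = new_vlr

def route_incoming_call(subscriber):
    """A gateway switch asks the HLR which VLR/MSC currently serves the subscriber."""
    serving_vlr = HLR[subscriber]["current_vlr"]
    if serving_vlr is None:
        return "subscriber unreachable"
    return f"route call to switch serving {serving_vlr}"

location_update("subscriber-42", "vlr-A")    # user powers on in area A
location_update("subscriber-42", "vlr-B")    # user moves to area B (inter-switch handover)
print(route_incoming_call("subscriber-42"))
```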

  4. The TJ-II Relational Database Access Library: A User's Guide

    International Nuclear Information System (INIS)

    Sanchez, E.; Portas, A. B.; Vega, J.

    2003-01-01

    A relational database has been developed to store data representing physical values from TJ-II discharges. This new database complements the existing TJ-II raw data database. The database resides in a host computer running the Windows 2000 Server operating system and is managed by SQL Server. A function library has been developed that permits remote access to these data from user programs running on computers connected to the TJ-II local area networks via remote procedure call. In this document a general description of the database and its organization is provided. Also given are a detailed description of the functions included in the library and examples of how to use these functions in computer programs written in the FORTRAN and C languages. (Author) 8 refs

  5. Using consensus bayesian network to model the reactive oxygen species regulatory pathway.

    Directory of Open Access Journals (Sweden)

    Liangdong Hu

    Full Text Available The Bayesian network is one of the most successful graph models for representing the reactive oxygen species regulatory pathway. With the increasing number of microarray measurements, it is possible to construct a Bayesian network from microarray data directly. Although a large number of Bayesian network learning algorithms have been developed, when they are applied to learn Bayesian networks from microarray data, the accuracies are low because the databases used contain too few microarray measurements. In this paper, we propose a consensus Bayesian network which is constructed by combining Bayesian networks from relevant literature and Bayesian networks learned from microarray data. It has a higher accuracy than a Bayesian network learned from a single database. In the experiments, we validated the Bayesian network combination algorithm on several classic machine learning databases and used the consensus Bayesian network to model the Escherichia coli ROS pathway.
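
    The combination step is described only abstractly; one plausible consensus rule is majority voting over the edge sets of the candidate networks, sketched below. The voting threshold and the toy edge lists are assumptions, not the authors' algorithm.

```python
# Consensus-by-majority-vote over edges of several candidate Bayesian network structures.
# A simplified stand-in for the combination algorithm described in the abstract.
from collections import Counter

def consensus_edges(networks, min_votes=None):
    """Keep directed edges that appear in at least `min_votes` of the input networks."""
    if min_votes is None:
        min_votes = len(networks) // 2 + 1          # simple majority by default
    votes = Counter(edge for net in networks for edge in net)
    return {edge for edge, n in votes.items() if n >= min_votes}

# Toy example: two literature-derived structures and one learned from microarray data.
literature_1 = {("oxyR", "katG"), ("oxyR", "ahpC")}
literature_2 = {("oxyR", "katG"), ("soxS", "sodA")}
from_microarray = {("oxyR", "katG"), ("soxS", "sodA"), ("rpoS", "katE")}

print(consensus_edges([literature_1, literature_2, from_microarray]))
# -> {('oxyR', 'katG'), ('soxS', 'sodA')}
```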

  6. Data-Based Energy Efficient Clustered Routing Protocol for Wireless Sensors Networks – Tabuk Flood Monitoring System Case Study

    Directory of Open Access Journals (Sweden)

    Ammar Babiker

    2017-10-01

    Full Text Available Energy efficiency has been considered the most important issue in wireless sensor networks. In many applications, wireless sensors are scattered over a wide, harsh area where battery replacement or charging is quite difficult, and this is the most important challenge. Therefore, the design of energy-saving mechanisms has become mandatory in most recent research. In this paper, a new energy-efficient clustered routing protocol is proposed. The proposed protocol is based on analyzing the data collected from the sensors at a base station; based on this analysis, the cluster head is selected as the sensor with the most useful data. Then, a variable time slot is assigned to each sensor to minimize the transmission of repetitive and non-useful data. The proposed protocol, Data-Based Energy Efficient Clustered Routing Protocol for Wireless Sensors Networks (DCRP), was compared with the well-known energy-efficient LEACH protocol and also with one of the recent energy-efficient routing protocols named Position Responsive Routing Protocol (PRRP). DCRP has been used in monitoring the floods in the Tabuk area, Saudi Arabia, and shows comparatively better results.
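
    The abstract describes cluster-head selection driven by how useful each sensor's data are at the base station. A toy sketch of one way such a utility score could be computed is shown below; the scoring rule (variability of recent readings minus redundancy with the cluster average) is purely an illustrative assumption, not the DCRP specification.

```python
# Toy cluster-head selection: the base station scores each sensor by how informative
# its recent readings are, and picks the highest-scoring node as cluster head.
# The scoring rule here is illustrative only, not the DCRP specification.
import statistics

def usefulness(readings, cluster_mean):
    """Score = variability of own data minus redundancy with the cluster average."""
    variability = statistics.pvariance(readings)
    redundancy = abs(statistics.mean(readings) - cluster_mean)
    return variability - 0.5 * redundancy

def pick_cluster_head(sensor_readings):
    all_values = [v for vals in sensor_readings.values() for v in vals]
    cluster_mean = statistics.mean(all_values)
    return max(sensor_readings, key=lambda s: usefulness(sensor_readings[s], cluster_mean))

# Water-level readings from three flood-monitoring sensors (arbitrary toy numbers).
readings = {
    "node-1": [1.0, 1.1, 1.0, 1.1],     # nearly constant -> low information
    "node-2": [1.0, 1.6, 2.3, 3.1],     # rising level -> most useful data
    "node-3": [1.2, 1.2, 1.3, 1.2],
}
print(pick_cluster_head(readings))        # expected: node-2
```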

  7. Vehicle-network defensive aids suite

    Science.gov (United States)

    Rapanotti, John

    2005-05-01

    Defensive Aids Suites (DAS) developed for vehicles can be extended to the vehicle network level. The vehicle network, typically comprising four platoon vehicles, will benefit from improved communications and automation based on low latency response to threats from a flexible, dynamic, self-healing network environment. Improved DAS performance and reliability relies on four complementary sensor technologies including: acoustics, visible and infrared optics, laser detection and radar. Long-range passive threat detection and avoidance is based on dual-purpose optics, primarily designed for manoeuvring, targeting and surveillance, combined with dazzling, obscuration and countermanoeuvres. Short-range active armour is based on search and track radar and intercepting grenades to defeat the threat. Acoustic threat detection increases the overall robustness of the DAS and extends the detection range to include small calibers. Finally, detection of active targeting systems is carried out with laser and radar warning receivers. Synthetic scene generation will provide the integrated environment needed to investigate, develop and validate these new capabilities. Computer generated imagery, based on validated models and an acceptable set of benchmark vignettes, can be used to investigate and develop fieldable sensors driven by real-time algorithms and countermeasure strategies. The synthetic scene environment will be suitable for sensor and countermeasure development in hardware-in-the-loop simulation. The research effort focuses on two key technical areas: a) computing aspects of the synthetic scene generation and b) development of adapted models and databases. OneSAF is being developed for research and development, in addition to the original requirement of Simulation and Modelling for Acquisition, Rehearsal, Requirements and Training (SMARRT), and is becoming useful as a means for transferring technology to other users, researchers and contractors. This procedure

  8. FY 1998 survey report. Examinational research on the construction of body function database; 1998 nendo chosa hokokusho. Shintai kino database no kochiku ni kansuru chosa kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    The body function database is aimed at supplying and supporting products and environments friendly to aged people, by supplying data on the body functions of aged people to companies for use in planning, designing and production. As the survey method, group measurement was used for the measurement of visual characteristics. For the measurement of action characteristics, moving actions including posture changes were studied, the experimental plan was carried out, and items for group measurement and measuring methods were finally proposed. The database structure was made public at the end of this fiscal year, through pre-publication evaluation following the trial evaluation conducted using the pilot database. In the study of the measurement of action characteristics, a verification test was conducted on a small group; by this, the measurement of action characteristics was finally proposed. For the body function database system, operational issues were identified and improved by trial evaluation of the pilot database, and rights-related matters were adjusted and management methods prepared in view of publication. An evaluation version was made in anticipation of its publication. (NEDO)

  9. Regional variations of basal cell carcinoma incidence in the U.K. using The Health Improvement Network database (2004-10).

    Science.gov (United States)

    Musah, A; Gibson, J E; Leonardi-Bee, J; Cave, M R; Ander, E L; Bath-Hextall, F

    2013-11-01

    Basal cell carcinoma (BCC) is one of the most common types of nonmelanoma skin cancer affecting the white population; however, little is known about how the incidence varies across the U.K. To determine the variation in BCC throughout the U.K. Data from 2004 to 2010 were obtained from The Health Improvement Network database. European and world age-standardized incidence rates (EASRs and WASRs, respectively) were obtained for country-level estimates and levels of socioeconomic deprivation, while strategic health-authority-level estimates were directly age and sex standardized to the U.K. standard population. Incidence-rate ratios were estimated using multivariable Poisson regression models. The overall EASR and WASR of BCC in the U.K. were 98.6 per 100,000 person-years and 66.9 per 100,000 person-years, respectively. Regional-level incidence rates indicated a significant geographical variation in the distribution of BCC, which was more pronounced in the southern parts of the country. The South East Coast had the highest BCC rate followed by South Central, Wales and the South West. Incidence rates were substantially higher in the least deprived groups and we observed a trend of decreasing incidence with increasing levels of deprivation (P < 0.001). Finally, in terms of age groups, the largest annual increase was observed among those aged 30-49 years. Basal cell carcinoma is an increasing health problem in the U.K.; the southern regions of the U.K. and those in the least deprived groups had a higher incidence of BCC. Our findings indicate an increased incidence of BCC for younger age groups below 49 years. © 2013 British Association of Dermatologists.
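
    The EASR/WASR figures quoted above are direct age-standardized rates. A small worked illustration of how such a rate is computed (observed age-specific rates weighted by a standard population) follows; the age bands, counts and standard weights are invented numbers, not THIN data.

```python
# Direct age standardization: weight observed age-specific rates by a standard population.
# All numbers below are invented for illustration; they are not THIN/BCC data.
import numpy as np

cases        = np.array([10, 40, 150, 300])              # BCC cases per age band
person_years = np.array([2e5, 1.5e5, 1e5, 0.5e5])        # person-years at risk per band
std_weights  = np.array([0.40, 0.30, 0.20, 0.10])        # standard population weights (sum to 1)

age_specific_rates = cases / person_years                 # rates per person-year
asr_per_100k = np.sum(age_specific_rates * std_weights) * 1e5

print(f"age-standardized rate: {asr_per_100k:.1f} per 100,000 person-years")
```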

  10. BioMart Central Portal: an open database network for the biological community

    Science.gov (United States)

    Guberman, Jonathan M.; Ai, J.; Arnaiz, O.; Baran, Joachim; Blake, Andrew; Baldock, Richard; Chelala, Claude; Croft, David; Cros, Anthony; Cutts, Rosalind J.; Di Génova, A.; Forbes, Simon; Fujisawa, T.; Gadaleta, E.; Goodstein, D. M.; Gundem, Gunes; Haggarty, Bernard; Haider, Syed; Hall, Matthew; Harris, Todd; Haw, Robin; Hu, S.; Hubbard, Simon; Hsu, Jack; Iyer, Vivek; Jones, Philip; Katayama, Toshiaki; Kinsella, R.; Kong, Lei; Lawson, Daniel; Liang, Yong; Lopez-Bigas, Nuria; Luo, J.; Lush, Michael; Mason, Jeremy; Moreews, Francois; Ndegwa, Nelson; Oakley, Darren; Perez-Llamas, Christian; Primig, Michael; Rivkin, Elena; Rosanoff, S.; Shepherd, Rebecca; Simon, Reinhard; Skarnes, B.; Smedley, Damian; Sperling, Linda; Spooner, William; Stevenson, Peter; Stone, Kevin; Teague, J.; Wang, Jun; Wang, Jianxin; Whitty, Brett; Wong, D. T.; Wong-Erasmus, Marie; Yao, L.; Youens-Clark, Ken; Yung, Christina; Zhang, Junjun; Kasprzyk, Arek

    2011-01-01

    BioMart Central Portal is a first of its kind, community-driven effort to provide unified access to dozens of biological databases spanning genomics, proteomics, model organisms, cancer data, ontology information and more. Anybody can contribute an independently maintained resource to the Central Portal, allowing it to be exposed to and shared with the research community, and linking it with the other resources in the portal. Users can take advantage of the common interface to quickly utilize different sources without learning a new system for each. The system also simplifies cross-database searches that might otherwise require several complicated steps. Several integrated tools streamline common tasks, such as converting between ID formats and retrieving sequences. The combination of a wide variety of databases, an easy-to-use interface, robust programmatic access and the array of tools make Central Portal a one-stop shop for biological data querying. Here, we describe the structure of Central Portal and show example queries to demonstrate its capabilities. Database URL: http://central.biomart.org. PMID:21930507

  11. Diabetic retinopathy screening using deep neural network.

    Science.gov (United States)

    Ramachandran, Nishanthan; Hong, Sheng Chiong; Sime, Mary J; Wilson, Graham A

    2017-09-07

    There is a burgeoning interest in the use of deep neural network in diabetic retinal screening. To determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist from a local diabetic retinal screening programme and an international database. Retrospective audit. Diabetic retinal photos from Otago database photographed during October 2016 (485 photos), and 1200 photos from Messidor international database. Receiver operating characteristic curve to illustrate the ability of a deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea). Area under the receiver operating characteristic curve, sensitivity and specificity. For detecting referable diabetic retinopathy, the deep neural network had an area under receiver operating characteristic curve of 0.901 (95% confidence interval 0.807-0.995), with 84.6% sensitivity and 79.7% specificity for Otago and 0.980 (95% confidence interval 0.973-0.986), with 96.0% sensitivity and 90.0% specificity for Messidor. This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. © 2017 Royal Australian and New Zealand College of Ophthalmologists.
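
    The reported performance figures (AUC, sensitivity, specificity) can be computed from a vector of network scores and ground-truth labels as sketched below; the scores, labels and the 0.5 operating threshold are toy assumptions, not the study's model or grading.

```python
# Computing AUC, sensitivity and specificity from classifier scores, as reported for
# referable diabetic retinopathy. Toy data only; not the study's model or images.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)                               # 1 = referable DR
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, 500), 0, 1)  # fake network outputs

auc = roc_auc_score(y_true, y_score)

y_pred = (y_score >= 0.5).astype(int)                               # assumed operating threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

print(f"AUC={auc:.3f}  sensitivity={sensitivity:.1%}  specificity={specificity:.1%}")
```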

  12. A generic method for improving the spatial interoperability of medical and ecological databases.

    Science.gov (United States)

    Ghenassia, A; Beuscart, J B; Ficheur, G; Occelli, F; Babykina, E; Chazard, E; Genin, M

    2017-10-03

    The availability of big data in healthcare and the intensive development of data reuse and georeferencing have opened up perspectives for health spatial analysis. However, fine-scale spatial studies of ecological and medical databases are limited by the change of support problem and thus a lack of spatial unit interoperability. The use of spatial disaggregation methods to solve this problem introduces errors into the spatial estimations. Here, we present a generic, two-step method for merging medical and ecological databases that avoids the use of spatial disaggregation methods, while maximizing the spatial resolution. Firstly, a mapping table is created after one or more transition matrices have been defined. The latter link the spatial units of the original databases to the spatial units of the final database. Secondly, the mapping table is validated by (1) comparing the covariates contained in the two original databases, and (2) checking the spatial validity with a spatial continuity criterion and a spatial resolution index. We used our novel method to merge a medical database (the French national diagnosis-related group database, containing 5644 spatial units) with an ecological database (produced by the French National Institute of Statistics and Economic Studies, and containing 36,594 spatial units). The mapping table yielded 5632 final spatial units. The mapping table's validity was evaluated by comparing the number of births in the medical database and the ecological databases in each final spatial unit. The median [interquartile range] relative difference was 2.3% [0; 5.7]. The spatial continuity criterion was low (2.4%), and the spatial resolution index was greater than for most French administrative areas. Our innovative approach improves interoperability between medical and ecological databases and facilitates fine-scale spatial analyses. We have shown that disaggregation models and large aggregation techniques are not necessarily the best ways to
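
    The two-step method hinges on transition matrices linking the spatial units of each source database to the units of the final merged support. A toy sketch of assembling such a mapping table follows; the unit codes are invented, and real transition matrices would come from geographically intersecting the two zonings.

```python
# Toy mapping table: link spatial units of two source databases to final spatial units.
# Codes are invented; in practice the links come from intersecting the two zonings.

# Transition "matrices" expressed as unit -> final-unit links (one-to-one or many-to-one).
medical_to_final    = {"hosp-A1": "zone-1", "hosp-A2": "zone-1", "hosp-B1": "zone-2"}
ecological_to_final = {"iris-001": "zone-1", "iris-002": "zone-2", "iris-003": "zone-2"}

def build_mapping_table(*transitions):
    """Final unit -> the source units from every database that map onto it."""
    table = {}
    for i, trans in enumerate(transitions):
        for src, final in trans.items():
            table.setdefault(final, [set() for _ in transitions])[i].add(src)
    return table

mapping = build_mapping_table(medical_to_final, ecological_to_final)
for final_unit, (medical_units, ecological_units) in sorted(mapping.items()):
    print(final_unit, "<-", sorted(medical_units), "+", sorted(ecological_units))

# A validation step would then compare a shared covariate (e.g. number of births)
# aggregated from each source within every final unit, as described in the abstract.
```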

  13. Reliability databases: State-of-the-art and perspectives

    DEFF Research Database (Denmark)

    Akhmedjanov, Farit

    2001-01-01

    The report gives a history of development and an overview of the existing reliability databases. This overview also describes some other sources of reliability and failure information (other than computer databases), e.g. reliability handbooks, but the main attention is paid to standard models and software packages containing the data mentioned. The standards corresponding to the collection and exchange of reliability data are reviewed too. Finally, perspective directions in the development of such data sources are shown.

  14. RA radiological characterization database application

    International Nuclear Information System (INIS)

    Steljic, M.M.; Ljubenov, V.Lj. E-mail address of corresponding author: milijanas@vin.bg.ac.yu

    2005-01-01

    Radiological characterization of the RA research reactor is one of the main activities in the first two years of the reactor decommissioning project. The raw characterization data from direct measurements or laboratory analyses (defined within the existing sampling and measurement programme) have to be interpreted, organized and summarized in order to prepare the final characterization survey report. This report should be made so that the radiological condition of the entire site is completely and accurately shown with the radiological condition of the components clearly depicted. This paper presents an electronic database application, designed as a serviceable and efficient tool for characterization data storage, review and analysis, as well as for the reports generation. Relational database model was designed and the application is made by using Microsoft Access 2002 (SP1), a 32-bit RDBMS for the desktop and client/server database applications that run under Windows XP. (author)

  15. Active system area networks for data intensive computations. Final report

    Energy Technology Data Exchange (ETDEWEB)

    None

    2002-04-01

    The goal of the Active System Area Networks (ASAN) project is to develop hardware and software technologies for the implementation of active system area networks (ASANs). The use of the term 'active' refers to the ability of the network interfaces to perform application-specific as well as system-level computations in addition to their traditional role of data transfer. This project adopts the view that the network infrastructure should be an active computational entity capable of supporting certain classes of computations that would otherwise be performed on the host CPUs. The result is a unique network-wide programming model where computations are dynamically placed within the host CPUs or the NIs depending upon the quality-of-service demands and network/CPU resource availability. The project seeks to demonstrate that such an approach is a better match for data-intensive network-based applications and that the advent of low-cost powerful embedded processors and configurable hardware makes such an approach economically viable and desirable.

  16. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    General information: database name Trypanosomes Database, maintained by the National Institute of Genetics, Research Organization of Information and Systems, Yata 1111, Mishima, Shizuoka 411-8540, Japan. Organism taxonomy: Trypanosoma (Taxonomy ID: 5690) and Homo sapiens (Taxonomy ID: 9606). External links include PDB (Protein Data Bank), the KEGG PATHWAY Database and DrugPort. An entry list and query search are available.

  17. Interpenetrating metal-organic and inorganic 3D networks: a computer-aided systematic investigation. Part II [1]. Analysis of the Inorganic Crystal Structure Database (ICSD)

    International Nuclear Information System (INIS)

    Baburin, I.A.; Blatov, V.A.; Carlucci, L.; Ciani, G.; Proserpio, D.M.

    2005-01-01

    Interpenetration in metal-organic and inorganic networks has been investigated by a systematic analysis of the crystallographic structural databases. We have used a version of TOPOS (a package for multipurpose crystallochemical analysis) adapted for searching for interpenetration and based on the concept of Voronoi-Dirichlet polyhedra and on the representation of a crystal structure as a reduced finite graph. In this paper, we report comprehensive lists of interpenetrating inorganic 3D structures from the Inorganic Crystal Structure Database (ICSD), inclusive of 144 Collection Codes for equivalent interpenetrating nets, analyzed on the basis of their topologies. Distinct Classes, corresponding to the different modes in which individual identical motifs can interpenetrate, have been attributed to the entangled structures. Interpenetrating nets of different nature as well as interpenetrating H-bonded nets were also examined

  18. Beyond the "I" in the obesity epidemic: a review of social relational and network interventions on obesity.

    Science.gov (United States)

    Leroux, Janette S; Moore, Spencer; Dubé, Laurette

    2013-01-01

    Recent research has shown the importance of networks in the spread of obesity. Yet, the translation of research on social networks and obesity into health promotion practice has been slow. To review the types of obesity interventions targeting social relational factors. Six databases were searched in January 2013. A Boolean search was employed with the following sets of terms: (1) social dimensions: social capital, cohesion, collective efficacy, support, social networks, or trust; (2) intervention type: intervention, experiment, program, trial, or policy; and (3) obesity in the title or abstract. Titles and abstracts were reviewed. Articles were included if they described an obesity intervention with the social relational component central. Articles were assessed on the social relational factor(s) addressed, social ecological level(s) targeted, the intervention's theoretical approach, and the conceptual placement of the social relational component in the intervention. Database searches and final article screening yielded 30 articles. Findings suggested that (1) social support was most often targeted; (2) few interventions were beyond the individual level; (3) most interventions were framed on behaviour change theories; and (4) the social relational component tended to be conceptually ancillary to the intervention. Theoretically and practically, social networks remain marginal to current interventions addressing obesity.

  19. Building spatio-temporal database model based on ontological approach using relational database environment

    International Nuclear Information System (INIS)

    Mahmood, N.; Burney, S.M.A.

    2017-01-01

    Everything in this world is encapsulated by the fences of space and time. Our daily life activities are closely linked and related with other objects in our vicinity. Therefore, a strong relationship exists between our activities and our current location, time (including past, present and future) and the events through which we are moving as objects. Ontology development and its integration with databases are vital for the true understanding of complex systems involving both spatial and temporal dimensions. In this paper we propose a conceptual framework for building a spatio-temporal database model based on an ontological approach. We have used the relational data model for modelling spatio-temporal data content and present our methodology with spatio-temporal ontological aspects and their transformation into a spatio-temporal database model. We illustrate the implementation of our conceptual model through a case study related to a cultivated land parcel used for agriculture, to exhibit the spatio-temporal behaviour of agricultural land and related entities. Moreover, it provides a generic approach for designing spatio-temporal databases based on ontology. The proposed model is capable of capturing the ontological and, to some extent, epistemological commitments, and of building a spatio-temporal ontology and transforming it into a spatio-temporal data model. Finally, we highlight the existing and future research challenges. (author)
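
    To make the land-parcel case study concrete, here is a minimal relational sketch of a spatio-temporal table in which each parcel row carries a geometry reference and a validity interval; the column names and SQLite backend are assumptions for illustration, not the authors' schema.

```python
# Minimal spatio-temporal relational sketch: land parcels with a geometry reference
# and a validity interval (valid_from / valid_to). Schema is illustrative only.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE land_parcel (
    parcel_id   TEXT,
    crop        TEXT,
    geometry    TEXT,            -- e.g. WKT polygon or a key into a spatial table
    valid_from  TEXT,            -- ISO dates delimit the temporal dimension
    valid_to    TEXT             -- NULL means 'currently valid'
);
INSERT INTO land_parcel VALUES
    ('P-17', 'wheat',  'POLYGON((...))', '2015-01-01', '2016-12-31'),
    ('P-17', 'cotton', 'POLYGON((...))', '2017-01-01', NULL);
""")

# "What was grown on parcel P-17 on a given date?" -- a typical spatio-temporal query.
row = con.execute(
    """SELECT crop FROM land_parcel
       WHERE parcel_id = ?
         AND valid_from <= ?
         AND (valid_to IS NULL OR valid_to >= ?)""",
    ("P-17", "2017-06-15", "2017-06-15"),
).fetchone()
print(row)   # -> ('cotton',)
```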

  20. Evaluating parallel relational databases for medical data analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Rintoul, Mark Daniel; Wilson, Andrew T.

    2012-03-01

    Hospitals have always generated and consumed large amounts of data concerning patients, treatment and outcomes. As computers and networks have permeated the hospital environment it has become feasible to collect and organize all of this data. This raises naturally the question of how to deal with the resulting mountain of information. In this report we detail a proof-of-concept test using two commercially available parallel database systems to analyze a set of real, de-identified medical records. We examine database scalability as data sizes increase as well as responsiveness under load from multiple users.

  1. Joint activities with IAEA on uploading of scieintific papers from Kazakhstan and Uzbekistan into the EXFOR database

    International Nuclear Information System (INIS)

    Kenzhebayev, N.; Kurmangalieva, V.; Takibayev, N.; Otuka, N.

    2015-01-01

    More than a year has passed since Kazakhstan joined the international network of nuclear reaction data centres (NRDC). A Central Asian centre for nuclear reactions (CANRDB, Central Asia Nuclear Reaction Database) has been established at al-Farabi Kazakh National University, and a group of experts there is actively working on expansion of the database, further development of the specialized software, and fostering partnership with international nuclear physicists. There are also ongoing activities aimed at training and at searching for published nuclear data obtained earlier by scientists from Central Asia, in order to incorporate these results in the database. The main objective of CA-NRDB is the development and formation in Kazakhstan of an open and user-friendly database on nuclear reactions, with further incorporation of this database into the international network of nuclear databases under the International Atomic Energy Agency (IAEA). We note that such a database has been created in the entire Central Asian region for the first time.

  2. Joint activities with IAEA on uploading of scientific papers from Kazakhstan and Uzbekistan into the EXFOR database

    International Nuclear Information System (INIS)

    Kenzhebayev, N.; Kurmangalieva, V.; Takibayev, N.; Otsuka, N.; )

    2014-01-01

    More than a year has passed since Kazakhstan joined the international network of nuclear reaction data centres (NRDC). A Central Asian centre for nuclear reactions (CANRDB, Central Asia Nuclear Reaction Database) has been established at al-Farabi Kazakh National University, and a group of experts there is actively working on expansion of the database, further development of the specialized software, and fostering partnership with international nuclear physicists. There are also ongoing activities aimed at training and at searching for published nuclear data obtained earlier by scientists from Central Asia, in order to incorporate these results in the database. The main objective of CA-NRDB is the development and formation in Kazakhstan of an open and user-friendly database on nuclear reactions, with further incorporation of this database into the international network of nuclear databases under the International Atomic Energy Agency (IAEA). We note that such a database has been created in the entire Central Asian region for the first time.

  3. MiRNA-TF-gene network analysis through ranking of biomolecules for multi-informative uterine leiomyoma dataset.

    Science.gov (United States)

    Mallik, Saurav; Maulik, Ujjwal

    2015-10-01

    Gene ranking is an important problem in bioinformatics. Here, we propose a new framework for ranking biomolecules (viz., miRNAs, transcription factors/TFs and genes) in a multi-informative uterine leiomyoma dataset having both gene expression and methylation data, using a (statistical) eigenvector-centrality-based approach. First, genes that are both differentially expressed and differentially methylated are identified using the Limma statistical test. A network comprising these genes, the corresponding TFs from the TRANSFAC and ITFP databases, and targeting miRNAs from the miRWalk database is then built. The biomolecules are then ranked based on eigenvector centrality. Our proposed method provides better average accuracy in hub gene and non-hub gene classification than other methods. Furthermore, pre-ranked Gene Set Enrichment Analysis is applied to the pathway database as well as the GO-term databases of the Molecular Signatures Database, providing a pre-ranked gene list based on different centrality values for comparison among the ranking methods. Finally, top novel potential gene markers for uterine leiomyoma are provided. Copyright © 2015 Elsevier Inc. All rights reserved.
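
    Ranking nodes of the miRNA-TF-gene network by eigenvector centrality can be sketched in a few lines with networkx; the toy edges below are invented placeholders for the TRANSFAC/ITFP/miRWalk-derived interactions used in the paper.

```python
# Rank nodes of a small miRNA-TF-gene interaction graph by eigenvector centrality.
# The edges are toy placeholders for TRANSFAC/ITFP/miRWalk-derived interactions.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("miR-21",  "GENE_A"), ("miR-21", "GENE_B"),
    ("TF_ESR1", "GENE_A"), ("TF_ESR1", "GENE_C"),
    ("GENE_A",  "GENE_C"), ("GENE_B", "GENE_C"),
])

centrality = nx.eigenvector_centrality(G, max_iter=1000)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node:8s} {score:.3f}")
```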

  4. Database Description - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    General information: database name SKIP Stemcell Database. Database classification: Human Genes and Diseases; Stemcell. Organism taxonomy: Homo sapiens (Taxonomy ID: 9606). Contact address: http://www.skip.med.keio.ac.jp/en/contact/. Database maintenance site: Center for Medical Genetics, School of Medicine. Web services and user registration: not available.

  5. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    General information: database name Arabidopsis Phenome Database (BioResource Center; contact: Hiroshi Masuya). Database classification: Plant databases - Arabidopsis thaliana. Organism taxonomy: Arabidopsis thaliana (Taxonomy ID: 3702). The Arabidopsis Phenome Database integrates two novel databases supporting the effective application of Arabidopsis thaliana phenome information: one provides useful materials for experimental research, and the other is the “Database of Curated Plant Phenome”.

  6. Healthcare databases in Europe for studying medicine use and safety during pregnancy

    OpenAIRE

    Charlton, Rachel A.; Neville, Amanda J.; Jordan, Sue; Pierini, Anna; Damase-Michel, Christine; Klungsøyr, Kari; Andersen, Anne-Marie Nybo; Hansen, Anne Vinkel; Gini, Rosa; Bos, Jens H.J.; Puccini, Aurora; Hurault-Delarue, Caroline; Brooks, Caroline J.; De Jong-van den Berg, Lolkje T.V.; de Vries, Corinne S.

    2014-01-01

    Purpose The aim of this study was to describe a number of electronic healthcare databases in Europe in terms of the population covered, the source of the data captured and the availability of data on key variables required for evaluating medicine use and medicine safety during pregnancy. Methods A sample of electronic healthcare databases that captured pregnancies and prescription data was selected on the basis of contacts within the EUROCAT network. For each participating database, a data...

  7. Modeling the Pan-Arctic terrestrial and atmospheric water cycle. Final report; FINAL

    International Nuclear Information System (INIS)

    Gutowski, W.J. Jr.

    2001-01-01

    This report describes results of DOE grant DE-FG02-96ER61473 to Iowa State University (ISU). Work on this grant was performed at Iowa State University and at the University of New Hampshire in collaboration with Dr. Charles Vorosmarty and fellow scientists at the University of New Hampshire's (UNH's) Institute for the Study of the Earth, Oceans, and Space, a subcontractor to the project. Research performed for the project included development, calibration and validation of a regional climate model for the pan-Arctic, modeling river networks, extensive hydrologic database development, and analyses of the water cycle, based in part on the assembled databases and models. Details appear in publications produced from the grant

  8. IAEA/NDS requirements related to database software

    International Nuclear Information System (INIS)

    Pronyaev, V.; Zerkin, V.

    2001-01-01

    Full text: The Nuclear Data Section of the IAEA disseminates data to the NDS users through Internet or on CD-ROMs and diskettes. OSU Web-server on DEC Alpha with Open VMS and Oracle/DEC DBMS provides via CGI scripts and FORTRAN retrieval programs access to the main nuclear databases supported by the networks of Nuclear Reactions Data Centres and Nuclear Structure and Decay Data Centres (CINDA, EXFOR, ENDF, NSR, ENSDF). For Web-access to data from other libraries and files, hyper-links to the files stored in ASCII text or other formats are used. Databases on CD-ROM are usually provided with some retrieval system. They are distributed in the run-time mode and comply with all license requirements for software used in their development. Although major development work is done now at the PC with MS-Windows and Linux, NDS may not at present, due to some institutional conditions, use these platforms for organization of the Web access to the data. Starting at the end of 1999, the NDS, in co-operation with other data centers, began to work out the strategy of migration of the main network nuclear databases onto platforms other than DEC Alpha/Open VMS/DBMS. Because the different co-operating centers have their own preferences for hardware and software, the requirement to provide maximum platform independence for nuclear databases is the most important and desirable feature. This requirement determined some standards for the nuclear database software development. Taking into account the present state and future development, these standards can be formulated as follows: 1. All numerical data (experimental, evaluated, recommended values and their uncertainties) prepared for inclusion in the IAEA/NDS nuclear database should be submitted in the form of ASCII text files and will be kept at NDS as a master file. 2. Databases with complex structure should be submitted in the form of files with standard SQL statements describing all their components. All extensions of standard SQL

  9. How Artificial Intelligence Can Improve Our Understanding of the Genes Associated with Endometriosis: Natural Language Processing of the PubMed Database.

    Science.gov (United States)

    Bouaziz, J; Mashiach, R; Cohen, S; Kedem, A; Baron, A; Zajicek, M; Feldman, I; Seidman, D; Soriano, D

    2018-01-01

    Endometriosis is a disease characterized by the development of endometrial tissue outside the uterus, but its cause remains largely unknown. Numerous genes have been studied and proposed to help explain its pathogenesis. However, the large number of these candidate genes has made functional validation through experimental methodologies nearly impossible. Computational methods could provide a useful alternative for prioritizing those most likely to be susceptibility genes. Using artificial intelligence applied to text mining, this study analyzed the genes involved in the pathogenesis, development, and progression of endometriosis. The data extraction by text mining of the endometriosis-related genes in the PubMed database was based on natural language processing, and the data were filtered to remove false positives. Using data from the text mining and gene network information as input for the web-based tool, 15,207 endometriosis-related genes were ranked according to their score in the database. Characterization of the filtered gene set through gene ontology, pathway, and network analysis provided information about the numerous mechanisms hypothesized to be responsible for the establishment of ectopic endometrial tissue, as well as the migration, implantation, survival, and proliferation of ectopic endometrial cells. Finally, the human genome was scanned through various databases using filtered genes as a seed to determine novel genes that might also be involved in the pathogenesis of endometriosis but which have not yet been characterized. These genes could be promising candidates to serve as useful diagnostic biomarkers and therapeutic targets in the management of endometriosis.
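
    A minimal flavour of the PubMed retrieval step can be given with Biopython's Entrez utilities; the search term, e-mail address and crude gene-symbol regex below are assumptions for the sketch, and the paper's actual NLP pipeline and false-positive filtering are far more elaborate.

```python
# Sketch of fetching PubMed abstracts and counting candidate gene-symbol mentions.
# The query, e-mail and regex are placeholders; the real pipeline uses full NLP.
import re
from collections import Counter
from Bio import Entrez

Entrez.email = "you@example.org"                      # required by NCBI; placeholder

search = Entrez.read(Entrez.esearch(db="pubmed", term="endometriosis AND gene", retmax=20))
ids = search["IdList"]

handle = Entrez.efetch(db="pubmed", id=",".join(ids), rettype="abstract", retmode="text")
text = handle.read()

# Very crude "gene symbol" pattern (2-6 upper-case letters/digits); real NLP is needed
# to remove false positives such as country codes or units.
candidates = Counter(re.findall(r"\b[A-Z][A-Z0-9]{1,5}\b", text))
print(candidates.most_common(10))
```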

  10. Persistent storage of non-event data in the CMS databases

    International Nuclear Information System (INIS)

    De Gruttola, M; Di Guida, S; Innocente, V; Schlatter, D; Futyan, D; Glege, F; Paolucci, P; Govi, G; Picca, P; Pierro, A; Xie, Z

    2010-01-01

    In the CMS experiment, the non-event data needed to set up the detector, or produced by it, and needed to calibrate the physical responses of the detector itself are stored in ORACLE databases. The large amount of data to be stored, the number of clients involved and the performance requirements make the database system an essential service for the experiment to run. This note describes the CMS condition database architecture, the data-flow and PopCon, the tool built in order to populate the offline databases. Finally, the first experience obtained during the 2008 and 2009 cosmic data taking is presented.

  11. A Database for Decision-Making in Training and Distributed Learning Technology

    National Research Council Canada - National Science Library

    Stouffer, Virginia

    1998-01-01

    A framework for incorporating data about distributed learning courseware into the existing training database was devised and a plan for a national electronic courseware redistribution network was recommended.

  12. An Aerosol Extinction-to-Backscatter Ratio Database Derived from the NASA Micro-Pulse Lidar Network: Applications for Space-based Lidar Observations

    Science.gov (United States)

    Welton, Ellsworth J.; Campbell, James R.; Spinhime, James D.; Berkoff, Timothy A.; Holben, Brent; Tsay, Si-Chee; Bucholtz, Anthony

    2004-01-01

    Backscatter lidar signals are a function of both backscatter and extinction. Hence, these lidar observations alone cannot separate the two quantities. The aerosol extinction-to-backscatter ratio, S, is the key parameter required to accurately retrieve extinction and optical depth from backscatter lidar observations of aerosol layers. S is commonly defined as 4*pi divided by the product of the single scatter albedo and the phase function at 180-degree scattering angle. Values of S for different aerosol types are not well known, and are even more difficult to determine when aerosols become mixed. Here we present a new lidar-sunphotometer S database derived from observations of the NASA Micro-Pulse Lidar Network (MPLNET). MPLNET is a growing worldwide network of eye-safe backscatter lidars co-located with sunphotometers in the NASA Aerosol Robotic Network (AERONET). Values of S for different aerosol species and geographic regions will be presented. A framework for constructing an S look-up table will be shown. Look-up tables of S are needed to calculate aerosol extinction and optical depth from space-based lidar observations in the absence of co-located AOD data. Applications for using the new S look-up table to reprocess aerosol products from NASA's Geoscience Laser Altimeter System (GLAS) will be discussed.
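
    Rendering the definition quoted above, with omega_0 the single-scatter albedo and P(180 deg) the phase function at the 180-degree scattering angle (beta and alpha denote the aerosol backscatter and extinction coefficients and tau the optical depth; this is standard notation rather than the paper's own):

```latex
% S as defined in the abstract, and its use to convert backscatter to extinction and optical depth.
S \;=\; \frac{4\pi}{\omega_0\, P(180^\circ)},
\qquad
\alpha(z) \;=\; S\,\beta(z),
\qquad
\tau \;=\; \int_0^{z_{\mathrm{top}}} \alpha(z)\,\mathrm{d}z .
```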

  13. Inferring pregnancy episodes and outcomes within a network of observational databases.

    Directory of Open Access Journals (Sweden)

    Amy Matcho

    Full Text Available Administrative claims and electronic health records are valuable resources for evaluating pharmaceutical effects during pregnancy. However, direct measures of gestational age are generally not available. Establishing a reliable approach to infer the duration and outcome of a pregnancy could improve pharmacovigilance activities. We developed and applied an algorithm to define pregnancy episodes in four observational databases: three US-based claims databases: Truven MarketScan® Commercial Claims and Encounters (CCAE), Truven MarketScan® Multi-state Medicaid (MDCD), and the Optum ClinFormatics® (Optum) database and one non-US database, the United Kingdom (UK) based Clinical Practice Research Datalink (CPRD). Pregnancy outcomes were classified as live births, stillbirths, abortions and ectopic pregnancies. Start dates were estimated using a derived hierarchy of available pregnancy markers, including records such as last menstrual period and nuchal ultrasound dates. Validation included clinical adjudication of 700 electronic Optum and CPRD pregnancy episode profiles to assess the operating characteristics of the algorithm, and a comparison of the algorithm's Optum pregnancy start estimates to starts based on dates of assisted conception procedures. Distributions of pregnancy outcome types were similar across all four data sources and pregnancy episode lengths found were as expected for all outcomes, excepting term lengths in episodes that used amenorrhea and urine pregnancy tests for start estimation. Validation survey results found highest agreement between reviewer chosen and algorithm operating characteristics for questions assessing pregnancy status and accuracy of outcome category with 99-100% agreement for Optum and CPRD. Outcome date agreement within seven days in either direction ranged from 95-100%, while start date agreement within seven days in either direction ranged from 90-97%. In Optum validation sensitivity analysis, a total of 73% of

  14. Inferring pregnancy episodes and outcomes within a network of observational databases

    Science.gov (United States)

    Ryan, Patrick; Fife, Daniel; Gifkins, Dina; Knoll, Chris; Friedman, Andrew

    2018-01-01

    Administrative claims and electronic health records are valuable resources for evaluating pharmaceutical effects during pregnancy. However, direct measures of gestational age are generally not available. Establishing a reliable approach to infer the duration and outcome of a pregnancy could improve pharmacovigilance activities. We developed and applied an algorithm to define pregnancy episodes in four observational databases: three US-based claims databases: Truven MarketScan® Commercial Claims and Encounters (CCAE), Truven MarketScan® Multi-state Medicaid (MDCD), and the Optum ClinFormatics® (Optum) database and one non-US database, the United Kingdom (UK) based Clinical Practice Research Datalink (CPRD). Pregnancy outcomes were classified as live births, stillbirths, abortions and ectopic pregnancies. Start dates were estimated using a derived hierarchy of available pregnancy markers, including records such as last menstrual period and nuchal ultrasound dates. Validation included clinical adjudication of 700 electronic Optum and CPRD pregnancy episode profiles to assess the operating characteristics of the algorithm, and a comparison of the algorithm’s Optum pregnancy start estimates to starts based on dates of assisted conception procedures. Distributions of pregnancy outcome types were similar across all four data sources and pregnancy episode lengths found were as expected for all outcomes, excepting term lengths in episodes that used amenorrhea and urine pregnancy tests for start estimation. Validation survey results found highest agreement between reviewer chosen and algorithm operating characteristics for questions assessing pregnancy status and accuracy of outcome category with 99–100% agreement for Optum and CPRD. Outcome date agreement within seven days in either direction ranged from 95–100%, while start date agreement within seven days in either direction ranged from 90–97%. In Optum validation sensitivity analysis, a total of 73% of
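
    The start-date logic relies on a derived hierarchy of pregnancy markers, taking the most reliable marker available for each episode. A toy sketch of such a fall-back hierarchy follows; the marker names, priority order and gestational offsets are illustrative assumptions, not the published algorithm's actual hierarchy.

```python
# Toy pregnancy start-date estimation: walk a priority-ordered hierarchy of markers
# and use the first one recorded for the episode. Offsets and order are illustrative only.
from datetime import date, timedelta

# (marker name, days to subtract from the marker date to get the estimated start)
MARKER_HIERARCHY = [
    ("last_menstrual_period", 0),
    ("nuchal_ultrasound", 12 * 7),      # scan often done around gestational week 12
    ("urine_pregnancy_test", 6 * 7),
    ("amenorrhea_record", 8 * 7),
]

def estimate_start(episode_markers):
    """episode_markers: dict mapping marker name -> recorded date."""
    for marker, offset_days in MARKER_HIERARCHY:
        if marker in episode_markers:
            return episode_markers[marker] - timedelta(days=offset_days), marker
    return None, None

episode = {"nuchal_ultrasound": date(2016, 5, 20), "urine_pregnancy_test": date(2016, 4, 1)}
start, used = estimate_start(episode)
print(start, "estimated from", used)
```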

  15. Construction of a bibliographic information database for the Nuclear Science and Engineering (IX)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Whan; Oh, Jeong Hun; Choi, Kwang; Keum, Jong Yong [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-12-01

    The major goal of this project is to construct a database of nuclear science and engineering information materials, to support the R and D activities of users in this field, and finally to support the KRISTAL (Korea Research Information for Science and Technology Access Line) DB through KREONet (Korea Research Environment Open Network), one of the five national information networks. The contents of this project are as follows: 1) Materials selection and collection, 2) Indexing and abstract preparation, 3) Data input and transmission, and 4) Document delivery service. In this seventh year, a total of 40,000 input records were added to the existing SATURN DB. These records cover articles from nuclear-related core journals, proceedings, seminars, research reports, etc. Using the Web, users can both retrieve the information they need and receive the materials they request online. This gives users the chance not only to promote the effectiveness of R and D activities, but also to avoid duplicated research work. 1 fig., 1 tab. (Author)

  16. Construction of a bibliographic information database for the nuclear science and engineering (VIII)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Whan; Kim, Soon Ja; Kim, Young Min; Oh, Jeong Hun; Choi, Kwang [Korea Atomic Energy Research Institute, Taejon (Korea)

    1998-12-01

    The major goal of this project is to construct a database of nuclear science and engineering information materials, to support the R and D activities of users in this field, and finally to support the KRISTAL (Korea Research Information for Science and Technology Access Line) DB through KREONet (Korea Research Environment Open Network), one of the five national information networks. The contents of this project are as follows: 1) Materials selection and collection, 2) Indexing and abstract preparation, 3) Data input and transmission, and 4) Document delivery service. In this seventh year, a total of 35,000 input records were added to the existing SATURN DB using the input system. These records cover articles from nuclear-related core journals, proceedings, seminars, research reports, etc. Using the Web, users can both retrieve the information they need and receive the materials they request online. This gives users the chance not only to promote the effectiveness of R and D activities, but also to avoid duplicated research work. 1 tab. (Author)

  17. Network Analysis Tools: from biological networks to clusters and pathways.

    Science.gov (United States)

    Brohée, Sylvain; Faust, Karoline; Lima-Mendez, Gipsi; Vanderstocken, Gilles; van Helden, Jacques

    2008-01-01

    Network Analysis Tools (NeAT) is a suite of computer tools that integrate various algorithms for the analysis of biological networks: comparison between graphs, between clusters, or between graphs and clusters; network randomization; analysis of degree distribution; network-based clustering and path finding. The tools are interconnected to enable a stepwise analysis of the network through a complete analytical workflow. In this protocol, we present a typical case of utilization, where the tasks above are combined to decipher a protein-protein interaction network retrieved from the STRING database. The results returned by NeAT are typically subnetworks, networks enriched with additional information (i.e., clusters or paths) or tables displaying statistics. Typical networks comprising several thousands of nodes and arcs can be analyzed within a few minutes. The complete protocol can be read and executed in approximately 1 h.

  18. Disbiome database: linking the microbiome to disease.

    Science.gov (United States)

    Janssens, Yorick; Nielandt, Joachim; Bronselaer, Antoon; Debunne, Nathan; Verbeke, Frederick; Wynendaele, Evelien; Van Immerseel, Filip; Vandewynckel, Yves-Paul; De Tré, Guy; De Spiegeleer, Bart

    2018-06-04

    Recent research has provided fascinating indications and evidence that host health is linked to its microbial inhabitants. Due to the development of high-throughput sequencing technologies, more and more data covering microbial composition changes in different disease types are emerging. However, this information is dispersed over a wide variety of medical and biomedical disciplines. Disbiome is a database which collects and presents published microbiota-disease information in a standardized way. The diseases are classified using the MedDRA classification system and the micro-organisms are linked to their NCBI and SILVA taxonomy. Finally, each study included in the Disbiome database is assessed for its reporting quality using a standardized questionnaire. Disbiome is the first database giving a clear, concise and up-to-date overview of microbial composition differences in diseases, together with relevant information from the published studies. The strength of this database lies in the combination of references to other databases, which enables both specific and diverse search strategies within the Disbiome database, and human annotation, which ensures a simple and structured presentation of the available data.

  19. Secure Distributed Databases Using Cryptography

    OpenAIRE

    Ion IVAN; Cristian TOMA

    2006-01-01

    Computational encryption is used intensively by database management systems to ensure the privacy and integrity of information that is physically stored in files. Also, the information is sent over the network and is replicated on different distributed systems. It is proved that a satisfying level of security is achieved if the rows and columns of tables are encrypted independently of the table or computer that holds the data. Also, it is very important that the SQL - Structured Que...
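
    The passage above describes encrypting table rows and columns independently of where the data are stored. A minimal sketch of that idea, assuming the Python cryptography package and made-up column names and values, is given below; it is illustrative only and is not the scheme proposed in the paper.

```python
# Sketch: encrypt each column with its own key before rows are stored or
# replicated, so a node holding one key can decrypt only that column.
from cryptography.fernet import Fernet

columns = ["name", "salary"]
keys = {col: Fernet.generate_key() for col in columns}    # one key per column
ciphers = {col: Fernet(keys[col]) for col in columns}

row = {"name": "Alice", "salary": "52000"}

# Encrypt column values independently of the table/computer holding them.
encrypted_row = {col: ciphers[col].encrypt(row[col].encode()) for col in columns}
print(encrypted_row)

# Only the holder of the `salary` key can recover that column.
print(ciphers["salary"].decrypt(encrypted_row["salary"]).decode())
```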

  20. An information integration system for structured documents, Web, and databases

    OpenAIRE

    Morishima, Atsuyuki

    1998-01-01

    Rapid advance in computer network technology has changed the style of computer utilization. Distributed computing resources over world-wide computer networks are available from our local computers. They include powerful computers and a variety of information sources. This change is raising more advanced requirements. Integration of distributed information sources is one of such requirements. In addition to conventional databases, structured documents have been widely used, and have increasing...

  1. Aircraft Emission Inventories Projected in Year 2015 for a High Speed Civil Transport (HSCT) Universal Airline Network. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Baughcum, S.L.; Henderson, S.C.

    1995-07-01

    This report describes the development of a three-dimensional database of aircraft fuel burn and emissions (fuel burned, NOx, CO, and hydrocarbons) from projected fleets of high speed civil transports (HSCTs) on a universal airline network. Inventories for 500 and 1000 HSCT fleets, as well as the concurrent subsonic fleets, were calculated. The objective of this work was to evaluate the changes in geographical distribution of the HSCT emissions as the fleet size grew from 500 to 1000 HSCTs. For this work, a new expanded HSCT network was used and flights were projected using a market penetration analysis rather than assuming equal penetration as was done in the earlier studies. Emission inventories on this network were calculated for both Mach 2.0 and Mach 2.4 HSCT fleets with NOx cruise emission indices of approximately 5 and 15 grams NOx/kg fuel. These emissions inventories are available for use by atmospheric scientists conducting the Atmospheric Effects of Stratospheric Aircraft (AESA) modeling studies. Fuel burned and emissions of nitrogen oxides (NOx as NO2), carbon monoxide, and hydrocarbons have been calculated on a 1 degree latitude x 1 degree longitude x 1 kilometer altitude grid and delivered to NASA as electronic files.
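
    To make the gridding concrete, here is a minimal sketch of how flight-segment fuel burn and NOx might be accumulated onto a 1 degree x 1 degree x 1 km grid with numpy; the segment data and the emission index are invented, and this is not the tool used in the report.

```python
# Sketch: accumulate fuel burn and NOx from flight segments into a
# 1 deg latitude x 1 deg longitude x 1 km altitude grid (illustrative data).
import numpy as np

fuel = np.zeros((180, 360, 25))   # lat bins, lon bins, 0-25 km altitude bands
nox = np.zeros_like(fuel)

# (lat, lon, altitude_km, fuel_burned_kg) for a few fictitious segments
segments = [(45.2, -120.7, 18.3, 900.0), (46.1, -118.9, 18.4, 850.0)]
EI_NOX = 5.0  # grams NOx (as NO2) per kg fuel, an assumed cruise value

for lat, lon, alt_km, fuel_kg in segments:
    i = int(lat + 90.0)    # 1-degree latitude bin
    j = int(lon + 180.0)   # 1-degree longitude bin
    k = int(alt_km)        # 1-km altitude bin
    fuel[i, j, k] += fuel_kg
    nox[i, j, k] += fuel_kg * EI_NOX / 1000.0   # kg NOx

print("total fuel (kg):", fuel.sum(), "total NOx (kg):", nox.sum())
```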

  2. Investigation on construction of the database system for research and development of the global environment industry technology; Chikyu kankyo sangyo gijutsu kenkyu kaihatsuyo database system no kochiku ni kansuru chosa

    Energy Technology Data Exchange (ETDEWEB)

    1993-03-01

    This paper studies a concrete plan to introduce a new database system at the Research Institute of Innovative Technology for the Earth (RITE), which is necessary to promote industrial technology development contributing to the solution of the global environmental problem. Specifications for system introduction cover maker selection, the operation system, a detailed schedule for introduction, etc. The RITE in-house database has problems with its operation system and its maintenance cost, and its construction cost tends to be high in comparison with its utilization; further study is therefore being made of its introduction. Information provided by the in-house database is only that owned by the organization; information from outside the organization is provided by the external database. The information is registered and selected by the registrant. The access network will initially be a personal computer network and is planned to move to the Internet in the future. For practical construction of the system, it is necessary to clarify users' detailed needs for the system design and to adjust functions between hardware systems. 32 figs., 9 tabs.

  3. Armada: a reference model for an evolving database system

    NARCIS (Netherlands)

    F.E. Groffen (Fabian); M.L. Kersten (Martin); S. Manegold (Stefan)

    2006-01-01

    The current database deployment palette ranges from networked sensor-based devices to large data/compute Grids. Both extremes present common challenges for distributed DBMS technology. The local storage per device/node/site is severely limited compared to the total data volume being

  4. PAMDB: a comprehensive Pseudomonas aeruginosa metabolome database.

    Science.gov (United States)

    Huang, Weiliang; Brewer, Luke K; Jones, Jace W; Nguyen, Angela T; Marcu, Ana; Wishart, David S; Oglesby-Sherrouse, Amanda G; Kane, Maureen A; Wilks, Angela

    2018-01-04

    The Pseudomonas aeruginosa Metabolome Database (PAMDB, http://pseudomonas.umaryland.edu) is a searchable, richly annotated metabolite database specific to P. aeruginosa. P. aeruginosa is a soil organism and significant opportunistic pathogen that adapts to its environment through a versatile energy metabolism network. Furthermore, P. aeruginosa is a model organism for the study of biofilm formation, quorum sensing, and bioremediation processes, each of which is dependent on unique pathways and metabolites. The PAMDB is modelled on the Escherichia coli (ECMDB), yeast (YMDB) and human (HMDB) metabolome databases and contains >4370 metabolites and 938 pathways with links to over 1260 genes and proteins. The database information was compiled from electronic databases, journal articles and mass spectrometry (MS) metabolomic data obtained in our laboratories. For each metabolite entered, we provide detailed compound descriptions, names and synonyms, structural and physicochemical information, nuclear magnetic resonance (NMR) and MS spectra, enzyme and pathway information, as well as gene and protein sequences. The database allows extensive searching via chemical names, structure and molecular weight, together with gene, protein and pathway relationships. The PAMDB and its future iterations will provide a valuable resource to biologists, natural product chemists and clinicians in identifying active compounds, potential biomarkers and clinical diagnostics. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. The CMS ECAL database services for detector control and monitoring

    International Nuclear Information System (INIS)

    Arcidiacono, Roberta; Marone, Matteo; Badgett, William

    2010-01-01

    In this paper we give a description of the database services for the control and monitoring of the electromagnetic calorimeter of the CMS experiment at the LHC. After a general description of the software infrastructure, we present the organization of the tables in the database, which has been designed to simplify the development of software interfaces. This feature is achieved by including in the database a description of each relevant table. We also give some estimates of the final size and performance of the system.

  6. Accuracy of Veterans Affairs Databases for Diagnoses of Chronic Diseases

    OpenAIRE

    Singh, Jasvinder A.

    2009-01-01

    Introduction Epidemiologic studies usually use database diagnoses or patient self-report to identify disease cohorts, but no previous research has examined the extent to which self-report of chronic disease agrees with database diagnoses in a Veterans Affairs (VA) health care setting. Methods All veterans who had a medical care visit from October 1, 1996, through May 31, 1998, at any of the Veterans Integrated Service Network 13 facilities were surveyed about physician diagnosis of chronic ob...

  7. Multiple k Nearest Neighbor Query Processing in Spatial Network Databases

    DEFF Research Database (Denmark)

    Xuegang, Huang; Jensen, Christian Søndergaard; Saltenis, Simonas

    2006-01-01

    This paper concerns the efficient processing of multiple k nearest neighbor queries in a road-network setting. The assumed setting covers a range of scenarios such as the one where a large population of mobile service users that are constrained to a road network issue nearest-neighbor queries for points of interest that are accessible via the road network. Given multiple k nearest neighbor queries, the paper proposes progressive techniques that selectively cache query results in main memory and subsequently reuse these for query processing. The paper initially proposes techniques for the case where an upper bound on k is known a priori and then extends the techniques to the case where this is not so. Based on empirical studies with real-world data, the paper offers insight into the circumstances under which the different proposed techniques can be used with advantage for multiple k nearest...
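
    The caching idea can be illustrated with a small sketch: compute network distances with Dijkstra and keep completed kNN results in an in-memory dictionary so that repeated queries are reused. This is only a simplified stand-in under assumed data, not the progressive techniques proposed in the paper.

```python
# Sketch: k-nearest-neighbor queries on a road network with result caching.
import networkx as nx

road = nx.Graph()
road.add_weighted_edges_from([
    ("a", "b", 2.0), ("b", "c", 1.5), ("c", "d", 3.0), ("a", "d", 6.0),
])
points_of_interest = {"c", "d"}   # e.g. facilities reachable via the network
cache = {}                        # (query_node, k) -> cached result

def knn(query_node, k):
    if (query_node, k) in cache:
        return cache[(query_node, k)]
    dist = nx.single_source_dijkstra_path_length(road, query_node, weight="weight")
    result = sorted((d, n) for n, d in dist.items() if n in points_of_interest)[:k]
    cache[(query_node, k)] = result
    return result

print(knn("a", 1))   # computed with Dijkstra
print(knn("a", 1))   # answered from the in-memory cache
```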

  8. Toward the automated generation of genome-scale metabolic networks in the SEED.

    Science.gov (United States)

    DeJongh, Matthew; Formsma, Kevin; Boillot, Paul; Gould, John; Rycenga, Matthew; Best, Aaron

    2007-04-26

    Current methods for the automated generation of genome-scale metabolic networks focus on genome annotation and preliminary biochemical reaction network assembly, but do not adequately address the process of identifying and filling gaps in the reaction network, and verifying that the network is suitable for systems level analysis. Thus, current methods are only sufficient for generating draft-quality networks, and refinement of the reaction network is still largely a manual, labor-intensive process. We have developed a method for generating genome-scale metabolic networks that produces substantially complete reaction networks, suitable for systems level analysis. Our method partitions the reaction space of central and intermediary metabolism into discrete, interconnected components that can be assembled and verified in isolation from each other, and then integrated and verified at the level of their interconnectivity. We have developed a database of components that are common across organisms, and have created tools for automatically assembling appropriate components for a particular organism based on the metabolic pathways encoded in the organism's genome. This focuses manual efforts on that portion of an organism's metabolism that is not yet represented in the database. We have demonstrated the efficacy of our method by reverse-engineering and automatically regenerating the reaction network from a published genome-scale metabolic model for Staphylococcus aureus. Additionally, we have verified that our method capitalizes on the database of common reaction network components created for S. aureus, by using these components to generate substantially complete reconstructions of the reaction networks from three other published metabolic models (Escherichia coli, Helicobacter pylori, and Lactococcus lactis). We have implemented our tools and database within the SEED, an open-source software environment for comparative genome annotation and analysis. Our method sets the

  9. Toward the automated generation of genome-scale metabolic networks in the SEED

    Directory of Open Access Journals (Sweden)

    Gould John

    2007-04-01

    Full Text Available Abstract Background Current methods for the automated generation of genome-scale metabolic networks focus on genome annotation and preliminary biochemical reaction network assembly, but do not adequately address the process of identifying and filling gaps in the reaction network, and verifying that the network is suitable for systems level analysis. Thus, current methods are only sufficient for generating draft-quality networks, and refinement of the reaction network is still largely a manual, labor-intensive process. Results We have developed a method for generating genome-scale metabolic networks that produces substantially complete reaction networks, suitable for systems level analysis. Our method partitions the reaction space of central and intermediary metabolism into discrete, interconnected components that can be assembled and verified in isolation from each other, and then integrated and verified at the level of their interconnectivity. We have developed a database of components that are common across organisms, and have created tools for automatically assembling appropriate components for a particular organism based on the metabolic pathways encoded in the organism's genome. This focuses manual efforts on that portion of an organism's metabolism that is not yet represented in the database. We have demonstrated the efficacy of our method by reverse-engineering and automatically regenerating the reaction network from a published genome-scale metabolic model for Staphylococcus aureus. Additionally, we have verified that our method capitalizes on the database of common reaction network components created for S. aureus, by using these components to generate substantially complete reconstructions of the reaction networks from three other published metabolic models (Escherichia coli, Helicobacter pylori, and Lactococcus lactis. We have implemented our tools and database within the SEED, an open-source software environment for comparative

  10. Optimal neural networks for protein-structure prediction

    International Nuclear Information System (INIS)

    Head-Gordon, T.; Stillinger, F.H.

    1993-01-01

    The successful application of neural-network algorithms for prediction of protein structure is stymied by three problem areas: the sparsity of the database of known protein structures, poorly devised network architectures which make the input-output mapping opaque, and a global optimization problem in the multiple-minima space of the network variables. We present a simplified polypeptide model residing in two dimensions with only two amino-acid types, A and B, which allows the determination of the global energy structure for all possible sequences of pentamer, hexamer, and heptamer lengths. This model simplicity allows us to compile a complete structural database and to devise neural networks that reproduce the tertiary structure of all sequences with absolute accuracy and with the smallest number of network variables. These optimal networks reveal that the three problem areas are convoluted, but that thoughtful network designs can actually deconvolute these detrimental traits to provide network algorithms that genuinely impact on the ability of the network to generalize or learn the desired mappings. Furthermore, the two-dimensional polypeptide model shows sufficient chemical complexity so that transfer of neural-network technology to more realistic three-dimensional proteins is evident

  11. The CSB Incident Screening Database: description, summary statistics and uses.

    Science.gov (United States)

    Gomez, Manuel R; Casper, Susan; Smith, E Allen

    2008-11-15

    This paper briefly describes the Chemical Incident Screening Database currently used by the CSB to identify and evaluate chemical incidents for possible investigations, and summarizes descriptive statistics from this database that can potentially help to estimate the number, character, and consequences of chemical incidents in the US. The report compares some of the information in the CSB database to roughly similar information available from databases operated by EPA and the Agency for Toxic Substances and Disease Registry (ATSDR), and explores the possible implications of these comparisons with regard to the dimension of the chemical incident problem. Finally, the report explores in a preliminary way whether a system modeled after the existing CSB screening database could be developed to serve as a national surveillance tool for chemical incidents.

  12. Analyzing GAIAN Database (GaianDB) on a Tactical Network

    Science.gov (United States)

    2015-11-30

    databases, and other files, and exposes them as 1 unified structured query language ( SQL )-compliant data source. This “store locally query anywhere...UDP server that could communicate directly with the CSRs via the CSR’s serial port. However, GAIAN has over 800,000 lines of source code. It...management, by which all would have to be modified to communicate with our server and maintain utility. Not only did we quickly realize that this

  13. An approach for optimally extending mathematical models of signaling networks using omics data.

    Science.gov (United States)

    Bianconi, Fortunato; Patiti, Federico; Baldelli, Elisa; Crino, Lucio; Valigi, Paolo

    2015-01-01

    Mathematical modeling is a key process in Systems Biology, and the use of computational tools such as Cytoscape for omics data processing needs to be integrated into the modeling activity. In this paper we propose a new methodology for modeling signaling networks by combining ordinary differential equation models and a gene recommender system, GeneMANIA. We started from existing models that are stored in the BioModels database, and we generated a query to use as input for the GeneMANIA algorithm. The output of the recommender system was then mapped back to kinetic reactions that were finally added to the starting model. We applied the proposed methodology to the EGFR-IGF1R signal transduction network, which plays an important role in translational oncology and cancer therapy of non-small cell lung cancer.
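
    A toy version of the modeling step can clarify what adding a reaction to an ODE model means in practice: below, a small mass-action model is extended with one extra term (standing in for a reaction suggested by the recommender step) and integrated with scipy. The species names and rate constants are invented and are not taken from the EGFR-IGF1R model.

```python
# Toy sketch: a mass-action ODE model extended with one additional reaction.
import numpy as np
from scipy.integrate import odeint

k_act, k_deact, k_new = 0.5, 0.1, 0.05   # illustrative rate constants (1/s)

def model(y, t):
    R, S_active, X = y
    dR = -k_act * R
    dS = k_act * R - k_deact * S_active - k_new * S_active * X  # extended term
    dX = -k_new * S_active * X       # the newly added species
    return [dR, dS, dX]

t = np.linspace(0.0, 50.0, 200)
y0 = [1.0, 0.0, 0.5]                 # initial concentrations (arbitrary units)
trajectory = odeint(model, y0, t)
print("final state:", trajectory[-1])
```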

  14. A hierarchical spatial framework and database for the national river fish habitat condition assessment

    Science.gov (United States)

    Wang, L.; Infante, D.; Esselman, P.; Cooper, A.; Wu, D.; Taylor, W.; Beard, D.; Whelan, G.; Ostroff, A.

    2011-01-01

    Fisheries management programs, such as the National Fish Habitat Action Plan (NFHAP), urgently need a nationwide spatial framework and database for health assessment and policy development to protect and improve riverine systems. To meet this need, we developed a spatial framework and database using the National Hydrography Dataset Plus (1:100,000 scale; http://www.horizon-systems.com/nhdplus). This framework uses interconfluence river reaches and their local and network catchments as fundamental spatial river units and a series of ecological and political spatial descriptors as hierarchy structures to allow users to extract or analyze information at spatial scales that they define. This database consists of variables describing channel characteristics, network position/connectivity, climate, elevation, gradient, and size. It contains a series of catchment natural and human-induced factors that are known to influence river characteristics. Our framework and database assembles all river reaches and their descriptors in one place for the first time for the conterminous United States. This framework and database provides users with the capability of adding data, conducting analyses, developing management scenarios and regulations, and tracking management progress at a variety of spatial scales. This database provides the essential data needed for achieving the objectives of NFHAP and other management programs. The downloadable beta version database is available at http://ec2-184-73-40-15.compute-1.amazonaws.com/nfhap/main/.

  15. The network researchers' network

    DEFF Research Database (Denmark)

    Henneberg, Stephan C.; Jiang, Zhizhong; Naudé, Peter

    2009-01-01

    The Industrial Marketing and Purchasing (IMP) Group is a network of academic researchers working in the area of business-to-business marketing. The group meets every year to discuss and exchange ideas, with a conference having been held every year since 1984 (there was no meeting in 1987). In this paper, based upon the papers presented at the 22 conferences held to date, we undertake a Social Network Analysis in order to examine the degree of co-publishing that has taken place between this group of researchers. We identify the different components in this database, and examine the large main...

  16. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing and policy planners with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles from the Scopus database from Elsevier covering the time period 2004-2013. We extracted the author rank in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.
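
    The basic construction behind such a study (building a co-authorship graph from author lists and ranking authors by output) can be sketched as follows; the article records here are invented, whereas a real analysis would read them from Scopus exports.

```python
# Sketch: build a co-authorship network from author lists and rank authors.
from itertools import combinations
import networkx as nx

articles = [
    ["Kim J", "Lee S", "Park H"],
    ["Kim J", "Park H"],
    ["Lee S", "Choi M"],
]

g = nx.Graph()
paper_counts = {}
for authors in articles:
    for a in authors:
        paper_counts[a] = paper_counts.get(a, 0) + 1
    for a, b in combinations(authors, 2):
        # edge weight = number of co-authored papers
        w = g[a][b]["weight"] + 1 if g.has_edge(a, b) else 1
        g.add_edge(a, b, weight=w)

rank = sorted(paper_counts.items(), key=lambda kv: kv[1], reverse=True)
print("author rank by papers:", rank)
print("co-author degree:", dict(g.degree()))
```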

  17. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A Completed Reference Database of Lung Nodules on CT Scans

    International Nuclear Information System (INIS)

    2011-01-01

    Purpose: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. Methods: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥3 mm," "nodule <3 mm," and "non-nodule ≥3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. Results: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked "nodule ≥3 mm" by at least one radiologist, of which 928 (34.7%) received such marks from all

  18. fraud detection in mobile communications networks using user

    African Journals Online (AJOL)

    DEPT OF AGRICULTURAL ENGINEERING

    testing the methods with data from real mobile communications networks. Keywords: Call ... System, Monitoring, Database. Fig. 3: Mobile communication detection tools ... receiver operating characteristic curve (ROC). ROC is a ...

  19. The EPIC nutrient database project (ENDB): a first attempt to standardize nutrient databases across the 10 European countries participating in the EPIC study

    DEFF Research Database (Denmark)

    Slimani, N.; Deharveng, G.; Unwin, I.

    2007-01-01

    because there is currently no European reference NDB available. Design: A large network involving national compilers, nutritionists and experts on food chemistry and computer science was set up for the 'EPIC Nutrient DataBase' (ENDB) project. A total of 550-1500 foods derived from about 37 000 standardized EPIC 24-h dietary recalls (24-HDRS) were matched as closely as possible to foods available in the 10 national NDBs. The resulting national data sets (NDS) were then successively documented, standardized and evaluated according to common guidelines and using a DataBase Management System...

  20. Meta-connectomics: human brain network and connectivity meta-analyses.

    Science.gov (United States)

    Crossley, N A; Fox, P T; Bullmore, E T

    2016-04-01

    Abnormal brain connectivity or network dysfunction has been suggested as a paradigm to understand several psychiatric disorders. We here review the use of novel meta-analytic approaches in neuroscience that go beyond a summary description of existing results by applying network analysis methods to previously published studies and/or publicly accessible databases. We define this strategy of combining connectivity with other brain characteristics as 'meta-connectomics'. For example, we show how network analysis of task-based neuroimaging studies has been used to infer functional co-activation from primary data on regional activations. This approach has been able to relate cognition to functional network topology, demonstrating that the brain is composed of cognitively specialized functional subnetworks or modules, linked by a rich club of cognitively generalized regions that mediate many inter-modular connections. Another major application of meta-connectomics has been efforts to link meta-analytic maps of disorder-related abnormalities or MRI 'lesions' to the complex topology of the normative connectome. This work has highlighted the general importance of network hubs as hotspots for concentration of cortical grey-matter deficits in schizophrenia, Alzheimer's disease and other disorders. Finally, we show how by incorporating cellular and transcriptional data on individual nodes with network models of the connectome, studies have begun to elucidate the microscopic mechanisms underpinning the macroscopic organization of whole-brain networks. We argue that meta-connectomics is an exciting field, providing robust and integrative insights into brain organization that will likely play an important future role in consolidating network models of psychiatric disorders.

  1. Compressing DNA sequence databases with coil

    Directory of Open Access Journals (Sweden)

    Hendy Michael D

    2008-05-01

    Full Text Available Abstract Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
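
    The contrast the authors draw between general-purpose compression and DNA-aware encodings can be illustrated with a much simpler scheme than coil's edit-tree coding: the sketch below compares zlib (the algorithm behind gzip) with a naive 2-bit-per-base packing on a synthetic sequence. It says nothing about coil's actual method.

```python
# Sketch: zlib vs. naive 2-bit packing on a synthetic (non-repetitive) sequence.
import random
import zlib

random.seed(0)
sequence = "".join(random.choice("ACGT") for _ in range(10000)).encode()

gzip_like_size = len(zlib.compress(sequence, 9))

code = {65: 0b00, 67: 0b01, 71: 0b10, 84: 0b11}   # byte values of A, C, G, T
packed = bytearray()
buf, nbits = 0, 0
for base in sequence:
    buf = (buf << 2) | code[base]
    nbits += 2
    if nbits == 8:                  # flush a full byte (4 bases)
        packed.append(buf)
        buf, nbits = 0, 0

print("raw bytes:", len(sequence))
print("zlib bytes:", gzip_like_size)
print("2-bit packed bytes:", len(packed))
```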

  2. Final report for the network authentication investigation and pilot.

    Energy Technology Data Exchange (ETDEWEB)

    Eldridge, John M.; Dautenhahn, Nathan; Miller, Marc M.; Wiener, Dallas J; Witzke, Edward L.

    2006-11-01

    New network based authentication mechanisms are beginning to be implemented in industry. This project investigated different authentication technologies to see if and how Sandia might benefit from them. It also investigated how these mechanisms can integrate with the Sandia Two-Factor Authentication Project. The results of these investigations and a network authentication path forward strategy are documented in this report.

  3. VTCdb: a gene co-expression database for the crop species Vitis vinifera (grapevine).

    Science.gov (United States)

    Wong, Darren C J; Sweetman, Crystal; Drew, Damian P; Ford, Christopher M

    2013-12-16

    Gene expression datasets in model plants such as Arabidopsis have contributed to our understanding of gene function and how a single underlying biological process can be governed by a diverse network of genes. The accumulation of publicly available microarray data encompassing a wide range of biological and environmental conditions has enabled the development of additional capabilities including gene co-expression analysis (GCA). GCA is based on the understanding that genes encoding proteins involved in similar and/or related biological processes may exhibit comparable expression patterns over a range of experimental conditions, developmental stages and tissues. We present an open access database for the investigation of gene co-expression networks within the cultivated grapevine, Vitis vinifera. The new gene co-expression database, VTCdb (http://vtcdb.adelaide.edu.au/Home.aspx), offers an online platform for transcriptional regulatory inference in the cultivated grapevine. Using condition-independent and condition-dependent approaches, grapevine co-expression networks were constructed using the latest publicly available microarray datasets from diverse experimental series, utilising the Affymetrix Vitis vinifera GeneChip (16 K) and the NimbleGen Grape Whole-genome microarray chip (29 K), thus making it possible to profile approximately 29,000 genes (95% of the predicted grapevine transcriptome). Applications available with the online platform include the use of gene names, probesets, modules or biological processes to query the co-expression networks, with the option to choose between Affymetrix or Nimblegen datasets and between multiple co-expression measures. Alternatively, the user can browse existing network modules using interactive network visualisation and analysis via CytoscapeWeb. To demonstrate the utility of the database, we present examples from three fundamental biological processes (berry development, photosynthesis and flavonoid biosynthesis
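
    The core of condition-independent gene co-expression analysis is a correlation computed over many conditions; a minimal sketch with a toy expression matrix is shown below. The expression values are invented, and Pearson correlation is used here only as one of the possible co-expression measures mentioned above.

```python
# Sketch: build a co-expression network from a toy expression matrix.
import numpy as np

genes = ["VvMYBA1", "VvUFGT", "VvPAL1", "VvRBCS"]
# rows = genes, columns = experimental conditions / tissues (invented values)
expr = np.array([
    [1.0, 2.1, 3.9, 8.0, 7.5],
    [0.9, 2.0, 4.2, 7.8, 7.9],
    [5.0, 4.8, 5.1, 4.9, 5.2],
    [9.0, 7.0, 5.0, 3.0, 1.0],
])

corr = np.corrcoef(expr)            # Pearson correlation between gene profiles
threshold = 0.9
edges = [(genes[i], genes[j], round(corr[i, j], 3))
         for i in range(len(genes)) for j in range(i + 1, len(genes))
         if corr[i, j] >= threshold]
print(edges)                        # strongly co-expressed gene pairs
```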

  4. IAEA Illicit Trafficking Database (ITDB)

    International Nuclear Information System (INIS)

    2010-01-01

    The IAEA Illicit Trafficking Database (ITDB) was established in 1995 as a unique network of points of contact connecting 100 states and several international organizations. Information is collected from official sources and supplemented by open-source reports. The 1994 General Conference resolution (GC(38)) intensified the activities through which the Agency is currently supporting Member States in this field. Member States were notified of the completed database in 1995 and invited to participate. The purpose of the ITDB is to facilitate the exchange of authoritative information among States on incidents of illicit trafficking and other related unauthorized activities involving nuclear and other radioactive materials; to collect, maintain and analyse information on such incidents with a view to identifying common threats, trends, and patterns, to use this information for internal planning and prioritisation, and to provide it to Member States; and to provide a reliable source of basic information on such incidents to the media, when appropriate

  5. Feasibility of a clearing house for improved cooperation between telemedicine networks delivering humanitarian services: acceptability to network coordinators

    Directory of Open Access Journals (Sweden)

    Richard Wootton

    2012-10-01

    Full Text Available Background: Telemedicine networks, which deliver humanitarian services, sometimes need to share expertise to find particular experts in other networks. It has been suggested that a mechanism for sharing expertise between networks (a ‘clearing house’) might be useful. Objective: To propose a mechanism for implementing the clearing house concept for sharing expertise, and to confirm its feasibility in terms of acceptability to the relevant networks. Design: We conducted a needs analysis among eight telemedicine networks delivering humanitarian services. A small proportion of consultations (5–10%) suggested that networks may experience difficulties in finding the right specialists from within their own resources. With the assistance of key stakeholders, many of whom were network coordinators, various methods of implementing a clearing house were considered. One simple solution is to establish a central database holding information about consultants who have agreed to provide help to other networks; this database could be made available to network coordinators who need a specialist when none is available in their own network. Results: The proposed solution was examined in a desktop simulation exercise, which confirmed its feasibility and probable value. Conclusions: This analysis informs full-scale implementation of a clearing house, and an associated examination of its costs and benefits.

  6. Common hyperspectral image database design

    Science.gov (United States)

    Tian, Lixun; Liao, Ningfang; Chai, Ali

    2009-11-01

    This paper introduces the Common Hyperspectral Image Database, built with a demand-oriented database design method (CHIDB), which brings together ground-based spectra, standardized hyperspectral cubes and spectral analysis to meet the needs of several applications. The paper presents an integrated approach to retrieving spectral and spatial patterns from remotely sensed imagery using state-of-the-art data mining and advanced database technologies; data mining ideas and functions were incorporated into CHIDB to make it better suited to serving the agricultural, geological and environmental areas. A broad range of data from multiple regions of the electromagnetic spectrum is supported, including ultraviolet, visible, near-infrared, thermal infrared, and fluorescence. CHIDB is based on the .NET framework and designed with an MVC architecture comprising five main functional modules: Data Importer/Exporter, Image/Spectrum Viewer, Data Processor, Parameter Extractor, and On-line Analyzer. The original data are all stored in SQL Server 2008 for efficient search, query and update, and advanced spectral image data processing techniques are used, such as parallel processing in C#. Finally, an application case in agricultural disease detection is presented.

  7. Design research of uranium mine borehole database

    International Nuclear Information System (INIS)

    Xie Huaming; Hu Guangdao; Zhu Xianglin; Chen Dehua; Chen Miaoshun

    2008-01-01

    With energy sources in short supply, uranium exploration has been stepped up, but the storage, analysis and use of uranium exploration data are currently not highly computerized in China; the data are poorly shared and used, and cannot meet the needs of production and research. This would be much improved if the data were stored and managed in a database system. The conceptual structure design, logical structure design and data integrity checks are discussed according to the demands of applications and an analysis of uranium exploration data. Finally, an application of the database is illustrated. (authors)

  8. Quality of Service: a study in databases bibliometric international

    Directory of Open Access Journals (Sweden)

    Deosir Flávio Lobo de Castro Junior

    2013-08-01

    Full Text Available The purpose of this article is to serve as a source of references on Quality of Service for future research. After surveying the international databases EBSCO and ProQuest, the results on the state of the art of this issue are presented. The method used was bibliometrics, and 132 items from a universe of 13,427 were investigated. The analyzed works cover the period from 1985 to 2011. Among the contributions, results and conclusions for future research are presented: (i) most cited authors; (ii) most used methodology, dimensions and questionnaire; (iii) most referenced publications; (iv) international journals with most publications on the subject; (v) distribution of the number of publications per year; (vi) author networks; (vii) educational institutions network; (viii) terms used in the search of international databases; (ix) the relationships studied in the 132 articles; (x) criteria for the choice of methodology in research on quality of services; (xi) most often used paradigm; and (xii) 160 high-impact references.

  9. The AMMA database

    Science.gov (United States)

    Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim

    2010-05-01

    The AMMA project includes aircraft, ground-based and ocean measurements, an intensive use of satellite data and diverse modelling studies. Therefore, the AMMA database aims at storing a great amount and a large variety of data, and at providing the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: - AMMA field campaign datasets; - historical data in West Africa from 1850 (operational networks and previous scientific programs); - satellite products from past and future satellites, (re-)mapped on a regular latitude/longitude grid and stored in NetCDF format (CF Convention); - model outputs from atmosphere or ocean operational (re-)analysis and forecasts, and from research simulations. The outputs are processed in the same way as the satellite products. Before accessing the data, any user has to sign the AMMA data and publication policy. This chart only covers the use of data in the framework of scientific objectives and categorically excludes the redistribution of data to third parties and usage for commercial applications. Some collaboration between data producers and users, and the mention of the AMMA project in any publication, is also required. The AMMA database and the associated on-line tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris and OMP, Toulouse). Users can access data of both data centres using a unique web portal. This website is composed of different modules: - Registration: forms to register, read and sign the data use chart when a user visits for the first time; - Data access interface: a user-friendly tool for building a data extraction request by selecting various criteria like location, time, parameters... The request can

  10. Implementation of Web Database Based Wireless Electrical Energy Monitoring

    Directory of Open Access Journals (Sweden)

    Irwan Dinata

    2015-03-01

    Full Text Available A web database based wireless device for monitoring electricity consumption was designed to replace manual, conventional measurement systems. The device consists of a sensor, a processor, a display and a network. The sensor consists of a current transformer and an AC-to-AC power adapter. The processor is an Arduino UNO, which processes the sensor output. A liquid crystal display (LCD) shows the output in real time. The last part of the device is the network, composed of an Ethernet Shield and a 3G modem for communication with the database server, where the data are further processed and stored. Testing with a nominal total load of 120 watts shows a Vrms of 218 volts on the device LCD against 216 volts measured with a clamp meter, an Irms of 0.44 amperes against 0.5 amperes, a real power of 92 watts against 84 watts, and a power factor of 0.97 against 0.99.
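
    The quantities reported above (Vrms, Irms, real power and power factor) follow from the sampled waveforms by standard formulas; the sketch below reproduces that arithmetic on synthesized waveforms of roughly the same magnitude, since the actual firmware and sensor data are not part of this record.

```python
# Sketch: Vrms, Irms, real power and power factor from sampled waveforms.
import math

N = 1000
f, fs = 50.0, 50000.0   # mains frequency and sampling rate (Hz); 1000 samples = 1 cycle
v = [218.0 * math.sqrt(2) * math.sin(2 * math.pi * f * n / fs) for n in range(N)]
i = [0.44 * math.sqrt(2) * math.sin(2 * math.pi * f * n / fs - 0.25) for n in range(N)]

vrms = math.sqrt(sum(x * x for x in v) / N)
irms = math.sqrt(sum(x * x for x in i) / N)
p_real = sum(a * b for a, b in zip(v, i)) / N   # mean instantaneous power (W)
power_factor = p_real / (vrms * irms)

print(f"Vrms={vrms:.1f} V  Irms={irms:.2f} A  P={p_real:.1f} W  PF={power_factor:.2f}")
```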

  11. Web-based networking within the framework of ANENT

    International Nuclear Information System (INIS)

    Han, K.W.; Lee, E.J.; Kim, Y.T.; Nam, Y.M.; Kim, H.K.

    2004-01-01

    The Korea Atomic Energy Research Institute (KAERI) is actively participating in the Asian Network for Education in Nuclear Technology (ANENT), an IAEA activity to promote nuclear knowledge management. This has led KAERI to conduct web-based networking for nuclear education and training in Asia. The networking encompasses the establishment of a relevant website and a system for the sustainable operation of the website. The established ANENT website functions as a database providing collected information, as a link facilitating systematic worldwide access to relevant websites, and as a means of supporting the individual tasks of ANENT. The required information is being collected and loaded onto the database, and the website will be improved step by step. Consequently, the networking is expected to play an important role by cooperating with other networks and thus contributing to a future global network for the sustainable development of nuclear technology. (author)

  12. Database Description - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Database name: Yeast Interacting Proteins Database. DOI: 10.18908/lsdba.nbdc00742-000. Organism: Saccharomyces cerevisiae (Taxonomy ID: 4932). Database description: information on interactions and related information obtained... Reference: Proc Natl Acad Sci U S A. 2001 Apr 10;98(8):4569-74. Epub 2001 Mar 13.

  13. Geo-scientific database for research and development purposes

    International Nuclear Information System (INIS)

    Tabani, P.; Mangeot, A.; Crabol, V.; Delage, P.; Dewonck, S.; Auriere, C.

    2012-01-01

    Document available in extended abstract form only. The Research and Development Division must manage, in a secure and reliable manner, a large amount of data from diverse scientific disciplines and means of acquisition (observations, measurements, experiments, etc.). This management is particularly important for the Underground Research Laboratory, the source of many continuously recorded measurements. Thus, from its conception, Andra has implemented two tools for the management of scientific information: the 'Acquisition System and Data Management' [SAGD] and the GEO database with its associated applications. Beyond its own needs, Andra wants to share its achievements with the scientific community, and it therefore provides the data stored in its databases, or samples of rock or water, when they are available. Acquisition and Data Management (SAGD): This system manages data from sensors installed at several sites. Some sites are at the surface (piezometric, atmospheric and environmental stations), the others are in the Underground Research Laboratory. The system also incorporates data from experiments in which Andra participates in the Mont Terri Laboratory in Switzerland. SAGD fulfils these objectives by: - making available in real time, on a single system, all experimental data from measurement points, to scientists from Andra but also to the different partners or providers who need them; - displaying the recorded data over temporal windows and at specific time steps; - allowing remote control of the experiments; - ensuring the traceability of all recorded information; - ensuring data storage in a database. SAGD was deployed in the first experimental drift at -445 m in November 2004. It was subsequently extended to the underground Mont Terri laboratory in Switzerland in 2005, to the entire surface logging network of the Meuse/Haute-Marne Center in 2008 and to the environmental network in 2011. All information is acquired, stored and managed by software called Geoscope. This software

  14. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Update history of the Trypanosomes Database: 2014/05/07 - the contact information was corrected, and the description of the features and manner of utilization of the database was corrected; 2014/02/04 - the Trypanosomes Database English archive site was opened; 2011/04/04 - the Trypanosomes Database (http://www.tanpaku.org/tdb/) was opened.

  15. Creating a High-Frequency Electronic Database in the PICU: The Perpetual Patient.

    Science.gov (United States)

    Brossier, David; El Taani, Redha; Sauthier, Michael; Roumeliotis, Nadia; Emeriaud, Guillaume; Jouvet, Philippe

    2018-04-01

    Our objective was to construct a prospective high-quality and high-frequency database combining patient therapeutics and clinical variables in real time, automatically fed by the information system and network architecture available through fully electronic charting in our PICU. The purpose of this article is to describe the data acquisition process from bedside to the research electronic database. Descriptive report and analysis of a prospective database. A 24-bed PICU, medical ICU, surgical ICU, and cardiac ICU in a tertiary care free-standing maternal child health center in Canada. All patients less than 18 years old were included at admission to the PICU. None. Between May 21, 2015, and December 31, 2016, 1,386 consecutive PICU stays from 1,194 patients were recorded in the database. Data were prospectively collected from admission to discharge, every 5 seconds from monitors and every 30 seconds from mechanical ventilators and infusion pumps. These data were linked to the patient's electronic medical record. The database total volume was 241 GB. The patients' median age was 2.0 years (interquartile range, 0.0-9.0). Data were available for all mechanically ventilated patients (n = 511; recorded duration, 77,678 hr), and respiratory failure was the most frequent reason for admission (n = 360). The complete pharmacologic profile was synched to database for all PICU stays. Following this implementation, a validation phase is in process and several research projects are ongoing using this high-fidelity database. Using the existing bedside information system and network architecture of our PICU, we implemented an ongoing high-fidelity prospectively collected electronic database, preventing the continuous loss of scientific information. This offers the opportunity to develop research on clinical decision support systems and computational models of cardiorespiratory physiology for example.

  16. MicrobesFlux: a web platform for drafting metabolic models from the KEGG database

    Directory of Open Access Journals (Sweden)

    Feng Xueyang

    2012-08-01

    Full Text Available Abstract Background Concurrent with the efforts currently underway in mapping microbial genomes using high-throughput sequencing methods, systems biologists are building metabolic models to characterize and predict cell metabolisms. One of the key steps in building a metabolic model is using multiple databases to collect and assemble essential information about genome-annotations and the architecture of the metabolic network for a specific organism. To speed up metabolic model development for a large number of microorganisms, we need a user-friendly platform to construct metabolic networks and to perform constraint-based flux balance analysis based on genome databases and experimental results. Results We have developed a semi-automatic, web-based platform (MicrobesFlux for generating and reconstructing metabolic models for annotated microorganisms. MicrobesFlux is able to automatically download the metabolic network (including enzymatic reactions and metabolites of ~1,200 species from the KEGG database (Kyoto Encyclopedia of Genes and Genomes and then convert it to a metabolic model draft. The platform also provides diverse customized tools, such as gene knockouts and the introduction of heterologous pathways, for users to reconstruct the model network. The reconstructed metabolic network can be formulated to a constraint-based flux model to predict and analyze the carbon fluxes in microbial metabolisms. The simulation results can be exported in the SBML format (The Systems Biology Markup Language. Furthermore, we also demonstrated the platform functionalities by developing an FBA model (including 229 reactions for a recent annotated bioethanol producer, Thermoanaerobacter sp. strain X514, to predict its biomass growth and ethanol production. Conclusion MicrobesFlux is an installation-free and open-source platform that enables biologists without prior programming knowledge to develop metabolic models for annotated microorganisms in the KEGG
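
    The constraint-based flux balance analysis mentioned above amounts to a linear program: maximize an objective flux subject to steady-state stoichiometry and flux bounds. A toy three-reaction example with scipy's linprog is sketched below; it is not a KEGG-derived model and is unrelated to the Thermoanaerobacter model in the paper.

```python
# Toy flux balance analysis: maximize the "biomass" flux v3 subject to
# steady state (S.v = 0) and flux bounds, using linear programming.
import numpy as np
from scipy.optimize import linprog

# Metabolites: A, B.  Reactions: R1 (-> A), R2 (A -> B), R3 (B -> biomass)
S = np.array([
    [1, -1,  0],   # mass balance for A
    [0,  1, -1],   # mass balance for B
])
c = [0, 0, -1]                        # linprog minimizes, so maximize v3 via -v3
bounds = [(0, 10), (0, 10), (0, 10)]  # flux bounds (e.g. uptake limit of 10)

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)       # all three fluxes reach the limit of 10
```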

  17. The evolution of a clinical database: from local to standardized clinical languages.

    OpenAIRE

    Prophet, C. M.

    2000-01-01

    For more than twenty years, the University of Iowa Hospitals and Clinics Nursing Informatics (UIHC NI) has been developing a clinical database to support patient care planning and documentation in the INFORMM NIS (Information Network for Online Retrieval & Medical Management Nursing Information System). Beginning in 1992, the database content was revised to standardize orders and to incorporate the Standardized Nursing Languages (SNLs) of the North American Nursing Diagnosis Association (NAND...

  18. FINAL DIGITAL FLOOD INSURANCE RATE MAP DATABASE, SAN DIEGO COUNTY, CALIFORNIA (AND INCORPORATED AREAS)

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  19. Trajectory Based Optimal Segment Computation in Road Network Databases

    DEFF Research Database (Denmark)

    Li, Xiaohui; Ceikute, Vaida; Jensen, Christian S.

    2013-01-01

    Finding a location for a new facility such that the facility attracts the maximal number of customers is a challenging problem. Existing studies either model customers as static sites and thus do not consider customer movement, or they focus on theoretical aspects and do not provide solutions that are shown empirically to be scalable. Given a road network, a set of existing facilities, and a collection of customer route traversals, an optimal segment query returns the optimal road network segment(s) for a new facility. We propose a practical framework for computing this query, where each route traversal is assigned a score that is distributed among the road segments covered by the route according to a score distribution model. The query returns the road segment(s) with the highest score. To achieve low latency, it is essential to prune the very large search space. We propose two algorithms...
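
    The scoring framework described above can be illustrated in a few lines: each (invented) route traversal carries a score that is split across the segments it covers (a uniform split stands in for the paper's score distribution models), and the query simply returns the segment with the highest accumulated score.

```python
# Sketch of the route-score distribution idea behind the optimal segment query.
from collections import defaultdict

routes = [
    {"segments": ["s1", "s2", "s3"], "score": 1.0},
    {"segments": ["s2", "s3"],       "score": 1.0},
    {"segments": ["s3", "s4"],       "score": 1.0},
]

segment_score = defaultdict(float)
for route in routes:
    share = route["score"] / len(route["segments"])  # uniform distribution model
    for seg in route["segments"]:
        segment_score[seg] += share

best = max(segment_score.items(), key=lambda kv: kv[1])
print("optimal segment:", best)      # s3 accumulates the most traversal score
```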

  20. Trajectory Based Optimal Segment Computation in Road Network Databases

    DEFF Research Database (Denmark)

    Li, Xiaohui; Ceikute, Vaida; Jensen, Christian S.

    Finding a location for a new facility such that the facility attracts the maximal number of customers is a challenging problem. Existing studies either model customers as static sites and thus do not consider customer movement, or they focus on theoretical aspects and do not provide solutions that are shown empirically to be scalable. Given a road network, a set of existing facilities, and a collection of customer route traversals, an optimal segment query returns the optimal road network segment(s) for a new facility. We propose a practical framework for computing this query, where each route traversal is assigned a score that is distributed among the road segments covered by the route according to a score distribution model. The query returns the road segment(s) with the highest score. To achieve low latency, it is essential to prune the very large search space. We propose two algorithms...

  1. The web-enabled ODIN portal - useful databases for the European Nuclear Society

    International Nuclear Information System (INIS)

    Over, H.H.; Wolfart, E.

    2005-01-01

    Materials databases (MDBs) are powerful tools to store, retrieve and present experimental materials data of various categories, adapted to the specific needs of users. In combination with analysis tools, experimental data are necessary for, e.g., mechanical design, construction and lifetime predictions of complex components. The effective and efficient handling of large amounts of generic and detailed materials properties data related to, e.g., fabrication processes is one of the basic elements of data administration within ongoing European research projects and networks. Over the last 20 years, the JRC/Institute of Energy of the European Commission at Petten has developed and continuously improved a database for experimental materials properties data (Mat-DB). The Mat-DB database structure is oriented to international material standards and recommendations. The database and associated analysis routines are accessible through a web-enabled interface on the On-line Data Information Network (ODIN: http://odin.jrc.nl). ODIN provides controlled access to Mat-DB and other related databases (e.g. the document database DoMa) and thus allows European R and D projects to securely manage and disseminate their experimental test data as well as any type of supporting documentation (e.g. unfiltered raw data, reports, minutes, etc.). Using the Internet, project partners can instantly access and evaluate data sets entered and validated by one of the members. This paper describes the structure and functionality of Mat-DB and gives examples of how these tools can be used for the benefit of the European nuclear R and D community. (author)

  2. Key Techniques for Dynamic Updating of National Fundamental Geographic Information Database

    Directory of Open Access Journals (Sweden)

    WANG Donghua

    2015-07-01

    Full Text Available One of the most important missions of fundamental surveying and mapping work is to keep fundamental geographic information fresh. In this respect, the National Administration of Surveying, Mapping and Geoinformation launched the project of dynamic updating of the national fundamental geographic information database in 2012, which aims to update the 1:50 000, 1:250 000 and 1:1 000 000 national fundamental geographic information databases continuously and quickly, with one update and release per year. This paper introduces the general technical approach to dynamic updating, describes the main technical methods, such as dynamic updating of the fundamental database, linkage updating of derived databases, and multi-temporal database management and service, and finally presents the main technical characteristics and engineering applications.

  3. Improvement of the European thermodynamic database NUCLEA

    Energy Technology Data Exchange (ETDEWEB)

    Brissoneau, L.; Journeau, C.; Piluso, P. [CEA Cadarache, DEN, F-13108 St Paul Les Durance (France); Bakardjieva, S. [Acad Sci Czech Republic, Inst Inorgan Chem, CZ-25068 Rez (Czech Republic); Barrachin, M. [Inst Radioprotect and Surete Nucl, St Paul Les Durance (France); Bechta, S. [NITI, Aleksandrov Res Inst Technol, Sosnovyi Bor (Russian Federation); Bottomley, D. [Commiss European Communities, Joint Res Ctr, Inst Transuranium Elements, D-76125 Karlsruhe (Germany); Cheynet, B.; Fischer, E. [Thermodata, F-38400 St Martin Dheres (France); Kiselova, M. [Nucl Res Inst UJV, Rez 25068 (Czech Republic); Mezentseva, L. [Russian Acad Sci, Inst Silicate Chem, St Petersburg (Russian Federation)

    2010-07-01

    Modelling of corium behaviour during a severe accident requires knowledge of the phases present at equilibrium for a given corium composition, temperature and pressure. The thermodynamic database NUCLEA in combination with a Gibbs Energy minimizer is the European reference tool to achieve this goal. This database has been improved thanks to the analysis of bibliographical data and to EU-funded experiments performed within the SARNET network, PLINIUS as well as the ISTC CORPHAD and EVAN projects. To assess the uncertainty range associated with Energy Dispersive X-ray analyses, a round-robin exercise has been launched in which a UO2-containing corium-concrete interaction sample from VULCANO has been analyzed by three European laboratories with satisfactorily small differences. (authors)
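
    To illustrate the principle of equilibrium computation by Gibbs energy minimization (not NUCLEA's actual models or data), the sketch below brute-forces the minimum total Gibbs energy of a toy binary system with two candidate regular-solution phases; all parameter values and phase names are invented for illustration.

      import math

      R, T = 8.314, 2500.0  # gas constant (J/mol/K) and an illustrative temperature (K)

      def g_phase(x, g_a, g_b, omega):
          # Molar Gibbs energy of a binary regular solution; x = mole fraction of B.
          x = min(max(x, 1e-9), 1 - 1e-9)
          return ((1 - x) * g_a + x * g_b + omega * x * (1 - x)
                  + R * T * (x * math.log(x) + (1 - x) * math.log(1 - x)))

      def equilibrium(x_overall, grid=101):
          # Brute-force search over the two phase compositions; the lever rule fixes
          # the phase fraction so that the overall composition is conserved.
          best, xs = None, [i / (grid - 1) for i in range(grid)]
          for x1 in xs:
              for x2 in xs:
                  if abs(x1 - x2) < 1e-9:
                      if abs(x1 - x_overall) > 1e-9:
                          continue
                      f = 1.0
                  else:
                      f = (x_overall - x2) / (x1 - x2)
                  if not 0.0 <= f <= 1.0:
                      continue
                  g = f * g_phase(x1, 0.0, -5e3, 2e4) + (1 - f) * g_phase(x2, -8e3, 0.0, 1e4)
                  if best is None or g < best[0]:
                      best = (g, f, x1, x2)
          return best  # (total G, fraction of phase 1, x1, x2)

      print(equilibrium(0.4))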

  4. Waste Tank Vapor Project: Tank vapor database development

    International Nuclear Information System (INIS)

    Seesing, P.R.; Birn, M.B.; Manke, K.L.

    1994-09-01

    The objective of the Tank Vapor Database (TVD) Development task in FY 1994 was to create a database to store, retrieve, and analyze data collected from the vapor phase of Hanford waste tanks. The data needed to be accessible over the Hanford Local Area Network to users at both Westinghouse Hanford Company (WHC) and Pacific Northwest Laboratory (PNL). The data were restricted to results published in cleared reports from the laboratories analyzing vapor samples. Emphasis was placed on ease of access and flexibility of data formatting and reporting mechanisms. Because of time and budget constraints, a Rapid Application Development strategy was adopted by the database development team. An extensive data modeling exercise was conducted to determine the scope of information contained in the database. A SUN SPARCstation 1000 was procured as the database file server. A multi-user relational database management system, Sybase®, was chosen to provide the basic data storage and retrieval capabilities. Two packages were chosen for the user interface to the database: DataPrism® and Business Objects™. A prototype database was constructed to provide the Waste Tank Vapor Project's Toxicology task with summarized and detailed information presented at Vapor Conference 4 by WHC, PNL, Oak Ridge National Laboratory, and Oregon Graduate Institute. The prototype was used to develop a list of reported compounds, and the range of values for compounds reported by the analytical laboratories using different sample containers and analysis methodologies. The prototype allowed a panel of toxicology experts to identify carcinogens and compounds whose concentrations were within the reach of regulatory limits. The database and user documentation were made available for general access in September 1994.
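
    A minimal relational sketch of the kind of schema such a tank-vapor database implies (the table and column names here are invented; the actual TVD ran on Sybase, not SQLite):

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
          CREATE TABLE sample (id INTEGER PRIMARY KEY, tank TEXT, container TEXT,
                               lab TEXT, method TEXT, report TEXT);
          CREATE TABLE result (sample_id INTEGER REFERENCES sample(id),
                               compound TEXT, concentration_ppm REAL);
      """)
      conn.execute("INSERT INTO sample VALUES (1, 'C-103', 'SUMMA', 'PNL', 'GC-MS', 'WHC-1994-01')")
      conn.executemany("INSERT INTO result VALUES (?, ?, ?)",
                       [(1, "ammonia", 12.0), (1, "benzene", 0.3)])

      # e.g. the range of reported values per compound, as used by the toxicology panel
      for row in conn.execute("""SELECT compound, MIN(concentration_ppm), MAX(concentration_ppm)
                                 FROM result GROUP BY compound"""):
          print(row)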

  5. Random Deep Belief Networks for Recognizing Emotions from Speech Signals.

    Science.gov (United States)

    Wen, Guihua; Li, Huihui; Huang, Jubing; Li, Danyang; Xun, Eryang

    2017-01-01

    Human emotions can now be recognized from speech signals using machine learning methods; however, these methods suffer from low recognition accuracy in real applications due to a lack of rich representation ability. Deep belief networks (DBN) can automatically discover the multiple levels of representations in speech signals. To make full use of its advantages, this paper presents an ensemble of random deep belief networks (RDBN) method for speech emotion recognition. It first extracts the low-level features of the input speech signal and then uses them to construct many random subspaces. Each random subspace is then fed to a DBN to yield higher-level features, which serve as the input of a classifier that outputs an emotion label. All output emotion labels are then fused through majority voting to decide the final emotion label for the input speech signal. Experimental results on benchmark speech emotion databases show that RDBN achieves better accuracy than the compared methods for speech emotion recognition.
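
    A rough sketch of the random-subspace-plus-majority-voting structure described here (using a trivial nearest-centroid classifier as a stand-in for the per-subspace DBN, and random data in place of speech features; all names and numbers are illustrative):

      import numpy as np

      class NearestCentroid:  # stand-in for the per-subspace DBN + classifier
          def fit(self, X, y):
              self.classes_ = np.unique(y)
              self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
              return self
          def predict(self, X):
              d = ((X[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(-1)
              return self.classes_[d.argmin(axis=1)]

      def random_subspace_vote(X, y, X_test, n_members=15, subspace_dim=10, seed=0):
          rng = np.random.default_rng(seed)
          votes = []
          for _ in range(n_members):
              feats = rng.choice(X.shape[1], size=subspace_dim, replace=False)
              votes.append(NearestCentroid().fit(X[:, feats], y).predict(X_test[:, feats]))
          votes = np.array(votes)  # shape: (ensemble members, test samples)
          return np.array([np.bincount(col).argmax() for col in votes.T])  # majority vote

      rng = np.random.default_rng(1)
      X, y = rng.normal(size=(200, 40)), rng.integers(0, 4, size=200)  # 4 "emotion" classes
      print(random_subspace_vote(X, y, X[:5]))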

  6. Random Deep Belief Networks for Recognizing Emotions from Speech Signals

    Directory of Open Access Journals (Sweden)

    Guihua Wen

    2017-01-01

    Full Text Available Human emotions can now be recognized from speech signals using machine learning methods; however, these methods suffer from low recognition accuracy in real applications due to a lack of rich representation ability. Deep belief networks (DBN) can automatically discover the multiple levels of representations in speech signals. To make full use of its advantages, this paper presents an ensemble of random deep belief networks (RDBN) method for speech emotion recognition. It first extracts the low-level features of the input speech signal and then uses them to construct many random subspaces. Each random subspace is then fed to a DBN to yield higher-level features, which serve as the input of a classifier that outputs an emotion label. All output emotion labels are then fused through majority voting to decide the final emotion label for the input speech signal. Experimental results on benchmark speech emotion databases show that RDBN achieves better accuracy than the compared methods for speech emotion recognition.

  7. Network-Based Visual Analysis of Tabular Data

    Science.gov (United States)

    Liu, Zhicheng

    2012-01-01

    Tabular data is pervasive in the form of spreadsheets and relational databases. Although tables often describe multivariate data without explicit network semantics, it may be advantageous to explore the data modeled as a graph or network for analysis. Even when a given table design conveys some static network semantics, analysts may want to look…
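
    A minimal sketch of modelling a table as a graph for exploration (not the paper's system; the column names and values below are invented): each attribute value becomes a node and values that co-occur in a row are linked, so standard network analysis can then be applied to the spreadsheet.

      import networkx as nx

      rows = [  # toy spreadsheet-like records with no explicit network semantics
          {"employee": "Ada", "project": "P1", "city": "Boston"},
          {"employee": "Bob", "project": "P1", "city": "Paris"},
          {"employee": "Ada", "project": "P2", "city": "Boston"},
      ]

      G = nx.Graph()
      for row in rows:
          values = [f"{col}:{val}" for col, val in row.items()]
          for i in range(len(values)):
              for j in range(i + 1, len(values)):
                  w = G.get_edge_data(values[i], values[j], {}).get("weight", 0) + 1
                  G.add_edge(values[i], values[j], weight=w)  # co-occurrence count

      print(sorted(G.degree, key=lambda kv: -kv[1])[:3])  # most connected values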

  8. From Passive to Active in the Design of External Radiotherapy Database at Oncology Institute

    Directory of Open Access Journals (Sweden)

    Valentin Ioan CERNEA

    2009-12-01

    Full Text Available The implementation in 1997 of a computer network at the Oncology Institute "Prof. Dr. Ion Chiricuţă" in Cluj-Napoca (OICN) opened the era of the electronic patient file, of which the presented database is a part. The database, developed before 2000 and used until December 2006 in all reports of the OICN, collected data from primary documents such as radiotherapy files. The present level of the computer network makes it possible to reverse the flow of data, from the computer to the primary document: the primary document is now first built electronically inside the computer and then, after validation, printed as the familiar paper document. The paper discusses the resulting issues concerning safety, functionality and access.

  9. A C programmer's view of a relational database

    International Nuclear Information System (INIS)

    Clifford, T.; Katz, R.; Griffiths, C.

    1989-01-01

    The AGS Distributed Control System (AGSDCS) uses a relational database (Interbase) for the storage of all data on the host system network. This includes the static data which describes the components of the accelerator complex, as well as data for application program setup and data records that are used in analysis. By creating a mapping of each relation in the database to a C record and providing general tools for relation (record) access, all the data in the database is available in a natural fashion (in structures) to all the C programs on any of the nodes of the control system. In this paper the correspondence between the Interbase relations and the C structure is detailed with examples of C typedefs and relation definitions. It is also shown how the relations can be put into memory and linked (related) together when fast access is needed by programs. 1 ref., 2 tabs

  10. A C programmer's view of a relational database

    International Nuclear Information System (INIS)

    Clifford, T.; Katz, R.; Griffiths, C.

    1990-01-01

    The AGS Distributed Control System (AGSDCS) uses a relational database (Interbase) for the storage of all data on the host system network. This includes the static data which describes the components of the accelerator complex, as well as data for application-program setup and data records that are used in analysis. By creating a mapping of each relation in the database to a C record and providing general tools for relation (record) access, all the data in the database is available in a natural fashion to all the C programs on any of the nodes of the control system. In this paper the correspondence between the Interbase relations and the C structure is detailed with examples of C 'typedefs' and relation definitions. It is also shown how the relations can be put into memory and linked (related) together when fast access is needed by programs. (orig.)
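
    The paper's mapping is from Interbase relations to C structs; purely as an analogous illustration of the same idea in Python (relation row becomes a typed in-memory record, and records are then indexed/linked for fast access), with an invented relation and column names:

      import sqlite3
      from dataclasses import dataclass

      @dataclass
      class Magnet:  # one record type per relation, mirroring its columns
          name: str
          current: float
          node: str

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE magnet (name TEXT, current REAL, node TEXT)")
      conn.executemany("INSERT INTO magnet VALUES (?, ?, ?)",
                       [("QH1", 12.5, "ctl-a"), ("QV2", 9.8, "ctl-b")])

      # every row of the relation becomes a typed record; an index links them in memory
      magnets = [Magnet(*row) for row in conn.execute("SELECT name, current, node FROM magnet")]
      by_name = {m.name: m for m in magnets}
      print(by_name["QH1"])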

  11. A DICOM based radiotherapy plan database for research collaboration and reporting

    International Nuclear Information System (INIS)

    Westberg, J; Krogh, S; Brink, C; Vogelius, I R

    2014-01-01

    Purpose: To create a central radiotherapy (RT) plan database for dose analysis and reporting, capable of calculating and presenting statistics on user defined patient groups. The goal is to facilitate multi-center research studies with easy and secure access to RT plans and statistics on protocol compliance. Methods: RT institutions are able to send data to the central database using DICOM communications on a secure computer network. The central system is composed of a number of DICOM servers, an SQL database and in-house developed software services to process the incoming data. A web site within the secure network allows the user to manage their submitted data. Results: The RT plan database has been developed in Microsoft .NET and users are able to send DICOM data between RT centers in Denmark. Dose-volume histogram (DVH) calculations performed by the system are comparable to those of conventional RT software. A permission system was implemented to ensure access control and easy, yet secure, data sharing across centers. The reports contain DVH statistics for structures in user defined patient groups. The system currently contains over 2200 patients in 14 collaborations. Conclusions: A central RT plan repository for use in multi-center trials and quality assurance was created. The system provides an attractive alternative to dummy runs by enabling continuous monitoring of protocol conformity and plan metrics in a trial.
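
    A minimal sketch of the kind of dose-volume histogram (DVH) statistic mentioned above, computed directly from a dose grid and a structure mask (the data and function names are toy assumptions; the actual system works from DICOM RT objects in a .NET service):

      import numpy as np

      def cumulative_dvh(dose, mask, bin_width=0.5):
          # Fraction of the structure volume receiving at least each dose level (Gy).
          d = dose[mask]
          edges = np.arange(0.0, d.max() + bin_width, bin_width)
          return edges, np.array([(d >= e).mean() for e in edges])

      rng = np.random.default_rng(1)
      dose = rng.normal(50, 5, size=(20, 20, 20)).clip(min=0)   # toy 3-D dose grid
      zyx = np.indices(dose.shape)
      mask = ((zyx - 10) ** 2).sum(axis=0) < 6 ** 2              # toy spherical structure
      edges, vf = cumulative_dvh(dose, mask)
      print("approx. median dose (D50):", edges[np.searchsorted(-vf, -0.5)], "Gy")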

  12. A DICOM based radiotherapy plan database for research collaboration and reporting

    Science.gov (United States)

    Westberg, J.; Krogh, S.; Brink, C.; Vogelius, I. R.

    2014-03-01

    Purpose: To create a central radiotherapy (RT) plan database for dose analysis and reporting, capable of calculating and presenting statistics on user defined patient groups. The goal is to facilitate multi-center research studies with easy and secure access to RT plans and statistics on protocol compliance. Methods: RT institutions are able to send data to the central database using DICOM communications on a secure computer network. The central system is composed of a number of DICOM servers, an SQL database and in-house developed software services to process the incoming data. A web site within the secure network allows the user to manage their submitted data. Results: The RT plan database has been developed in Microsoft .NET and users are able to send DICOM data between RT centers in Denmark. Dose-volume histogram (DVH) calculations performed by the system are comparable to those of conventional RT software. A permission system was implemented to ensure access control and easy, yet secure, data sharing across centers. The reports contain DVH statistics for structures in user defined patient groups. The system currently contains over 2200 patients in 14 collaborations. Conclusions: A central RT plan repository for use in multi-center trials and quality assurance was created. The system provides an attractive alternative to dummy runs by enabling continuous monitoring of protocol conformity and plan metrics in a trial.

  13. An Efficient Processing of Join Queries for Sensor Networks Using Column-Oriented Databases

    OpenAIRE

    Kyung-Chang Kim

    2013-01-01

    Recently, the sensor network area is gaining attention both in the industry and academia. Many applications of sensor network such as vehicle tracking and environmental monitoring require joining sensor data scattered over the network. The main performance criterion for queries in a sensor network is to minimize the battery power consumption in each sensor node. Hence, reducing the communication cost of shipping data among sensor nodes is important since it is the main consumer of battery pow...

  14. Biomine: predicting links between biological entities using network models of heterogeneous databases

    Directory of Open Access Journals (Sweden)

    Eronen Lauri

    2012-06-01

    Full Text Available Abstract Background Biological databases contain large amounts of data concerning the functions and associations of genes and proteins. Integration of data from several such databases into a single repository can aid the discovery of previously unknown connections spanning multiple types of relationships and databases. Results Biomine is a system that integrates cross-references from several biological databases into a graph model with multiple types of edges, such as protein interactions, gene-disease associations and gene ontology annotations. Edges are weighted based on their type, reliability, and informativeness. We present Biomine and evaluate its performance in link prediction, where the goal is to predict pairs of nodes that will be connected in the future, based on current data. In particular, we formulate protein interaction prediction and disease gene prioritization tasks as instances of link prediction. The predictions are based on a proximity measure computed on the integrated graph. We consider and experiment with several such measures, and perform a parameter optimization procedure where different edge types are weighted to optimize link prediction accuracy. We also propose a novel method for disease-gene prioritization, defined as finding a subset of candidate genes that cluster together in the graph. We experimentally evaluate Biomine by predicting future annotations in the source databases and prioritizing lists of putative disease genes. Conclusions The experimental results show that Biomine has strong potential for predicting links when a set of selected candidate links is available. The predictions obtained using the entire Biomine dataset are shown to clearly outperform ones obtained using any single source of data alone, when different types of links are suitably weighted. In the gene prioritization task, an established reference set of disease-associated genes is useful, but the results show that under favorable
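
    A small sketch of one simple proximity measure of the kind used for such link prediction, the probability of the most reliable path, computed on a toy weighted graph (edge probabilities and node names are invented; Biomine itself combines several measures and edge-weighting schemes):

      import math
      import networkx as nx

      G = nx.Graph()
      edges = [("geneA", "proteinA", 0.9), ("proteinA", "proteinB", 0.7),
               ("proteinB", "diseaseX", 0.8), ("geneA", "go:0001", 0.6),
               ("go:0001", "diseaseX", 0.5)]
      for u, v, p in edges:
          G.add_edge(u, v, prob=p, cost=-math.log(p))  # best-probability path = min -log(prob) path

      def best_path_proximity(G, source, target):
          return math.exp(-nx.dijkstra_path_length(G, source, target, weight="cost"))

      print(best_path_proximity(G, "geneA", "diseaseX"))  # score for a candidate link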

  15. Update History of This Database - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history of this database: 2017/02/27 - the Arabidopsis Phenome Database English archive site is opened; the Arabidopsis Phenome Database (http://jphenome.info/?page_id=95) is opened.

  16. Data mining the EXFOR database

    International Nuclear Information System (INIS)

    Brown, David; Herman, Michal; Hirdt, John

    2014-01-01

    The EXFOR database contains the largest collection of experimental nuclear reaction data available as well as this data's bibliographic information and experimental details. We created an undirected graph from the EXFOR datasets with graph nodes representing single observables and graph links representing the connections of various types between these observables. This graph is an abstract representation of the connections in EXFOR, similar to graphs of social networks, authorship networks, etc. Analysing this abstract graph, we are able to address very specific questions such as: i) What observables are being used as reference measurements by the experimental community? ii) Are these observables given the attention needed by various standards organisations? iii) Are there classes of observables that are not connected to these reference measurements? In addressing these questions, we propose several (mostly cross-section) observables that should be evaluated and made into reaction reference standards. (authors)
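
    The flavour of that graph analysis can be sketched in a few lines: observables that many datasets point to as references show up as high-degree hubs (the reaction labels and links below are invented stand-ins, not taken from EXFOR):

      import networkx as nx

      G = nx.Graph()  # nodes: single observables; edges: EXFOR-style connections between them
      G.add_edges_from([("U235(n,f)", "Au197(n,g)"), ("Pu239(n,f)", "U235(n,f)"),
                        ("U238(n,g)", "U235(n,f)"), ("Li6(n,t)", "U235(n,f)"),
                        ("Fe56(n,el)", "H1(n,el)")])

      hubs = sorted(G.degree, key=lambda kv: -kv[1])
      print(hubs[:3])  # candidates for reference-standard evaluation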

  17. A database for operation logging of the KEK photon factory

    International Nuclear Information System (INIS)

    Pak, C.O.

    1990-01-01

    A prototype database for operation logging of the KEK Photon Factory storage ring has been constructed and tested. This paper describes the basic design of the operation logging system and its performance. Front-end control computers gather various data concerning the operation of the storage ring, and transfer them to a large general-purpose computer through a token-ring network. We have adopted a relational database system so as to save large amounts of data under daily operation. An interactive software tool was developed to retrieve data and to make graphic representations easily. (orig.)

  18. Design of Korean nuclear reliability data-base network using a two-stage Bayesian concept

    International Nuclear Information System (INIS)

    Kim, T.W.; Jeong, K.S.; Chae, S.K.

    1987-01-01

    In an analysis of probabilistic risk, safety, and reliability of a nuclear power plant, the reliability data base (DB) must be established first. As the importance of the reliability data base increases, event reporting systems such as the US Nuclear Regulatory Commission's Licensee Event Report and the International Atomic Energy Agency's Incident Reporting System have been developed. In Korea, however, the systematic reliability data base is not yet available. Therefore, foreign data bases have been directly quoted in reliability analyses of Korean plants. In order to develop a reliability data base for Korean plants, the problem is which methodology is to be used, and the application limits of the selected method must be solved and clarified. After starting the commercial operation of Korea Nuclear Unit-1 (KNU-1) in 1978, six nuclear power plants have begun operation. Of these, only KNU-3 is a Canada Deuterium Uranium pressurized heavy-water reactor, and the others are all pressurized water reactors. This paper describes the proposed reliability data-base network (KNRDS) for Korean nuclear power plants in the context of two-stage Bayesian (TSB) procedure of Kaplan. It describes the concept of TSB to obtain the Korean-specific plant reliability data base, which is updated with the incorporation of both the reported generic reliability data and the operation experiences of similar plants
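
    As a highly simplified illustration of the Bayesian flavour of such a data base (this is a one-stage conjugate gamma-Poisson update with invented numbers, not Kaplan's actual two-stage procedure, whose first stage builds the generic prior from the data of many plants):

      def gamma_poisson_update(alpha0, beta0, failures, exposure_hours):
          # Conjugate update of a failure rate (per hour) with plant-specific evidence.
          alpha, beta = alpha0 + failures, beta0 + exposure_hours
          return alpha, beta, alpha / beta  # posterior parameters and posterior mean rate

      # generic prior roughly equivalent to 2 failures observed in 1e5 component-hours
      alpha0, beta0 = 2.0, 1.0e5
      # plant-specific experience: 1 failure in 4e4 hours (illustrative numbers only)
      a, b, rate = gamma_poisson_update(alpha0, beta0, failures=1, exposure_hours=4.0e4)
      print(f"posterior mean failure rate: {rate:.2e} per hour")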

  19. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  20. Towards a Portuguese database of food microbiological occurrence

    OpenAIRE

    Viegas, Silvia; Machado, Claudia; Dantas, M.Ascenção; Oliveira, Luísa

    2011-01-01

    Aims: To expand the Portuguese Food Information Resource Programme (PortFIR) by building the Portuguese Food Microbiological Information Network (RPIMA) including users, stakeholders, food microbiological data producers that will provide data and information from research, monitoring, epidemiological investigation and disease surveillance. The integration of food data in a national database will improve foodborne risk management. Methods and results Potential members were identified and...

  1. Update History of This Database - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history of this database: 2017/03/13 - the SKIP Stemcell Database English archive site is opened; 2013/03/29 - the SKIP Stemcell Database (https://www.skip.med.keio.ac.jp/SKIPSearch/top?lang=en) is opened.

  2. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A Completed Reference Database of Lung Nodules on CT Scans

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-02-15

    Purpose: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. Methods: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥3 mm," "nodule <3 mm," and "non-nodule ≥3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. Results: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked...

  3. TBC2health: a database of experimentally validated health-beneficial effects of tea bioactive compounds.

    Science.gov (United States)

    Zhang, Shihua; Xuan, Hongdong; Zhang, Liang; Fu, Sicong; Wang, Yijun; Yang, Hua; Tai, Yuling; Song, Youhong; Zhang, Jinsong; Ho, Chi-Tang; Li, Shaowen; Wan, Xiaochun

    2017-09-01

    Tea is one of the most consumed beverages in the world. Considerable studies show the exceptional health benefits (e.g. antioxidation, cancer prevention) of tea owing to its various bioactive components. However, data from these extensively published papers had not been made available in a central database. To lay a foundation in improving the understanding of healthy tea functions, we established a TBC2health database that currently documents 1338 relationships between 497 tea bioactive compounds and 206 diseases (or phenotypes) manually culled from over 300 published articles. Each entry in TBC2health contains comprehensive information about a bioactive relationship that can be accessed in three aspects: (i) compound information, (ii) disease (or phenotype) information and (iii) evidence and reference. Using the curated bioactive relationships, a bipartite network was reconstructed and the corresponding network (or sub-network) visualization and topological analyses are provided for users. This database has a user-friendly interface for entry browse, search and download. In addition, TBC2health provides a submission page and several useful tools (e.g. BLAST, molecular docking) to facilitate use of the database. Consequently, TBC2health can serve as a valuable bioinformatics platform for the exploration of beneficial effects of tea on human health. TBC2health is freely available at http://camellia.ahau.edu.cn/TBC2health. © The Author 2016. Published by Oxford University Press.

  4. Social Network: a Cytoscape app for visualizing co-authorship networks.

    Science.gov (United States)

    Kofia, Victor; Isserlin, Ruth; Buchan, Alison M J; Bader, Gary D

    2015-01-01

    Networks that represent connections between individuals can be valuable analytic tools. The Social Network Cytoscape app is capable of creating a visual summary of connected individuals automatically. It does this by representing relationships as networks where each node denotes an individual and an edge linking two individuals represents a connection. The app focuses on creating visual summaries of individuals connected by co-authorship links in academia, created from bibliographic databases like PubMed, Scopus and InCites. The resulting co-authorship networks can be visualized and analyzed to better understand collaborative research networks or to communicate the extent of collaboration and publication productivity among a group of researchers, like in a grant application or departmental review report. It can also be useful as a research tool to identify important research topics, researchers and papers in a subject area.
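
    The underlying construction can be sketched in a few lines of Python (using this record's own authors as a toy input; the app itself builds such graphs inside Cytoscape from PubMed, Scopus and InCites records): each paper's author list contributes edges between all pairs of co-authors, weighted by the number of joint papers.

      import itertools
      import networkx as nx

      papers = [  # toy parsed bibliographic records: one author list per paper
          ["Kofia V", "Isserlin R", "Bader GD"],
          ["Isserlin R", "Bader GD"],
          ["Buchan AMJ", "Bader GD"],
      ]

      G = nx.Graph()
      for authors in papers:
          for a, b in itertools.combinations(authors, 2):
              w = G.get_edge_data(a, b, {}).get("weight", 0) + 1
              G.add_edge(a, b, weight=w)  # weight = number of co-authored papers

      print(list(G.edges(data=True)))
      print("most collaborative:", max(G.degree, key=lambda kv: kv[1]))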

  5. A VBA Desktop Database for Proposal Processing at National Optical Astronomy Observatories

    Science.gov (United States)

    Brown, Christa L.

    National Optical Astronomy Observatories (NOAO) has developed a relational Microsoft Windows desktop database using Microsoft Access and the Microsoft Office programming language, Visual Basic for Applications (VBA). The database is used to track data relating to observing proposals from original receipt through the review process, scheduling, observing, and final statistical reporting. The database has automated proposal processing and distribution of information. It allows NOAO to collect and archive data so as to query and analyze information about our science programs in new ways.

  6. vhv supply networks, problems of network structure

    Energy Technology Data Exchange (ETDEWEB)

    Raimbault, J

    1966-04-01

    The present and future power requirements of the Paris area and the structure of the existing networks are discussed. The various constraints that must be allowed for in laying down the structure of a regional transmission network bringing the power of the large national transmission network into the Paris built-up area are described. The theoretical solution that has been adopted, the features of its final form, which is planned for about the year 2000, and the intermediate stages are presented. The problem of the structure of the national power transmission network that is to supply the regional network was also studied; to solve it, a 730 kV network will have to be introduced.

  7. End-System Network Interface Controller for 100 Gb/s Wide Area Networks: Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Wen, Jesse [Acadia Optronics LLC, Rockville, MD (United States)

    2013-08-30

    In recent years, network bandwidth requirements have scaled many times over, pushing the need for the development of data exchange mechanisms at 100 Gb/s and beyond. High-performance computing, climate modeling, large-scale storage, and collaborative scientific research are examples of applications that can greatly benefit from high-bandwidth capabilities of the order of 100 Gb/s. Such requirements, together with advances in IEEE Ethernet standards, Optical Transport Unit 4 (OTU4), and host-system interconnects, demand a network infrastructure supporting throughput rates of the order of 100 Gb/s on a single wavelength. To address this demand, Acadia Optronics, in collaboration with the University of New Mexico, proposed and developed an end-system Network Interface Controller (NIC) for 100 Gb/s WANs. Acadia's 100G NIC employs an FPGA-based system with a high-performance processor interconnect (PCIe 3.0) and a high-capacity optical transmission link (CXP) to provide data transmission at a rate of 100 Gb/s.

  8. Geoscientific (GEO) database of the Andra Meuse / Haute-Marne research center

    International Nuclear Information System (INIS)

    Tabani, P.; Hemet, P.; Hermand, G.; Delay, J.; Auriere, C.

    2010-01-01

    Document available in extended abstract form only. The GEO database (geo-scientific database of the Meuse/Haute-Marne Center) is a tool developed by Andra to group, in a secure computer form, all data related to the acquisition of in situ and laboratory measurements made on solid and fluid samples. This database has three main functions: - Acquisition and management of data and computer files related to geological, geomechanical, hydrogeological and geochemical measurements on solid and fluid samples and to in situ measurements (logging, on-sample measurements, geological logs, etc.). - Consultation by the staff on Andra's intranet network for selective viewing of data linked to a borehole and/or a sample and for making computations and graphs on sets of laboratory measurements related to a sample. - Physical management of fluid and solid samples stored in a 'core library', in order to locate a sample, follow up its movement out of the 'core library' to an organization, and carry out regular inventories. The GEO database is a relational Oracle database. It is installed on a data server which stores information and manages the users' transactions. Users can consult, download and exploit data from any computer connected to the Andra network or the Internet. Access rights are managed through a login/password. Four geo-scientific applications are linked to the GEO database; they are: - The Geosciences portal: a web intranet application accessible from the Andra network. It does not require any particular installation on the client and is accessible through a web browser. A SQL Server Express database manages the users and access rights of the application. This application is used for the acquisition of hydrogeological and geochemical data collected in the field and on fluid samples, as well as data related to scientific work carried out at surface level or in drifts

  9. Firewall Architectures for High-Speed Networks: Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Errin W. Fulp

    2007-08-20

    Firewalls are a key component for securing networks that are vital to government agencies and private industry. They enforce a security policy by inspecting and filtering traffic arriving or departing from a secure network. While performing these critical security operations, firewalls must act transparently to legitimate users, with little or no effect on the perceived network performance (QoS). Packets must be inspected and compared against increasingly complex rule sets and tables, which is a time-consuming process. As a result, current firewall systems can introduce significant delays and are unable to maintain QoS guarantees. Furthermore, firewalls are susceptible to Denial of Service (DoS) attacks that merely overload/saturate the firewall with illegitimate traffic. Current firewall technology only offers a short-term solution that is not scalable; therefore, the objective of this DOE project was to develop new firewall optimization techniques and architectures that meet these important challenges. Firewall optimization concerns decreasing the number of comparisons required per packet, which reduces processing time and delay. This is done by reorganizing policy rules via special sorting techniques that maintain the original policy integrity. This research is important since it applies to current and future firewall systems. Another method for increasing firewall performance is new firewall designs. The architectures under investigation consist of multiple firewalls that collectively enforce a security policy. Our innovative distributed systems quickly divide traffic across different levels based on perceived threat, allowing traffic to be processed in parallel (beyond current firewall sandwich technology). Traffic deemed safe is transmitted to the secure network, while remaining traffic is forwarded to lower levels for further examination. The result of this divide-and-conquer strategy is lower delays for legitimate traffic, higher throughput
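
    To illustrate the rule-reorganization idea in the simplest possible terms (this is not the project's actual algorithm, just a sketch): frequently matched rules are bubbled towards the front of the policy, but never past a rule they overlap with, so the policy's semantics are preserved. The field names, rules and hit counts below are invented.

      def overlaps(r1, r2):
          # Two rules conflict if their match conditions can cover the same packet and
          # their actions differ; the relative order of such rules must be preserved.
          fields_overlap = all(r1[f] == "*" or r2[f] == "*" or r1[f] == r2[f]
                               for f in ("src", "dst", "port"))
          return fields_overlap and r1["action"] != r2["action"]

      def reorder_by_frequency(rules):
          # Move frequently hit rules earlier, but never past a rule they overlap with.
          rules = list(rules)
          for i in range(1, len(rules)):
              j = i
              while (j > 0 and rules[j]["hits"] > rules[j - 1]["hits"]
                     and not overlaps(rules[j], rules[j - 1])):
                  rules[j - 1], rules[j] = rules[j], rules[j - 1]
                  j -= 1
          return rules

      policy = [
          {"src": "10.0.0.0/8", "dst": "*", "port": "22",  "action": "deny",  "hits": 10},
          {"src": "*",          "dst": "*", "port": "80",  "action": "allow", "hits": 900},
          {"src": "*",          "dst": "*", "port": "443", "action": "allow", "hits": 700},
          {"src": "*",          "dst": "*", "port": "*",   "action": "deny",  "hits": 50},
      ]
      for r in reorder_by_frequency(policy):
          print(r)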

  10. Web Exploration Tools for a Fast Federated Optical Survey Database

    Science.gov (United States)

    Humphreys, Roberta M.

    2000-01-01

    especially for galaxies, using a new median centroider and integrated magnitudes for galaxies with an improved density-to-intensity calibration and a "sky" background subtraction. In the original version of StarBase, the object classification fainter than 19.5-20.0 mag. was an extrapolation of the networks trained on brighter objects. We have used a new catalog of galaxies at the NGP to train a neural network on objects fainter than 20th mag. This improved classification is used in the new version of StarBase. We have also added a FITS table option for the data returned from queries on the object catalog. The APS image database includes images in both colors, so we have added a tool for querying the image database in both colors simultaneously. The images can be displayed in parallel or blinked for comparison.

  11. Development of a prosodic database for standard Arabic

    International Nuclear Information System (INIS)

    Chouireb, F.; Nail, M.; Dimeh, Y.; Guerti, M.

    2007-01-01

    The quality of a Text-To-Speech (TTS) synthesis system depends on the naturalness and the intelligibility of the generated speech. For this reason, a high-quality automatic generator of prosody is necessary. Among the most recent developments in this field, there is a growing interest in machine learning techniques, such as neural networks, classification and regression trees, Hidden Markov Models (HMM) and other stochastic methods. All these techniques are based on the analysis of phonetically and prosodically labeled speech corpora. The objective of our research is to realize a prosodic speech database for Arabic such as those available for other languages (the TIMIT database for English, Bosons for French, etc.). (author)

  12. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with the creation of a database design for a standard kindergarten, the installation of the designed database into the database system Oracle Database 10g Express Edition and the demonstration of administration tasks in this database system. The database design was verified by means of a purpose-built access application.

  13. Practical characterization of large networks using neighborhood information

    KAUST Repository

    Wang, Pinghui; Zhao, Junzhou; Ribeiro, Bruno; Lui, John C. S.; Towsley, Don; Guan, Xiaohong

    2018-01-01

    querying a node also reveals partial structural information about its neighbors. Our methods are optimized for NoSQL graph databases (if the database can be accessed directly), or utilize Web APIs available on most major large networks for graph sampling

  14. IAEA nuclear databases for applications

    International Nuclear Information System (INIS)

    Schwerer, Otto

    2003-01-01

    The Nuclear Data Section (NDS) of the International Atomic Energy Agency (IAEA) provides nuclear data services to scientists on a worldwide scale with particular emphasis on developing countries. More than 100 data libraries are made available cost-free by Internet, CD-ROM and other media. These databases are used for practically all areas of nuclear applications as well as basic research. An overview is given of the most important nuclear reaction and nuclear structure databases, such as EXFOR, CINDA, ENDF, NSR, ENSDF, NUDAT, and of selected special purpose libraries such as FENDL, RIPL, RNAL, the IAEA Photonuclear Data Library, and the IAEA charged-particle cross section database for medical radioisotope production. The NDS also coordinates two international nuclear data centre networks and is involved in data development activities (to create new or improve existing data libraries when the available data are inadequate) and in technology transfer to developing countries, e.g. through the installation and support of the mirror web site of the IAEA Nuclear Data Services at IPEN (operational since March 2000) and by organizing nuclear-data related workshops. By encouraging their participation in IAEA Co-ordinated Research Projects and also by compiling their experimental results in databases such as EXFOR, the NDS helps to make developing countries' contributions to nuclear science visible and conveniently available. The web address of the IAEA Nuclear Data Services is http://www.nds.iaea.org and the NDS mirror service at IPEN (Brasil) can be accessed at http://www.nds.ipen.br/ (author)

  15. Data Mining on Distributed Medical Databases: Recent Trends and Future Directions

    Science.gov (United States)

    Atilgan, Yasemin; Dogan, Firat

    As computerization in healthcare services increase, the amount of available digital data is growing at an unprecedented rate and as a result healthcare organizations are much more able to store data than to extract knowledge from it. Today the major challenge is to transform these data into useful information and knowledge. It is important for healthcare organizations to use stored data to improve quality while reducing cost. This paper first investigates the data mining applications on centralized medical databases, and how they are used for diagnostic and population health, then introduces distributed databases. The integration needs and issues of distributed medical databases are described. Finally the paper focuses on data mining studies on distributed medical databases.

  16. The network researchers' network: A social network analysis of the IMP Group 1985-2006

    DEFF Research Database (Denmark)

    Henneberg, Stephan C. M.; Ziang, Zhizhong; Naudé, Peter

    The Industrial Marketing and Purchasing (IMP) Group is a network of academic researchers working in the area of business-to-business marketing. The group meets every year to discuss and exchange ideas, with a conference having been held every year since 1984 (there was no meeting in 1987......). In this paper, based upon the papers presented at the 22 conferences held to date, we undertake a Social Network Analysis in order to examine the degree of co-publishing that has taken place between this group of researchers. We identify the different components in this database, and examine the large main...

  17. Description of geological data in SKBs database GEOTAB

    International Nuclear Information System (INIS)

    Sehlstedt, S.; Stark, T.

    1991-01-01

    Since 1977 the Swedish Nuclear Fuel and Waste Management Co, SKB, has been performing a research and development programme for final disposal of spent nuclear fuel. The purpose of the programme is to acquire knowledge and data on radioactive waste. Measurements for the characterisation of geological, geophysical, hydrogeological and hydrochemical conditions are performed in specific site investigations as well as for geoscientific projects. Large data volumes have been produced since the start of the programme, both raw data and results. Over the years these data were stored in various formats by the different institutions and companies that performed the investigations. It was therefore decided that all data from the research and development programme should be gathered in a database. The database, called GEOTAB, is a relational database. The database comprises six main groups of data volumes. These are: Background information, geological data, geophysical data, hydrological and meteorological data, hydrochemical data, and tracer tests. This report deals with geological data and describes the dataflow from the measurements at the sites to the result tables in the database. The geological investigations have been divided into three categories, and each category is stored separately in the database. They are: Surface fractures, core mapping, and chemical analyses. (authors)

  18. A Database of EPO-Patenting Firms in Denmark

    DEFF Research Database (Denmark)

    Nielsen, Anders Østergaard

    1998-01-01

    The first section gives a brief introduction to the basic stages to be observed by the patent applicant from the initial idea until the patent is granted. Section two presents three examples of how patents are registered in the online patent database INPADOC. Section three accounts for the initial analysis...... of the existing patent stock issued to firms with domicile in Denmark. Sections four and five report the basic characteristics of the EPO-patent sample and the procedures for linking the patent statistics to accounting data at the firm level, and finally they present the basic properties of the resulting database...

  19. Database Description - Open TG-GATEs Pathological Image Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database description - general information: Database name: Open TG-GATEs Pathological Image Database. Alternative name: -. DOI: 10.18908/lsdba.nbdc00954-0... Contact: ...iomedical Innovation, 7-6-8 Saito-asagi, Ibaraki-city, Osaka 567-0085, Japan, TEL: 81-72-641-9826. Database classification: Toxicogenomics Database. Organism taxonomy name: Rattus norvegicus.

  20. Implementation of medical monitor system based on networks

    Science.gov (United States)

    Yu, Hui; Cao, Yuzhen; Zhang, Lixin; Ding, Mingshi

    2006-11-01

    In this paper, the development trends of medical monitor systems are analyzed: portability and network functions are becoming more and more popular among all kinds of medical monitor devices. The architecture of a networked medical monitor system solution is provided, and design and implementation details of the medical monitor terminal, the monitor center software, the distributed medical database and two kinds of medical information terminals are discussed in particular. A Rabbit3000 system is used in the medical monitor terminal to implement security administration of data transfer on the network, the human-machine interface, power management and the DSP interface, while the DSP chip TMS5402 is used for signal analysis and data compression. The distributed medical database is designed for the hospital center according to the DICOM information model and the HL7 standard. A pocket medical information terminal based on an ARM9 embedded platform is also developed to interact with the center database over the network. Two kernels based on WinCE are customized and the corresponding terminal software is developed for nurses' routine care and doctors' auxiliary diagnosis. An invention patent for the monitor terminal has now been approved, and manufacturing and clinical test plans are scheduled. Patent applications have also been filed for the two medical information terminals.

  1. Formalization of Database Systems -- and a Formal Definition of {IMS}

    DEFF Research Database (Denmark)

    Bjørner, Dines; Løvengreen, Hans Henrik

    1982-01-01

    Drawing upon an analogy between Programming Language Systems and Database Systems we outline the requirements that architectural specifications of database systems must fulfil, and argue that only formal, mathematical definitions may satisfy these. Then we illustrate some aspects and touch upon...... some uses of formal definitions of data models and database management systems. A formal model of IMS will carry this discussion. Finally we survey some of the existing literature on formal definitions of database systems. The emphasis will be on constructive definitions in the denotational semantics...... style of the VDM: Vienna Development Method. The role of formal definitions in international standardisation efforts is briefly mentioned....

  2. Report from the 2nd Workshop on Extremely Large Databases

    Directory of Open Access Journals (Sweden)

    Jacek Becla

    2009-03-01

    Full Text Available The complexity and sophistication of large scale analytics in science and industry have advanced dramatically in recent years. Analysts are struggling to use complex techniques such as time series analysis and classification algorithms because their familiar, powerful tools are not scalable and cannot effectively use scalable database systems. The 2nd Extremely Large Databases (XLDB workshop was organized to understand these issues, examine their implications, and brainstorm possible solutions. The design of a new open source science database, SciDB that emerged from the first workshop in this series was also debated. This paper is the final report of the discussions and activities at this workshop.

  3. Network cohesion

    OpenAIRE

    Cavalcanti, Tiago V. V.; Giannitsarou, Chryssi; Johnson, Charles R.

    2016-01-01

    This is the final version of the article. It first appeared from Springer via http://dx.doi.org/10.1007/s00199-016-0992-1 We define a measure of network cohesion and show how it arises naturally in a broad class of dynamic models of endogenous perpetual growth with network externalities. Via a standard growth model, we show why network cohesion is crucial for conditional convergence and explain that as cohesion increases, convergence is faster. We prove properties of network cohesion and d...

  4. Psychology and social networks: a dynamic network theory perspective.

    Science.gov (United States)

    Westaby, James D; Pfaff, Danielle L; Redding, Nicholas

    2014-04-01

    Research on social networks has grown exponentially in recent years. However, despite its relevance, the field of psychology has been relatively slow to explain the underlying goal pursuit and resistance processes influencing social networks in the first place. In this vein, this article aims to demonstrate how a dynamic network theory perspective explains the way in which social networks influence these processes and related outcomes, such as goal achievement, performance, learning, and emotional contagion at the interpersonal level of analysis. The theory integrates goal pursuit, motivation, and conflict conceptualizations from psychology with social network concepts from sociology and organizational science to provide a taxonomy of social network role behaviors, such as goal striving, system supporting, goal preventing, system negating, and observing. This theoretical perspective provides psychologists with new tools to map social networks (e.g., dynamic network charts), which can help inform the development of change interventions. Implications for social, industrial-organizational, and counseling psychology as well as conflict resolution are discussed, and new opportunities for research are highlighted, such as those related to dynamic network intelligence (also known as cognitive accuracy), levels of analysis, methodological/ethical issues, and the need to theoretically broaden the study of social networking and social media behavior. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  5. Comparison and validation of community structures in complex networks

    Science.gov (United States)

    Gustafsson, Mika; Hörnquist, Michael; Lombardi, Anna

    2006-07-01

    The issue of partitioning a network into communities has attracted a great deal of attention recently. Most authors seem to equate this issue with the one of finding the maximum value of the modularity, as defined by Newman. Since the problem formulated this way is believed to be NP-hard, most effort has gone into the construction of search algorithms, and less to the question of other measures of community structures, similarities between various partitionings and the validation with respect to external information. Here we concentrate on a class of computer generated networks and on three well-studied real networks which constitute a bench-mark for network studies; the karate club, the US college football teams and a gene network of yeast. We utilize some standard ways of clustering data (originally not designed for finding community structures in networks) and show that these classical methods sometimes outperform the newer ones. We discuss various measures of the strength of the modular structure, and show by examples features and drawbacks. Further, we compare different partitions by applying some graph-theoretic concepts of distance, which indicate that one of the quality measures of the degree of modularity corresponds quite well with the distance from the true partition. Finally, we introduce a way to validate the partitionings with respect to external data when the nodes are classified but the network structure is unknown. This is here possible since we know everything of the computer generated networks, as well as the historical answer to how the karate club and the football teams are partitioned in reality. The partitioning of the gene network is validated by use of the Gene Ontology database, where we show that a community in general corresponds to a biological process.
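
    For reference, the modularity being maximized here is Newman's Q; for an unweighted graph with adjacency matrix A, node degrees k_i, m edges, and a partition assigning node i to community c_i,

      Q = \frac{1}{2m} \sum_{ij} \left[ A_{ij} - \frac{k_i k_j}{2m} \right] \delta(c_i, c_j),

    so Q compares the observed fraction of within-community edges with its expectation under a degree-preserving random rewiring of the network.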

  6. Complex Networks

    CERN Document Server

    Evsukoff, Alexandre; González, Marta

    2013-01-01

    In the last decade we have seen the emergence of a new inter-disciplinary field focusing on the understanding of networks which are dynamic, large, open, and have a structure sometimes called random-biased. The field of Complex Networks is helping us better understand many complex phenomena such as the spread of diseases, protein interactions, social relationships, to name but a few. Studies in Complex Networks are gaining attention due to some major scientific breakthroughs proposed by network scientists helping us understand and model interactions contained in large datasets. In fact, if we could point to one event leading to the widespread use of complex network analysis, it would be the availability of online databases. Theories of Random Graphs from Erdös and Rényi from the late 1950s led us to believe that most networks had random characteristics. The work on large online datasets told us otherwise. Starting with the work of Barabási and Albert as well as Watts and Strogatz in the late 1990s, we now know th...

  7. Hidden neural networks

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose; Riis, Søren Kamaric

    1999-01-01

    A general framework for hybrids of hidden Markov models (HMMs) and neural networks (NNs) called hidden neural networks (HNNs) is described. The article begins by reviewing standard HMMs and estimation by conditional maximum likelihood, which is used by the HNN. In the HNN, the usual HMM probability...... parameters are replaced by the outputs of state-specific neural networks. As opposed to many other hybrids, the HNN is normalized globally and therefore has a valid probabilistic interpretation. All parameters in the HNN are estimated simultaneously according to the discriminative conditional maximum...... likelihood criterion. The HNN can be viewed as an undirected probabilistic independence network (a graphical model), where the neural networks provide a compact representation of the clique functions. An evaluation of the HNN on the task of recognizing broad phoneme classes in the TIMIT database shows clear...

  8. The organisational structure of protein networks: revisiting the centrality-lethality hypothesis.

    Science.gov (United States)

    Raman, Karthik; Damaraju, Nandita; Joshi, Govind Krishna

    2014-03-01

    Protein networks, describing physical interactions as well as functional associations between proteins, have been unravelled for many organisms in the recent past. Databases such as the STRING provide excellent resources for the analysis of such networks. In this contribution, we revisit the organisation of protein networks, particularly the centrality-lethality hypothesis, which hypothesises that nodes with higher centrality in a network are more likely to produce lethal phenotypes on removal, compared to nodes with lower centrality. We consider the protein networks of a diverse set of 20 organisms, with essentiality information available in the Database of Essential Genes and assess the relationship between centrality measures and lethality. For each of these organisms, we obtained networks of high-confidence interactions from the STRING database, and computed network parameters such as degree, betweenness centrality, closeness centrality and pairwise disconnectivity indices. We observe that the networks considered here are predominantly disassortative. Further, we observe that essential nodes in a network have a significantly higher average degree and betweenness centrality, compared to the network average. Most previous studies have evaluated the centrality-lethality hypothesis for Saccharomyces cerevisiae and Escherichia coli; we here observe that the centrality-lethality hypothesis holds good for a large number of organisms, with certain limitations. Betweenness centrality may also be a useful measure to identify essential nodes, but measures like closeness centrality and pairwise disconnectivity are not significantly higher for essential nodes.
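
    The comparison described here is easy to reproduce on any network with essentiality labels; a toy sketch follows (the graph and the "essential" set below are invented, not taken from STRING or the Database of Essential Genes):

      import networkx as nx

      # toy protein network with a set of "essential" nodes (stand-in for DEG annotations)
      G = nx.Graph([("p1", "p2"), ("p1", "p3"), ("p1", "p4"), ("p2", "p3"),
                    ("p4", "p5"), ("p5", "p6"), ("p6", "p7")])
      essential = {"p1", "p5"}

      deg = dict(G.degree())
      btw = nx.betweenness_centrality(G)

      def mean(values, nodes):
          return sum(values[n] for n in nodes) / len(nodes)

      print("degree, essential vs rest:", mean(deg, essential), mean(deg, set(G) - essential))
      print("betweenness, essential vs rest:", mean(btw, essential), mean(btw, set(G) - essential))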

  9. Nuclear Data and Reaction Rate Databases in Nuclear Astrophysics

    Science.gov (United States)

    Lippuner, Jonas

    2018-06-01

    Astrophysical simulations and models require a large variety of micro-physics data, such as equation of state tables, atomic opacities, properties of nuclei, and nuclear reaction rates. Some of the required data is experimentally accessible, but the extreme conditions present in many astrophysical scenarios cannot be reproduced in the laboratory and thus theoretical models are needed to supplement the empirical data. Collecting data from various sources and making them available as a database in a unified format is a formidable task. I will provide an overview of the data requirements in astrophysics with an emphasis on nuclear astrophysics. I will then discuss some of the existing databases, the science they enable, and their limitations. Finally, I will offer some thoughts on how to design a useful database.

  10. Investigating core genetic-and-epigenetic cell cycle networks for stemness and carcinogenic mechanisms, and cancer drug design using big database mining and genome-wide next-generation sequencing data.

    Science.gov (United States)

    Li, Cheng-Wei; Chen, Bor-Sen

    2016-10-01

    Recent studies have demonstrated that cell cycle plays a central role in development and carcinogenesis. Thus, the use of big databases and genome-wide high-throughput data to unravel the genetic and epigenetic mechanisms underlying cell cycle progression in stem cells and cancer cells is a matter of considerable interest. Real genetic-and-epigenetic cell cycle networks (GECNs) of embryonic stem cells (ESCs) and HeLa cancer cells were constructed by applying system modeling, system identification, and big database mining to genome-wide next-generation sequencing data. Real GECNs were then reduced to core GECNs of HeLa cells and ESCs by applying principal genome-wide network projection. In this study, we investigated potential carcinogenic and stemness mechanisms for systems cancer drug design by identifying common core and specific GECNs between HeLa cells and ESCs. Integrating drug database information with the specific GECNs of HeLa cells could lead to identification of multiple drugs for cervical cancer treatment with minimal side-effects on the genes in the common core. We found that dysregulation of miR-29C, miR-34A, miR-98, and miR-215; and methylation of ANKRD1, ARID5B, CDCA2, PIF1, STAMBPL1, TROAP, ZNF165, and HIST1H2AJ in HeLa cells could result in cell proliferation and anti-apoptosis through NFκB, TGF-β, and PI3K pathways. We also identified 3 drugs, methotrexate, quercetin, and mimosine, which repressed the activated cell cycle genes, ARID5B, STK17B, and CCL2, in HeLa cells with minimal side-effects.

  11. LeishCyc: a biochemical pathways database for Leishmania major

    Directory of Open Access Journals (Sweden)

    Doyle Maria A

    2009-06-01

    Full Text Available Abstract Background Leishmania spp. are sandfly transmitted protozoan parasites that cause a spectrum of diseases in more than 12 million people worldwide. Much research is now focusing on how these parasites adapt to the distinct nutrient environments they encounter in the digestive tract of the sandfly vector and the phagolysosome compartment of mammalian macrophages. While data mining and annotation of the genomes of three Leishmania species has provided an initial inventory of predicted metabolic components and associated pathways, resources for integrating this information into metabolic networks and incorporating data from transcript, protein, and metabolite profiling studies is currently lacking. The development of a reliable, expertly curated, and widely available model of Leishmania metabolic networks is required to facilitate systems analysis, as well as discovery and prioritization of new drug targets for this important human pathogen. Description The LeishCyc database was initially built from the genome sequence of Leishmania major (v5.2, based on the annotation published by the Wellcome Trust Sanger Institute. LeishCyc was manually curated to remove errors, correct automated predictions, and add information from the literature. The ongoing curation is based on public sources, literature searches, and our own experimental and bioinformatics studies. In a number of instances we have improved on the original genome annotation, and, in some ambiguous cases, collected relevant information from the literature in order to help clarify gene or protein annotation in the future. All genes in LeishCyc are linked to the corresponding entry in GeneDB (Wellcome Trust Sanger Institute. Conclusion The LeishCyc database describes Leishmania major genes, gene products, metabolites, their relationships and biochemical organization into metabolic pathways. LeishCyc provides a systematic approach to organizing the evolving information about Leishmania

  12. A database of volcanic hazards and their physical impacts to critical infrastructure

    Science.gov (United States)

    Wilson, Grant; Wilson, Thomas; Deligne, Natalia

    2013-04-01

    Approximately 10% of the world's population lives within 100 km of historically active volcanoes. Consequently, considerable critical infrastructure is at risk of being affected by volcanic eruptions, where critical infrastructure includes: electricity and wastewater networks; water supply systems; transport routes; communications; and buildings. Appropriate risk management strategies are required to minimise the risk to infrastructure, which necessitates detailed understanding of both volcanic hazards and infrastructure parameters and vulnerabilities. To address this, we are developing a database of the physical impacts and vulnerability of critical infrastructure observed during/following historic eruptions, placed in the context of event-specific volcanic hazard and infrastructure parameters. Our database considers: volcanic hazard parameters for each case study eruption (tephra thickness, dynamic pressure of PDCs, etc.); inventory of infrastructure elements present within the study area (geographical extent, age, etc.); the type and number of impacts and disruption caused to particular infrastructure sectors; and the quantified assessment of the vulnerability of built environments. Data have been compiled from a wide range of literature, focussing in particular on impact assessment studies which document in detail the damage sustained by critical infrastructure during a given eruption. We are creating a new vulnerability ranking to quantify the vulnerability of built environments affected by volcanic eruptions. The ranking is based upon a range of physical impacts and service disruption criteria, and is assigned to each case study. This ranking will permit comparison of vulnerabilities between case studies as well as indicate expected vulnerability during future eruptions. We are also developing hazard intensity thresholds indicating when specific damage states are expected for different critical infrastructure sectors. Finally, we have developed a data quality

  13. 16th East-European Conference on Advances in Databases and Information Systems (ADBIS 2012)

    CERN Document Server

    Härder, Theo; Wrembel, Robert; Advances in Databases and Information Systems

    2013-01-01

    This volume is the second one of the 16th East-European Conference on Advances in Databases and Information Systems (ADBIS 2012), held on September 18-21, 2012, in Poznań, Poland. The first one has been published in the LNCS series.   This volume includes 27 research contributions, selected out of 90. The contributions cover a wide spectrum of topics in the database and information systems field, including: database foundation and theory, data modeling and database design, business process modeling, query optimization in relational and object databases, materialized view selection algorithms, index data structures, distributed systems, system and data integration, semi-structured data and databases, semantic data management, information retrieval, data mining techniques, data stream processing, trust and reputation in the Internet, and social networks. Thus, the content of this volume covers the research areas from fundamentals of databases, through still hot topic research problems (e.g., data mining, XML ...

  14. IRIS Toxicological Review of Acrolein (2003 Final)

    Science.gov (United States)

    EPA announced the release of the final report, Toxicological Review of Acrolein: in support of the Integrated Risk Information System (IRIS). The updated Summary for Acrolein and accompanying toxicological review have been added to the IRIS Database.

  15. Human health risk assessment database, "the NHSRC toxicity value database": supporting the risk assessment process at US EPA's National Homeland Security Research Center.

    Science.gov (United States)

    Moudgal, Chandrika J; Garrahan, Kevin; Brady-Roberts, Eletha; Gavrelis, Naida; Arbogast, Michelle; Dun, Sarah

    2008-11-15

    The toxicity value database of the United States Environmental Protection Agency's (EPA) National Homeland Security Research Center has been in development since 2004. The toxicity value database includes a compilation of agent property, toxicity, dose-response, and health effects data for 96 agents: 84 chemical and radiological agents and 12 biotoxins. The database is populated with multiple toxicity benchmark values and agent property information from secondary sources, with web links to the secondary sources, where available. A selected set of primary literature citations and associated dose-response data are also included. The toxicity value database offers a powerful means to quickly and efficiently gather pertinent toxicity and dose-response data for a number of agents that are of concern to the nation's security. This database, in conjunction with other tools, will play an important role in understanding human health risks, and will provide a means for risk assessors and managers to make quick and informed decisions on the potential health risks and determine appropriate responses (e.g., cleanup) to agent release. A final, stand-alone MS Access working version of the toxicity value database was completed in November 2007.

  16. Human health risk assessment database, 'the NHSRC toxicity value database': Supporting the risk assessment process at US EPA's National Homeland Security Research Center

    International Nuclear Information System (INIS)

    Moudgal, Chandrika J.; Garrahan, Kevin; Brady-Roberts, Eletha; Gavrelis, Naida; Arbogast, Michelle; Dun, Sarah

    2008-01-01

    The toxicity value database of the United States Environmental Protection Agency's (EPA) National Homeland Security Research Center has been in development since 2004. The toxicity value database includes a compilation of agent property, toxicity, dose-response, and health effects data for 96 agents: 84 chemical and radiological agents and 12 biotoxins. The database is populated with multiple toxicity benchmark values and agent property information from secondary sources, with web links to the secondary sources, where available. A selected set of primary literature citations and associated dose-response data are also included. The toxicity value database offers a powerful means to quickly and efficiently gather pertinent toxicity and dose-response data for a number of agents that are of concern to the nation's security. This database, in conjunction with other tools, will play an important role in understanding human health risks, and will provide a means for risk assessors and managers to make quick and informed decisions on the potential health risks and determine appropriate responses (e.g., cleanup) to agent release. A final, stand-alone MS Access working version of the toxicity value database was completed in November 2007

  17. Update History of This Database - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update History of This Database: 2010/03/29 Yeast Interacting Proteins Database English archive site is opened. 2000/12/04 Yeast Interacting Proteins Database ( http://itolab.cb.k.u-tokyo.ac.jp/Y2H/ ) is released.

  18. Creating and analyzing pathway and protein interaction compendia for modelling signal transduction networks

    Directory of Open Access Journals (Sweden)

    Kirouac Daniel C

    2012-05-01

    Full Text Available Abstract Background Understanding the information-processing capabilities of signal transduction networks, how those networks are disrupted in disease, and rationally designing therapies to manipulate diseased states require systematic and accurate reconstruction of network topology. Data on networks central to human physiology, such as the inflammatory signalling networks analyzed here, are found in a multiplicity of on-line resources of pathway and interactome databases (Cancer CellMap, GeneGo, KEGG, NCI-Pathway Interactome Database (NCI-PID), PANTHER, Reactome, I2D, and STRING). We sought to determine whether these databases contain overlapping information and whether they can be used to construct high reliability prior knowledge networks for subsequent modeling of experimental data. Results We have assembled an ensemble network from multiple on-line sources representing a significant portion of all machine-readable and reconcilable human knowledge on proteins and protein interactions involved in inflammation. This ensemble network has many features expected of complex signalling networks assembled from high-throughput data: a power law distribution of both node degree and edge annotations, and topological features of a “bow tie” architecture in which diverse pathways converge on a highly conserved set of enzymatic cascades focused around PI3K/AKT, MAPK/ERK, JAK/STAT, NFκB, and apoptotic signaling. Individual pathways exhibit “fuzzy” modularity that is statistically significant but still involving a majority of “cross-talk” interactions. However, we find that the most widely used pathway databases are highly inconsistent with respect to the actual constituents and interactions in this network. Using a set of growth factor signalling networks as examples (epidermal growth factor, transforming growth factor-beta, tumor necrosis factor, and wingless), we find a multiplicity of network topologies in which receptors couple to downstream
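    The ensemble construction described above, pooling edges from several pathway and interactome resources while tracking which resources support each interaction, reduces to a union of annotated edge lists. A small Python sketch; the edge lists are invented placeholders standing in for exports from the databases named in the abstract:

        from collections import defaultdict

        # Placeholder edge lists keyed by source database
        sources = {
            "KEGG":     [("EGFR", "GRB2"), ("GRB2", "SOS1"), ("AKT1", "NFKB1")],
            "Reactome": [("EGFR", "GRB2"), ("SOS1", "HRAS"), ("AKT1", "NFKB1")],
            "STRING":   [("EGFR", "GRB2"), ("HRAS", "RAF1")],
        }

        edge_support = defaultdict(set)
        for db, edges in sources.items():
            for a, b in edges:
                edge_support[frozenset((a, b))].add(db)

        # Edge annotation count: how many resources agree on each interaction
        for edge, dbs in sorted(edge_support.items(), key=lambda kv: -len(kv[1])):
            print(sorted(edge), len(dbs), sorted(dbs))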

  19. 7th Asian Conference on Intelligent Information and Database Systems (ACIIDS 2015)

    CERN Document Server

    Nguyen, Ngoc; Batubara, John; New Trends in Intelligent Information and Database Systems

    2015-01-01

    Intelligent information and database systems are two closely related subfields of modern computer science which have been known for over thirty years. They focus on the integration of artificial intelligence and classic database technologies to create the class of next generation information systems. The book focuses on new trends in intelligent information and database systems and discusses topics addressed to the foundations and principles of data, information, and knowledge models, methodologies for intelligent information and database systems analysis, design, and implementation, their validation, maintenance and evolution. They cover a broad spectrum of research topics discussed both from the practical and theoretical points of view such as: intelligent information retrieval, natural language processing, semantic web, social networks, machine learning, knowledge discovery, data mining, uncertainty management and reasoning under uncertainty, intelligent optimization techniques in information systems, secu...

  20. The MetaCyc database of metabolic pathways and enzymes and the BioCyc collection of pathway/genome databases

    Science.gov (United States)

    Caspi, Ron; Altman, Tomer; Dale, Joseph M.; Dreher, Kate; Fulcher, Carol A.; Gilham, Fred; Kaipa, Pallavi; Karthikeyan, Athikkattuvalasu S.; Kothari, Anamika; Krummenacker, Markus; Latendresse, Mario; Mueller, Lukas A.; Paley, Suzanne; Popescu, Liviu; Pujar, Anuradha; Shearer, Alexander G.; Zhang, Peifen; Karp, Peter D.

    2010-01-01

    The MetaCyc database (MetaCyc.org) is a comprehensive and freely accessible resource for metabolic pathways and enzymes from all domains of life. The pathways in MetaCyc are experimentally determined, small-molecule metabolic pathways and are curated from the primary scientific literature. With more than 1400 pathways, MetaCyc is the largest collection of metabolic pathways currently available. Pathway reactions are linked to one or more well-characterized enzymes, and both pathways and enzymes are annotated with reviews, evidence codes, and literature citations. BioCyc (BioCyc.org) is a collection of more than 500 organism-specific Pathway/Genome Databases (PGDBs). Each BioCyc PGDB contains the full genome and predicted metabolic network of one organism. The network, which is predicted by the Pathway Tools software using MetaCyc as a reference, consists of metabolites, enzymes, reactions and metabolic pathways. BioCyc PGDBs also contain additional features, such as predicted operons, transport systems, and pathway hole-fillers. The BioCyc Web site offers several tools for the analysis of the PGDBs, including Omics Viewers that enable visualization of omics datasets on two different genome-scale diagrams and tools for comparative analysis. The BioCyc PGDBs generated by SRI are offered for adoption by any party interested in curation of metabolic, regulatory, and genome-related information about an organism. PMID:19850718

  1. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name RMOS Alternative nam...arch Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice Microarray Data and other Gene Expression Database...s Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description The Ric...19&lang=en Whole data download - Referenced database Rice Expression Database (RED) Rice full-length cDNA Database... (KOME) Rice Genome Integrated Map Database (INE) Rice Mutant Panel Database (Tos17) Rice Genome Annotation Database

  2. IRIS Toxicological Review of Chloroform (Final Report)

    Science.gov (United States)

    EPA is announcing the release of the final report, Toxicological Review of Chloroform: in support of the Integrated Risk Information System (IRIS). The updated Summary for Chloroform and accompanying Quickview have also been added to the IRIS Database.

  3. RefDB: The Reference Database for CMS Monte Carlo Production

    CERN Document Server

    Lefébure, V

    2003-01-01

    RefDB is the CMS Monte Carlo Reference Database. It is used for recording and managing all details of physics simulation, reconstruction and analysis requests, for coordinating task assignments to world-wide distributed Regional Centers, Grid-enabled or not, and for tracing their progress. RefDB is also the central database that the workflow planner contacts in order to get task instructions. It is automatically and asynchronously updated with book-keeping run summaries. Finally, it is the end-user interface to data catalogues.

  4. Construction of In-house Databases in a Corporation

    Science.gov (United States)

    Tamura, Haruki; Mezaki, Koji

    This paper describes the fundamental idea of technical information management in Mitsubishi Heavy Industries, Ltd., and the present status of the activities. It then introduces the background and history of the development, along with problems and countermeasures, regarding the Mitsubishi Heavy Industries Technical Information Retrieval System (called MARON), which started its service in May 1985. The system deals with databases which cover information common to the whole company (in-house research and technical reports, holding information of books, journals and so on), and local information held in each business division or department. Anybody from any division can access these databases through the company-wide network. The in-house interlibrary loan subsystem called Orderentry is available, which supports the acquisition of original materials.

  5. Practical characterization of large networks using neighborhood information

    KAUST Repository

    Wang, Pinghui

    2018-02-14

    Characterizing large complex networks such as online social networks through node querying is a challenging task. Network service providers often impose severe constraints on the query rate, hence limiting the sample size to a small fraction of the total network of interest. Various ad hoc subgraph sampling methods have been proposed, but many of them give biased estimates with no theoretical basis for their accuracy. In this work, we focus on developing sampling methods for large networks where querying a node also reveals partial structural information about its neighbors. Our methods are optimized for NoSQL graph databases (if the database can be accessed directly), or utilize Web APIs available on most major large networks for graph sampling. We show that our sampling method has provable convergence guarantees on being an unbiased estimator, and it is more accurate than state-of-the-art methods. We also explore methods to uncover shortest paths between a subset of nodes and detect high degree nodes by sampling only a small fraction of the network of interest. Our results demonstrate that utilizing neighborhood information yields methods that are two orders of magnitude faster than state-of-the-art methods.
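    The key idea above, that querying a node also reveals its neighbours and that a biased sample can still give unbiased estimates after re-weighting, can be illustrated with a plain random-walk sampler. A hedged Python sketch using networkx; the harmonic-mean correction shown is a standard re-weighting for degree-proportional sampling, not necessarily the exact estimator of the paper:

        import random
        import networkx as nx

        def random_walk_sample(g, start, steps):
            """Random-walk node sample; each query reveals the node's neighbour list."""
            v, sample = start, []
            for _ in range(steps):
                sample.append(v)
                v = random.choice(list(g.neighbors(v)))
            return sample

        g = nx.barabasi_albert_graph(5000, 5, seed=3)   # placeholder "large" network
        sample = random_walk_sample(g, start=0, steps=2000)

        # A random walk visits nodes proportionally to degree, so the plain mean
        # over-estimates average degree; the harmonic mean removes this bias.
        naive = sum(g.degree(v) for v in sample) / len(sample)
        corrected = len(sample) / sum(1.0 / g.degree(v) for v in sample)
        true_avg = 2 * g.number_of_edges() / g.number_of_nodes()
        print(f"naive {naive:.2f}  corrected {corrected:.2f}  true {true_avg:.2f}")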

  6. Dead end metabolites--defining the known unknowns of the E. coli metabolic network.

    Directory of Open Access Journals (Sweden)

    Amanda Mackie

    Full Text Available The EcoCyc database is an online scientific database which provides an integrated view of the metabolic and regulatory network of the bacterium Escherichia coli K-12 and facilitates computational exploration of this important model organism. We have analysed the occurrence of dead end metabolites within the database: these are metabolites which lack the requisite reactions (either metabolic or transport) that would account for their production or consumption within the metabolic network. 127 dead end metabolites were identified from the 995 compounds that are contained within the EcoCyc metabolic network. Their presence reflects either a deficit in our representation of the network or in our knowledge of E. coli metabolism. Extensive literature searches resulted in the addition of 38 transport reactions and 3 metabolic reactions to the database and led to an improved representation of the pathway for Vitamin B12 salvage. 39 dead end metabolites were identified as components of reactions that are not physiologically relevant to E. coli K-12; these reactions are properties of purified enzymes in vitro that would not be expected to occur in vivo. Our analysis led to improvements in the software that underpins the database and to the program that finds dead end metabolites within EcoCyc. The remaining dead end metabolites in the EcoCyc database likely represent deficiencies in our knowledge of E. coli metabolism.
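    The dead-end test described above amounts to checking, for every metabolite, whether at least one reaction (metabolic or transport) produces it and at least one consumes it. A minimal Python sketch on a toy reaction list; the reactions are illustrative, not EcoCyc data:

        # Each reaction: (name, substrates, products); transport reactions are
        # written the same way, with the external species on one side.
        reactions = [
            ("glc_transport", {"glc_ext"}, {"glc"}),
            ("rxn1", {"glc"}, {"g6p"}),
            ("rxn2", {"g6p"}, {"f6p"}),
            ("rxn3", {"f6p"}, {"fbp"}),
        ]

        produced = set().union(*(prods for _, _, prods in reactions))
        consumed = set().union(*(subs for _, subs, _ in reactions))
        metabolites = produced | consumed

        dead_ends = {m for m in metabolites if m not in produced or m not in consumed}
        print(sorted(dead_ends))   # 'fbp' is never consumed, 'glc_ext' never produced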

  7. Automated Identification of Core Regulatory Genes in Human Gene Regulatory Networks.

    Directory of Open Access Journals (Sweden)

    Vipin Narang

    Full Text Available Human gene regulatory networks (GRN) can be difficult to interpret due to a tangle of edges interconnecting thousands of genes. We constructed a general human GRN from extensive transcription factor and microRNA target data obtained from public databases. In a subnetwork of this GRN that is active during estrogen stimulation of MCF-7 breast cancer cells, we benchmarked automated algorithms for identifying core regulatory genes (transcription factors and microRNAs). Among these algorithms, we identified the K-core decomposition, PageRank and betweenness centrality algorithms as the most effective for discovering core regulatory genes in the network, evaluated based on previously known roles of these genes in MCF-7 biology as well as on their ability to explain the up or down expression status of up to 70% of the remaining genes. Finally, we validated the use of the K-core algorithm for organizing the GRN in an easier-to-interpret layered hierarchy where more influential regulatory genes percolate towards the inner layers. The integrated human gene and miRNA network and software used in this study are provided as supplementary materials (S1 Data) accompanying this manuscript.
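    The three algorithms benchmarked above are available in standard graph libraries, so ranking candidate core regulators is short to express. A Python sketch with networkx on a placeholder directed network; a real GRN would instead have TF-to-target and miRNA-to-target edges loaded from the public databases mentioned:

        import networkx as nx

        g = nx.gnp_random_graph(200, 0.03, directed=True, seed=7)   # placeholder GRN

        core_number = nx.core_number(g.to_undirected())   # K-core decomposition
        pagerank = nx.pagerank(g)
        betweenness = nx.betweenness_centrality(g)

        def top(score, k=5):
            return sorted(score, key=score.get, reverse=True)[:k]

        print("top by K-core:      ", top(core_number))
        print("top by PageRank:    ", top(pagerank))
        print("top by betweenness: ", top(betweenness))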

  8. A Study on the Korea Database Industry Promotion Act Legislation

    Directory of Open Access Journals (Sweden)

    Bae, Seoung-Hun

    2013-09-01

    Full Text Available The Database Industry Promotion Act was proposed at the National Assembly plenary session on July 26, 2012 and since then it has been in the process of enactment in consultation with all the governmental departments concerned. The recent trend of economic globalization and smart device innovation suggests a new opportunity and challenges for all industries. The database industry is also facing a new phase in an era of smart innovation. Korea is in a moment of opportunity to take an innovative approach to promoting the database industry. Korea should set up a national policy to promote the database industry for citizens, government, and research institutions, as well as enterprises. Above all, the Database Industry Promotion Act could play a great role in promoting the social infrastructure to enhance the capacity of small and medium-sized enterprises. This article discusses the background of the development of the Database Industry Promotion Act and its legislative processes in order to clarify its legal characteristics, including the meaning of the act. In addition, this article explains individual items related to the overall structure of the Database Industry Promotion Act. Finally, this article reviews the economic effects of the database industry for now and the future.

  9. Resource Aware Intelligent Network Services (RAINS) Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, Tom; Yang, Xi

    2018-01-16

    The Resource Aware Intelligent Network Services (RAINS) project conducted research and developed technologies in the area of cyberinfrastructure resource modeling and computation. The goal of this work was to provide a foundation to enable intelligent, software defined services which span the network AND the resources which connect to the network. A Multi-Resource Service Plane (MRSP) was defined, which allows resource owners/managers to locate and place themselves from a topology and service availability perspective within the dynamic networked cyberinfrastructure ecosystem. The MRSP enables the presentation of integrated topology views and computation results which can include resources across the spectrum of compute, storage, and networks. The MRSP developed by the RAINS project includes the following key components: i) Multi-Resource Service (MRS) Ontology/Multi-Resource Markup Language (MRML), ii) Resource Computation Engine (RCE), iii) Modular Driver Framework (to allow integration of a variety of external resources). The MRS/MRML is a general and extensible modeling framework that allows resource owners to model, or describe, a wide variety of resource types. All resources are described using three categories of elements: Resources, Services, and Relationships between the elements. This modeling framework defines a common method for the transformation of cyberinfrastructure resources into data in the form of MRML models. In order to realize this infrastructure datification, the RAINS project developed a model-based computation system, the "RAINS Computation Engine (RCE)". The RCE has the ability to ingest, process, integrate, and compute based on automatically generated MRML models. The RCE interacts with the resources through system drivers which are specific to the type of external network or resource controller. The RAINS project developed a modular and pluggable driver system which allows a variety of resource controllers to automatically generate

  10. The establishment of the Blacknest seismological database on the Rutherford Laboratory system 360/195 computer

    International Nuclear Information System (INIS)

    Blamey, C.

    1977-01-01

    In order to assess the problems which might arise from monitoring a comprehensive test ban treaty by seismological methods, an experimental monitoring operation is being conducted. This work has involved the establishment of a database on the Rutherford Laboratory 360/195 system computer. The database can be accessed in the UK over the public telephone network and in the USA via ARPANET. (author)

  11. Engineering method to build the composite structure ply database

    Directory of Open Access Journals (Sweden)

    Qinghua Shi

    Full Text Available In this paper, a new method to build a composite ply database with engineering design constraints is proposed. This method has two levels: the core stacking sequence design and the whole stacking sequence design. The core stacking sequences are obtained by the full permutation algorithm considering the ply ratio requirement and the dispersion character which characterizes the dispersion of ply angles. The whole stacking sequences are the combinations of the core stacking sequences. By excluding the ply sequences which do not meet the engineering requirements, the final ply database is obtained. One example with the constraints that the total layer number is 100 and the ply ratio is 30:60:10 is presented to validate the method. This method provides a new way to set up the ply database based on the engineering requirements without adopting intelligent optimization algorithms. Keywords: Composite ply database, VBA program, Structure design, Stacking sequence
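    The core-sequence enumeration described above (full permutation under a ply-ratio constraint, then filtering on engineering rules) can be illustrated with a much smaller core than the 100-ply example, as in the sketch below. It is written in Python rather than the VBA program mentioned in the keywords, enumerates the distinct orderings of a 10-ply core with a 30:60:10 ratio of 0/±45/90 degree plies, and keeps only sequences with no more than two identical adjacent plies; the core size and the adjacency rule are illustrative assumptions, not the paper's exact constraints:

        from itertools import permutations

        # 10-ply core respecting the 30:60:10 ratio: three 0°, six ±45°, one 90° ply
        core_plies = [0] * 3 + [45] * 6 + [90] * 1

        def acceptable(seq, max_run=2):
            """Reject sequences with more than max_run identical adjacent plies."""
            run = 1
            for a, b in zip(seq, seq[1:]):
                run = run + 1 if a == b else 1
                if run > max_run:
                    return False
            return True

        core_sequences = {seq for seq in permutations(core_plies) if acceptable(seq)}
        print(len(core_sequences), "candidate core stacking sequences")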

  12. The Copenhagen primary care differential count (CopDiff) database

    DEFF Research Database (Denmark)

    Andersen, Christen Bertel L; Siersma, V.; Karlslund, W.

    2014-01-01

    BACKGROUND: The differential blood cell count provides valuable information about a person's state of health. Together with a variety of biochemical variables, these analyses describe important physiological and pathophysiological relations. There is a need for research databases to explore assoc...... the construction of the Copenhagen Primary Care Differential Count database as well as the distribution of characteristics of the population it covers and the variables that are recorded. Finally, it gives examples of its use as an inspiration to peers for collaboration...... Practitioners' Laboratory has registered all analytical results since July 1, 2000. The Copenhagen Primary Care Differential Count database contains all differential blood cell count results (n=1,308,022) from July 1, 2000 to January 25, 2010 requested by general practitioners, along with results from analysis

  13. Translation from the collaborative OSM database to cartography

    Science.gov (United States)

    Hayat, Flora

    2018-05-01

    The OpenStreetMap (OSM) database includes original items very useful for geographical analysis and for creating thematic maps. Contributors record in the open database various themes regarding amenities, leisure, transport, buildings and boundaries. The Michelin mapping department develops map prototypes to test the feasibility of mapping based on OSM. A research project is in development to translate the OSM database structure into a database structure that fits Michelin graphic guidelines. It aims at defining the right structure for Michelin's uses. The research project relies on the analysis of semantic and geometric heterogeneities in OSM data. To that end, Michelin implements methods to transform the input geographical database into a cartographic image dedicated to specific uses (routing and tourist maps). The paper focuses on the mapping tools available to produce a personalised spatial database. Based on the processed data, paper and Web maps can be displayed. Two prototypes are described in this article: a vector tile web map and a mapping method to produce paper maps on a regional scale. The vector tile mapping method offers easy navigation within the map and within graphic and thematic guidelines. Paper maps can be partly automatically drawn. The drawing automation and data management are part of the map creation, as well as the final hand-drawing phase. Both prototypes have been set up using the OSM technical ecosystem.
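    The translation step described above, from OSM key/value tags to a house cartographic classification, is essentially a priority-ordered mapping table applied feature by feature. A toy Python sketch; the tag-to-class rules are invented for illustration and are not Michelin's actual specification:

        # Illustrative OSM tag -> cartographic class rules, highest priority first
        RULES = [
            ({"highway": "motorway"}, "road_major"),
            ({"highway": "residential"}, "road_minor"),
            ({"tourism": "museum"}, "poi_tourism"),
            ({"amenity": "restaurant"}, "poi_food"),
        ]

        def classify(tags):
            """Return the first cartographic class whose rule tags all match."""
            for rule, cls in RULES:
                if all(tags.get(k) == v for k, v in rule.items()):
                    return cls
            return None   # feature not drawn at this scale

        features = [
            {"highway": "motorway", "ref": "A6"},
            {"tourism": "museum", "name": "Musee d'Orsay"},
            {"leisure": "park"},
        ]
        print([classify(f) for f in features])   # ['road_major', 'poi_tourism', None]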

  14. ARACHNID: A prototype object-oriented database tool for distributed systems

    Science.gov (United States)

    Younger, Herbert; Oreilly, John; Frogner, Bjorn

    1994-01-01

    This paper discusses the results of a Phase 2 SBIR project sponsored by NASA and performed by MIMD Systems, Inc. A major objective of this project was to develop specific concepts for improved performance in accessing large databases. An object-oriented and distributed approach was used for the general design, while a geographical decomposition was used as a specific solution. The resulting software framework is called ARACHNID. The Faint Source Catalog developed by NASA was the initial database testbed. This is a database of many gigabytes, where an order of magnitude improvement in query speed is being sought. This database contains faint infrared point sources obtained from telescope measurements of the sky. A geographical decomposition of this database is an attractive approach to dividing it into pieces. Each piece can then be searched on individual processors with only weak data linkage required between the processors. As a further demonstration of the concepts implemented in ARACHNID, a tourist information system is discussed. This version of ARACHNID is the commercial result of the project. It is a distributed, networked, database application where speed, maintenance, and reliability are important considerations. This paper focuses on the design concepts and technologies that form the basis for ARACHNID.

  15. DB2 9 for Linux, UNIX, and Windows Advanced Database Administration Certification Certification Study Guide

    CERN Document Server

    Sanders, Roger E

    2008-01-01

    Database administrators versed in DB2 wanting to learn more about advanced database administration activities and students wishing to gain knowledge to help them pass the DB2 9 UDB Advanced DBA certification exam will find this exhaustive reference invaluable. Written by two individuals who were part of the team that developed the certification exam, this comprehensive study guide prepares the student for challenging questions on database design; data partitioning and clustering; high availability diagnostics; performance and scalability; security and encryption; connectivity and networking; a

  16. Input vector optimization of feed-forward neural networks for fitting ab initio potential-energy databases

    Science.gov (United States)

    Malshe, M.; Raff, L. M.; Hagan, M.; Bukkapatnam, S.; Komanduri, R.

    2010-05-01

    The variation in the fitting accuracy of neural networks (NNs) when used to fit databases comprising potential energies obtained from ab initio electronic structure calculations is investigated as a function of the number and nature of the elements employed in the input vector to the NN. Ab initio databases for H2O2, HONO, Si5, and H2CCHBr were employed in the investigations. These systems were chosen so as to include four-, five-, and six-body systems containing first, second, third, and fourth row elements with a wide variety of chemical bonding and whose conformations cover a wide range of structures that occur under high-energy machining conditions and in chemical reactions involving cis-trans isomerizations, six different types of two-center bond ruptures, and two different three-center dissociation reactions. The ab initio databases for these systems were obtained using density functional theory/B3LYP, MP2, and MP4 methods with extended basis sets. A total of 31 input vectors were investigated. In each case, the elements of the input vector were chosen from interatomic distances, inverse powers of the interatomic distance, three-body angles, and dihedral angles. Both redundant and nonredundant input vectors were investigated. The results show that among all the input vectors investigated, the set employed in the Z-matrix specification of the molecular configurations in the electronic structure calculations gave the lowest NN fitting accuracy for both Si5 and vinyl bromide. The underlying reason for this result appears to be the discontinuity present in the dihedral angle for planar geometries. The use of trigonometric functions of the angles as input elements produced significantly improved fitting accuracy as this choice eliminates the discontinuity. The most accurate fitting was obtained when the elements of the input vector were taken to have the form Rij^(-n), where the Rij are the interatomic distances. When the Levenberg-Marquardt procedure was modified
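    The best-performing input representation reported above, elements of the form Rij^(-n) built from the interatomic distances, is straightforward to construct from Cartesian geometries. A minimal Python sketch with numpy; the four-atom geometry and the exponent are placeholders, not values from the study:

        import numpy as np

        def input_vector(coords, n=2):
            """Return the NN input vector [R_ij ** -n] over all atom pairs i < j."""
            coords = np.asarray(coords, dtype=float)
            i, j = np.triu_indices(len(coords), k=1)
            r_ij = np.linalg.norm(coords[i] - coords[j], axis=1)
            return r_ij ** (-n)

        # Placeholder 4-atom geometry (coordinates in angstroms)
        geometry = [(0.0, 0.7, -0.05), (0.0, -0.7, -0.05),
                    (0.8, 0.9, 0.35), (-0.8, -0.9, 0.35)]
        print(input_vector(geometry))   # 6 elements for a 4-atom system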

  17. Database Description - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name SAHG Alternative nam...h: Contact address Chie Motono Tel : +81-3-3599-8067 E-mail : Database classification Structure Databases - ...e databases - Protein properties Organism Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database description... Links: Original website information Database maintenance site The Molecular Profiling Research Center for D...stration Not available About This Database Database Description Download License Update History of This Database Site Policy | Contact Us Database Description - SAHG | LSDB Archive ...

  18. LiHe+ IN THE EARLY UNIVERSE: A FULL ASSESSMENT OF ITS REACTION NETWORK AND FINAL ABUNDANCES

    Energy Technology Data Exchange (ETDEWEB)

    Bovino, Stefano; Tacconi, Mario; Gianturco, Francesco A. [Department of Chemistry, Universita degli Studi di Roma 'La Sapienza', Piazzale A. Moro 5, 00185 Roma (Italy); Curik, Roman [J. Heyrovsky Institute of Physical Chemistry, Dolejskova 3, Prague (Czech Republic); Galli, Daniele, E-mail: fa.gianturco@caspur.it [INAF-Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, 50125 Firenze (Italy)

    2012-06-10

    We present the results of quantum calculations based on entirely ab initio methods for a variety of molecular processes and chemical reactions involving the LiHe+ ionic polar molecule. With the aid of these calculations, we derive accurate reaction rates and fitting expressions valid over a range of gas temperatures representative of the typical conditions of the pregalactic gas. With the help of a full chemical network, we then compute the evolution of the abundance of LiHe+ as a function of redshift in the early universe. Finally, we compare the relative abundance of LiHe+ with that of other polar cations formed in the same redshift interval.

  19. Corporate Data Network (CDN) data requirements task. Enterprise Model. Volume 1

    International Nuclear Information System (INIS)

    1985-11-01

    The NRC has initiated a multi-year program to centralize its information processing in a Corporate Data Network (CDN). The new information processing environment will include shared databases, telecommunications, office automation tools, and state-of-the-art software. Touche Ross and Company was contracted to perform a general data requirements analysis for shared databases and to develop a preliminary plan for implementation of the CDN concept. The Enterprise Model (Vol. 1) provided the NRC with agency-wide information requirements in the form of data entities and organizational demand patterns as the basis for clustering the entities into logical groups. The Data Dictionary (Vol. 2) provided the NRC with definitions and example attributes and properties for each entity. The Data Model (Vol. 3) defined logical databases and entity relationships within and between databases. The Preliminary Strategic Data Plan (Vol. 4) prioritized the development of databases and included a workplan and approach for implementation of the shared database component of the Corporate Data Network

  20. An adaptive wavelet-network model for forecasting daily total solar-radiation

    International Nuclear Information System (INIS)

    Mellit, A.; Benghanem, M.; Kalogirou, S.A.

    2006-01-01

    The combination of wavelet theory and neural networks has led to the development of wavelet networks. Wavelet-networks are feed-forward networks using wavelets as activation functions. Wavelet-networks have been used successfully in various engineering applications such as classification, identification and control problems. In this paper, the use of adaptive wavelet-network architecture in finding a suitable forecasting model for predicting the daily total solar-radiation is investigated. Total solar-radiation is considered as the most important parameter in the performance prediction of renewable energy systems, particularly in sizing photovoltaic (PV) power systems. For this purpose, daily total solar-radiation data have been recorded during the period extending from 1981 to 2001, by a meteorological station in Algeria. The wavelet-network model has been trained by using either the 19 years of data or one year of the data. In both cases the total solar radiation data corresponding to year 2001 was used for testing the model. The network was trained to accept and handle a number of unusual cases. Results indicate that the model predicts daily total solar-radiation values with a good accuracy of approximately 97% and the mean absolute percentage error is not more than 6%. In addition, the performance of the model was compared with different neural network structures and classical models. Training algorithms for wavelet-networks require smaller numbers of iterations when compared with other neural networks. The model can be used to fill missing data in weather databases. Additionally, the proposed model can be generalized and used in different locations and for other weather data, such as sunshine duration and ambient temperature. Finally, an application using the model for sizing a PV-power system is presented in order to confirm the validity of this model.
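    A wavelet network of the kind described above is a feed-forward network whose hidden units apply a mother wavelet to a scaled and translated copy of the input. The Python sketch below shows only the forward pass of a one-input network with Mexican-hat (Ricker) units; the parameters are illustrative and untrained, and the paper's adaptive architecture and training scheme are not reproduced:

        import numpy as np

        def ricker(t):
            """Mexican-hat mother wavelet."""
            return (1.0 - t ** 2) * np.exp(-0.5 * t ** 2)

        def wavelet_net(x, translations, scales, weights, bias=0.0):
            """One-input wavelet network: sum_k w_k * psi((x - b_k) / a_k) + bias."""
            t = (x - translations) / scales
            return float(np.dot(weights, ricker(t)) + bias)

        # Illustrative (untrained) parameters for three hidden wavelet units
        translations = np.array([0.2, 0.5, 0.8])
        scales = np.array([0.3, 0.3, 0.3])
        weights = np.array([0.6, -0.2, 0.9])

        print(wavelet_net(0.45, translations, scales, weights))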

  1. A comparative gene expression database for invertebrates

    Directory of Open Access Journals (Sweden)

    Ormestad Mattias

    2011-08-01

    Full Text Available Abstract Background As whole genome and transcriptome sequencing gets cheaper and faster, a great number of 'exotic' animal models are emerging, rapidly adding valuable data to the ever-expanding Evo-Devo field. All these new organisms serve as a fantastic resource for the research community, but the sheer amount of data, some published, some not, makes detailed comparison of gene expression patterns very difficult to summarize - a problem sometimes even noticeable within a single lab. The need to merge existing data with new information in an organized manner that is publicly available to the research community is now more necessary than ever. Description In order to offer a homogenous way of storing and handling gene expression patterns from a variety of organisms, we have developed the first web-based comparative gene expression database for invertebrates that allows species-specific as well as cross-species gene expression comparisons. The database can be queried by gene name, developmental stage and/or expression domains. Conclusions This database provides a unique tool for the Evo-Devo research community that allows the retrieval, analysis and comparison of gene expression patterns within or among species. In addition, this database enables a quick identification of putative syn-expression groups that can be used to initiate, among other things, gene regulatory network (GRN projects.

  2. Solar Sail Propulsion Technology Readiness Level Database

    Science.gov (United States)

    Adams, Charles L.

    2004-01-01

    The NASA In-Space Propulsion Technology (ISPT) Projects Office has been sponsoring 2 solar sail system design and development hardware demonstration activities over the past 20 months. Able Engineering Company (AEC) of Goleta, CA is leading one team and L Garde, Inc. of Tustin, CA is leading the other team. Component, subsystem and system fabrication and testing has been completed successfully. The goal of these activities is to advance the technology readiness level (TRL) of solar sail propulsion from 3 towards 6 by 2006. These activities will culminate in the deployment and testing of 20-meter solar sail system ground demonstration hardware in the 30 meter diameter thermal-vacuum chamber at NASA Glenn Plum Brook in 2005. This paper will describe the features of a computer database system that documents the results of the solar sail development activities to-date. Illustrations of the hardware components and systems, test results, analytical models, relevant space environment definition and current TRL assessment, as stored and manipulated within the database are presented. This database could serve as a central repository for all data related to the advancement of solar sail technology sponsored by the ISPT, providing an up-to-date assessment of the TRL of this technology. Current plans are to eventually make the database available to the Solar Sail community through the Space Transportation Information Network (STIN).

  3. Materials, processes, and environmental engineering network

    Science.gov (United States)

    White, Margo M.

    1993-01-01

    The Materials, Processes, and Environmental Engineering Network (MPEEN) was developed as a central holding facility for materials testing information generated by the Materials and Processes Laboratory. It contains information from other NASA centers and outside agencies, and also includes the NASA Environmental Information System (NEIS) and Failure Analysis Information System (FAIS) data. Environmental replacement materials information is a newly developed focus of MPEEN. This database is the NASA Environmental Information System, NEIS, which is accessible through MPEEN. Environmental concerns are addressed regarding materials identified by the NASA Operational Environment Team, NOET, to be hazardous to the environment. An environmental replacement technology database is contained within NEIS. Environmental concerns about materials are identified by NOET, and control or replacement strategies are formed. This database also contains the usage and performance characteristics of these hazardous materials. In addition to addressing environmental concerns, MPEEN contains one of the largest materials databases in the world. Over 600 users access this network on a daily basis. There is information available on failure analysis, metals and nonmetals testing, materials properties, standard and commercial parts, foreign alloy cross-reference, Long Duration Exposure Facility (LDEF) data, and Materials and Processes Selection List data.

  4. Database Description - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name PSCDB Alternative n...rial Science and Technology (AIST) Takayuki Amemiya E-mail: Database classification Structure Databases - Protein structure Database...554-D558. External Links: Original website information Database maintenance site Graduate School of Informat...available URL of Web services - Need for user registration Not available About This Database Database Descri...ption Download License Update History of This Database Site Policy | Contact Us Database Description - PSCDB | LSDB Archive ...

  5. Database Description - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name ASTRA Alternative n...tics Journal Search: Contact address Database classification Nucleotide Sequence Databases - Gene structure,...3702 Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description The database represents classified p...(10):1211-6. External Links: Original website information Database maintenance site National Institute of Ad... for user registration Not available About This Database Database Description Dow

  6. Research@ARL: Network Sciences

    Science.gov (United States)

    2013-03-01

    function in concert. Consider the behavior of social insects, such as bees and ants. Fish and birds are other examples of animals whose collective ... Tropical Watershed, Springer/Kluwer, 83–95, 2005. Lehner, B. and Döll, P.: Development and validation of a global database of lakes, reservoirs and wetlands ... what it would be in an unperturbed network. A biological network with this sensitivity to error would not survive for very long in the wild. For

  7. Computational infrastructure for law enforcement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Lades, M.; Kunz, C.; Strikos, I.

    1997-02-01

    This project planned to demonstrate the leverage of enhanced computational infrastructure for law enforcement by demonstrating the face recognition capability at LLNL. The project implemented a face finder module extending the segmentation capabilities of the existing face recognition system, so that it could process different image formats and sizes, and created a pilot network-accessible image database for the demonstration of face recognition capabilities. The project was funded at $40k (2 man-months) for a feasibility study. It investigated several essential components of a networked face recognition system which could help identify, apprehend, and convict criminals.

  8. Database Description - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ase Description General information of database Database name RPD Alternative name Rice Proteome Database...titute of Crop Science, National Agriculture and Food Research Organization Setsuko Komatsu E-mail: Database... classification Proteomics Resources Plant databases - Rice Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database... description Rice Proteome Database contains information on protei...and entered in the Rice Proteome Database. The database is searchable by keyword,

  9. Database Description - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name PLACE Alternative name A Database...Kannondai, Tsukuba, Ibaraki 305-8602, Japan National Institute of Agrobiological Sciences E-mail : Databas...e classification Plant databases Organism Taxonomy Name: Tracheophyta Taxonomy ID: 58023 Database...99, Vol.27, No.1 :297-300 External Links: Original website information Database maintenance site National In...- Need for user registration Not available About This Database Database Descripti

  10. Yeast Augmented Network Analysis (YANA: a new systems approach to identify therapeutic targets for human genetic diseases [v1; ref status: indexed, http://f1000r.es/3gk

    Directory of Open Access Journals (Sweden)

    David J. Wiley

    2014-06-01

    Full Text Available Genetic interaction networks that underlie most human diseases are highly complex and poorly defined. Better-defined networks will allow identification of a greater number of therapeutic targets. Here we introduce our Yeast Augmented Network Analysis (YANA approach and test it with the X-linked spinal muscular atrophy (SMA disease gene UBA1. First, we express UBA1 and a mutant variant in fission yeast and use high-throughput methods to identify fission yeast genetic modifiers of UBA1. Second, we analyze available protein-protein interaction network databases in both fission yeast and human to construct UBA1 genetic networks. Third, from these networks we identified potential therapeutic targets for SMA. Finally, we validate one of these targets in a vertebrate (zebrafish SMA model. This study demonstrates the power of combining synthetic and chemical genetics with a simple model system to identify human disease gene networks that can be exploited for treating human diseases.

  11. Self-aligning and compressed autosophy video databases

    Science.gov (United States)

    Holtz, Klaus E.

    1993-04-01

    Autosophy, an emerging new science, explains 'self-assembling structures,' such as crystals or living trees, in mathematical terms. This research provides a new mathematical theory of 'learning' and a new 'information theory' which permits the growing of self-assembling data networks in a computer memory, similar to the growing of 'data crystals' or 'data trees', without data processing or programming. Autosophy databases are educated very much like a human child to organize their own internal data storage. Input patterns, such as written questions or images, are converted to points in a mathematical omni dimensional hyperspace. The input patterns are then associated with output patterns, such as written answers or images. Omni dimensional information storage will result in enormous data compression because each pattern fragment is only stored once. Pattern recognition in the text or image files is greatly simplified by the peculiar omni dimensional storage method. Video databases will absorb input images from a TV camera and associate them with textual information. The 'black box' operations are totally self-aligning where the input data will determine their own hyperspace storage locations. Self-aligning autosophy databases may lead to a new generation of brain-like devices.

  12. Database Description - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name JSNP Alternative nam...n Science and Technology Agency Creator Affiliation: Contact address E-mail : Database...sapiens Taxonomy ID: 9606 Database description A database of about 197,000 polymorphisms in Japanese populat...1):605-610 External Links: Original website information Database maintenance site Institute of Medical Scien...er registration Not available About This Database Database Description Download License Update History of This Database

  13. Database Description - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ase Description General information of database Database name RED Alternative name Rice Expression Database...enome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice Database classifi...cation Microarray, Gene Expression Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database descripti... Article title: Rice Expression Database: the gateway to rice functional genomics...nt Science (2002) Dec 7 (12):563-564 External Links: Original website information Database maintenance site

  14. Database Description - ConfC | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name ConfC Alternative name Database...amotsu Noguchi Tel: 042-495-8736 E-mail: Database classification Structure Database...s - Protein structure Structure Databases - Small molecules Structure Databases - Nucleic acid structure Database... services - Need for user registration - About This Database Database Description Download License Update History of This Database... Site Policy | Contact Us Database Description - ConfC | LSDB Archive ...

  15. Karst database development in Minnesota: Design and data assembly

    Science.gov (United States)

    Gao, Y.; Alexander, E.C.; Tipping, R.G.

    2005-01-01

    The Karst Feature Database (KFD) of Minnesota is a relational GIS-based Database Management System (DBMS). Previous karst feature datasets used inconsistent attributes to describe karst features in different areas of Minnesota. Existing metadata were modified and standardized to provide comprehensive metadata for all the karst features in Minnesota. Microsoft Access 2000 and ArcView 3.2 were used to develop this working database. Existing county and sub-county karst feature datasets have been assembled into the KFD, which is capable of visualizing and analyzing the entire data set. By November 17, 2002, 11,682 karst features were stored in the KFD of Minnesota. Data tables are stored in a Microsoft Access 2000 DBMS and linked to corresponding ArcView applications. The current KFD of Minnesota has been moved from a Windows NT server to a Windows 2000 Citrix server accessible to researchers and planners through networked interfaces. © Springer-Verlag 2005.

  16. SynechoNET: integrated protein-protein interaction database of a model cyanobacterium Synechocystis sp. PCC 6803

    OpenAIRE

    Kim, Woo-Yeon; Kang, Sungsoo; Kim, Byoung-Chul; Oh, Jeehyun; Cho, Seongwoong; Bhak, Jong; Choi, Jong-Soon

    2008-01-01

    Background Cyanobacteria are model organisms for studying photosynthesis, carbon and nitrogen assimilation, evolution of plant plastids, and adaptability to environmental stresses. Despite many studies on cyanobacteria, there is no web-based database of their regulatory and signaling protein-protein interaction networks to date. Description We report a database and website SynechoNET that provides predicted protein-protein interactions. SynechoNET shows cyanobacterial domain-domain interactio...

  17. ZeBase: an open-source relational database for zebrafish laboratories.

    Science.gov (United States)

    Hensley, Monica R; Hassenplug, Eric; McPhail, Rodney; Leung, Yuk Fai

    2012-03-01

    ZeBase is an open-source relational database for zebrafish inventory. It is designed for the recording of genetic, breeding, and survival information of fish lines maintained in a single- or multi-laboratory environment. Users can easily access ZeBase through standard web-browsers anywhere on a network. Convenient search and reporting functions are available to facilitate routine inventory work; such functions can also be automated by simple scripting. Optional barcode generation and scanning are also built-in for easy access to the information related to any fish. Further information of the database and an example implementation can be found at http://zebase.bio.purdue.edu.

  18. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

    Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs). The book first takes a look at ANSI database standards and DBMS applications and components. Discussion focuses on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders on logical database, interrogation, and phy

  19. Social Networks and the Environment

    OpenAIRE

    Julio Videras

    2013-01-01

    This review discusses empirical research on social networks and the environment; it summarizes findings from representative studies and the conceptual frameworks social scientists use to examine the role of social networks. The article presents basic concepts in social network analysis, summarizes common challenges of empirical research on social networks, and outlines areas for future research. Finally, the article discusses the normative and positive meanings of social networks.

  20. Performance Parameters Analysis of an XD3P Peugeot Engine Using Artificial Neural Networks (ANN) Concept in MATLAB

    Science.gov (United States)

    Rangaswamy, T.; Vidhyashankar, S.; Madhusudan, M.; Bharath Shekar, H. R.

    2015-04-01

    Current trends in engineering follow the basic rule of innovation in mechanical engineering. For engineers to be efficient, problems need to be viewed from a multidimensional perspective. One such methodology is the fusion of technologies from other disciplines in order to solve problems. This paper deals with the application of neural networks to analyze the performance parameters of an XD3P Peugeot engine (used in the Ministry of Defence). The work is divided into two main stages. In the first stage, experiments on an IC engine are carried out in order to obtain the primary data. In the second stage, the primary database thus formed is used to design and implement a predictive neural network in order to analyze how the output parameters vary with respect to each other. A mathematical governing equation for the neural network is obtained. The obtained polynomial equation describes the characteristic behavior of the built neural network system. Finally, a comparative study of the results is carried out.
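    The record above describes fitting a predictive neural network to engine test data and extracting a polynomial governing equation. A minimal sketch of that idea is shown below, assuming synthetic load/efficiency data and scikit-learn's MLPRegressor; all values and variable names are illustrative, not the paper's actual XD3P data or model.

```python
# Minimal sketch: fit a small feed-forward neural network to (synthetic)
# engine test data and compare it with a polynomial fit, in the spirit of
# the record above. Data values and variable names are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical primary data: engine load (%) vs. brake thermal efficiency (%)
load = np.array([0, 20, 40, 60, 80, 100], dtype=float).reshape(-1, 1)
bte = np.array([0.0, 14.5, 22.1, 27.8, 30.2, 29.1])

# Predictive neural network trained on the primary database
net = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(load, bte)

# A polynomial "governing equation" fitted to the same data for comparison
coeffs = np.polyfit(load.ravel(), bte, deg=3)
poly = np.poly1d(coeffs)

for x in (30.0, 70.0):
    print(f"load {x:5.1f}%  ANN {net.predict([[x]])[0]:5.2f}  poly {poly(x):5.2f}")
```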

  1. Databases and coordinated research projects at the IAEA on atomic processes in plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Braams, Bastiaan J.; Chung, Hyun-Kyung [Nuclear Data Section, NAPC Division, International Atomic Energy Agency P. O. Box 100, Vienna International Centre, AT-1400 Vienna (Austria)

    2012-05-25

    The Atomic and Molecular Data Unit at the IAEA works with a network of national data centres to encourage and coordinate production and dissemination of fundamental data for atomic, molecular and plasma-material interaction (A+M/PMI) processes that are relevant to the realization of fusion energy. The Unit maintains numerical and bibliographical databases and has started a Wiki-style knowledge base. The Unit also contributes to A+M database interface standards and provides a search engine that offers a common interface to multiple numerical A+M/PMI databases. Coordinated Research Projects (CRPs) bring together fusion energy researchers and atomic, molecular and surface physicists for joint work towards the development of new data and new methods. The databases and current CRPs on A+M/PMI processes are briefly described here.

  2. Databases and coordinated research projects at the IAEA on atomic processes in plasmas

    Science.gov (United States)

    Braams, Bastiaan J.; Chung, Hyun-Kyung

    2012-05-01

    The Atomic and Molecular Data Unit at the IAEA works with a network of national data centres to encourage and coordinate production and dissemination of fundamental data for atomic, molecular and plasma-material interaction (A+M/PMI) processes that are relevant to the realization of fusion energy. The Unit maintains numerical and bibliographical databases and has started a Wiki-style knowledge base. The Unit also contributes to A+M database interface standards and provides a search engine that offers a common interface to multiple numerical A+M/PMI databases. Coordinated Research Projects (CRPs) bring together fusion energy researchers and atomic, molecular and surface physicists for joint work towards the development of new data and new methods. The databases and current CRPs on A+M/PMI processes are briefly described here.

  3. Databases and coordinated research projects at the IAEA on atomic processes in plasmas

    International Nuclear Information System (INIS)

    Braams, Bastiaan J.; Chung, Hyun-Kyung

    2012-01-01

    The Atomic and Molecular Data Unit at the IAEA works with a network of national data centres to encourage and coordinate production and dissemination of fundamental data for atomic, molecular and plasma-material interaction (A+M/PMI) processes that are relevant to the realization of fusion energy. The Unit maintains numerical and bibliographical databases and has started a Wiki-style knowledge base. The Unit also contributes to A+M database interface standards and provides a search engine that offers a common interface to multiple numerical A+M/PMI databases. Coordinated Research Projects (CRPs) bring together fusion energy researchers and atomic, molecular and surface physicists for joint work towards the development of new data and new methods. The databases and current CRPs on A+M/PMI processes are briefly described here.

  4. MetReS, an Efficient Database for Genomic Applications.

    Science.gov (United States)

    Vilaplana, Jordi; Alves, Rui; Solsona, Francesc; Mateo, Jordi; Teixidó, Ivan; Pifarré, Marc

    2018-02-01

    MetReS (Metabolic Reconstruction Server) is a genomic database that is shared between two software applications that address important biological problems. Biblio-MetReS is a data-mining tool that enables the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the processes of interest and their function. The main goal of this work was to identify the areas where the performance of the MetReS database could be improved and to test whether this improvement would scale to larger datasets and more complex types of analysis. The study was started with a relational database, MySQL, which is the current database server used by the applications. We also tested the performance of an alternative data-handling framework, Apache Hadoop. Hadoop is currently used for large-scale data processing. We found that this data-handling framework is likely to greatly improve the efficiency of the MetReS applications as the dataset and the processing needs increase by several orders of magnitude, as expected to happen in the near future.

  5. A New Reversible Database Watermarking Approach with Firefly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Mustafa Bilgehan Imamoglu

    2017-01-01

    Full Text Available Up-to-date information is crucial in many fields such as medicine, science, and the stock market, where data should be distributed to clients from a centralized database. Shared databases are usually stored in data centers where they are distributed over an insecure public-access network, the Internet. Sharing may result in a number of problems such as unauthorized copies, alteration of data, and distribution to unauthorized people for reuse. Researchers have proposed using watermarking to prevent such problems and to claim digital rights. Many methods have been proposed recently to watermark databases to protect the digital rights of owners. In particular, optimization-based watermarking techniques have drawn attention, as they result in lower distortion and improved watermark capacity. Difference expansion watermarking (DEW) with the Firefly Algorithm (FFA), a bio-inspired optimization technique, is proposed in this work to embed watermarks into relational databases. The best attribute values to yield lower distortion and increased watermark capacity are selected efficiently by the FFA. Experimental results indicate that the FFA has reduced complexity and results in less distortion and improved watermark capacity compared to similar works reported in the literature.
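    The core embedding step named in this record, difference expansion watermarking, can be illustrated without the firefly optimization that selects which attributes to modify. The sketch below applies Tian-style difference expansion to a single pair of integer attribute values; the values are invented and the FFA search is deliberately omitted.

```python
# Minimal sketch of difference expansion watermarking (Tian's transform)
# applied to a pair of integer attribute values. The firefly-algorithm
# search that selects which attributes to expand is not shown here.

def de_embed(x: int, y: int, bit: int):
    """Embed one watermark bit into the pair (x, y) by expanding their difference."""
    l = (x + y) // 2          # integer average (kept invariant)
    h = x - y                 # difference to be expanded
    h2 = 2 * h + bit          # expanded difference carrying the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2: int, y2: int):
    """Recover the watermark bit and the original pair from a watermarked pair."""
    h2 = x2 - y2
    l = (x2 + y2) // 2
    bit = h2 % 2
    h = h2 // 2
    return (l + (h + 1) // 2, l - h // 2), bit

x2, y2 = de_embed(108, 103, 1)     # hypothetical attribute values
orig, bit = de_extract(x2, y2)
print(x2, y2, orig, bit)           # -> 111 100 (108, 103) 1
```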

  6. Development of the biosphere code BIOMOD: final report

    International Nuclear Information System (INIS)

    Kane, P.

    1983-05-01

    Final report to DoE on the development of the biosphere code BIOMOD. The work carried out under the contract is itemised. Reference is made to the six documents issued along with the final report. These consist of two technical notes issued as interim consultative documents, a user's guide and a programmer's guide to BIOMOD, a database description, a program test document and a technical note entitled "BIOMOD - preliminary findings". (author)

  7. Exploring Vietnamese co-authorship patterns in social sciences with basic network measures of 2008-2017 Scopus data.

    Science.gov (United States)

    Ho, Tung Manh; Nguyen, Ha Viet; Vuong, Thu-Trang; Dam, Quang-Minh; Pham, Hiep-Hung; Vuong, Quan-Hoang

    2017-01-01

    Background: Collaboration is a common occurrence among Vietnamese scientists; however, insights into Vietnamese scientific collaborations have been scarce. On the other hand, the application of social network analysis in studying science collaboration has gained much attention all over the world. The technique could be employed to explore Vietnam's scientific community. Methods: This paper employs network theory to explore characteristics of a network of 412 Vietnamese social scientists whose papers can be found indexed in the Scopus database. Two basic network measures, density and clustering coefficient, were taken, and the entire network was studied in comparison with two of its largest components. Results: The network's connections are very sparse, with a density of only 0.47%, while the clustering coefficient is very high (58.64%). This suggests an inefficient dissemination of information, knowledge, and expertise in the network. Secondly, the disparity in levels of connection among individuals indicates that the network would easily fall apart if a few highly-connected nodes were removed. Finally, the two largest components of the network were found to differ from the entire network in terms of these measures, and were both led by the most productive and well-connected researchers. Conclusions: High clustering and low density seem to be tied to inefficient dissemination of expertise among Vietnamese social scientists, and consequently low scientific output. Also low in robustness, the network shows the potential of an intellectual elite composed of well-connected, productive, and socially significant individuals.
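    The two basic measures used in this record, density and the clustering coefficient, can be computed directly with networkx. The sketch below does so on a toy co-authorship edge list (not the actual Scopus data) and compares the whole graph with its largest connected component.

```python
# Minimal sketch: compute the two basic measures used in the record above
# (density and average clustering coefficient) on a toy co-authorship network.
# The edge list is illustrative, not the actual Scopus data.
import networkx as nx

edges = [("A", "B"), ("A", "C"), ("B", "C"),   # a small, tightly knit cluster
         ("D", "E"), ("E", "F"),               # a sparse chain
         ("G", "H")]
G = nx.Graph(edges)

print("density           ", nx.density(G))
print("avg clustering    ", nx.average_clustering(G))

# Compare the whole network with its largest connected component
largest = G.subgraph(max(nx.connected_components(G), key=len))
print("largest component ", largest.number_of_nodes(), "nodes,",
      "density", nx.density(largest))
```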

  8. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ase Description General information of database Database name RMG Alternative name ...raki 305-8602, Japan National Institute of Agrobiological Sciences E-mail : Database... classification Nucleotide Sequence Databases Organism Taxonomy Name: Oryza sativa Japonica Group Taxonomy ID: 39947 Database...rnal: Mol Genet Genomics (2002) 268: 434–445 External Links: Original website information Database...available URL of Web services - Need for user registration Not available About This Database Database Descri

  9. Social networking sites and older users - a systematic review

    OpenAIRE

    Nef, Tobias; Ganea, Raluca L.; Müri, René M.; Mosimann, Urs P.

    2017-01-01

    BACKGROUND Social networking sites can be beneficial for senior citizens to promote social participation and to enhance intergenerational communication. Particularly for older adults with impaired mobility, social networking sites can help them to connect with family members and other active social networking users. The aim of this systematic review is to give an overview of existing scientific literature on social networking in older users. METHODS Computerized databases were sea...

  10. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  11. The LAILAPS search engine: a feature model for relevance ranking in life science databases.

    Science.gov (United States)

    Lange, Matthias; Spies, Karl; Colmsee, Christian; Flemming, Steffen; Klapperstück, Matthias; Scholz, Uwe

    2010-03-25

    Efficient and effective information retrieval in life sciences is one of the most pressing challenges in bioinformatics. The incredible growth of life science databases into a vast network of interconnected information systems is both a major challenge and a great opportunity for life science research. The knowledge found on the Web, in particular in life-science databases, is a valuable resource. In order to bring it to the scientist's desktop, it is essential to have well-performing search engines. Here, neither the response time nor the number of results is the decisive factor; the most crucial factor for millions of query results is the relevance ranking. In this paper, we present a feature model for relevance ranking in life science databases and its implementation in the LAILAPS search engine. Motivated by the observation of user behavior during inspection of search engine results, we condensed a set of 9 relevance-discriminating features. These features are intuitively used by scientists who briefly screen database entries for potential relevance. The features are both sufficient to estimate the potential relevance and efficiently quantifiable. The derivation of a relevance prediction function that computes the relevance from these features constitutes a regression problem. To solve this problem, we used artificial neural networks that were trained with a reference set of relevant database entries for 19 protein queries. Supporting a flexible text index and a simple data import format, these concepts are implemented in the LAILAPS search engine. It can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases. LAILAPS is publicly available for SWISSPROT data at http://lailaps.ipk-gatersleben.de.
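    The regression step described here (mapping a small set of relevance-discriminating features to a relevance score with a neural network, then ranking entries by that score) can be sketched as follows. The feature vectors, training labels and accession names are invented for illustration; this is not the LAILAPS implementation.

```python
# Minimal sketch of feature-based relevance ranking: a small neural network
# maps a 9-dimensional feature vector for each database entry to a relevance
# score, and entries are returned sorted by that score. Feature values and
# training labels below are invented for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.random((50, 9))              # 9 relevance-discriminating features
y_train = X_train[:, :3].mean(axis=1)      # toy "reference relevance" labels

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(X_train, y_train)

entries = ["P12345", "Q67890", "O11111"]   # hypothetical database accessions
X_query = rng.random((3, 9))               # their feature vectors for one query
scores = model.predict(X_query)

for acc, s in sorted(zip(entries, scores), key=lambda t: -t[1]):
    print(f"{acc}\trelevance={s:.3f}")
```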

  12. Database Description - DGBY | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name DGBY Alternative name Database...EL: +81-29-838-8066 E-mail: Database classification Microarray Data and other Gene Expression Databases Orga...nism Taxonomy Name: Saccharomyces cerevisiae Taxonomy ID: 4932 Database descripti...-called phenomics). We uploaded these data on this website which is designated DGBY(Database for Gene expres...ma J, Ando A, Takagi H. Journal: Yeast. 2008 Mar;25(3):179-90. External Links: Original website information Database

  13. Database Description - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name KOME Alternative nam... Sciences Plant Genome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice ...Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description Information about approximately ...Hayashizaki Y, Kikuchi S. Journal: PLoS One. 2007 Nov 28; 2(11):e1235. External Links: Original website information Database...OS) Rice mutant panel database (Tos17) A Database of Plant Cis-acting Regulatory

  14. An Analysis of Weakly Consistent Replication Systems in an Active Distributed Network

    OpenAIRE

    Amit Chougule; Pravin Ghewari

    2011-01-01

    With the sudden increase in heterogeneity and distribution of data in wide-area networks, more flexible, efficient and autonomous approaches for management and data distribution are needed. In recent years, the proliferation of inter-networks and distributed applications has increased the demand for geographically-distributed replicated databases. The architecture of Bayou provides features that address the needs of database storage of world-wide applications. Key is the use of weak consisten...

  15. [The therapeutic drug monitoring network server of tacrolimus for Chinese renal transplant patients].

    Science.gov (United States)

    Deng, Chen-Hui; Zhang, Guan-Min; Bi, Shan-Shan; Zhou, Tian-Yan; Lu, Wei

    2011-07-01

    The aim of this study is to develop a therapeutic drug monitoring (TDM) network server for tacrolimus in Chinese renal transplant patients, which can help doctors manage patients' information and provide three levels of prediction. The database management system MySQL was employed to build and manage the database of patients' and doctors' information, and hypertext mark-up language (HTML) and Java Server Pages (JSP) technology were employed to construct the network server for database management. Based on the population pharmacokinetic model of tacrolimus for Chinese renal transplant patients, the above programming languages were used to construct the population prediction and subpopulation prediction modules. Based on the Bayesian principle and maximization of the posterior probability function, an objective function was established and minimized by an optimization algorithm to estimate a patient's individual pharmacokinetic parameters. The network server is shown to provide the basic functions for database management and three levels of prediction to help doctors optimize the tacrolimus regimen for Chinese renal transplant patients.
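    The third prediction level described here, Bayesian estimation of individual pharmacokinetic parameters by maximizing the posterior probability, can be sketched as the minimization of a negative log-posterior. The one-compartment model, prior values and observations below are illustrative placeholders, not the published tacrolimus population model used by the server.

```python
# Minimal sketch of maximum a posteriori (MAP) Bayesian estimation of
# individual pharmacokinetic parameters by minimizing a negative log-posterior.
# The one-compartment IV bolus model, priors and observations are invented.
import numpy as np
from scipy.optimize import minimize

dose = 5.0                                  # mg, hypothetical
t_obs = np.array([1.0, 4.0, 12.0])          # h
c_obs = np.array([0.40, 0.30, 0.12])        # mg/L, hypothetical observations

pop_mean = np.log(np.array([0.8, 10.0]))    # prior means of log(CL), log(V)
omega2 = np.array([0.1, 0.1])               # prior (between-subject) variances
sigma = 0.05                                # residual SD (mg/L)

def neg_log_posterior(theta):
    cl, v = np.exp(theta)                   # work on the log scale
    pred = dose / v * np.exp(-cl / v * t_obs)
    loglik = -0.5 * np.sum(((c_obs - pred) / sigma) ** 2)
    logprior = -0.5 * np.sum((theta - pop_mean) ** 2 / omega2)
    return -(loglik + logprior)

fit = minimize(neg_log_posterior, x0=pop_mean, method="Nelder-Mead")
cl_hat, v_hat = np.exp(fit.x)
print(f"individual CL = {cl_hat:.2f} L/h, V = {v_hat:.1f} L")
```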

  16. Construction of a PIXE database for supporting PIXE studies. Database of experimental conditions and elemental concentration for various samples

    International Nuclear Information System (INIS)

    Itoh, J.; Saitoh, Y.; Futatsugawa, S.; Sera, K.; Ishii, K.

    2007-01-01

    A database of PIXE data, which have been accumulated at NMCC, has been constructed. In order to fill out the database, new data are obtained, as far as possible, for the kinds of samples for which few data exist. In addition, data for different measuring conditions are obtained for several samples. Because the number of γ-ray spectra obtained with an HPGe detector for the purpose of analyzing light elements such as fluorine is overwhelmingly small in comparison with that of usual PIXE spectra, γ-ray spectra and elemental concentrations of fluorine are obtained, as far as possible, for food, environmental and hair samples. In addition, data taken with an in-air PIXE system have been obtained for various samples. As a result, the database covers contents over various research fields, and it is expected to be useful for researchers who make use of analytical techniques. It is expected that this work will encourage many researchers to participate in the database and to calibrate against each other in order to establish reliable analytical techniques. Moreover, the final goal of the database is to establish control concentration values for typical samples. As the first step towards establishing the control values, average elemental concentrations and their standard deviations in hair samples taken from 405 healthy Japanese subjects are obtained and tabulated according to sex and age. (author)

  17. Complex networks-based energy-efficient evolution model for wireless sensor networks

    Energy Technology Data Exchange (ETDEWEB)

    Zhu Hailin [Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, P.O. Box 106, Beijing 100876 (China)], E-mail: zhuhailin19@gmail.com; Luo Hong [Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, P.O. Box 106, Beijing 100876 (China); Peng Haipeng; Li Lixiang; Luo Qun [Information Secure Center, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, P.O. Box 145, Beijing 100876 (China)

    2009-08-30

    Based on complex networks theory, we present two self-organized energy-efficient models for wireless sensor networks in this paper. The first model constructs the wireless sensor network according to the connectivity and remaining energy of each sensor node; thus it can produce scale-free networks, which are tolerant of random errors. In the second model, we not only consider the remaining energy, but also introduce a constraint on the number of links per node. This model can make the energy consumption of the whole network more balanced. Finally, we present numerical experiments for the two models.
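    The first model described above, in which a growing sensor network attaches new nodes preferentially according to both connectivity and remaining energy, can be sketched in a few lines. The parameters and the uniform random energy values below are illustrative assumptions, not the authors' model calibration.

```python
# Minimal sketch of degree-and-energy preferential attachment for a sensor
# network, in the spirit of the first model described above. Parameters and
# the energy model are illustrative only.
import random

random.seed(1)
N, m = 100, 2                      # final size, links added per new node
degree = {0: 1, 1: 1}              # start from a single link 0-1
energy = {0: random.random(), 1: random.random()}
edges = [(0, 1)]

for new in range(2, N):
    # attachment probability ~ degree * remaining energy
    nodes = sorted(degree)
    weights = [degree[i] * energy[i] for i in nodes]
    targets = set()
    while len(targets) < min(m, len(nodes)):
        targets.add(random.choices(nodes, weights=weights, k=1)[0])
    for t in targets:
        edges.append((new, t))
        degree[t] += 1
    degree[new] = len(targets)
    energy[new] = random.random()

top = sorted(degree, key=degree.get, reverse=True)[:5]
print("hub degrees:", [degree[i] for i in top])
```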

  18. Complex networks-based energy-efficient evolution model for wireless sensor networks

    International Nuclear Information System (INIS)

    Zhu Hailin; Luo Hong; Peng Haipeng; Li Lixiang; Luo Qun

    2009-01-01

    Based on complex networks theory, we present two self-organized energy-efficient models for wireless sensor networks in this paper. The first model constructs the wireless sensor network according to the connectivity and remaining energy of each sensor node; thus it can produce scale-free networks, which are tolerant of random errors. In the second model, we not only consider the remaining energy, but also introduce a constraint on the number of links per node. This model can make the energy consumption of the whole network more balanced. Finally, we present numerical experiments for the two models.

  19. The Environmental Data Exchange Network for Inland Water (EDEN-IW) project

    DEFF Research Database (Denmark)

    Stjernholm, M.; Sortkjær, O.; Papageorgiou, A.

    The final report describes the development of the EDEN-IW prototype for user access to heterogeneous databases of inland water quality data through the EDEN-IW system.

  20. Integration and visualization of non-coding RNA and protein interaction networks

    OpenAIRE

    Junge, Alexander; Refsgaard, Jan Christian; Garde, Christian; Pan, Xiaoyong; Santos Delgado, Alberto; Anthon, Christian; Alkan, Ferhat; von Mering, Christian; Workman, Christopher; Jensen, Lars Juhl; Gorodkin, Jan

    2015-01-01

    Non-coding RNAs (ncRNAs) fulfill a diverse set of biological functions relying on interactions with other molecular entities. The advent of new experimental and computational approaches makes it possible to study ncRNAs and their associations on an unprecedented scale. We present RAIN (RNA Association and Interaction Networks) - a database that combines ncRNA-ncRNA, ncRNA-mRNA and ncRNA-protein interactions with large-scale protein association networks available in the STRING database. By int...

  1. Implementation of the BDFGEOTHERM Database (Geothermal Fluids in Switzerland) on Google Earth - Final report

    Energy Technology Data Exchange (ETDEWEB)

    Sonney, R.; Vuataz, F.-D.; Cattin, S.

    2008-12-15

    The BDFGeotherm database, compiled in 2007 in ACCESS, was modified to improve its availability and attractiveness by using the free Google Earth software and the CREGE web site. This database gathers existing geothermal data, generally widely dispersed and often difficult to reach, into a user-friendly tool. Downloading the file 'BDFGeotherm.kmz' from the CREGE web site makes it possible to visualize a total of 84 geothermal sites from Switzerland and neighbouring areas. Each one is represented with a pinpoint whose colour corresponds to a temperature range. A large majority of sites is located in the northern part of the Jura Mountains and in the upper Rhone Valley. General information about water use, geology, flow rate, temperature and mineralization is given in a small window by clicking on the desired pinpoint. Moreover, two links to an Internet address are available for each site in each window, allowing the user to return to the CREGE web site and to obtain more details on each sampling point, such as: geographical description, reservoir geology, hydraulics, hydrochemistry, isotopes and geothermal parameters. For a limited number of sites, photos and a geological log can be viewed and exported. (author)
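    The Google Earth layer described here is a KMZ/KML file of placemarks with descriptive balloons. A minimal sketch of writing one such placemark with the standard library is shown below; the coordinates, site name and attribute values are invented placeholders, not entries from BDFGeotherm.

```python
# Minimal sketch: write a single KML placemark with a description balloon,
# similar in spirit to the pinpoints in BDFGeotherm.kmz. The coordinates and
# attribute values below are invented placeholders.
KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>{name}</name>
      <description><![CDATA[
        Use: {use}<br/>Flow rate: {flow} L/min<br/>
        Temperature: {temp} °C<br/>Mineralization: {tds} mg/L
      ]]></description>
      <Point><coordinates>{lon},{lat},0</coordinates></Point>
    </Placemark>
  </Document>
</kml>
"""

with open("site_example.kml", "w", encoding="utf-8") as fh:
    fh.write(KML_TEMPLATE.format(name="Example geothermal site", use="balneology",
                                 flow=250, temp=29.5, tds=1400,
                                 lon=7.10, lat=46.20))
print("wrote site_example.kml")
```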

  2. Cisco Networking All-in-One For Dummies

    CERN Document Server

    Tetz, Edward

    2011-01-01

    A helpful guide on all things Cisco. Do you wish that the complex topics of routers, switches, and networking could be presented in a simple, understandable presentation? With Cisco Networking All-in-One For Dummies, they are! This expansive reference is packed with all the information you need to learn to use Cisco routers and switches to develop and manage secure Cisco networks. This straightforward-but-fun guide offers expansive coverage of Cisco and breaks down intricate subjects such as networking, virtualization, and database technologies into easily digestible pieces. Drills down complex

  3. Final report of the 'Nordic thermal-hydraulic and safety network (NOTNET)' - Project

    International Nuclear Information System (INIS)

    Tuunanen, J.; Tuomainen, M.

    2005-04-01

    A Nordic network for thermal-hydraulics and nuclear safety research was started. The idea of the network is to combine the resources of different research teams in order to carry out more ambitious and extensive research programs than would be possible for the individual teams. From the very beginning, the end users of the research results have been integrated into the network. The aim of the network is to benefit the partners involved in nuclear energy in the Nordic countries (power companies, reactor vendors, safety regulators, research units). The first task within the project was to describe the resources (personnel, know-how, simulation tools, test facilities) of the various teams. The next step was to discuss their research needs with the end users. Based on these steps, a few of the most important research topics with defined goals were selected, and coarse road maps were prepared for reaching the targets. These road maps will be used as a starting point for planning the actual research projects in the future. The organisation and work plan for the network were established. National coordinators were appointed, as well as contact persons in each participating organisation, whether research unit or end user. This organisation scheme is valid for the short-term operation of NOTNET, when only Nordic organisations take part in the work. Later on, it is possible to enlarge the network, e.g. within an EC framework programme. The network can now start preparing project proposals and searching for funding for the first common research projects. (au)

  4. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  5. Using crowdsourcing to prioritize bicycle network improvements : final report.

    Science.gov (United States)

    2016-04-01

    Effort to improve the bicycle route network using crowdsourced data is a powerful means : of incorporating citizens in infrastructure improvement decisions, which will improve : livability by maximizing the benefit of the bicycle infrastructure fundi...

  6. Social contagions on correlated multiplex networks

    Science.gov (United States)

    Wang, Wei; Cai, Meng; Zheng, Muhua

    2018-06-01

    The existence of interlayer degree correlations has been revealed by abundant multiplex network analyses. However, how they affect the dynamics of social contagions remains largely unknown. In this paper, we propose a non-Markovian social contagion model in multiplex networks with inter-layer degree correlations to describe behavior spreading, and develop an edge-based compartmental (EBC) theory to describe the model. We find that multiplex networks promote the final behavior adoption size. Remarkably, the growth pattern of the final behavior adoption size, versus the behavioral information transmission probability, changes from discontinuous to continuous once the behavior adoption threshold in one layer is decreased. We finally show that the inter-layer degree correlations play a role in the final behavior adoption size but have no effect on the growth pattern, which coincides with the prediction of the suggested theory.
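    A toy version of the threshold-driven (non-Markovian) adoption process studied here can be simulated directly on a two-layer multiplex: each node accumulates pieces of behavioral information received over both layers and adopts once its threshold is reached. The sketch below uses invented parameters and random layers, and it does not reproduce the paper's edge-based compartmental theory or its degree-correlation analysis.

```python
# Minimal sketch of threshold-driven behavior adoption on a two-layer
# multiplex network. Each node counts the pieces of information received
# over both layers and adopts once the count reaches its threshold. This
# toy simulation illustrates the model class only.
import random
import networkx as nx

random.seed(0)
n = 500
layers = [nx.erdos_renyi_graph(n, 0.01, seed=1),
          nx.erdos_renyi_graph(n, 0.01, seed=2)]
threshold = 2                      # pieces of information needed to adopt
p_transmit = 0.4                   # transmission probability per contact

received = [0] * n
adopted = set(random.sample(range(n), 5))   # initial seeds
newly = set(adopted)

while newly:
    nxt = set()
    for u in newly:
        for layer in layers:
            for v in layer.neighbors(u):
                if v not in adopted and random.random() < p_transmit:
                    received[v] += 1
                    if received[v] >= threshold:
                        nxt.add(v)
    adopted |= nxt
    newly = nxt

print("final adoption size:", len(adopted), "of", n)
```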

  7. Development of a computational database for probabilistic safety assessment of nuclear research reactors

    Energy Technology Data Exchange (ETDEWEB)

    Macedo, Vagner S.; Oliveira, Patricia S. Pagetti de; Andrade, Delvonei Alves de, E-mail: vagner.macedo@usp.br, E-mail: patricia@ipen.br, E-mail: delvonei@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    The objective of this work is to describe the database being developed at IPEN - CNEN / SP for application in the Probabilistic Safety Assessment of nuclear research reactors. The database can be accessed by means of a computational program installed on the corporate computer network, named IPEN Intranet, and this access will be allowed only to previously registered professionals. Data updating, editing and searching tasks will be controlled by a system administrator according to IPEN Intranet security rules. The logical model and the physical structure of the database can be represented by an Entity Relationship Model, which is based on the operational routines performed by IPEN - CNEN / SP users. The web application designed for the management of the database is named PSADB. It is being developed with the MySQL database software and the PHP programming language. Data stored in this database are divided into modules that refer to technical specifications, operating history, maintenance history and failure events associated with the main components of the nuclear facilities. (author)
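    The modules listed in this record (technical specifications, operating history, maintenance history and failure events) map naturally onto relational tables. The sketch below illustrates one possible schema; the real PSADB uses MySQL and PHP, whereas sqlite3 is used here only to keep the example self-contained, and all table and column names are assumptions.

```python
# Minimal sketch of how the modules described above could map onto relational
# tables. sqlite3 is used only so the sketch runs as-is; the real system is
# described as MySQL/PHP, and all names below are illustrative.
import sqlite3

schema = """
CREATE TABLE component (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    technical_specification TEXT
);
CREATE TABLE operating_history (
    id INTEGER PRIMARY KEY,
    component_id INTEGER REFERENCES component(id),
    start_date TEXT, end_date TEXT, hours REAL
);
CREATE TABLE maintenance_history (
    id INTEGER PRIMARY KEY,
    component_id INTEGER REFERENCES component(id),
    date TEXT, action TEXT
);
CREATE TABLE failure_event (
    id INTEGER PRIMARY KEY,
    component_id INTEGER REFERENCES component(id),
    date TEXT, failure_mode TEXT
);
"""

con = sqlite3.connect(":memory:")
con.executescript(schema)
con.execute("INSERT INTO component (name, technical_specification) VALUES (?, ?)",
            ("primary pump", "flow 120 m3/h"))
print(con.execute("SELECT id, name FROM component").fetchall())
```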

  8. Development of a computational database for probabilistic safety assessment of nuclear research reactors

    International Nuclear Information System (INIS)

    Macedo, Vagner S.; Oliveira, Patricia S. Pagetti de; Andrade, Delvonei Alves de

    2015-01-01

    The objective of this work is to describe the database being developed at IPEN - CNEN / SP for application in the Probabilistic Safety Assessment of nuclear research reactors. The database can be accessed by means of a computational program installed on the corporate computer network, named IPEN Intranet, and this access will be allowed only to previously registered professionals. Data updating, editing and searching tasks will be controlled by a system administrator according to IPEN Intranet security rules. The logical model and the physical structure of the database can be represented by an Entity Relationship Model, which is based on the operational routines performed by IPEN - CNEN / SP users. The web application designed for the management of the database is named PSADB. It is being developed with the MySQL database software and the PHP programming language. Data stored in this database are divided into modules that refer to technical specifications, operating history, maintenance history and failure events associated with the main components of the nuclear facilities. (author)

  9. Emotion and Social Network Perceptions: How Does Anger Bias Perceptions of Networks?

    Science.gov (United States)

    2013-03-01

    ...indicate the extent to which they felt angry, because previous research suggests that labeling emotions may reduce their impact (Lerner & Keltner, 2000). Final report AFRL-AFOSR-UK-TR-2013-0009, "Emotion and Social Network Perceptions: How Does Anger Bias Perceptions of Networks?", covering 26 August 2011 - 23 February 2013.

  10. Bell Inequalities for Complex Networks

    Science.gov (United States)

    2015-10-26

    Final report AFRL-AFOSR-VA-TR-2015-0355 for the Young Investigator Award grant "Bell Inequalities for Complex Networks" (FA9550-12-1-0417), PI Greg Ver Steeg, University of Southern California, Los Angeles, October 2015. Abstract: This effort studied new methods to understand the effect...

  11. Database Description - SSBD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name SSBD Alternative nam...ss 2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe 650-0047, Japan, RIKEN Quantitative Biology Center Shuichi Onami E-mail: Database... classification Other Molecular Biology Databases Database classification Dynamic databa...elegans Taxonomy ID: 6239 Taxonomy Name: Escherichia coli Taxonomy ID: 562 Database description Systems Scie...i Onami Journal: Bioinformatics/April, 2015/Volume 31, Issue 7 External Links: Original website information Database

  12. Database Description - GETDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name GETDB Alternative n...ame Gal4 Enhancer Trap Insertion Database DOI 10.18908/lsdba.nbdc00236-000 Creator Creator Name: Shigeo Haya... Chuo-ku, Kobe 650-0047 Tel: +81-78-306-3185 FAX: +81-78-306-3183 E-mail: Database classification Expression... Invertebrate genome database Organism Taxonomy Name: Drosophila melanogaster Taxonomy ID: 7227 Database des...riginal website information Database maintenance site Drosophila Genetic Resource

  13. JICST Factual Database: JICST Chemical Substance Safety Regulation Database

    Science.gov (United States)

    Abe, Atsushi; Sohma, Tohru

    JICST Chemical Substance Safety Regulation Database is based on the Database of Safety Laws for Chemical Compounds constructed by the Japan Chemical Industry Ecology-Toxicology & Information Center (JETOC), sponsored by the Science and Technology Agency, in 1987. JICST has modified the JETOC database system, added data and started the online service through JOIS-F (JICST Online Information Service - Factual database) in January 1990. The JICST database comprises eighty-three laws and fourteen hundred compounds. The authors outline the database, data items, files and search commands. An example of an online session is presented.

  14. A comparative study of six European databases of medically oriented Web resources.

    Science.gov (United States)

    Abad García, Francisca; González Teruel, Aurora; Bayo Calduch, Patricia; de Ramón Frias, Rosa; Castillo Blasco, Lourdes

    2005-10-01

    The paper describes six European medically oriented databases of Web resources, pertaining to five quality-controlled subject gateways, and compares their performance. The characteristics, coverage, procedure for selecting Web resources, record structure, searching possibilities, and existence of user assistance were described for each database. Performance indicators for each database were obtained by means of searches carried out using the key words, "myocardial infarction." Most of the databases originated in the 1990s in an academic or library context and include all types of Web resources of an international nature. Five databases use Medical Subject Headings. The number of fields per record varies between three and nineteen. The language of the search interfaces is mostly English, and some of them allow searches in other languages. In some databases, the search can be extended to Pubmed. Organizing Medical Networked Information, Catalogue et Index des Sites Médicaux Francophones, and Diseases, Disorders and Related Topics produced the best results. The usefulness of these databases as quick reference resources is clear. In addition, their lack of content overlap means that, for the user, they complement each other. Their continued survival faces three challenges: the instability of the Internet, maintenance costs, and lack of use in spite of their potential usefulness.

  15. Evolution of the social network of scientific collaborations

    Science.gov (United States)

    Barabási, A. L.; Jeong, H.; Néda, Z.; Ravasz, E.; Schubert, A.; Vicsek, T.

    2002-08-01

    The co-authorship network of scientists represents a prototype of complex evolving networks. In addition, it offers one of the most extensive databases to date on social networks. By mapping the electronic database containing all relevant journals in mathematics and neuroscience for an 8-year period (1991-98), we infer the dynamic and structural mechanisms that govern the evolution and topology of this complex system. Three complementary approaches allow us to obtain a detailed characterization. First, empirical measurements allow us to uncover the topological measures that characterize the network at a given moment, as well as the time evolution of these quantities. The results indicate that the network is scale-free, and that the network evolution is governed by preferential attachment, affecting both internal and external links. However, in contrast with most model predictions, the average degree increases in time, and the node separation decreases. Second, we propose a simple model that captures the network's time evolution. In some limits the model can be solved analytically, predicting a two-regime scaling in agreement with the measurements. Third, numerical simulations are used to uncover the behavior of quantities that could not be predicted analytically. The combined numerical and analytical results underline the important role internal links play in determining the observed scaling behavior and network topology. The results and methodologies developed in the context of the co-authorship network could be useful for a systematic study of other complex evolving networks as well, such as the world wide web, Internet, or other social networks.
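    The growth mechanism inferred in this record, preferential attachment acting on both external links (from newly added authors) and internal links (between existing authors), can be sketched as a simple simulation. The parameters below are illustrative and the model is a toy, not the authors' calibrated one.

```python
# Minimal sketch of a co-authorship-style growth model in which preferential
# attachment governs both external links (from each new node) and internal
# links (between already existing nodes). Parameters are illustrative only.
import random
import networkx as nx

random.seed(42)
G = nx.complete_graph(3)
m_ext, m_int = 2, 1                 # external / internal links per time step

def preferential_node(graph, exclude=()):
    """Pick an existing node with probability ~ (degree + 1)."""
    nodes = [n for n in graph if n not in exclude]
    weights = [graph.degree(n) + 1 for n in nodes]
    return random.choices(nodes, weights=weights, k=1)[0]

for new in range(3, 300):
    for _ in range(m_ext):                      # external links
        G.add_edge(new, preferential_node(G, exclude={new}))
    for _ in range(m_int):                      # internal links
        u = preferential_node(G)
        v = preferential_node(G, exclude={u})
        G.add_edge(u, v)

degrees = sorted((d for _, d in G.degree()), reverse=True)
print("average degree:", sum(degrees) / G.number_of_nodes())
print("top-5 degrees: ", degrees[:5])
```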

  16. A network pharmacology approach to investigate the pharmacological effects of Guizhi Fuling Wan on uterine fibroids.

    Science.gov (United States)

    Zeng, Liuting; Yang, Kailin; Liu, Huiping; Zhang, Guomin

    2017-11-01

    To investigate the pharmacological mechanism of Guizhi Fuling Wan (GFW) in the treatment of uterine fibroids, a network pharmacology approach was used. Information on GFW compounds was collected from traditional Chinese medicine (TCM) databases and input into PharmMapper to identify the compound targets. Genes associated with uterine fibroids were then obtained from the GeneCards and Online Mendelian Inheritance in Man databases. The interaction data of the targets and other human proteins were also collected from the STRING and IntAct databases. The target data were input into the Database for Annotation, Visualization and Integrated Discovery for gene ontology (GO) and pathway enrichment analyses. Networks of the above information were constructed and analyzed using Cytoscape. The following networks were compiled: a compound-compound target network of GFW; a herb-compound target-uterine fibroids target network of GFW; and a compound target-uterine fibroids target-other human proteins protein-protein interaction network, which were subjected to GO and pathway enrichment analyses. According to this approach, a number of novel signaling pathways and biological processes underlying the effects of GFW on uterine fibroids were identified, including the negative regulation of smooth muscle cell proliferation, apoptosis, and the Ras, wingless-type, epidermal growth factor and insulin-like growth factor-1 signaling pathways. This network pharmacology approach may aid the systematic study of herbal formulae and make TCM drug discovery more predictable.
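    The network construction step described here, linking compounds to predicted targets and intersecting them with disease-associated genes before analysis in Cytoscape, can be sketched with networkx. All compound, gene and disease identifiers below are invented placeholders, not actual GFW compounds or uterine-fibroid genes.

```python
# Minimal sketch of the network construction step: link herbal compounds to
# their predicted protein targets, intersect with disease-associated genes,
# and assemble a compound-target-disease network. All identifiers are
# invented placeholders.
import networkx as nx

compound_targets = {                 # e.g. from a target-prediction tool
    "compound_1": {"GENE_A", "GENE_B", "GENE_C"},
    "compound_2": {"GENE_B", "GENE_D"},
}
disease_genes = {"GENE_B", "GENE_C", "GENE_E"}   # e.g. from a disease database

G = nx.Graph()
for compound, targets in compound_targets.items():
    for gene in targets:
        G.add_edge(compound, gene, kind="compound-target")
for gene in disease_genes:
    G.add_edge("uterine_fibroids", gene, kind="disease-gene")

overlap = set().union(*compound_targets.values()) & disease_genes
print("candidate therapeutic targets:", sorted(overlap))
print("network size:", G.number_of_nodes(), "nodes /",
      G.number_of_edges(), "edges")
```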

  17. [Establishment of the database of the 3D facial models for the plastic surgery based on network].

    Science.gov (United States)

    Liu, Zhe; Zhang, Hai-Lin; Zhang, Zheng-Guo; Qiao, Qun

    2008-07-01

    To collect the three-dimensional (3D) facial data of 30 facial deformity patients with a 3D scanner and to establish a professional database based on the Internet, which can be helpful for clinical intervention. The primitive point data of the face topography were collected by the 3D scanner. The 3D point cloud was then edited by reverse engineering software to reconstruct the 3D model of the face. The database system was divided into three parts: basic information, disease information and surgery information. The programming language of the web system is Java. The linkages between the tables of the database are reliable. The query operation and the data mining are convenient. The users can visit the database via the Internet and use the image analysis system to observe the 3D facial models interactively. In this paper we present a database and a web system adapted to the plastic surgery of the human face. It can be used both in the clinic and in basic research.

  18. Phylogenetically informed logic relationships improve detection of biological network organization

    Science.gov (United States)

    2011-01-01

    Background A "phylogenetic profile" refers to the presence or absence of a gene across a set of organisms, and it has been proven valuable for understanding gene functional relationships and network organization. Despite this success, few studies have attempted to search beyond just pairwise relationships among genes. Here we search for logic relationships involving three genes, and explore its potential application in gene network analyses. Results Taking advantage of a phylogenetic matrix constructed from the large orthologs database Roundup, we invented a method to create balanced profiles for individual triplets of genes that guarantee equal weight on the different phylogenetic scenarios of coevolution between genes. When we applied this idea to LAPP, the method to search for logic triplets of genes, the balanced profiles resulted in significant performance improvement and the discovery of hundreds of thousands more putative triplets than unadjusted profiles. We found that logic triplets detected biological network organization and identified key proteins and their functions, ranging from neighbouring proteins in local pathways, to well separated proteins in the whole pathway, and to the interactions among different pathways at the system level. Finally, our case study suggested that the directionality in a logic relationship and the profile of a triplet could disclose the connectivity between the triplet and surrounding networks. Conclusion Balanced profiles are superior to the raw profiles employed by traditional methods of phylogenetic profiling in searching for high order gene sets. Gene triplets can provide valuable information in detection of biological network organization and identification of key genes at different levels of cellular interaction. PMID:22172058
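    The basic object of this record, a phylogenetic profile, is a binary presence/absence vector over a set of organisms, and a logic triplet asks whether one gene's profile is explained by a logic function of two others. The sketch below scores one AND-type triplet on toy profiles; the paper's balanced-profile weighting is not reproduced here.

```python
# Minimal sketch: represent phylogenetic profiles as binary vectors over a set
# of organisms and score one logic triplet (is gene C's presence explained by
# A AND B?). The toy profiles are invented, and the balanced-profile weighting
# described in the record is not reproduced.
import numpy as np

# rows: genes, columns: organisms (1 = ortholog present, 0 = absent)
profiles = {
    "A": np.array([1, 1, 0, 1, 0, 1, 1, 0]),
    "B": np.array([1, 0, 1, 1, 0, 1, 0, 1]),
    "C": np.array([1, 0, 0, 1, 0, 1, 0, 0]),
}

def logic_and_score(a, b, c):
    """Fraction of organisms in which c matches (a AND b)."""
    return float(np.mean((a & b) == c))

score = logic_and_score(profiles["A"], profiles["B"], profiles["C"])
print(f"C ~ A AND B holds in {score:.0%} of organisms")
```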

  19. Implementation of a database on drugs into a university hospital Intranet.

    Science.gov (United States)

    François, M; Joubert, M; Fieschi, D; Fieschi, M

    1998-01-01

    Several databases on drugs have been developed worldwide for drug information functions, and their sources are now electronically available. Our objective was to implement one of them in our University hospitals' information system. Thériaque is a database which contains information on all the drugs available in France. Before its implementation we modeled its content (chemical classes, active components, excipients, indications, contra-indications, side effects, and so on) following an object-oriented method. From this model we designed dynamic HTML pages based on Microsoft's Internet Database Connector (IDC) technique. This allowed a fast implementation and did not require porting a client application to the thousands of workstations on the network of the University hospitals. This interface provides end-users with an easy-to-use and natural way to access information related to drugs in an Intranet environment.

  20. Database Description - KAIKOcDNA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us KAIKOcDNA Database Description General information of database Database name KAIKOcDNA Alter...National Institute of Agrobiological Sciences Akiya Jouraku E-mail : Database cla...ssification Nucleotide Sequence Databases Organism Taxonomy Name: Bombyx mori Taxonomy ID: 7091 Database des...rnal: G3 (Bethesda) / 2013, Sep / vol.9 External Links: Original website information Database maintenance si...available URL of Web services - Need for user registration Not available About This Database Database