WorldWideScience

Sample records for integrated database system

  1. Distributed Access View Integrated Database (DAVID) system

    Science.gov (United States)

    Jacobs, Barry E.

    1991-01-01

    The Distributed Access View Integrated Database (DAVID) System, which was adopted by the Astrophysics Division for their Astrophysics Data System, is a solution to the system heterogeneity problem. The heterogeneous components of the Astrophysics problem are outlined. The Library and Library Consortium levels of the DAVID approach are described. The 'books' and 'kits' level is discussed. The Universal Object Typer Management System level is described. The relation of the DAVID project to the Small Business Innovative Research (SBIR) program is explained.

  2. Nuclear integrated database and design advancement system

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Jae Joo; Jeong, Kwang Sub; Kim, Seung Hwan; Choi, Sun Young

    1997-01-01

    The objective of NuIDEAS is to computerize design processes through an integrated database, eliminating the current work style of delivering hardcopy documents and drawings. The major research contents of NuIDEAS are the advancement of design processes by computerization, the establishment of a design database, and the 3-dimensional visualization of design data. KSNP (Korea Standard Nuclear Power Plant) is the target of the legacy database and the 3-dimensional model, so that the results can be utilized in the next plant design. In the first year, the blueprint of NuIDEAS was proposed, and its prototype was developed by applying rapidly evolving computer technology. The major results of the first-year research were to establish the architecture of the integrated database ensuring data consistency, and to build the design database of the reactor coolant system and heavy components. Various software tools were also developed to search, share and utilize the data through networks; detailed 3-dimensional CAD models of nuclear fuel and heavy components were constructed; and a walk-through simulation using the models was developed. This report contains the major additions and modifications to the object-oriented database and associated programs, using methods and JavaScript. (author). 36 refs., 1 tab., 32 figs.

  3. Database specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB)

    Energy Technology Data Exchange (ETDEWEB)

    Faby, E.Z.; Fluker, J.; Hancock, B.R.; Grubb, J.W.; Russell, D.L. [Univ. of Tennessee, Knoxville, TN (United States); Loftis, J.P.; Shipe, P.C.; Truett, L.F. [Oak Ridge National Lab., TN (United States)

    1994-03-01

    This Database Specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB) describes the database organization and storage allocation, provides the detailed data model of the logical and physical designs, and provides information for the construction of parts of the database such as tables, data elements, and associated dictionaries and diagrams.

  4. Integrity control in relational database systems - an overview

    NARCIS (Netherlands)

    Grefen, Paul W.P.J.; Apers, Peter M.G.

    1993-01-01

    This paper gives an overview of research regarding integrity control or integrity constraint handling in relational database management systems. The topic of constraint handling is discussed from two points of view. First, constraint handling is discussed by identifying a number of important research…

  5. A New Integrated System of Logic Programming and Relational Database

    Institute of Scientific and Technical Information of China (English)

    邓铁清 (Deng Tieqing); 吴泉源 (Wu Quanyuan); et al.

    1993-01-01

    Based on the study of the two current methods, interpretation and compilation, for the integration of logic programming and relational databases, a new precompilation-based interpretive approach is proposed. It inherits the advantages of both methods but overcomes their drawbacks. A new integrated system based on this approach is presented, which has been implemented on a MicroVAX II and applied in practice as the kernel of the GKBMS knowledge base management system. Also discussed are the key implementation techniques, including the coupling of logic and relational database systems, the compounding of logic and relational database languages, the partial evaluation and static optimization of user programs, and fact scheduling and version management in problem-solving.
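
    As an illustration of the kind of coupling this record describes, the hedged Python sketch below translates a Datalog-style rule into a relational self-join; the rule, table, and data are invented for the example and are not taken from the paper.

    ```python
    # A minimal sketch of coupling a logic rule to a relational database:
    # the Datalog-style rule
    #     grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    # is answered by translating it into a self-join over a parent table.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE parent(p TEXT, c TEXT);
        INSERT INTO parent VALUES ('ann','bob'), ('bob','cid');
    """)

    # The shared variable Y becomes an equality join condition.
    GRANDPARENT_SQL = """
        SELECT p1.p AS x, p2.c AS z
        FROM parent AS p1 JOIN parent AS p2 ON p1.c = p2.p
    """

    print(conn.execute(GRANDPARENT_SQL).fetchall())  # [('ann', 'cid')]
    ```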

  6. The PIR integrated protein databases and data retrieval system

    Directory of Open Access Journals (Sweden)

    H Huang

    2006-01-01

    The Protein Information Resource (PIR) provides many databases and tools to support genomic and proteomic research. PIR is a member of UniProt (Universal Protein Resource), the central repository of protein sequence and function, which maintains the UniProt Knowledgebase with extensively curated annotation, the UniProt Reference databases to speed sequence searches, and the UniProt Archive to reflect sequence history. PIR also provides the PIRSF family classification system, based on evolutionary relationships of full-length proteins, and the iProClass integrated database of protein family, function, and structure. These databases are easily accessible from the PIR web site using a centralized data retrieval system for information retrieval and knowledge discovery.
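
    For readers who want to reach these resources programmatically, here is a minimal, hedged Python sketch; the REST endpoint and accession shown are assumptions about the present-day UniProt service, not an interface described in this 2006 record.

    ```python
    # Hypothetical sketch of programmatic retrieval from UniProt (the
    # PIR-affiliated resource named in the record). The REST endpoint below
    # is an assumption about the current UniProt service.
    import urllib.request

    ACCESSION = "P69905"  # human hemoglobin alpha, used purely as an example
    url = f"https://rest.uniprot.org/uniprotkb/{ACCESSION}.fasta"
    with urllib.request.urlopen(url) as resp:
        print(resp.read().decode()[:200])  # first lines of the FASTA record
    ```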

  7. Coordinate Systems Integration for Craniofacial Database from Multimodal Devices

    Directory of Open Access Journals (Sweden)

    Deni Suwardhi

    2005-05-01

    This study presents a data registration method for craniofacial spatial data of different modalities. The data consists of three-dimensional (3D) vector and raster data models and is stored in an object-relational database. The data capture devices are a laser scanner, CT (Computed Tomography) scan and CR (Close Range) Photogrammetry. The objective of the registration is to transform the data from various coordinate systems into a single 3D Cartesian coordinate system. The standard error of the registration obtained from the multimodal imaging devices using a 3D affine transformation is in the range of 1-2 mm. This study is a step forward for storing the craniofacial spatial data in one reference system in the database.
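
    The registration step described here amounts to a least-squares fit of a 3D affine transformation between matched point sets. The following Python sketch (synthetic data, invented names) shows the idea and reports an RMS residual analogous to the 1-2 mm standard error quoted above.

    ```python
    # Estimate a 3D affine transform (12 parameters) from corresponding
    # points in two coordinate systems by linear least squares.
    import numpy as np

    def fit_affine_3d(src, dst):
        """src, dst: (n, 3) arrays of corresponding points, n >= 4."""
        n = src.shape[0]
        A = np.hstack([src, np.ones((n, 1))])        # homogeneous coords
        M, *_ = np.linalg.lstsq(A, dst, rcond=None)  # (4, 3) affine matrix
        return M

    # Synthetic example: dst is a scaled/rotated/shifted copy of src + noise.
    rng = np.random.default_rng(0)
    src = rng.uniform(0, 100, (20, 3))
    R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    dst = src @ (1.02 * R) + [5.0, -3.0, 10.0] + rng.normal(0, 1.0, (20, 3))

    M = fit_affine_3d(src, dst)
    residuals = np.hstack([src, np.ones((20, 1))]) @ M - dst
    print("RMS registration error (mm):", np.sqrt((residuals**2).mean()))
    ```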

  8. Coordinate systems integration for development of malaysian craniofacial database.

    Science.gov (United States)

    Rajion, Zainul; Suwardhi, Deni; Setan, Halim; Chong, Albert; Majid, Zulkepli; Ahmad, Anuar; Rani Samsudin, Ab; Aziz, Izhar; Wan Harun, W A R

    2005-01-01

    This study presents a data registration method for craniofacial spatial data of different modalities. The data consists of three-dimensional (3D) vector and raster data models and is stored in an object-relational database. The data capture devices are a laser scanner, CT (Computed Tomography) scan and CR (Close Range) Photogrammetry. The objective of the registration is to transform the data from various coordinate systems into a single 3D Cartesian coordinate system. The standard error of the registration obtained from the multimodal imaging devices using a 3D affine transformation is in the range of 1-2 mm. This study is a step forward for storing the spatial craniofacial data in one reference system in the database.

  9. Representing clinical communication knowledge through database management system integration.

    Science.gov (United States)

    Khairat, Saif; Craven, Catherine; Gong, Yang

    2012-01-01

    Clinical communication failures are considered the leading cause of medical errors [1]. The complexity of the clinical culture and the significant variance in training and education levels form a challenge to enhancing communication within the clinical team. In order to improve communication, a comprehensive understanding of the overall communication process in health care is required. In an attempt to further understand clinical communication, we conducted a thorough methodology literature review to identify strengths and limitations of previous approaches [2]. Our research proposes a new data collection method to study the clinical communication activities among Intensive Care Unit (ICU) clinical teams with a primary focus on the attending physician. In this paper, we present the first ICU communication instrument, and we introduce the use of a database management system to aid in discovering patterns and associations within our ICU communications data repository.

  10. An Approach for Integrating Data Mining with Saudi Universities Database Systems: Case Study

    OpenAIRE

    Mohamed Osman Hegazi; Mohammad Alhawarat; Anwer Hilal

    2016-01-01

    This paper presents an approach for integrating data mining algorithms within a Saudi university's database system, viz., Prince Sattam Bin Abdulaziz University (PSAU), as a case study. The approach is based on a bottom-up methodology; it starts by providing a data mining application that represents a solution to one of the problems that face Saudi universities' systems. After that, it integrates and implements the solution inside the university's database system. This process is then repeated to e...

  11. OrientX: An Integrated, Schema Based Native XML Database System

    Institute of Scientific and Technical Information of China (English)

    MENG Xiaofeng; WANG Xiaofeng; XIE Min; ZHANG Xin; ZHOU Junfeng

    2006-01-01

    The increasing number of XML repositories has stimulated the design of systems that can store and query XML data efficiently. OrientX, a native XML database system, is designed to meet this requirement. In this paper, we describe the system structure and design of OrientX, an integrated, schema-based native XML database. The main contributions of OrientX are: a) we have implemented an integrated native XML database system, which supports native storage of XML data and, based on it, can handle XPath & XQuery efficiently; b) in our OrientX system, schema information is fully explored to guide storage, optimization and query processing.
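
    As a generic illustration (not OrientX itself) of the kind of path query a native XML database evaluates, the Python sketch below uses ElementTree as a stand-in for the storage engine; the document is invented.

    ```python
    # Path-style selection over an XML document: titles of books published
    # after 2000. A schema-aware engine such as OrientX would use schema
    # knowledge to avoid scanning irrelevant subtrees; here we filter
    # explicitly over an in-memory tree.
    import xml.etree.ElementTree as ET

    doc = ET.fromstring("""
    <library>
      <book year="2006"><title>Native XML Storage</title></book>
      <book year="1999"><title>Relational Methods</title></book>
    </library>""")

    titles = [b.find("title").text for b in doc.iter("book")
              if int(b.get("year")) > 2000]
    print(titles)  # ['Native XML Storage']
    ```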

  12. Integrated Controlling System and Unified Database for High Throughput Protein Crystallography Experiments

    Science.gov (United States)

    Gaponov, Yu. A.; Igarashi, N.; Hiraki, M.; Sasajima, K.; Matsugaki, N.; Suzuki, M.; Kosuge, T.; Wakatsuki, S.

    2004-05-01

    An integrated controlling system and a unified database for high throughput protein crystallography experiments have been developed. The main features of protein crystallography experiments (purification, crystallization, crystal harvesting, data collection, data processing) were integrated into the software under development. All information necessary to perform protein crystallography experiments is stored in a MySQL relational database (except raw X-ray data, which are stored on a central data server). The database contains four mutually linked hierarchical trees describing protein crystals, data collection on protein crystals, and experimental data processing. A database editor was designed and developed. The editor supports basic database functions to view, create, modify and delete user records in the database. Two search engines were realized: direct search of necessary information in the database and object-oriented search. The system is based on TCP/IP secure UNIX sockets with four predefined sending and receiving behaviors, which support communications between all connected servers and clients with remote control functions (creating and modifying data for experimental conditions, data acquisition, viewing experimental data, and performing data processing). Two secure login schemes were designed and developed: a direct method (using the developed Linux clients with a secure connection) and an indirect method (using a secure SSL connection with X11 support from any operating system with X-terminal and SSH support). A part of the system has been implemented on a new MAD beamline, NW12, at the Photon Factory Advanced Ring for general user experiments.
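
    A hedged sketch of what "mutually linked hierarchical trees" can look like relationally; every table and column name below is invented for illustration and is not taken from the MySQL schema the paper describes.

    ```python
    # Crystals, data collections on a crystal, and processing runs on a
    # data collection, each level keyed to its parent.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        PRAGMA foreign_keys = ON;
        CREATE TABLE crystal(
            id INTEGER PRIMARY KEY, protein TEXT, growth_conditions TEXT);
        CREATE TABLE data_collection(
            id INTEGER PRIMARY KEY,
            crystal_id INTEGER REFERENCES crystal(id),
            beamline TEXT, wavelength_angstrom REAL);
        CREATE TABLE processing_run(
            id INTEGER PRIMARY KEY,
            collection_id INTEGER REFERENCES data_collection(id),
            software TEXT, resolution_angstrom REAL);
    """)

    conn.execute("INSERT INTO crystal VALUES (1, 'lysozyme', 'PEG 4000')")
    conn.execute("INSERT INTO data_collection VALUES (1, 1, 'NW12', 1.0)")
    conn.execute("INSERT INTO processing_run VALUES (1, 1, 'HKL2000', 1.8)")

    # Walking down one branch of the hierarchy resembles the record's
    # "object oriented search": everything recorded for a given crystal.
    rows = conn.execute("""
        SELECT c.protein, d.beamline, p.resolution_angstrom
        FROM crystal c
        JOIN data_collection d ON d.crystal_id = c.id
        JOIN processing_run p ON p.collection_id = d.id
    """).fetchall()
    print(rows)
    ```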

  13. Functional description for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB)

    Energy Technology Data Exchange (ETDEWEB)

    Truett, L.F.; Rollow, J.P.; Shipe, P.C. [Oak Ridge National Lab., TN (United States); Faby, E.Z.; Fluker, J.; Hancock, W.R.; Grubb, J.W.; Russell, D.L. [Univ. of Tennessee, Knoxville, TN (United States); Ferguson, R.A. [SAIC, Oak Ridge, TN (United States)

    1995-12-15

    This Functional Description for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB) documents the purpose of and requirements for the ICDB in order to ensure a mutual understanding between the development group and the user group of the system. This Functional Description defines ICDB and provides a clear statement of the initial operational capability to be developed.

  14. Air Force Integrated Readiness Measurement System (AFIRMS). Wing Database Specification.

    Science.gov (United States)

    1985-09-30

    Description, Contract No. F33700-83-G-002005701, 8 April 1983 (Unclassified); AFR 700-3, Information Systems Requirements Processing, 30 November 1984. [The remainder of this record is unrecoverable OCR residue from a wing-database data-element listing (aircraft status, tail number, sortie and resource fields).]

  15. SYSTOMONAS — an integrated database for systems biology analysis of Pseudomonas

    OpenAIRE

    Choi, Claudia; Münch, Richard; Leupold, Stefan; Klein, Johannes; Siegel, Inga; Thielen, Bernhard; Benkert, Beatrice; Kucklick, Martin; Schobert, Max; Barthelmes, Jens; Ebeling, Christian; Haddad, Isam; Scheer, Maurice; Grote, Andreas; Hiller, Karsten

    2007-01-01

    To provide an integrated bioinformatics platform for a systems biology approach to the biology of pseudomonads in infection and biotechnology, the database SYSTOMONAS (SYSTems biology of pseudOMONAS) was established. Besides our own experimental metabolome, proteome and transcriptome data, various additional predictions of cellular processes, such as gene-regulatory networks, were stored. Reconstruction of metabolic networks in SYSTOMONAS was achieved via comparative genomics. Broad data integr…

  16. An integrated medical image database and retrieval system using a web application server.

    Science.gov (United States)

    Cao, Pengyu; Hashiba, Masao; Akazawa, Kouhei; Yamakawa, Tomoko; Matsuto, Takayuki

    2003-08-01

    We developed an Integrated Medical Image Database and Retrieval System (INIS) for easy access by medical staff. The INIS mainly consisted of four parts: specific servers to save medical images from multi-vendor modalities of CT, MRI, CR, ECG and endoscopy; an integrated image database (DB) server to save various kinds of images in DICOM format; a Web application server to connect clients to the integrated image DB; and Web browser terminals connected to a hospital information system (HIS). The INIS provided a common screen design for retrieving CT, MRI, CR, endoscopic and ECG images and radiological reports, allowing doctors to retrieve radiological images and the corresponding reports, or the ECG images of a patient, simultaneously on one screen. Doctors working in internal medicine accessed information on average 492 times a month; doctors working in cardiology and gastroenterology accessed it 308 times a month. Using the INIS, medical staff could browse all or parts of a patient's medical images and reports.

  17. Planning the future of JPL's management and administrative support systems around an integrated database

    Science.gov (United States)

    Ebersole, M. M.

    1983-01-01

    JPL's management and administrative support systems have been developed piecemeal and without consistency in design approach over the past twenty years. These systems are now proving inadequate to support effective management of tasks and administration of the Laboratory. New approaches are needed. Modern database management technology has the potential to provide the foundation for more effective administrative tools for JPL managers and administrators. Plans for upgrading JPL's management and administrative systems over a six-year period, centered on the development of an integrated management and administrative database, are discussed.

  19. Integrating a modern knowledge-based system architecture with a legacy VA database: the ATHENA and EON projects at Stanford.

    Science.gov (United States)

    Advani, A; Tu, S; O'Connor, M; Coleman, R; Goldstein, M K; Musen, M

    1999-01-01

    We present a methodology and database mediator tool for integrating modern knowledge-based systems, such as the Stanford EON architecture for automated guideline-based decision support, with legacy databases, such as the Veterans Health Information Systems & Technology Architecture (VISTA) systems, which are used nationwide. Specifically, we discuss designs for database integration in ATHENA, a system for hypertension care based on EON, at the VA Palo Alto Health Care System. We describe a new database mediator that affords the EON system both physical and logical data independence from the legacy VA database. We found that to achieve our design goals, the mediator requires two separate mapping levels and must itself involve a knowledge-based component.
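
    The two mapping levels can be pictured as in the Python sketch below; it is purely illustrative, with hypothetical field names and rules, and is not the ATHENA mediator itself.

    ```python
    # Two mediator levels: a physical level that renames legacy fields, and
    # a logical, knowledge-based level that maps raw values onto the
    # concepts a decision-support system reasons with.
    PHYSICAL_MAP = {          # legacy column -> mediated attribute
        "PAT_SSN": "patient_id",
        "BP_SYS": "systolic_mmHg",
    }

    def logical_level(record):
        # Knowledge-based step: classify raw values into guideline concepts.
        sbp = record["systolic_mmHg"]
        record["bp_category"] = "hypertensive" if sbp >= 140 else "normotensive"
        return record

    def mediate(legacy_row):
        physical = {PHYSICAL_MAP[k]: v for k, v in legacy_row.items()
                    if k in PHYSICAL_MAP}
        return logical_level(physical)

    print(mediate({"PAT_SSN": "123-45-6789", "BP_SYS": 152}))
    ```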

  20. Object Oriented Approach for Integration of Heterogeneous Databases in a Multidatabase System and Local Schemas Modifications Propagation

    CERN Document Server

    Ali, Mohammad Ghulam

    2009-01-01

    One of the challenging problems in multidatabase systems is to find the most viable solution to the problem of interoperability of distributed, heterogeneous, autonomous local component databases. This has resulted in the creation of a global schema over a set of these local component database schemas to provide a uniform representation of the local schemas. The aim of this paper is to use an object-oriented approach to integrate the schemas of distributed, heterogeneous, autonomous local component databases into a global schema. The resulting global schema provides a uniform interface and a high level of location transparency for the retrieval of data from the local component databases. A set of integration operators is defined to integrate the local schemas based on the semantic relevance of their classes and to provide a model-independent representation of the virtual classes of the global schema. Schematic representation and heterogeneity are also taken into account in the integration process. Justifications about Object…

  1. The RIKEN integrated database of mammals.

    Science.gov (United States)

    Masuya, Hiroshi; Makita, Yuko; Kobayashi, Norio; Nishikata, Koro; Yoshida, Yuko; Mochizuki, Yoshiki; Doi, Koji; Takatsuki, Terue; Waki, Kazunori; Tanaka, Nobuhiko; Ishii, Manabu; Matsushima, Akihiro; Takahashi, Satoshi; Hijikata, Atsushi; Kozaki, Kouji; Furuichi, Teiichi; Kawaji, Hideya; Wakana, Shigeharu; Nakamura, Yukio; Yoshiki, Atsushi; Murata, Takehide; Fukami-Kobayashi, Kaoru; Mohan, Sujatha; Ohara, Osamu; Hayashizaki, Yoshihide; Mizoguchi, Riichiro; Obata, Yuichi; Toyoda, Tetsuro

    2011-01-01

    The RIKEN integrated database of mammals (http://scinets.org/db/mammal) is the official undertaking to integrate its mammalian databases produced from multiple large-scale programs that have been promoted by the institute. The database integrates not only RIKEN's original databases, such as FANTOM, the ENU mutagenesis program, the RIKEN Cerebellar Development Transcriptome Database and the Bioresource Database, but also imported data from public databases, such as Ensembl, MGI and biomedical ontologies. Our integrated database has been implemented on the infrastructure of publication medium for databases, termed SciNetS/SciNeS, or the Scientists' Networking System, where the data and metadata are structured as a semantic web and are downloadable in various standardized formats. The top-level ontology-based implementation of mammal-related data directly integrates the representative knowledge and individual data records in existing databases to ensure advanced cross-database searches and reduced unevenness of the data management operations. Through the development of this database, we propose a novel methodology for the development of standardized comprehensive management of heterogeneous data sets in multiple databases to improve the sustainability, accessibility, utility and publicity of the data of biomedical information.

  2. Missing semantic annotation in databases. The root cause for data integration and migration problems in information systems.

    Science.gov (United States)

    Dugas, M

    2014-01-01

    Data integration is a well-known grand challenge in information systems. It is highly relevant in medicine because of the multitude of patient data sources. Semantic annotations of data items regarding concept and value domain, based on comprehensive terminologies, can facilitate data integration and migration. Therefore, they should be implemented in databases from the very beginning.
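
    A minimal sketch of the recommendation, assuming a simple relational layout; the table design is invented, while the LOINC code 2160-0 (serum creatinine) is a real code used only as an example.

    ```python
    # Store, with every data item, a machine-readable annotation of its
    # concept and value domain, so migration tools can match items across
    # systems by concept code instead of guessing from column names.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE data_item(
            name          TEXT,   -- local column name
            concept_code  TEXT,   -- e.g. a LOINC or SNOMED CT code
            value_domain  TEXT,   -- units or permitted values
            value         TEXT)
    """)
    conn.execute("INSERT INTO data_item VALUES "
                 "('serum_creatinine', 'LOINC:2160-0', 'mg/dL', '1.1')")

    print(conn.execute("SELECT * FROM data_item "
                       "WHERE concept_code = 'LOINC:2160-0'").fetchall())
    ```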

  3. An Integrated Photogrammetric and Spatial Database Management System for Producing Fully Structured Data Using Aerial and Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Farshid Farnood Ahmadi

    2009-03-01

    3D spatial data acquired from aerial and remote sensing images by photogrammetric techniques is one of the most accurate and economical data sources for GIS, map production, and spatial data updating. However, there are still many problems concerning the storage, structuring and appropriate management of spatial data obtained using these techniques. Given the capabilities of spatial database management systems (SDBMSs), direct integration of photogrammetric systems and SDBMSs can save time and cost in producing and updating digital maps. This integration is accomplished by replacing digital maps with a single spatial database. Applying spatial databases overcomes the problem of managing spatial and attribute data in a coupled approach. This management approach is one of the main problems in GISs for using the map products of photogrammetric workstations. By means of these integrated systems, it is also possible to provide structured spatial data, based on OGC (Open GIS Consortium) standards and topological relations between different feature classes, at the time of the feature digitizing process. In this paper, the integration of photogrammetric systems and SDBMSs is evaluated. Then, different levels of integration are described. Finally, the design, implementation and testing of a software package called Integrated Photogrammetric and Oracle Spatial Systems (IPOSS) is presented.
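
    A hedged sketch (invented table, geometry kept as simple WKT text) of storing a digitized feature directly in a spatial table instead of a map file; a real SDBMS such as Oracle Spatial would use a native geometry type and enforce topology, which plain SQLite does not.

    ```python
    # A feature class stored as a database row: attributes plus a geometry
    # column, here represented as Well-Known Text for illustration only.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE parcel(
            id INTEGER PRIMARY KEY,
            land_use TEXT,
            boundary_wkt TEXT)   -- geometry as Well-Known Text
    """)
    conn.execute("INSERT INTO parcel VALUES (1, 'residential', "
                 "'POLYGON((0 0, 10 0, 10 8, 0 8, 0 0))')")
    print(conn.execute("SELECT land_use, boundary_wkt FROM parcel").fetchall())
    ```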

  4. The Integration Of The LHC Cryogenics Control System Data Into The CERN Layout Database

    CERN Document Server

    Fortescue-Beck, E; Gomes, P

    2011-01-01

    The Large Hadron Collider’s Cryogenic Control System makes extensive use of several databases to manage data appertaining to over 34,000 cryogenic instrumentation channels. This data is essential for populating the software of the PLCs, which are responsible for maintaining the LHC at the appropriate temperature. In order to reduce the number of data sources and the overall complexity of the system, the databases have been rationalised and the automatic tool that extracts data for the control software has been simplified. This paper describes the main improvements that have been made and considers the success of the project.

  5. HPD: an online integrated human pathway database enabling systems biology studies.

    Science.gov (United States)

    Chowbina, Sudhir R; Wu, Xiaogang; Zhang, Fan; Li, Peter M; Pandey, Ragini; Kasamsetty, Harini N; Chen, Jake Y

    2009-10-08

    Pathway-oriented experimental and computational studies have led to a significant accumulation of biological knowledge concerning three major types of biological pathway events: molecular signaling events, gene regulation events, and metabolic reaction events. A pathway consists of a series of molecular pathway events that link molecular entities such as proteins, genes, and metabolites. There are approximately 300 biological pathway resources as of April 2009 according to the Pathguide database; however, these pathway databases generally have poor coverage or poor quality, and are difficult to integrate, due to syntactic-level and semantic-level data incompatibilities. We developed the Human Pathway Database (HPD) by integrating heterogeneous human pathway data that are either curated at the NCI Pathway Interaction Database (PID), Reactome, BioCarta, and KEGG or indexed from the Protein Lounge Web sites. Integration of pathway data at the syntactic, semantic, and schematic levels was based on a unified pathway data model and data warehousing-based integration techniques. HPD provides a comprehensive online view that connects human proteins, genes, RNA transcripts, enzymes, signaling events, metabolic reaction events, and gene regulatory events. At the time of this writing HPD includes 999 human pathways and more than 59,341 human molecular entities. The HPD software provides both a user-friendly Web interface for online use and a robust relational database backend for advanced pathway querying. This pathway tool enables users to 1) search for human pathways from different resources by simply entering genes/proteins involved in pathways or words appearing in pathway names, 2) analyze pathway-protein associations, 3) study pathway-pathway similarity, and 4) build integrated pathway networks. We demonstrated the usage and characteristics of the new HPD through three breast cancer case studies. HPD (http://bio.informatics.iupui.edu/HPD) is a new resource for searching, managing…

  6. EVpedia: an integrated database of high-throughput data for systemic analyses of extracellular vesicles

    Directory of Open Access Journals (Sweden)

    Dae-Kyum Kim

    2013-03-01

    Secretion of extracellular vesicles is a general cellular activity that spans the range from simple unicellular organisms (e.g. archaea, Gram-positive and Gram-negative bacteria) to complex multicellular ones, suggesting that this extracellular vesicle-mediated communication is evolutionarily conserved. Extracellular vesicles are spherical bilayered proteolipids with a mean diameter of 20–1,000 nm, which are known to contain various bioactive molecules including proteins, lipids, and nucleic acids. Here, we present EVpedia, an integrated database of high-throughput datasets from prokaryotic and eukaryotic extracellular vesicles. EVpedia provides high-throughput datasets of vesicular components (proteins, mRNAs, miRNAs, and lipids) present on prokaryotic, non-mammalian eukaryotic, and mammalian extracellular vesicles. In addition, EVpedia provides an array of tools, such as search and browse of vesicular components, Gene Ontology enrichment analysis, network analysis of vesicular proteins and mRNAs, and comparison of vesicular datasets by ortholog identification. Moreover, publications on extracellular vesicle studies are listed in the database. This free web-based database, EVpedia (http://evpedia.info), might serve as a fundamental repository to stimulate the advancement of extracellular vesicle studies and to elucidate the novel functions of these complex extracellular organelles.

  7. An Integrative Database System of Agro-Ecology for the Black Soil Region of China

    Directory of Open Access Journals (Sweden)

    Cuiping Ge

    2007-12-01

    The comprehensive database system of the Northeast agro-ecology of black soil (CSDB_BL) is user-friendly software designed to store and manage large amounts of agricultural data. The data was collected in an efficient and systematic way through long-term experiments and observations of black soil land, together with statistical information. It is based on the ORACLE database management system, and the interface is written in the PB language. The database has the following main facilities: (1) runs on Windows platforms; (2) facilitates data entry from *.dbf to ORACLE or creates ORACLE tables directly; (3) has a metadata facility that describes the methods used in the laboratory or in the observations; (4) data can be transferred to an expert system, written in Visual C++ and Visual Basic, for simulation analysis and estimation; (5) can be connected with GIS, making it easy to analyze changes in land use; and (6) allows metadata and data entities to be shared on the internet. The following datasets are included in CSDB_BL: long-term experiments and observations of water, soil, climate, and biology; special research projects; a natural resource survey of Hailun County in the 1980s; images from remote sensing; graphs of vectors and grids; and statistics from the Northeast of China. CSDB_BL can be used in the research and evaluation of agricultural sustainability nationally, regionally, or locally. It can also be used as a tool to assist the government in planning agricultural development. Expert systems connected with CSDB_BL can give farmers directions for farm planting management.

  8. MPlus Database system

    Energy Technology Data Exchange (ETDEWEB)

    1989-01-20

    The MPlus Database program was developed to keep track of mail received. This system was developed by TRESP for the Department of Energy/Oak Ridge Operations. The MPlus Database program is a PC application, written in dBase III+ and compiled with Clipper into an executable file. The files needed to run the MPlus Database program can be installed on a Bernoulli drive or a hard drive. This paper discusses the use of this database.

  9. On Simplification of Database Integrity Constraints

    DEFF Research Database (Denmark)

    Christiansen, Henning; Martinenghi, Davide

    2006-01-01

    Without proper simplification techniques, database integrity checking can be prohibitively time consuming. Several methods have been developed for producing simplified incremental checks for each update, but none until now of sufficient quality and generality for providing a true practical impact. … is strictly related to query containment; in fact, an ideal simplification procedure can only exist in database languages for which query containment is decidable. However, simplifications that do not qualify as ideal may also be relevant for practical purposes. We present a concrete approach based … take place before the execution of the update, so that only consistency-preserving updates are eventually given to the database. The extension to more expressive languages and the application of the framework to other contexts, such as data integration and concurrent database systems, are also …
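
    To make the idea of simplification concrete, the sketch below (invented schema and constraint) tests only what a single update can violate, before executing the update, instead of re-checking the whole database.

    ```python
    # Constraint: no employee earns more than their manager. A full check
    # scans the whole table; for one salary update it suffices to test the
    # affected row against its manager and its direct reports.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE emp(id INTEGER PRIMARY KEY, salary INTEGER, mgr INTEGER);
        INSERT INTO emp VALUES (1, 100, NULL), (2, 80, 1), (3, 70, 2);
    """)

    def update_salary(emp_id, new_salary):
        # Simplified pre-test: only the constraints this update can violate.
        ok = conn.execute("""
            SELECT NOT EXISTS (SELECT 1 FROM emp m, emp e
                               WHERE e.id = ? AND e.mgr = m.id
                                 AND ? > m.salary)
               AND NOT EXISTS (SELECT 1 FROM emp r
                               WHERE r.mgr = ? AND r.salary > ?)
        """, (emp_id, new_salary, emp_id, new_salary)).fetchone()[0]
        if not ok:
            raise ValueError("update would violate the salary constraint")
        conn.execute("UPDATE emp SET salary = ? WHERE id = ?",
                     (new_salary, emp_id))

    update_salary(3, 75)       # fine: 75 <= manager 2's salary of 80
    # update_salary(3, 90)     # would raise: exceeds manager 2's salary
    ```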

  10. Integrity Constraint Checking in Federated Databases

    NARCIS (Netherlands)

    Grefen, Paul; Widom, Jennifer

    1996-01-01

    A federated database is comprised of multiple interconnected databases that cooperate in an autonomous fashion. Global integrity constraints are very useful in federated databases, but the lack of global queries, global transaction mechanisms, and global concurrency control renders traditional constraint…

  11. Optimal database locks for efficient integrity checking

    DEFF Research Database (Denmark)

    Martinenghi, Davide

    2004-01-01

    In concurrent database systems, correctness of update transactions refers to the equivalent effects of the execution schedule and some serial schedule over the same set of transactions. Integrity constraints add further semantic requirements to the correctness of the database states reached upon the execution of update transactions. Several methods for efficient integrity checking and enforcing exist. We show in this paper how to apply one such method to automatically extend update transactions with locks and simplified consistency tests on the locked entities. All schedules produced in this way…

  12. Integrating Relational Databases and Constraint Languages

    DEFF Research Database (Denmark)

    Hansen, Michael Reichhardt; Hansen, Bo S.; Lucas, Peter

    1989-01-01

    … for a seamless integration of data and rules. The paper focuses on equational rules. A number of potentially useful algebraic laws are stated. Examples will demonstrate the use of these algebraic laws in query evaluation and optimization. The paper elaborates the Rules/Database system from a programming language…

  13. Building a medical multimedia database system to integrate clinical information: an application of high-performance computing and communications technology.

    Science.gov (United States)

    Lowe, H J; Buchanan, B G; Cooper, G F; Vries, J K

    1995-01-01

    The rapid growth of diagnostic-imaging technologies over the past two decades has dramatically increased the amount of nontextual data generated in clinical medicine. The architecture of traditional, text-oriented, clinical information systems has made the integration of digitized clinical images with the patient record problematic. Systems for the classification, retrieval, and integration of clinical images are in their infancy. Recent advances in high-performance computing, imaging, and networking technology now make it technologically and economically feasible to develop an integrated, multimedia, electronic patient record. As part of The National Library of Medicine's Biomedical Applications of High-Performance Computing and Communications program, we plan to develop Image Engine, a prototype microcomputer-based system for the storage, retrieval, integration, and sharing of a wide range of clinically important digital images. Images stored in the Image Engine database will be indexed and organized using the Unified Medical Language System Metathesaurus and will be dynamically linked to data in a text-based, clinical information system. We will evaluate Image Engine by initially implementing it in three clinical domains (oncology, gastroenterology, and clinical pathology) at the University of Pittsburgh Medical Center.

  14. A Quality System Database

    Science.gov (United States)

    Snell, William H.; Turner, Anne M.; Gifford, Luther; Stites, William

    2010-01-01

    A quality system database (QSD), and software to administer the database, were developed to support recording of administrative nonconformance activities that involve requirements for documentation of corrective and/or preventive actions, which can include ISO 9000 internal quality audits and customer complaints.

  15. IMPLEMENTATION OF 2-PHASE COMMIT BY INTEGRATING THE DATABASE & THE FILE SYSTEM

    Directory of Open Access Journals (Sweden)

    V. Raghavendra Prasad

    2010-10-01

    A transaction is a series of data manipulation statements that must either fully complete or fully fail, leaving the system in a consistent state; transactions are the key to reliable software applications. In J2EE, business layer components access transactional resource managers such as an RDBMS or a messaging provider. From the database point of view, the resource managers coordinate with the transaction manager to perform the work, which is transparent to the developer; however, this is not possible with regard to an important resource, the file system. Moreover, a DBMS has the capacity to commit or roll back a transaction, but this is independent of the file system. In this paper, we integrate the two-phase commit protocol of the RDBMS with the file system by using Java. As Java IO does not provide transactional support, requiring developers to implement transaction support manually in their applications, this paper aims to develop a transaction-aware resource manager for the manipulation of files in Java.
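
    The record's implementation is in Java; the following Python sketch shows the same idea under stated assumptions: a file write is staged, the database work is committed, and the file is then published atomically or discarded. It is a simplification, not the paper's resource manager, and it still leaves a small window between database commit and file publication that a full two-phase commit protocol is designed to close.

    ```python
    # Make a file write behave like a transactional resource alongside a
    # database transaction: prepare the file, commit the DB, then publish.
    import os
    import sqlite3
    import tempfile

    def transactional_write(conn, sql, params, path, payload):
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        try:
            with os.fdopen(fd, "w") as f:
                f.write(payload)                 # phase 1: prepare the file
            with conn:                           # phase 2: commit the DB work
                conn.execute(sql, params)
            os.replace(tmp, path)                # publish the file atomically
        except Exception:
            if os.path.exists(tmp):
                os.remove(tmp)                   # "roll back" the file side
            raise

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE doc(name TEXT)")
    transactional_write(conn, "INSERT INTO doc VALUES (?)", ("report.txt",),
                        "report.txt", "hello")
    print(conn.execute("SELECT * FROM doc").fetchall(),
          os.path.exists("report.txt"))
    ```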

  16. The NCBI BioSystems database.

    Science.gov (United States)

    Geer, Lewis Y; Marchler-Bauer, Aron; Geer, Renata C; Han, Lianyi; He, Jane; He, Siqian; Liu, Chunlei; Shi, Wenyao; Bryant, Stephen H

    2010-01-01

    The NCBI BioSystems database, found at http://www.ncbi.nlm.nih.gov/biosystems/, centralizes and cross-links existing biological systems databases, increasing their utility and target audience by integrating their pathways and systems into NCBI resources. This integration allows users of NCBI's Entrez databases to quickly categorize proteins, genes and small molecules by metabolic pathway, disease state or other BioSystem type, without requiring time-consuming inference of biological relationships from the literature or multiple experimental datasets.

  17. [Research and development of medical case database: a novel medical case information system integrating with biospecimen management].

    Science.gov (United States)

    Pan, Shiyang; Mu, Yuan; Wang, Hong; Wang, Tong; Huang, Peijun; Ma, Jianfeng; Jiang, Li; Zhang, Jie; Gu, Bing; Yi, Lujiang

    2010-04-01

    To meet the needs of managing medical case information and biospecimens simultaneously, we developed a novel medical case information system integrated with biospecimen management. The database, established with MS SQL Server 2000, covers the basic information, clinical diagnosis, imaging diagnosis, pathological diagnosis and clinical treatment of patients; the physicochemical properties, inventory management and laboratory analysis of biospecimens; and user logs and data maintenance. The client application, developed with Visual C++ 6.0 and based on the Client/Server model, was used to implement medical case and biospecimen management. This system can perform input, browsing, querying and summarization of cases and related biospecimen information, and can automatically synthesize case records based on the database. The system supports not only long-term follow-up of individuals but also management of grouped cases organized according to the aim of the research. This system can improve the efficiency and quality of clinical research when biospecimens are used in a coordinated way. It realizes synthesized and dynamic management of medical cases and biospecimens, and may be considered a new management platform.

  18. Towards Sensor Database Systems

    DEFF Research Database (Denmark)

    Bonnet, Philippe; Gehrke, Johannes; Seshadri, Praveen

    2001-01-01

    Sensor networks are being widely deployed for measurement, detection and surveillance applications. In these new applications, users issue long-running queries over a combination of stored data and sensor data. Most existing applications rely on a centralized system for collecting sensor data. These systems lack flexibility because data is extracted in a predefined way; also, they do not scale to a large number of devices because large volumes of raw data are transferred regardless of the queries that are submitted. In our new concept of sensor database system, queries dictate which data is extracted from the sensors. In this paper, we define the concept of sensor databases mixing stored data represented as relations and sensor data represented as time series. Each long-running query formulated over a sensor database defines a persistent view, which is maintained during a given time interval. We…
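
    A toy rendering of the concept, with invented names: a long-running query over a stored relation joined with simulated sensor readings, maintained as a persistent view while the readings arrive.

    ```python
    # Stored relation (sensor metadata) joined with a sensor time series;
    # the query dictates which data is extracted from the devices.
    import random
    import time

    sensors = {"s1": {"room": "lab"}, "s2": {"room": "office"}}  # stored data

    def readings(n):                      # stand-in for a sensor time series
        for _ in range(n):
            sid = random.choice(list(sensors))
            yield {"sensor": sid, "temp": random.uniform(15, 30),
                   "ts": time.time()}

    # Persistent view: rooms currently above 25 degrees, maintained as
    # readings stream in during the query's time interval.
    hot_rooms = {}
    for r in readings(100):
        room = sensors[r["sensor"]]["room"]
        if r["temp"] > 25:
            hot_rooms[room] = r["ts"]
        else:
            hot_rooms.pop(room, None)
    print(hot_rooms)
    ```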

  19. Database and Expert Systems Applications

    DEFF Research Database (Denmark)

    Viborg Andersen, Kim; Debenham, John; Wagner, Roland

    … submissions. The papers are organized in topical sections on workflow automation, database queries, data classification and recommendation systems, information retrieval in multimedia databases, Web applications, implementational aspects of databases, multimedia databases, XML processing, security, XML schemata, query evaluation, semantic processing, information retrieval, temporal and spatial databases, querying XML, organisational aspects of databases, natural language processing, ontologies, Web data extraction, semantic Web, data stream management, data extraction, distributed database systems…

  20. Protocols for Integrity Constraint Checking in Federated Databases

    NARCIS (Netherlands)

    Grefen, Paul; Widom, Jennifer

    1997-01-01

    A federated database is comprised of multiple interconnected database systems that primarily operate independently but cooperate to a certain extent. Global integrity constraints can be very useful in federated databases, but the lack of global queries, global transaction mechanisms, and global concurrency control…
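
    A hedged sketch of one such protocol idea, with an invented schema: the remote site is consulted before the local update, which avoids a global transaction but, as the paper's setting implies, does not by itself guard against concurrent remote changes.

    ```python
    # Checking a global referential constraint across two autonomous
    # databases without a global transaction: query each site separately.
    import sqlite3

    site_a = sqlite3.connect(":memory:")   # holds orders
    site_b = sqlite3.connect(":memory:")   # holds customers, autonomous
    site_a.execute("CREATE TABLE orders(id INTEGER, customer_id INTEGER)")
    site_b.execute("CREATE TABLE customer(id INTEGER PRIMARY KEY)")
    site_b.execute("INSERT INTO customer VALUES (7)")

    def insert_order(order_id, customer_id):
        # Protocol step 1: check the referenced key at the remote site.
        found = site_b.execute("SELECT 1 FROM customer WHERE id = ?",
                               (customer_id,)).fetchone()
        if not found:
            raise ValueError("global referential constraint would be violated")
        # Step 2: perform the local update only after the remote check passes.
        with site_a:
            site_a.execute("INSERT INTO orders VALUES (?, ?)",
                           (order_id, customer_id))

    insert_order(1, 7)
    print(site_a.execute("SELECT * FROM orders").fetchall())
    ```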

  1. Security Issues in Distributed Database System Model

    Directory of Open Access Journals (Sweden)

    MD.TABREZ QUASIM

    2013-12-01

    This paper reviews the most common as well as emerging security mechanisms used in distributed database systems. As distributed databases have become more popular, the need for improvement in distributed database management systems has become even more important. The most important issue is security that may arise and possibly compromise the access control and the integrity of the system. In this paper, we propose solutions for several security aspects such as multi-level access control, confidentiality, reliability, integrity and recovery that pertain to a distributed database system.

  2. An automated system for terrain database construction

    Science.gov (United States)

    Johnson, L. F.; Fretz, R. K.; Logan, T. L.; Bryant, N. A.

    1987-01-01

    An automated Terrain Database Preparation System (TDPS) for the construction and editing of terrain databases used in computerized wargaming simulation exercises has been developed. The TDPS system operates under the TAE executive, and it integrates VICAR/IBIS image processing and Geographic Information System software with CAD/CAM data capture and editing capabilities. The terrain database includes such features as roads, rivers, vegetation, and terrain roughness.

  3. Security Issues in Distributed Database System Model

    OpenAIRE

    MD.TABREZ QUASIM

    2013-01-01

    This paper reviews the most common as well as emerging security mechanisms used in distributed database systems. As distributed databases have become more popular, the need for improvement in distributed database management systems has become even more important. The most important issue is security that may arise and possibly compromise the access control and the integrity of the system. In this paper, we propose solutions for several security aspects such as multi-level access control, ...

  4. Database systems for knowledge-based discovery.

    Science.gov (United States)

    Jagarlapudi, Sarma A R P; Kishan, K V Radha

    2009-01-01

    Several database systems have been developed to provide valuable information, in a structured format, to users ranging from the bench chemist and biologist to the medical practitioner and pharmaceutical scientist. The advent of information technology and computational power enhanced the ability to access large volumes of data in the form of a database, where one can do compilation, searching, archiving, analysis, and finally knowledge derivation. Although data are of variable types, the tools used for database creation, searching and retrieval are similar. GVK BIO has been developing databases from publicly available scientific literature in specific areas like medicinal chemistry, clinical research, and mechanism-based toxicity, so that the structured databases containing vast data can be used in several areas of research. These databases are classified as reference-centric or compound-centric depending on the way the database systems are designed. Integration of these databases with knowledge derivation tools would enhance their value toward better drug design and discovery.

  5. Issues in Big-Data Database Systems

    Science.gov (United States)

    2014-06-01

    … that big data will not be manageable using conventional relational database technology, and it is true that alternative paradigms, such as NoSQL systems and search engines, have much to offer … scale well, and because integration with external data sources is so difficult. NoSQL systems are more open to this integration, and provide excellent …

  6. Prototype of an Integrated Hurricane Information System for Research: Design and Implementation of the Database and Web Portal

    Science.gov (United States)

    Li, P. P.; Knosp, B.; Vu, Q. A.; Hristova-Veleva, S.; Chao, Y.; Vane, D.; Lambrigtsen, B.; Su, H.; Dang, V.; Fovell, R.; Willis, J.; Tanelli, S.; Fishbein, E.; Ao, C. O.; Poulsen, W. L.; Park, K. J.; Fetzer, E.; Vazquez, J.; Callahan, P. S.; Marcus, S.; Garay, M.; Kahn, R.; Haddad, Z.

    2007-12-01

    … location placemark to see the time, location, and intensity of the hurricane. Large-scale datasets, such as SST or aerosol optical depth, can be overlaid on top of the hurricane track in Google Maps. In addition, available satellite and in-situ data during the hurricane period are displayed as small bars in a timeline organized by dataset. When clicking a bar, pre-generated plots for the selected dataset are displayed in a separate window together with all other datasets co-located around the same time. The raw data can be downloaded in a user-specified format for further analysis or model integration. As for the 3D model data, the Live Access Server (LAS) is used to provide custom subsets and on-the-fly visualization. The site is dynamically configured using a backend relational database that is designed to let users easily browse through the website to find data and plots pertinent to their research. In this presentation, we will describe the current status of the integrated hurricane information system prototype, the design and implementation of the hurricane database and portal, and future enhancements.

  7. Integration of Biodiversity Databases in Taiwan and Linkage to Global Databases

    Directory of Open Access Journals (Sweden)

    Kwang-Tsao Shao

    2007-03-01

    The biodiversity databases in Taiwan were dispersed among various institutions and colleges, with limited amounts of data, by 2001. The Natural Resources and Ecology GIS Database sponsored by the Council of Agriculture, which is part of the National Geographic Information System planned by the Ministry of the Interior, was the most well established biodiversity database in Taiwan. This database, however, mainly collected distribution data for terrestrial animals and plants within the Taiwan area. In 2001, GBIF was formed, and Taiwan joined as an Associate Participant, starting the establishment and integration of animal and plant species databases; TaiBIF was therefore able to co-operate with GBIF. Information on the Catalog of Life, specimens, and alien species was integrated using the Darwin Core standard. These metadata standards allowed the biodiversity information of Taiwan to connect with global databases.

  8. A development and integration of database code-system with a compilation of comparator, k0 and absolute methods for INAA using microsoft access

    Science.gov (United States)

    Hoh, Siew Sin; Rapie, Nurul Nadiah; Lim, Edwin Suh Wen; Tan, Chun Yuan; Yavar, Alireza; Sarmani, Sukiman; Majid, Amran Ab.; Khoo, Kok Siong

    2013-05-01

    Instrumental Neutron Activation Analysis (INAA) is often used to determine and calculate the elemental concentrations of a sample at the National University of Malaysia (UKM), typically in the Nuclear Science Programme, Faculty of Science and Technology. The objective of this study was to develop a database code-system based on Microsoft Access 2010 which could help INAA users to choose either the comparator method, the k0-method or the absolute method for calculating the elemental concentrations of a sample. This study also integrated k0 data, Com-INAA, k0Concent, k0-Westcott and Abs-INAA to execute and complete the ECC-UKM database code-system. After the integration, a study was conducted to test the effectiveness of the ECC-UKM database code-system by comparing the concentrations between the experiments and the code-systems. 'Triple Bare Monitor' Zr-Au and Cr-Mo-Au were used in the k0Concent, k0-Westcott and Abs-INAA code-systems as monitors to determine the thermal to epithermal neutron flux ratio (f). Calculations involved in determining the concentration were the net peak area (Np), measurement time (tm), irradiation time (tirr), k-factor (k), thermal to epithermal neutron flux ratio (f), the epithermal neutron flux distribution parameter (α) and the detection efficiency (ɛp). For the Com-INAA code-system, the certified reference material IAEA-375 Soil was used to calculate the concentrations of elements in a sample. Other CRMs and SRMs were also used in this database code-system. Later, a verification process to examine the effectiveness of the Abs-INAA code-system was carried out by comparing the sample concentrations between the code-system and the experiment. The concentration values obtained with the ECC-UKM database code-system showed good accuracy.
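
    For orientation, the comparator (relative) method reduces to a ratio of decay-corrected specific count rates of sample and standard. The Python sketch below is a simplified, hedged rendering (same irradiation and measurement time assumed for sample and standard, all numbers invented) and is not the ECC-UKM code-system itself.

    ```python
    # Relative INAA: C_sample = C_std * (A_sample / A_std), where A is the
    # decay-corrected specific count rate Np / (D * C * m), with cooling
    # factor D = exp(-lambda*t_d) and counting factor
    # C = (1 - exp(-lambda*t_m)) / lambda.
    import math

    def decay_corrected_rate(net_area, t_measure, t_decay, half_life):
        """Count rate corrected for decay during cooling and measurement."""
        lam = math.log(2) / half_life
        decay = math.exp(-lam * t_decay)                 # cooling factor D
        count = (1 - math.exp(-lam * t_measure)) / lam   # counting factor C
        return net_area / (decay * count)

    def concentration_comparator(conc_std, np_sam, m_sam, np_std, m_std,
                                 t_m, t_d_sam, t_d_std, half_life):
        a_sam = decay_corrected_rate(np_sam, t_m, t_d_sam, half_life) / m_sam
        a_std = decay_corrected_rate(np_std, t_m, t_d_std, half_life) / m_std
        return conc_std * a_sam / a_std

    # Example: a 50 mg/kg standard; the sample cooled longer than the standard.
    print(concentration_comparator(conc_std=50.0, np_sam=12000, m_sam=0.10,
                                   np_std=15000, m_std=0.10, t_m=3600,
                                   t_d_sam=7200, t_d_std=3600,
                                   half_life=9000))
    ```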

  9. An organic database system

    NARCIS (Netherlands)

    M.L. Kersten (Martin); A.P.J.M. Siebes (Arno)

    1999-01-01

    The pervasive penetration of database technology may suggest that we have reached the end of the database research era. The contrary is true. Emerging technology, in hardware, software, and connectivity, brings a wealth of opportunities to push technology to a new level of maturity.

  10. Cloud Database Management System (CDBMS

    Directory of Open Access Journals (Sweden)

    Snehal B. Shende

    2015-10-01

    A cloud database management system is a distributed database that delivers computing as a service. It is the sharing of web infrastructure for resources, software and information over a network. The cloud is used as a storage location, and the database can be accessed and computed from anywhere. The large number of web applications makes use of distributed storage solutions in order to scale up. It enables users to outsource resources and services to third-party servers. This paper covers the recent trend in cloud services based on database management systems and offering them as one of the services in the cloud. The advantages and disadvantages of database as a service will let you decide whether to use database as a service or not. This paper also highlights the architecture of a cloud based on database management systems.

  11. Integrating Variances into an Analytical Database

    Science.gov (United States)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  12. Integrity Checking and Maintenance with Active Rules in XML Databases

    DEFF Research Database (Denmark)

    Christiansen, Henning; Rekouts, Maria

    2007-01-01

    … for the purpose are still rather untested in XML databases. We present the first steps towards a methodology for design and verification of triggers that maintain integrity in XML databases. Starting from a specification of the integrity constraints plus a collection of XPath expressions describing the possible updates, the method indicates trigger conditions and correctness criteria to be met by the trigger code supplied by a developer or possibly automatic methods. We show examples developed in the Sedna XML database system, which provides a running implementation of XML triggers.

  13. THE INTEGRATED SPATIAL DATABASES OF GEOSTAR

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    GeoStar is the registered trademark of a GIS software package made by WTUSM in China. By means of GeoStar, multi-scale images, DEMs, graphics and attributes integrated in very large seamless databases can be created, and multi-dimensional dynamic visualization and information extraction are also available. This paper describes the fundamental characteristics of such huge integrated databases, for instance, the data models, database structures and spatial index strategies. Finally, typical applications of GeoStar in a few pilot projects, such as the Shanghai CyberCity and the Guangdong provincial spatial data infrastructure (SDI), are illustrated and several concluding remarks are stressed.

  14. Concurrency control in distributed database systems

    CERN Document Server

    Cellary, W; Gelenbe, E

    1989-01-01

    Distributed Database Systems (DDBS) may be defined as integrated database systems composed of autonomous local databases, geographically distributed and interconnected by a computer network. The purpose of this monograph is to present DDBS concurrency control algorithms and their related performance issues. The most recent results have been taken into consideration. A detailed analysis and selection of these results has been made so as to include those which will promote applications and progress in the field. The application of the methods and algorithms presented is not limited to DDBSs but a…

  15. Application of Integrated Database to the Casting Design

    Institute of Scientific and Technical Information of China (English)

    In-Sung Cho; Seung-Mok Yoo; Chae-Ho Lim; Jeong-Kil Choi

    2008-01-01

    The construction of an integrated database including casting shapes with their casting designs, technical knowledge, and the thermophysical properties of casting alloys is introduced in the present study. A recognition technique for casting design by industrial computed tomography was used for the construction of the shape database. Technical knowledge of casting processes, such as ferrous and non-ferrous alloys and the manufacturing processes of castings, was accumulated, and a search engine for this knowledge was developed. The database of thermophysical properties of casting alloys was obtained via experimental study, and the properties were used for the in-house computer simulation of the casting process. The databases were linked with the intelligent casting expert system developed at the center for e-design, KITECH. It is expected that the databases can help non-casting experts to devise castings and their processes. Various examples of applications using the databases are shown in the present study.

  16. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

    Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs). The book first takes a look at ANSI database standards and DBMS applications and components. Discussion focuses on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, the nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders on logical database, interrogation, and phy

  17. Design and Application of an Integrated Database for a Large Remote Sensing Processing System

    Institute of Scientific and Technical Information of China (English)

    李军; 刘高焕; 迟耀斌; 朱重光

    2001-01-01

    In large remote sensing image processing and application systems, various background and thematic data often must be acquired in real time; this acquisition is a process of dynamic data integration. The integrated database is a framework for the integrated use of data, built on top of various thematic databases. In practice it is very difficult to integrate different kinds of data into one database managed by commercial GIS or image processing software such as ARC/INFO or ERDAS. This paper describes an integrated database management system that is a framework over different kinds of databases: an image database, vector spatial database, spatial entity spectrum characteristics database, spatial entity image sample database, control point (tics) database, documents database, models database, and product database. The querying and retrieval system, the basic function of the integrated database management system, depends on metadata organized in three parts: database metadata, dataset metadata, and attribute field metadata. Finally, the paper introduces the concept of a virtual database, a logical database built on other physical databases, and describes in detail its structure and its application, with metadata as the linking chain, in a product-making system for a large remote sensing application.

  18. Human Exposure Database System (HEDS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Human Exposure Database System (HEDS) provides public access to data sets, documents, and metadata from EPA on human exposure. It is primarily intended for...

  19. The ATLAS Distributed Data Management System & Databases

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Barisits, M; Beermann, T; Vigne, R; Serfon, C

    2013-01-01

    The ATLAS Distributed Data Management (DDM) System is responsible for the global management of petabytes of high energy physics data. The current system, DQ2, has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle. RDBMS are well-suited to enforcing data integrity in online transaction processing applications, however, concerns have been raised about the scalability of its data warehouse-like workload. In particular, analysis of archived data or aggregation of transactional data for summary purposes is problematic. Therefore, we have evaluated new approaches to handle vast amounts of data. We have investigated a class of database technologies commonly referred to as NoSQL databases. This includes distributed filesystems, like HDFS, that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value stores, like HBase. In this talk we will describe our use cases in ATLAS, share our experiences with various databases used ...

  20. The magnet components database system

    Energy Technology Data Exchange (ETDEWEB)

    Baggett, M.J. (Brookhaven National Lab., Upton, NY (USA)); Leedy, R.; Saltmarsh, C.; Tompkins, J.C. (Superconducting Supercollider Lab., Dallas, TX (USA))

    1990-01-01

    The philosophy, structure, and usage of MagCom, the SSC magnet components database, are described. The database has been implemented in Sybase (a powerful relational database management system) on a UNIX-based workstation at the Superconducting Super Collider Laboratory (SSCL); magnet project collaborators can access the database via network connections. The database was designed to contain the specifications and measured values of important properties for major materials, plus configuration information (specifying which individual items were used in each cable, coil, and magnet) and the test results on completed magnets. These data will facilitate the tracking and control of the production process as well as the correlation of magnet performance with the properties of its constituents. 3 refs., 10 figs.
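    A minimal sqlite3 sketch of the configuration-tracking idea described above (the schema, identifiers, and numbers are hypothetical; MagCom's actual Sybase schema is not given in the abstract): record which cable lot went into which magnet, then correlate test results with constituent properties in a single join.

      import sqlite3

      con = sqlite3.connect(":memory:")
      con.executescript("""
      CREATE TABLE cable  (cable_id TEXT PRIMARY KEY, critical_current_a REAL);
      CREATE TABLE magnet (magnet_id TEXT PRIMARY KEY, cable_id TEXT REFERENCES cable);
      CREATE TABLE test   (magnet_id TEXT REFERENCES magnet, quench_field_t REAL);
      INSERT INTO cable  VALUES ('CBL-001', 10500), ('CBL-002', 9800);
      INSERT INTO magnet VALUES ('DM-01', 'CBL-001'), ('DM-02', 'CBL-002');
      INSERT INTO test   VALUES ('DM-01', 6.9), ('DM-02', 6.4);
      """)
      # Correlate magnet performance with the properties of its constituents.
      for row in con.execute("""
          SELECT m.magnet_id, c.critical_current_a, t.quench_field_t
          FROM magnet m JOIN cable c USING (cable_id)
                        JOIN test  t USING (magnet_id)"""):
          print(row)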

  1. Construction of an integrated database to support genomic sequence analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gilbert, W.; Overbeek, R.

    1994-11-01

    The central goal of this project is to develop an integrated database to support comparative analysis of genomes including DNA sequence data, protein sequence data, gene expression data and metabolism data. In developing the logic-based system GenoBase, a broader integration of available data was achieved due to assistance from collaborators. Current goals are to easily include new forms of data as they become available and to easily navigate through the ensemble of objects described within the database. This report comments on progress made in these areas.

  2. Shielding integral benchmark archive and database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, B.L.; Grove, R.E. [Radiation Safety Information Computational Center RSICC, Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831-6171 (United States); Kodeli, I. [Josef Stefan Inst., Jamova 39, 1000 Ljubljana (Slovenia); Gulliford, J.; Sartori, E. [OECD NEA Data Bank, Bd des Iles, 92130 Issy-les-Moulineaux (France)

    2011-07-01

    The shielding integral benchmark archive and database (SINBAD) collection of experiment descriptions was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD was designed to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD can serve as a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator experiments. Many experiments are described and analyzed using deterministic or stochastic (Monte Carlo) radiation transport software. The nuclear cross sections also play an important role as they are necessary in performing computational analysis. (authors)

  3. Shielding Integral Benchmark Archive and Database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, Bernadette Lugue [ORNL; Grove, Robert E [ORNL; Kodeli, I. [International Atomic Energy Agency (IAEA); Sartori, Enrico [ORNL; Gulliford, J. [OECD Nuclear Energy Agency

    2011-01-01

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  4. DENdb: database of integrated human enhancers.

    Science.gov (United States)

    Ashoor, Haitham; Kleftogiannis, Dimitrios; Radovanovic, Aleksandar; Bajic, Vladimir B

    2015-01-01

    Enhancers are cis-acting DNA regulatory regions that play a key role in distal control of transcriptional activities. Identification of enhancers, coupled with a comprehensive functional analysis of their properties, could improve our understanding of complex gene transcription mechanisms and gene regulation processes in general. We developed DENdb, a centralized on-line repository of predicted enhancers derived from multiple human cell-lines. DENdb integrates enhancers predicted by five different methods generating an enriched catalogue of putative enhancers for each of the analysed cell-lines. DENdb provides information about the overlap of enhancers with DNase I hypersensitive regions, ChIP-seq regions of a number of transcription factors and transcription factor binding motifs, means to explore enhancer interactions with DNA using several chromatin interaction assays and enhancer neighbouring genes. DENdb is designed as a relational database that facilitates fast and efficient searching, browsing and visualization of information. Database URL: http://www.cbrc.kaust.edu.sa/dendb/.

  5. Integrated Space Asset Management Database and Modeling

    Science.gov (United States)

    Gagliano, L.; MacLeod, T.; Mason, S.; Percy, T.; Prescott, J.

    The Space Asset Management Database (SAM-D) was implemented in order to effectively track known objects in space by ingesting information from a variety of databases and performing calculations to determine the expected position of an object at a specified time. While SAM-D performs this task very well, it is limited by technology and is not available outside of the local user base. Modeling and simulation can be powerful tools to exploit the information contained in SAM-D. However, the current system does not allow proper integration options for combining the data with both legacy and new M&S tools. A more capable data management infrastructure would extend SAM-D to support the larger data sets to be generated by the COI. A service-oriented architecture model will allow it to expand easily to incorporate new capabilities, including advanced analytics, M&S tools, fusion techniques, and user interfaces for visualizations. Based on a web-centric approach, the entire COI will be able to access the data and related analytics. In addition, tight control of information-sharing policy will increase confidence in the system, which would encourage industry partners to provide commercial data. SIMON is a government off-the-shelf information-sharing platform in use throughout the DoD and DHS information sharing and situation awareness communities. SIMON provides fine-grained control to data owners, allowing them to determine exactly how and when their data are shared. SIMON supports a micro-service approach to system development, meaning M&S and analytic services can be easily built or adapted. It is uniquely positioned to fill this need as an information-sharing platform with a proven track record of successful situational awareness system deployments. Combined with the integration of new and legacy M&S tools, a SIMON-based architecture will provide a robust SA environment for the NASA SA COI that can be extended and expanded indefinitely.

  6. On Simplification of Database Integrity Constraints

    DEFF Research Database (Denmark)

    Christiansen, Henning; Martinenghi, Davide

    2006-01-01

    … and the present paper is an attempt to fill this gap. On the theoretical side, a general characterization is introduced of the problem of simplification of integrity constraints, and a natural definition is given of what it means for a simplification procedure to be ideal. We prove that ideality of simplification is strictly related to query containment; in fact, an ideal simplification procedure can only exist in database languages for which query containment is decidable. However, simplifications that do not qualify as ideal may also be relevant for practical purposes. We present a concrete approach based …
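    As a rough illustration of what simplification buys (an invented example, not the paper's procedure), the Python/SQLite sketch below contrasts re-evaluating a whole-table constraint with the simplified test induced by a single insert; the schema and the salary-cap constraint are hypothetical:

      import sqlite3

      con = sqlite3.connect(":memory:")
      con.execute("CREATE TABLE emp (name TEXT, salary REAL)")
      con.executemany("INSERT INTO emp VALUES (?,?)", [("ann", 50.0), ("bob", 60.0)])
      CAP = 100.0  # integrity constraint: every salary stays below CAP

      def naive_check(con):
          # Full re-evaluation: scans the whole table after every update.
          return con.execute("SELECT COUNT(*) FROM emp WHERE salary >= ?",
                             (CAP,)).fetchone()[0] == 0

      def insert_simplified(con, name, salary):
          # Simplified test: only the incoming tuple can introduce a
          # violation, so one comparison replaces a table scan.
          if salary >= CAP:
              raise ValueError("insert would violate salary < CAP")
          con.execute("INSERT INTO emp VALUES (?,?)", (name, salary))

      insert_simplified(con, "eve", 70.0)
      print(naive_check(con))  # True: the constraint still holds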

  7. A web-based audiometry database system.

    Science.gov (United States)

    Yeh, Chung-Hui; Wei, Sung-Tai; Chen, Tsung-Wen; Wang, Ching-Yuang; Tsai, Ming-Hsui; Lin, Chia-Der

    2014-07-01

    To establish a real-time, web-based, customized audiometry database system, we worked in cooperation with the departments of medical records, information technology, and otorhinolaryngology at our hospital. This system includes an audiometry data entry system, a retrieval and display system, a patient information incorporation system, an audiometry data transmission program, and audiometry data integration. Compared with commercial audiometry systems and traditional hand-drawn audiometry data, this web-based system saves time and money and is convenient for statistical research.

  8. Tiered Human Integrated Sequence Search Databases for Shotgun Proteomics.

    Science.gov (United States)

    Deutsch, Eric W; Sun, Zhi; Campbell, David S; Binz, Pierre-Alain; Farrah, Terry; Shteynberg, David; Mendoza, Luis; Omenn, Gilbert S; Moritz, Robert L

    2016-11-04

    The results of analysis of shotgun proteomics mass spectrometry data can be greatly affected by the selection of the reference protein sequence database against which the spectra are matched. For many species there are multiple sources from which somewhat different sequence sets can be obtained. This can lead to confusion about which database is best in which circumstances, a problem especially acute in human sample analysis. All sequence databases are genome-based, with sequences for the predicted genes and their protein translation products compiled. Our goal is to create a set of primary sequence databases that comprise the union of sequences from many of the different available sources and make the result easily available to the community. We have compiled a set of four sequence databases of varying sizes, from a small database consisting of only the ∼20,000 primary isoforms plus contaminants to a very large database that includes almost all nonredundant protein sequences from several sources. This set of tiered, increasingly complete human protein sequence databases suitable for mass spectrometry proteomics sequence database searching is called the Tiered Human Integrated Search Proteome set. In order to evaluate the utility of these databases, we have analyzed two different data sets, one from the HeLa cell line and the other from normal human liver tissue, with each of the four tiers of database complexity. The result is that approximately 0.8%, 1.1%, and 1.5% additional peptides can be identified for Tiers 2, 3, and 4, respectively, as compared with the Tier 1 database, at substantially increasing computational cost. This increase in computational cost may be worth bearing if the identification of sequence variants or the discovery of sequences that are not present in the reviewed knowledge base entries is an important goal of the study. We find that it is useful to search a data set against a simpler database, and then check the uniqueness of the

  9. Jelly Views : Extending Relational Database Systems Toward Deductive Database Systems

    Directory of Open Access Journals (Sweden)

    Igor Wojnicki

    2004-01-01

    Full Text Available This paper presents the Jelly View technology, which provides a new, practical methodology for knowledge decomposition, storage, and retrieval within Relational Database Management Systems (RDBMS). Intensional Knowledge clauses (rules) are decomposed and stored in the RDBMS, forming reusable components. The results of the rule-based processing are visible as regular views, accessible through SQL. From the end-user point of view the processing capability becomes unlimited (arbitrarily complex queries can be constructed using Intensional Knowledge), while the most external queries are expressed in standard SQL. The RDBMS functionality thus becomes extended toward that of deductive databases.
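    A hedged analogue of this idea in plain SQL (illustrative names only, not Jelly View syntax): an intensional rule, ancestor as the transitive closure of parent, is stored as a recursive view and then queried with ordinary SQL, much as rule results surface as regular views above.

      import sqlite3

      con = sqlite3.connect(":memory:")
      con.executescript("""
      CREATE TABLE parent (child TEXT, parent TEXT);
      INSERT INTO parent VALUES ('c', 'b'), ('b', 'a');
      -- The deductive rule ancestor(X,Y) :- parent(X,Y).
      --                    ancestor(X,Y) :- parent(X,Z), ancestor(Z,Y).
      CREATE VIEW ancestor AS
        WITH RECURSIVE anc(child, anc) AS (
          SELECT child, parent FROM parent
          UNION
          SELECT p.child, a.anc FROM parent p JOIN anc a ON p.parent = a.child
        )
        SELECT child, anc FROM anc;
      """)
      # The "most external" query is standard SQL against the view.
      print(con.execute("SELECT * FROM ancestor ORDER BY child, anc").fetchall())
      # -> [('b', 'a'), ('c', 'a'), ('c', 'b')]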

  10. A relational database application in support of integrated neuroscience research.

    Science.gov (United States)

    Rudowsky, Ira; Kulyba, Olga; Kunin, Mikhail; Ogarodnikov, Dmitri; Raphan, Theodore

    2004-12-01

    The development of relational databases has significantly improved the performance of storage, search, and retrieval functions and has made it possible for applications that perform real-time data acquisition and analysis to interact with these types of databases. The purpose of this research was to develop a user interface for interaction between a data acquisition and analysis application and a relational database using the Oracle9i system. The overall system was designed to have an indexing capability that threads into the data acquisition and analysis programs. Tables were designed and relations within the database for indexing the files and information contained within the files were established. The system provides retrieval capabilities over a broad range of media, including analog, event, and video data types. The system's ability to interact with a data capturing program at the time of the experiment to create both multimedia files as well as the meta-data entries in the relational database avoids manual entries in the database and ensures data integrity and completeness for further interaction with the data by analysis applications.

  11. SPIRE Data-Base Management System

    Science.gov (United States)

    Fuechsel, C. F.

    1984-01-01

    The Spacelab Payload Integration and Rocket Experiment (SPIRE) data-base management system (DBMS) is based on the relational model of data bases. The data bases are typically used for engineering and mission analysis tasks and, unlike most commercially available systems, allow data items and data structures to be stored in forms suitable for direct analytical computation. The SPIRE DBMS is designed to support data requests from interactive users as well as applications programs.

  12. A database of immunoglobulins with integrated tools: DIGIT.

    KAUST Repository

    Chailyan, Anna

    2011-11-10

    The DIGIT (Database of ImmunoGlobulins with Integrated Tools) database (http://biocomputing.it/digit) is an integrated resource storing sequences of annotated immunoglobulin variable domains and enriched with tools for searching and analyzing them. The annotations in the database include information on the type of antigen, the respective germline sequences and on pairing information between light and heavy chains. Other annotations, such as the identification of the complementarity determining regions, assignment of their structural class and identification of mutations with respect to the germline, are computed on the fly and can also be obtained for user-submitted sequences. The system allows customized BLAST searches and automatic building of 3D models of the domains to be performed.

  13. Comparison of object and relational database systems

    OpenAIRE

    GEYER, Jakub

    2012-01-01

    This thesis focuses on the issue of a convenient choice of database platforms. The key features of the object database systems and the relational database systems are mutually compared and tested on concrete representative samples of each individual platform.

  14. DENdb: database of integrated human enhancers

    KAUST Repository

    Ashoor, Haitham

    2015-09-05

    Enhancers are cis-acting DNA regulatory regions that play a key role in distal control of transcriptional activities. Identification of enhancers, coupled with a comprehensive functional analysis of their properties, could improve our understanding of complex gene transcription mechanisms and gene regulation processes in general. We developed DENdb, a centralized on-line repository of predicted enhancers derived from multiple human cell-lines. DENdb integrates enhancers predicted by five different methods generating an enriched catalogue of putative enhancers for each of the analysed cell-lines. DENdb provides information about the overlap of enhancers with DNase I hypersensitive regions, ChIP-seq regions of a number of transcription factors and transcription factor binding motifs, means to explore enhancer interactions with DNA using several chromatin interaction assays and enhancer neighbouring genes. DENdb is designed as a relational database that facilitates fast and efficient searching, browsing and visualization of information.

  15. Integrated List - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data name: Integrated List. DOI: 10.18908/lsdba.nbdc00114-005. This record describes the Integrated List data of the JSNP database, archived in the LSDB Archive.

  16. Embedded Systems Programming: Accessing Databases from Esterel

    Directory of Open Access Journals (Sweden)

    White David

    2008-01-01

    Full Text Available A current limitation in embedded controller design and programming is the lack of database support in development tools such as Esterel Studio. This article proposes a way of integrating databases and Esterel by providing two application programming interfaces (APIs) which enable the use of relational databases inside Esterel programs. As databases and Esterel programs are often executed on different machines, result sets returned as responses to database queries may be processed either locally and according to Esterel's synchrony hypothesis, or remotely along several of Esterel's execution cycles. These different scenarios are reflected in the design and usage rules of the two APIs presented in this article, which rely on Esterel's facilities for extending the language by external data types, external functions, and procedures, as well as tasks. The APIs' utility is demonstrated by means of a case study modelling an automated warehouse storage system, which is constructed using Lego Mindstorms robotics kits. The robot's controller is programmed in Esterel in a way that takes dynamic ordering information and the warehouse's floor layout into account, both of which are stored in a MySQL database.
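    A language-neutral sketch of the two processing patterns described above, with Python standing in for Esterel and an invented orders table (this is not the article's API): a result set is either consumed entirely within one reaction, or held in a cursor and drained one row per execution cycle, as a controller with bounded cycle time would.

      import sqlite3

      con = sqlite3.connect(":memory:")
      con.execute("CREATE TABLE orders (item TEXT)")
      con.executemany("INSERT INTO orders VALUES (?)",
                      [("bolt",), ("nut",), ("gear",)])

      # Pattern 1: synchronous - the whole result set is consumed in one cycle.
      print(con.execute("SELECT item FROM orders").fetchall())

      # Pattern 2: multi-cycle - the cursor survives across cycles; each
      # reaction consumes at most one row.
      cursor = con.execute("SELECT item FROM orders")
      for cycle in range(4):
          row = cursor.fetchone()
          print("cycle", cycle, "->", row)  # None once the result set is drained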

  17. Embedded Systems Programming: Accessing Databases from Esterel

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available A current limitation in embedded controller design and programming is the lack of database support in development tools such as Esterel Studio. This article proposes a way of integrating databases and Esterel by providing two application programming interfaces (APIs) which enable the use of relational databases inside Esterel programs. As databases and Esterel programs are often executed on different machines, result sets returned as responses to database queries may be processed either locally and according to Esterel’s synchrony hypothesis, or remotely along several of Esterel’s execution cycles. These different scenarios are reflected in the design and usage rules of the two APIs presented in this article, which rely on Esterel’s facilities for extending the language by external data types, external functions, and procedures, as well as tasks. The APIs’ utility is demonstrated by means of a case study modelling an automated warehouse storage system, which is constructed using Lego Mindstorms robotics kits. The robot’s controller is programmed in Esterel in a way that takes dynamic ordering information and the warehouse’s floor layout into account, both of which are stored in a MySQL database.

  18. The CMS Condition Database system

    CERN Document Server

    Govi, Giacomo Maria; Ojeda-Sandonis, Miguel; Pfeiffer, Andreas; Sipos, Roland

    2015-01-01

    The Condition Database plays a key role in the CMS computing infrastructure. The complexity of the detector and the variety of the sub-systems involved set tight requirements for handling the conditions. In the last two years the collaboration has put effort into the re-design of the Condition Database system, with the aim of improving the scalability and the operability for the data taking starting in 2015. The re-design has focused on simplifying the architecture, using the lessons learned during the operation of the previous data-taking period. In the new system the relational features of the database schema are mainly exploited to handle the metadata (Tag and Interval of Validity), allowing for a limited and controlled set of queries. The bulk condition data (Payloads) are stored as unstructured binary data, allowing storage in a single table with a common layout for all of the condition data types. In this presentation, we describe the full architecture of the system, including the serv...
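    As a rough toy model of this layout (the real CMS schema is not reproduced in the abstract; all names and values below are illustrative), the sqlite3 sketch keeps Tag and Interval-of-Validity metadata relational while storing every payload type as an opaque blob in one common table:

      import sqlite3, pickle

      con = sqlite3.connect(":memory:")
      con.executescript("""
      CREATE TABLE tag     (name TEXT PRIMARY KEY, payload_type TEXT);
      CREATE TABLE iov     (tag TEXT REFERENCES tag, since INTEGER, payload_hash TEXT);
      CREATE TABLE payload (hash TEXT PRIMARY KEY, data BLOB);
      """)
      blob = pickle.dumps({"pedestal": 2.5})  # any condition object, serialized
      con.execute("INSERT INTO payload VALUES ('h1', ?)", (blob,))
      con.execute("INSERT INTO tag VALUES ('EcalPedestals_v1', 'dict')")
      con.execute("INSERT INTO iov VALUES ('EcalPedestals_v1', 100, 'h1')")

      # One of the limited, controlled queries: the latest payload whose
      # interval of validity covers a given tag and run number.
      row = con.execute(
          """SELECT p.data FROM iov i JOIN payload p ON p.hash = i.payload_hash
             WHERE i.tag = ? AND i.since <= ? ORDER BY i.since DESC LIMIT 1""",
          ("EcalPedestals_v1", 150)).fetchone()
      print(pickle.loads(row[0]))  # -> {'pedestal': 2.5}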

  19. ERPDB: An Integrated Database of ERP Data for Neuroinformatics Research

    Directory of Open Access Journals (Sweden)

    QingHong Yan

    2007-10-01

    Full Text Available Event-related potential (ERP) is the measurement of the brain's electrical activity in response to different types of events, such as attention, words, thinking, or sounds. By measuring the brain's response to such events, we can learn how different types of information are processed. As the mass of recorded ERP data explodes, an automatic and accurate tool to store, manage, and retrieve data readily is of increasing concern in neuroinformatics. In this paper, we describe a relational ERP database that has been constructed using the SQL Server 2000 database management system and an IIS web server that has been set up for data retrieval through a custom web interface (http://202.113.232.103:8088/erpdb/index.asp). A novel database structure has been used to store ERP data of different activity channels, which provides a rapid and accurate way for data retrieval within any given range on the time axis with various searching options. The database is divided into: (1) subjects' information and record information and (2) ERP data, which has been structured and standardized in a database table supplemented with unrestricted text files. It can integrate or exchange data with other clinical databases or computer-based information systems through a program based on ADO techniques. Users are able to readily retrieve ERP data through the user-friendly web page interface. All online resources of the database are freely available to the scientific community. As the database develops further, we anticipate it will become a valuable tool that will make a great contribution to everyday clinical practice, teaching, and research work in neuroscience and psychology in the future.

  20. ISPIDER Central: an integrated database web-server for proteomics.

    Science.gov (United States)

    Siepen, Jennifer A; Belhajjame, Khalid; Selley, Julian N; Embury, Suzanne M; Paton, Norman W; Goble, Carole A; Oliver, Stephen G; Stevens, Robert; Zamboulis, Lucas; Martin, Nigel; Poulovassillis, Alexandra; Jones, Philip; Côté, Richard; Hermjakob, Henning; Pentony, Melissa M; Jones, David T; Orengo, Christine A; Hubbard, Simon J

    2008-07-01

    Despite the growing volumes of proteomic data, integration of the underlying results remains problematic owing to differences in formats, data captured, protein accessions and services available from the individual repositories. To address this, we present the ISPIDER Central Proteomic Database search (http://www.ispider.manchester.ac.uk/cgi-bin/ProteomicSearch.pl), an integration service offering novel search capabilities over leading, mature, proteomic repositories including PRoteomics IDEntifications database (PRIDE), PepSeeker, PeptideAtlas and the Global Proteome Machine. It enables users to search for proteins and peptides that have been characterised in mass spectrometry-based proteomics experiments from different groups, stored in different databases, and view the collated results with specialist viewers/clients. In order to overcome limitations imposed by the great variability in protein accessions used by individual laboratories, the European Bioinformatics Institute's Protein Identifier Cross-Reference (PICR) service is used to resolve accessions from different sequence repositories. Custom-built clients allow users to view peptide/protein identifications in different contexts from multiple experiments and repositories, as well as integration with the Dasty2 client supporting any annotations available from Distributed Annotation System servers. Further information on the protein hits may also be added via external web services able to take a protein as input. This web server offers the first truly integrated access to proteomics repositories and provides a unique service to biologists interested in mass spectrometry-based proteomics.

  1. A Relational Database System for Student Use.

    Science.gov (United States)

    Fertuck, Len

    1982-01-01

    Describes an APL implementation of a relational database system suitable for use in a teaching environment in which database development and database administration are studied, and discusses the functions of the user and the database administrator. An appendix illustrating system operation and an eight-item reference list are attached. (Author/JL)

  2. Integrated Historical Tsunami Event and Deposit Database

    Science.gov (United States)

    Dunbar, P. K.; McCullough, H. L.

    2010-12-01

    The National Geophysical Data Center (NGDC) provides integrated access to historical tsunami event, deposit, and proxy data. The NGDC tsunami archive initially listed tsunami sources and locations with observed tsunami effects. Tsunami frequency and intensity are important for understanding tsunami hazards. Unfortunately, tsunami recurrence intervals often exceed the historic record. As a result, NGDC expanded the archive to include the Global Tsunami Deposits Database (GTD_DB). Tsunami deposits are the physical evidence left behind when a tsunami impacts a shoreline or affects submarine sediments. Proxies include co-seismic subsidence, turbidite deposits, changes in biota following an influx of marine water in a freshwater environment, etc. By adding past tsunami data inferred from the geologic record, the GTD_DB extends the record of tsunamis backward in time. Although the best methods for identifying tsunami deposits and proxies in the geologic record remain under discussion, developing an overall picture of where tsunamis have affected coasts, calculating recurrence intervals, and approximating runup height and inundation distance provides a better estimate of a region’s true tsunami hazard. Tsunami deposit and proxy descriptions in the GTD_DB were compiled from published data found in journal articles, conference proceedings, theses, books, conference abstracts, posters, web sites, etc. The database now includes over 1,200 descriptions compiled from over 1,100 citations. Each record in the GTD_DB is linked to its bibliographic citation where more information on the deposit can be found. The GTD_DB includes data for over 50 variables such as: event description (e.g., 2010 Chile Tsunami), geologic time period, year, deposit location name, latitude, longitude, country, associated body of water, setting during the event (e.g., beach, lake, river, deep sea), upper and lower contacts, underlying and overlying material, etc. If known, the tsunami source mechanism

  3. Integrating Technologies, Methodologies, and Databases into a Comprehensive Terminology Management Environment to Support Interoperability among Clinical Information Systems

    Science.gov (United States)

    Shakib, Shaun Cameron

    2013-01-01

    Controlled clinical terminologies are essential to realizing the benefits of electronic health record systems. However, implementing consistent and sustainable use of terminology has proven to be both intellectually and practically challenging. First, this project derives a conceptual understanding of the scope and intricacies of the challenge by…

  4. A linguistic integration of a biological database

    Energy Technology Data Exchange (ETDEWEB)

    Collado-Vides, J. [Univ. Nacional Autonoma de Mexico, Morelos (Mexico)

    1993-12-31

    One of the major theoretical concerns associated with the Human Genome Project is that of the methodology to decipher "raw" sequences of DNA. This work is concerned with a subsequent problem: how the huge amounts of already deciphered information that will emerge in the near future can be integrated in order to enhance their biological understanding. The formal foundations for a linguistic theory of the regulation of gene expression will be discussed. The linguistic analysis presented here is restricted to sequences with known biological function since: (1) there is no way to obtain, from DNA sequences alone, a regulatory representation of transcription units, and (2) the elements of substitution, methodologically equivalent to phonemes, are complete sequences of the binding sites of proteins. The authors have recently collected and analyzed the regulatory regions of a large number of E. coli promoters. The number of sigma 70 promoters studied may well represent the largest homogeneous body of knowledge of gene regulation at present. This collection is a data set for the construction of a grammar of the sigma 70 system of transcription and regulation. This grammatical model generates all the arrays of the collection, as well as novel combinations predicted to be consistent with the principles of the data set. This grammar is testable, as well as expandable if the analysis of emerging data requires it. The elaboration of a linguistic methodology capable of integrating prokaryotic data constitutes a preliminary step towards the analysis and integration of the more complex eukaryotic systems of regulation.

  5. Emission & Generation Resource Integrated Database (eGRID)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Emissions & Generation Resource Integrated Database (eGRID) is an integrated source of data on environmental characteristics of electric power generation....

  6. Integrated database for rapid mass movements in Norway

    Directory of Open Access Journals (Sweden)

    C. Jaedicke

    2009-03-01

    Full Text Available Rapid gravitational slope mass movements include all kinds of short-term relocation of geological material, snow or ice. Traditionally, information about such events is collected separately in different databases covering selected geographical regions and types of movement. In Norway the terrain is susceptible to all types of rapid gravitational slope mass movements, ranging from single rocks hitting roads and houses to large snow avalanches and rock slides where entire mountainsides collapse into fjords, creating flood waves and endangering large areas. In addition, quick clay slides occur in desalinated marine sediments in South Eastern and Mid Norway. For the authorities and inhabitants of endangered areas, the type of threat is of minor importance and mitigation measures have to consider several types of rapid mass movements simultaneously.

    An integrated national database for all types of rapid mass movements built around individual events has been established. Only three data entries are mandatory: time, location and type of movement. The remaining optional parameters enable recording of detailed information about the terrain, materials involved and damages caused. Pictures, movies and other documentation can be uploaded into the database. A web-based graphical user interface has been developed allowing new events to be entered, as well as editing and querying for all events. An integration of the database into a GIS system is currently under development.

    Datasets from various national sources like the road authorities and the Geological Survey of Norway were imported into the database. Today, the database contains 33 000 rapid mass movement events from the last five hundred years covering the entire country. A first analysis of the data shows that the most frequently recorded types of rapid mass movement are rock slides and snow avalanches, followed by debris slides in third place. Most events are recorded in the steep fjord

  7. Integration of an Evidence Base into a Probabilistic Risk Assessment Model. The Integrated Medical Model Database: An Organized Evidence Base for Assessing In-Flight Crew Health Risk and System Design

    Science.gov (United States)

    Saile, Lynn; Lopez, Vilma; Bickham, Grandin; FreiredeCarvalho, Mary; Kerstman, Eric; Byrne, Vicky; Butler, Douglas; Myers, Jerry; Walton, Marlei

    2011-01-01

    This slide presentation reviews the Integrated Medical Model (IMM) database, an organized evidence base for assessing in-flight crew health risk. It is a relational database accessible to many people. The database quantifies the model inputs with a Level of Evidence (LOE) ranking, based on the highest value of the data, and a Quality of Evidence (QOE) score that provides an assessment of the evidence base for each medical condition. The IMM evidence base has already provided invaluable information for designers, and for other uses.

  8. Security Research on Engineering Database System

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The engine engineering database system is a CAD-oriented applied database management system with the capability of managing distributed data. The paper discusses the security issues of the engine engineering database management system (EDBMS). Through studying and analyzing database security, a series of security rules is derived, reaching the B1-level security standard, which includes discretionary access control (DAC), mandatory access control (MAC) and audit. The EDBMS implements functions of DAC, ...
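    To make the three listed mechanisms concrete, here is a toy Python sketch (not the EDBMS implementation; the labels, clearances, and audit format are invented for illustration) combining a DAC check against an access-control list, a MAC no-read-up check over security levels, and an audit record:

      LEVELS = {"public": 0, "confidential": 1, "secret": 2}

      ACL = {("alice", "engine_geometry"): {"read", "write"}}   # discretionary part
      LABELS = {"engine_geometry": "confidential"}              # mandatory part
      CLEARANCE = {"alice": "secret", "bob": "public"}

      def can_read(user, obj):
          dac_ok = "read" in ACL.get((user, obj), set())
          # MAC rule: subject clearance must dominate object label (no read-up).
          mac_ok = LEVELS[CLEARANCE[user]] >= LEVELS[LABELS[obj]]
          print(f"audit: {user} read {obj}: dac={dac_ok} mac={mac_ok}")  # audit trail
          return dac_ok and mac_ok

      print(can_read("alice", "engine_geometry"))  # True
      print(can_read("bob", "engine_geometry"))    # False on both counts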

  9. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  10. The Geophysical Database Management System in Taiwan

    Directory of Open Access Journals (Sweden)

    Tzay-Chyn Shin

    2013-01-01

    Full Text Available The Geophysical Database Management System (GDMS is an integrated and web-based open data service which has been developed by the Central Weather Bureau (CWB, Taiwan, ROC since 2005. This service went online on August 1, 2008. The GDMS provides six types of geophysical data acquired from the Short-period Seismographic System, Broadband Seismographic System, Free-field Strong-motion Station, Strong-motion Building Array, Global Positioning System, and Groundwater Observation System. When utilizing the GDMS website, users can download seismic event data and continuous geophysical data. At present, many researchers have accessed this public platform to obtain geophysical data. Clearly, the establishment of GDMS is a significant improvement in data sorting for interested researchers.

  11. Content And Multimedia Database Management Systems

    NARCIS (Netherlands)

    Vries, de Arjen Paul

    1999-01-01

    A database management system is a general-purpose software system that facilitates the processes of defining, constructing, and manipulating databases for various applications. The main characteristic of the ‘database approach’ is that it increases the value of data by its emphasis on data independence.

  12. 2010 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2010 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  13. 2014 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2014 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  14. 2009 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2009 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  15. 2011 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2011 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  16. 2012 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2012 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  17. The Instrumentation of the Multibackend Database System

    Science.gov (United States)

    1993-06-10

    Subject terms: parallel database, multilingual database. Most database system designs and implementations are limited to a single language (monolingual) and a single model (mono-model). … solution to the processing cost and data sharing problems of heterogeneous database systems. One solution is a multimodel and multilingual database

  18. KAIKObase: An integrated silkworm genome database and data mining tool

    Directory of Open Access Journals (Sweden)

    Nagaraju Javaregowda

    2009-10-01

    Full Text Available Background: The silkworm, Bombyx mori, is one of the most economically important insects in many developing countries owing to its large-scale cultivation for silk production. With the development of genomic and biotechnological tools, B. mori has also become an important bioreactor for production of various recombinant proteins of biomedical interest. In 2004, two genome sequencing projects for B. mori were reported independently by Chinese and Japanese teams; however, the datasets were insufficient for building long genomic scaffolds, which are essential for unambiguous annotation of the genome. Now, both datasets have been merged and assembled through a joint collaboration between the two groups. Description: Integration of the two data sets of silkworm whole-genome-shotgun sequencing by the Japanese and Chinese groups, together with newly obtained fosmid- and BAC-end sequences, produced the best continuity (~3.7 Mb in N50 scaffold size) among the sequenced insect genomes and provided a high degree of nucleotide coverage (88%) of all 28 chromosomes. In addition, a physical map of BAC contigs constructed by fingerprinting BAC clones and a SNP linkage map constructed using BAC-end sequences were available. In parallel, proteomic data from two-dimensional polyacrylamide gel electrophoresis in various tissues and developmental stages were compiled into a silkworm proteome database. Finally, a Bombyx trap database was constructed for documenting insertion positions and expression data of transposon insertion lines. Conclusion: For efficient usage of genome information for functional studies, genomic sequences, physical and genetic map information and EST data were compiled into KAIKObase, an integrated silkworm genome database which consists of 4 map viewers, a gene viewer, and sequence, keyword and position search systems to display results and data at the level of nucleotide sequence, gene, scaffold and chromosome. Integration of the

  19. Enabling Ontology Based Semantic Queries in Biomedical Database Systems.

    Science.gov (United States)

    Zheng, Shuai; Wang, Fusheng; Lu, James

    2014-03-01

    There is a lack of tools to ease the integration and ontology-based semantic querying of biomedical databases, which are often annotated with ontology concepts. We aim to provide a middle layer between ontology repositories and semantically annotated databases to support semantic queries directly in the databases with expressive standard database query languages. We have developed a semantic query engine that provides semantic reasoning and query processing, and translates the queries into ontology repository operations on NCBO BioPortal. Semantic operators are implemented in the database as user-defined functions extending the database engine, so semantic queries can be directly specified in standard database query languages such as SQL and XQuery. The system provides caching management to boost query performance. The system is highly adaptable to support different ontologies through easy customizations. We have implemented the system DBOntoLink as open source software, which supports major ontologies hosted at BioPortal. DBOntoLink supports a set of common ontology-based semantic operations and has them fully integrated with the IBM DB2 database management system. The system has been deployed and evaluated with an existing biomedical database for managing and querying image annotations and markups (AIM). Our performance study demonstrates the high expressiveness of semantic queries and the high efficiency of the queries.
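    A small emulation of the user-defined-function approach in Python's sqlite3 (the operator name is_a and the toy ontology are hypothetical, not DBOntoLink's actual API or DB2 mechanism): the semantic operator is registered with the engine so that ontology reasoning can appear inside ordinary SQL.

      import sqlite3

      SUBCLASSES = {"neoplasm": {"neoplasm", "carcinoma", "sarcoma"}}  # toy ontology

      def is_a(concept, ancestor):
          # 1 if concept is the ancestor or one of its (pre-expanded) subclasses.
          return int(concept in SUBCLASSES.get(ancestor, {ancestor}))

      con = sqlite3.connect(":memory:")
      con.create_function("is_a", 2, is_a)  # extend the engine with the operator
      con.execute("CREATE TABLE annotation (image_id TEXT, concept TEXT)")
      con.executemany("INSERT INTO annotation VALUES (?,?)",
                      [("img1", "carcinoma"), ("img2", "melanocyte")])
      # The semantic query is stated directly in SQL.
      print(con.execute(
          "SELECT image_id FROM annotation WHERE is_a(concept, 'neoplasm')"
      ).fetchall())  # -> [('img1',)]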

  20. Integration of Biodiversity Databases in Taiwan and Linkage to Global Databases

    OpenAIRE

    Kwang-Tsao Shao; Ching-I Peng; Eric Yen; Kun-Chi Lai; Ming-Chih Wang; Jack Lin; Han Lee; Yang Alan; Shin-Yu Chen

    2007-01-01

    The biodiversity databases in Taiwan were dispersed among various institutions and colleges, each with a limited amount of data, until 2001. The Natural Resources and Ecology GIS Database sponsored by the Council of Agriculture, which is part of the National Geographic Information System planned by the Ministry of Interior, was the most well-established biodiversity database in Taiwan. This database, however, mainly collected the distribution data of terrestrial animals and plants with...

  1. Integrated Library Systems. ERIC Digest.

    Science.gov (United States)

    Lopata, Cynthia L.

    An automated library system usually consists of a number of functional modules, such as acquisitions, circulation, cataloging, serials, and an online public access catalog (OPAC). An "integrated" library system is an automated system in which all of the function modules share a common bibliographic database. There are several ways the…

  2. Airports and Navigation Aids Database System -

    Data.gov (United States)

    Department of Transportation — Airport and Navigation Aids Database System is the repository of aeronautical data related to airports, runways, lighting, NAVAID and their components, obstacles, no...

  3. Performance related issues in distributed database systems

    Science.gov (United States)

    Mukkamala, Ravi

    1991-01-01

    The key elements of research performed during the year-long effort of this project are: investigate the effects of heterogeneity in distributed real-time systems; study the requirements of TRAC towards building a heterogeneous database system; study the effects of performance modeling on distributed database performance; and experiment with an ORACLE-based heterogeneous system.

  4. Content and multimedia database management systems

    OpenAIRE

    de Vries

    1999-01-01

    A database management system is a general-purpose software system that facilitates the processes of defining, constructing, and manipulating databases for various applications. The main characteristic of the ‘database approach’ is that it increases the value of data by its emphasis on data independence. DBMSs, and in particular those based on the relational data model, have been very successful at the management of administrative data in the business domain. This thesis has investigated data ...

  5. Loopedia, a Database for Loop Integrals

    CERN Document Server

    Bogner, C.; Hahn, T.; Heinrich, G.; Jones, S.P.; Kerner, M.; von Manteuffel, A.; Michel, M.; Panzer, E.; Papara, V.

    Loopedia is a new database at loopedia.org for information on Feynman integrals, intended to provide both bibliographic information as well as results made available by the community. Its bibliometry is complementary to that of SPIRES or arXiv in the sense that it admits searching for integrals by graph-theoretical objects, e.g. their topology.

  6. Data Integration Strategy for Database Grids Based on P2P Framework

    Institute of Scientific and Technical Information of China (English)

    WANG Guangqi; SHEN Derong; YU Ge; ZHOU Wensheng; LI Meifang

    2006-01-01

    The differences between data integration in a dynamic database grid (DBG) and in a distributed database system are analyzed, and three kinds of data integration strategies are given for a DBG based on a Peer-to-Peer (P2P) framework: the centralized data integration (CDI) strategy, the distributed data integration (DDI) strategy, and the filter-based data integration (FDDI) strategy. CDI calls all the database grid services (DGSs) at a single node, DDI disperses the DGSs to multiple nodes, while FDDI schedules the data integration nodes based on filtering the keywords returned from DGSs. The performance of these three integration strategies is compared and analyzed by simulation experiments. The advantage of FDDI becomes more evident as data redundancy increases: by reducing a large amount of data transportation, it effectively shortens the task execution time and improves efficiency.
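    One plausible reading of the filter-based idea, sketched in Python with hypothetical data and function names (the paper's actual scheduling algorithm is not given in the abstract): keywords returned by each database grid service are filtered against those already seen, so redundant records are fetched and transported only once.

      def fddi_integrate(service_keywords, fetch):
          seen, merged = set(), []
          for service, keywords in service_keywords.items():
              fresh = [k for k in keywords if k not in seen]   # filter duplicates
              seen.update(fresh)
              merged.extend(fetch(service, k) for k in fresh)  # fetch only new keys
          return merged

      services = {"dgs1": ["k1", "k2"], "dgs2": ["k2", "k3"]}  # k2 is redundant
      records = fddi_integrate(services, lambda s, k: f"{k}@{s}")
      print(records)  # ['k1@dgs1', 'k2@dgs1', 'k3@dgs2'] - k2 fetched once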

  7. Implementing database system for LHCb publications page

    CERN Document Server

    Abdullayev, Fakhriddin

    2017-01-01

    LHCb is one of the main detectors at the Large Hadron Collider, where physicists and scientists work together on high-precision measurements of matter-antimatter asymmetries and searches for rare and forbidden decays, with the aim of discovering new and unexpected forces. The work consists not only of analyzing data collected from experiments but also of publishing the results of those analyses. The LHCb publications are gathered on the LHCb publications page to maximize their availability both to LHCb members and to the high energy community. In this project a new database system was implemented for the LHCb publications page. This will help improve access to research papers for scientists and provide better integration with the current CERN library website and other services.

  8. Dynamically Integrating OSM Data into a Borderland Database

    Directory of Open Access Journals (Sweden)

    Xiaoguang Zhou

    2015-09-01

    … change-type evolution is analyzed, and seven rules are used to determine the change type of the changed objects. Based on these rules and algorithms, we programmed an automatic (or semi-automatic) integrating and updating prototype system for the borderland database. The developed system was intensively tested using OSM data for Vietnam and Pakistan as the experimental data.

  9. Research on computer virus database management system

    Science.gov (United States)

    Qi, Guoquan

    2011-12-01

    The growing proliferation of computer viruses has become a lethal threat to, and a research focus of, network information security. As new viruses emerge, the number of viruses grows and virus classification becomes increasingly complex. Virus naming cannot be unified because agencies capture samples at different times. Although each agency has its own virus database, communication between them is lacking, virus information is incomplete, or only a small number of samples is held. This paper introduces the current state of virus database construction at home and abroad, analyzes how to standardize and complete the description of virus characteristics, and then gives a design scheme for a computer virus database addressing information integrity, storage security and manageability.

  10. A seismogram digitization and database management system

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper introduces a "Seismogram Digitization and Database Management System" (SDDMS), which is developed using Delphi 3, and presents the key technique of automatically extracting wave data from paper seismograms. The system has various functions, such as paper seismogram digitization, database management and data analysis. With this system it is possible to analyze historical paper seismograms using modern computers. Application of this system will be of help to progress in earthquake prediction and seismological research.

  11. Integrating Multi-Source Web Records into Relational Database

    Institute of Scientific and Technical Information of China (English)

    HUANG Jianbin; JI Hongbing; SUN Heli

    2006-01-01

    How to integrate heterogeneous semi-structured Web records into a relational database is an important and challenging research topic. An improved model of conditional random fields was presented to combine the learning of labeled samples and unlabeled database records in order to reduce the dependence on tediously hand-labeled training data. The proposed model was used to solve the problem of schema matching between the data source schema and the database schema. Experimental results using a large number of Web pages from diverse domains show the novel approach's effectiveness.

  12. Database Translator (DATALATOR) for Integrated Exploitation

    Science.gov (United States)

    2010-10-31

    … have successfully funded early-stage companies, and (iii) industry experts with specialized knowledge of key vertical markets. The company will also … market. Situational applications are built on-the-fly to solve a specific business problem, which fits neatly with the agile development approach.

  13. Integrating Relational Databases and Constraint Languages

    DEFF Research Database (Denmark)

    Hansen, Michael Reichhardt; Hansen, Bo S.; Lucas, Peter

    1989-01-01

    A new structure of application programs is suggested, which separates the algorithmic parts from factual information (data and rules). The latter is to be stored in a repository that can be shared among multiple applications. It is argued that rules stating pure relations are better suited for sh...... is outlined. The relation of this approach to PROLOG and expert systems technology (production rules) is discussed....

  14. Distributed Database Management Systems A Practical Approach

    CERN Document Server

    Rahimi, Saeed K

    2010-01-01

    This book addresses issues related to managing data across a distributed database system. It is unique because it covers traditional database theory and current research, explaining the difficulties in providing a unified user interface and global data dictionary. The book gives implementers guidance on hiding discrepancies across systems and creating the illusion of a single repository for users. It also includes three sample frameworks, implemented using J2SE with JMS, J2EE, and Microsoft .NET, that readers can use to learn how to implement a distributed database management system. IT and

  15. The methodology of database design in organization management systems

    Science.gov (United States)

    Chudinov, I. L.; Osipova, V. V.; Bobrova, Y. V.

    2017-01-01

    The paper describes a unified methodology of database design for management information systems. Designing the conceptual information model for the domain area is the most important and labor-intensive stage in database design. Based on the proposed integrated approach to designing the conceptual information model, the main principles of developing relational databases are provided and users' information needs are considered. According to the methodology, the process of designing the conceptual information model includes three basic stages, which are defined in detail. Finally, the article describes the process of applying the results of analyzing users' information needs and the rationale for the use of classifiers.

  16. Integration of Agent System with Legacy Software

    Institute of Scientific and Technical Information of China (English)

    SHEN Qi; ZHAO Yan-hong; YIN Zhao-lin

    2003-01-01

    The agent technique is a new method for analyzing, designing and realizing distributed open systems, and it has been used in almost every field. To be of real practical use, however, agents must integrate with and control legacy software such as database systems. This paper introduces the specification of agent software integration, along with ontologies and instance databases, implementing agent software integration with the CORBA technique and using XML and ACL as the languages for communication among agents.
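
    As a rough illustration of the messaging side, the sketch below builds an ACL-style request from an agent to a legacy database wrapper and serializes it as XML. The element names and content are assumptions, since the abstract does not give the concrete message format.

    import xml.etree.ElementTree as ET

    # Illustrative sketch: an ACL-style request from an agent to a legacy
    # database wrapper, serialized as XML (element names are assumptions).
    msg = ET.Element("acl-message", performative="request")
    ET.SubElement(msg, "sender").text = "query-agent"
    ET.SubElement(msg, "receiver").text = "legacy-db-wrapper"
    ET.SubElement(msg, "content").text = "SELECT name FROM parts WHERE stock < 10"
    print(ET.tostring(msg, encoding="unicode"))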

  17. ARAMEMNON, a novel database for Arabidopsis integral membrane proteins

    DEFF Research Database (Denmark)

    Schwacke, Rainer; Schneider, Anja; van der Graaff, Eric

    2003-01-01

    A specialized database (DB) for Arabidopsis membrane proteins, ARAMEMNON, was designed that facilitates the interpretation of gene and protein sequence data by integrating features that are presently only available from individual sources. Using several publicly available prediction programs, put...... is accessible at the URL http://aramemnon.botanik.uni-koeln.de....

  18. Database Security System for Applying Sophisticated Access Control via Database Firewall Server

    OpenAIRE

    Eun-Ae Cho; Chang-Joo Moon; Dae-Ha Park; Kang-Bin Yim

    2014-01-01

    Keywords: database security, privacy, access control, database firewall, data breach masking. Recently, information leakage incidents have occurred due to database security vulnerabilities. In traditional database access control methods, administrators grant users simple permissions for accessing database objects. Even though stricter permissions have been applied in recent database systems, it remains difficult to properly adopt sophisticated access control policies in commercial databases...
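
    The essential operation of a database firewall server can be sketched as a per-user policy check applied to every SQL statement before it is forwarded to the database. In the following minimal Python sketch, the policy format, user names and rules are invented for illustration, not taken from the paper.

    import re

    # Illustrative sketch of the core check a database firewall performs:
    # every SQL statement is matched against per-user policy rules before
    # it is forwarded to the database.
    POLICIES = {
        "analyst": {"allow": [r"^SELECT\b"], "deny": [r"\bsalary\b"]},
        "dba":     {"allow": [r".*"],        "deny": []},
    }

    def firewall_check(user, sql):
        policy = POLICIES.get(user)
        if policy is None:
            return False
        if any(re.search(p, sql, re.IGNORECASE) for p in policy["deny"]):
            return False
        return any(re.search(p, sql, re.IGNORECASE) for p in policy["allow"])

    print(firewall_check("analyst", "SELECT name FROM staff"))    # True
    print(firewall_check("analyst", "SELECT salary FROM staff"))  # False (deny rule)
    print(firewall_check("analyst", "DROP TABLE staff"))          # False (not allowed)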

  19. An architecture for mobile database management system

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In order to design a new kind of mobile database management system (DBMS) more suitable for mobile computing than existing DBMSs, the essence of database systems in mobile computing is analyzed. The view is introduced that the mobile database is a kind of dynamic distributed database, and the concept of virtual servers, which translate the clients' mobility into the servers' mobility, is proposed. Based on these views, a versatile architecture for a mobile DBMS is presented. The architecture is composed of a virtual server and a local DBMS; the virtual server is the kernel of the architecture and its functions are described. Finally, the server kernel of a mobile DBMS prototype is illustrated.
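
    The virtual-server idea can be illustrated with a small sketch: the client always addresses a stable virtual endpoint, and a move of the client is translated into a change of the physical server behind it. All names and the API below are assumptions, not the paper's design.

    # Minimal sketch of the "virtual server" concept: the client talks to a
    # stable virtual endpoint, which forwards requests to whichever physical
    # server currently hosts the data.
    class VirtualServer:
        def __init__(self, physical_servers):
            self.physical = physical_servers     # name -> request handler
            self.current = next(iter(physical_servers))

        def migrate(self, new_server):
            """Translate client mobility into server-side mobility."""
            self.current = new_server

        def query(self, sql):
            return self.physical[self.current](sql)

    servers = {
        "cell-A": lambda sql: f"cell-A answers: {sql}",
        "cell-B": lambda sql: f"cell-B answers: {sql}",
    }
    vs = VirtualServer(servers)
    print(vs.query("SELECT 1"))
    vs.migrate("cell-B")   # the client moved; its virtual endpoint is unchanged
    print(vs.query("SELECT 1"))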

  20. Cluster Technique of In-Memory Database in Integrated Information Management System

    Institute of Scientific and Technical Information of China (English)

    梁鸿健; 胡游君; 唐海荣

    2011-01-01

    Information systems generally adopt traditional relational databases. Traditional relational database management systems mainly emphasize maintaining the integrity, consistency and stability of the data and generally store it on physical disk. Because of disk access, data transfer between internal and external storage, buffer management, queueing delays and lock delays, database I/O cannot satisfy the real-time data processing and response requirements of the integrated information operation and maintenance management system. Putting the whole database management system in memory makes the I/O cost of each transaction extremely short, which provides strong support for high-speed data processing and storage scenarios. To ensure data security, traditional relational database management systems basically rely on multi-node clusters for multi-node storage and data safety. If an in-memory database management system can also be deployed as a multi-node cluster, the problems of system stability and data security in in-memory database systems can be solved effectively, meeting the integrated information management system's requirement for high-speed in-memory data processing. This paper proposes a cluster technique for in-memory databases that supports three modes, mutual backup, extension and hybrid, exploiting the high-speed I/O advantage of in-memory databases while improving their stability and data security.
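
    A minimal sketch of the mutual-backup mode follows, assuming synchronous replication of every write to a single peer node; the paper's actual replication protocol is not described in the abstract, so the API is invented for illustration.

    # Hedged sketch of mutual backup between two in-memory nodes: each write
    # is synchronously copied to a peer, so data survives the loss of one node.
    class MemoryNode:
        def __init__(self, name):
            self.name = name
            self.store = {}
            self.peer = None

        def put(self, key, value, replicate=True):
            self.store[key] = value
            if replicate and self.peer is not None:
                self.peer.put(key, value, replicate=False)  # sync copy to peer

        def get(self, key):
            return self.store.get(key)

    a, b = MemoryNode("node-a"), MemoryNode("node-b")
    a.peer, b.peer = b, a
    a.put("alarm:42", "disk latency high")
    print(b.get("alarm:42"))   # replica is readable on the peer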

  1. Tactical Systems Integration Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The Tactical Systems Integration Laboratory is used to design and integrate computer hardware and software and related electronic subsystems for tactical vehicles....

  2. Toward an interactive article: integrating journals and biological databases

    Directory of Open Access Journals (Sweden)

    Marygold Steven J

    2011-05-01

    Full Text Available Abstract Background Journal articles and databases are two major modes of communication in the biological sciences, and thus integrating these critical resources is of urgent importance to increase the pace of discovery. Projects focused on bridging the gap between journals and databases have been on the rise over the last five years and have resulted in the development of automated tools that can recognize entities within a document and link those entities to a relevant database. Unfortunately, automated tools cannot resolve ambiguities that arise from one term being used to signify entities that are quite distinct from one another. Instead, resolving these ambiguities requires some manual oversight. Finding the right balance between the speed and portability of automation and the accuracy and flexibility of manual effort is crucial to making text markup a successful venture. Results We have established a journal article mark-up pipeline that links GENETICS journal articles and the model organism database (MOD) WormBase. This pipeline uses a lexicon built with entities from the database as a first step. The entity markup pipeline results in links from over nine classes of objects including genes, proteins, alleles, phenotypes and anatomical terms. New entities and ambiguities are discovered and resolved by a database curator through a manual quality control (QC) step, along with help from authors via a web form that is provided to them by the journal. New entities discovered through this pipeline are immediately sent to an appropriate curator at the database. Ambiguous entities that do not automatically resolve to one link are resolved by hand, ensuring an accurate link. This pipeline has been extended to other databases, namely the Saccharomyces Genome Database (SGD) and FlyBase, and has been implemented in marking up a paper with links to multiple databases. Conclusions Our semi-automated pipeline hyperlinks articles published in GENETICS to
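
    The lexicon-driven first step of such a pipeline can be sketched compactly: names drawn from the database form a lexicon, unique matches become links, and ambiguous names are routed to a curator. The lexicon entries below are invented for illustration.

    # Simplified sketch of lexicon-based entity markup: unique hits become
    # hyperlinks; names mapping to more than one identifier go to manual QC.
    lexicon = {
        "unc-22": ["WBGene00006759"],
        "let-7":  ["WBGene00002285"],
        "lin":    ["WBGene00002990", "WBGene00002991"],  # ambiguous name
    }

    def mark_up(text):
        links, ambiguous = [], []
        for name, ids in lexicon.items():
            if name in text:
                (links if len(ids) == 1 else ambiguous).append((name, ids))
        return links, ambiguous

    links, ambiguous = mark_up("Mutations in unc-22 suppress lin phenotypes.")
    print("auto-linked:", links)
    print("needs curator QC:", ambiguous)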

  3. Deductive databases and P systems

    Directory of Open Access Journals (Sweden)

    Miguel A. Gutierrez-Naranjo

    2004-06-01

    Full Text Available In computational processes based on backwards chaining, a rule of the type A ← B1, ..., Bn is seen as a procedure indicating that problem A can be split into the subproblems B1, ..., Bn. In classical devices, the subproblems are solved sequentially. In this paper we present some questions that circulated during the Second Brainstorming Week related to applying the parallelism of P systems to computation based on backwards chaining, using the inferential deductive process as the example.

  4. Object Identity in Database Systems

    Institute of Scientific and Technical Information of China (English)

    李天柱

    1995-01-01

    The concept of object identity and its implementation in some systems have been explained in the literature. Based on an analysis of the idea of the data scheme in ANSI/X3/SPARC, this paper presents the concept of full identity, which includes entity identity, conceptual object identity, and internal object identity. In addition, the equality of objects, which is richer and more practical, is discussed based on the full identity of objects. Therefore, the semantics and constructions of identity for complex objects are fully observed, and applications in object management, version management, and user interfaces are found. It could also support the combination of the O-O model with the V-O model.

  5. Intrusion-Tolerant Based Survivable Model of Database System

    Institute of Scientific and Technical Information of China (English)

    ZHU Jianming; WANG Chao; MA Jianfeng

    2005-01-01

    Survivability has become increasingly important with society's increased dependence of critical infrastructures on computers. Intrusion-tolerant systems extend traditional secure systems to be able to survive or operate through attacks, and are thus an approach for achieving survivability. This paper proposes a survivable model of database systems based on intrusion-tolerant mechanisms. The model is built on a three-layer security architecture: intrusions are defended against at the outer layer, detected at the middle layer, and tolerated at the inner layer. We utilize both redundancy and diversity techniques as well as threshold secret sharing schemes to implement the survivability of the database and to protect confidential data from compromised servers in the presence of intrusions. Compared with existing schemes, our approach realizes security and robustness for the key functions of a database system by using an integrated security strategy and multiple security measures.
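
    The threshold secret sharing ingredient can be illustrated with a minimal Shamir k-of-n scheme over a prime field, which lets confidential data survive the compromise of fewer than k servers. This is a generic textbook sketch with toy parameters, not the paper's concrete construction.

    import random

    # Minimal Shamir (k-of-n) secret sharing over a prime field: any k shares
    # reconstruct the secret, while fewer than k reveal nothing about it.
    P = 2305843009213693951  # prime field modulus (2**61 - 1)

    def split(secret, k, n):
        """Split `secret` into n shares; any k of them reconstruct it."""
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation of the polynomial at x = 0."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = split(123456789, k=3, n=5)
    print(reconstruct(shares[:3]))   # 123456789, from any 3 of the 5 shares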

  6. Quality assurance database for the CBM silicon tracking system

    Energy Technology Data Exchange (ETDEWEB)

    Lymanets, Anton [Physikalisches Institut, Universitaet Tuebingen (Germany); Collaboration: CBM-Collaboration

    2015-07-01

    The Silicon Tracking System is the main tracking device of the CBM experiment at FAIR. Its construction includes the production, quality assurance and assembly of a large number of components, e.g., 106 carbon fiber support structures, 1300 silicon microstrip sensors, 16.6k readout chips, analog microcables, etc. Detector construction is distributed over several production and assembly sites and calls for a database that is extensible and allows tracing the components, integrating the test data, and monitoring the component statuses and data flow. A possible implementation of the above-mentioned requirements is being developed at GSI (Darmstadt) based on the FAIR DB Virtual Database Library, which provides connectivity to common SQL database engines (PostgreSQL, Oracle, etc.). The data structure, database architecture and status of implementation are discussed.

  7. Coherent Integration of Databases by Abductive Logic Programming

    CERN Document Server

    Arieli, O; Denecker, M; Van Nuffelen, B; 10.1613/jair.1322

    2011-01-01

    We introduce an abductive method for a coherent integration of independent data-sources. The idea is to compute a list of data-facts that should be inserted into the amalgamated database or retracted from it in order to restore its consistency. This method is implemented by an abductive solver, called Asystem, that applies SLDNFA-resolution on a meta-theory that relates different, possibly contradicting, input databases. We also give a pure model-theoretic analysis of the possible ways to `recover' consistent data from an inconsistent database in terms of those models of the database that exhibit as minimal inconsistent information as reasonably possible. This allows us to characterize the `recovered databases' in terms of the `preferred' (i.e., most consistent) models of the theory. The outcome is an abductive-based application that is sound and complete with respect to a corresponding model-based, preferential semantics, and -- to the best of our knowledge -- is more expressive (thus more general) than any ot...

  8. Distortion-Free Watermarking Approach for Relational Database Integrity Checking

    Directory of Open Access Journals (Sweden)

    Lancine Camara

    2014-01-01

    Full Text Available Nowadays, the internet is becoming a common way of accessing databases. Such data are exposed to various types of attacks that aim to confuse ownership proofing or content protection. In this paper, we propose a new approach based on fragile zero watermarking for the authentication of numeric relational data. Contrary to some previous database watermarking techniques, which cause distortions in the original database and may not preserve data usability constraints, our approach simply generates the watermark from the original database. First, the adopted method partitions the database relation into independent square matrix groups. Then, group-based watermarks are securely generated and registered with a trusted third party. The integrity verification is performed by computing the determinant and the diagonal's minor for each group. As a result, tampering can be localized up to the attribute group level. Theoretical and experimental results demonstrate that the proposed technique is resilient against tuple insertion, tuple deletion, and attribute value modification attacks. Furthermore, comparison with a recent related effort shows that our scheme performs better in detecting multifaceted attacks.
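
    The group-level integrity check can be sketched as follows: each square matrix group yields a (determinant, diagonal minor) pair serving as its registered zero-watermark, and any modification inside a group changes the pair, localizing the tamper. The grouping, the choice of minor and the values below are illustrative assumptions.

    # Hedged sketch of the group-level check: a (determinant, diagonal-minor)
    # pair per square matrix group acts as the registered watermark; a change
    # anywhere in a group alters its pair and localizes the attack.
    def det(m):
        """Determinant by cofactor expansion (fine for small groups)."""
        if len(m) == 1:
            return m[0][0]
        return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
                   for j in range(len(m)))

    def group_signature(group):
        minor = [row[1:] for row in group[1:]]   # minor of the (0, 0) diagonal entry
        return det(group), det(minor)

    group = [[3, 1, 4],
             [1, 5, 9],
             [2, 6, 5]]
    registered = group_signature(group)          # stored with a trusted third party
    group[1][2] = 8                              # attribute-value tampering
    print(group_signature(group) != registered)  # True: tamper detected in this group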

  9. DPTEdb, an integrative database of transposable elements in dioecious plants.

    Science.gov (United States)

    Li, Shu-Fen; Zhang, Guo-Jun; Zhang, Xue-Jin; Yuan, Jin-Hong; Deng, Chuan-Liang; Gu, Lian-Feng; Gao, Wu-Jun

    2016-01-01

    Dioecious plants usually harbor 'young' sex chromosomes, providing an opportunity to study the early stages of sex chromosome evolution. Transposable elements (TEs) are mobile DNA elements frequently found in plants and are suggested to play important roles in plant sex chromosome evolution. The genomes of several dioecious plants have been sequenced, offering an opportunity to annotate and mine the TE data. However, comprehensive and unified annotation of TEs in these dioecious plants is still lacking. In this study, we constructed a dioecious plant transposable element database (DPTEdb). DPTEdb is a specific, comprehensive and unified relational database and web interface. We used a combination of de novo, structure-based and homology-based approaches to identify TEs from the genome assemblies of previously published data, as well as our own. The database currently integrates eight dioecious plant species and a total of 31 340 TEs along with classification information. DPTEdb provides user-friendly web interfaces to browse, search and download the TE sequences in the database. Users can also use tools, including BLAST, GetORF, HMMER, Cut sequence and JBrowse, to analyze TE data. Given the role of TEs in plant sex chromosome evolution, the database will contribute to the investigation of TEs in structural, functional and evolutionary dynamics of the genome of dioecious plants. In addition, the database will supplement the research of sex diversification and sex chromosome evolution of dioecious plants. Database URL: http://genedenovoweb.ticp.net:81/DPTEdb/index.php. © The Author(s) 2016. Published by Oxford University Press.

  10. Database modeling to integrate macrobenthos data in Spatial Data Infrastructure

    Directory of Open Access Journals (Sweden)

    José Alberto Quintanilha

    2012-08-01

    Full Text Available Coastal zones are complex areas that include marine and terrestrial environments. Besides their huge environmental wealth, they also attract humans because they provide food, recreation, business, and transportation, among others. Some difficulties in managing these areas are related to their complexity, the diversity of interests, and the absence of standards for collecting and sharing data with the scientific community, public agencies and others. Organizing, standardizing and sharing this information through a Web Atlas is essential to support planning and decision making. The construction of a spatial database integrating the environmental business, to be used in a Spatial Data Infrastructure (SDI), is illustrated by a bioindicator of sediment quality. The models show the phases required to build the Macrobenthos spatial database, using the Santos Metropolitan Region as a reference. It is concluded that, when working with environmental data, structuring the knowledge in a conceptual model is essential for its subsequent integration into the SDI. During the modeling process it was noticed that methodological issues related to the collection process may obstruct or prejudice the integration of data from different studies of the same area. The database model developed in this study can be used as a reference for further research with similar goals.

  11. DEVELOPING FLEXIBLE APPLICATIONS WITH XML AND DATABASE INTEGRATION

    Directory of Open Access Journals (Sweden)

    Hale AS

    2004-04-01

    Full Text Available In recent years the most popular subject in the information systems area has been Enterprise Application Integration (EAI). It can be defined as the process of forming a standard connection between the different systems of an organization's information system environment. Mergers, acquisitions and partnerships among corporations are the major reasons for the popularity of Enterprise Application Integration. The main purpose is to solve application integration problems while the similar systems in such corporations continue working together for some time. With the help of XML technology, it is possible to find solutions to the problems of application integration either within a corporation or between corporations.

  12. YUCSA: A CLIPS expert database system to monitor academic performance

    Science.gov (United States)

    Toptsis, Anestis A.; Ho, Frankie; Leindekar, Milton; Foon, Debra Low; Carbonaro, Mike

    1991-01-01

    The York University CLIPS Student Administrator (YUCSA), an expert database system implemented in C Language Integrated Processing System (CLIPS), for monitoring the academic performance of undergraduate students at York University, is discussed. The expert system component in the system has already been implemented for two major departments, and it is under testing and enhancement for more departments. Also, more elaborate user interfaces are under development. We describe the design and implementation of the system, problems encountered, and immediate future plans. The system has excellent maintainability and it is very efficient, taking less than one minute to complete an assessment of one student.

  13. An integrated web medicinal materials DNA database: MMDBD (Medicinal Materials DNA Barcode Database)

    Directory of Open Access Journals (Sweden)

    But Paul

    2010-06-01

    Full Text Available Abstract Background Thousands of plants and animals possess pharmacological properties, and there is an increased interest in using these materials for therapy and health maintenance. The efficacy of such applications is critically dependent on the use of genuine materials. From time to time, life-threatening poisoning occurs because a toxic adulterant or substitute is administered. DNA barcoding provides a definitive means of authentication and of conducting molecular systematics studies. Owing to the reduced cost of DNA authentication, the volume of DNA barcodes produced for medicinal materials is on the rise, which necessitates the development of an integrated DNA database. Description We have developed an integrated DNA barcode multimedia information platform, the Medicinal Materials DNA Barcode Database (MMDBD), for data retrieval and similarity search. MMDBD contains over 1000 species of medicinal materials listed in the Chinese Pharmacopoeia and American Herbal Pharmacopoeia. MMDBD also contains useful information on the medicinal materials, including resources, adulterant information, medicinal parts, photographs, primers used for obtaining the barcodes and key references. MMDBD can be accessed at http://www.cuhk.edu.hk/icm/mmdbd.htm. Conclusions This work provides a centralized medicinal materials DNA barcode database and bioinformatics tools for data storage, analysis and exchange, promoting the identification of medicinal materials. MMDBD has the largest collection of DNA barcodes of medicinal materials and is a useful resource for researchers in conservation, systematics, forensics and the herbal industry.

  14. Genesis of an Electronic Database Expert System.

    Science.gov (United States)

    Ma, Wei; Cole, Timothy W.

    2000-01-01

    Reports on the creation of a prototype, Web-based expert system that helps users better navigate library databases at the University of Illinois at Urbana-Champaign. Discusses concerns that gave rise to the project. Summarizes previous work/research and common approaches in academic libraries today. Describes plans for testing the prototype,…

  15. A Grid Architecture for Manufacturing Database System

    Directory of Open Access Journals (Sweden)

    Laurentiu CIOVICĂ

    2011-06-01

    Full Text Available Before Enterprise Resource Planning concepts emerged, business functions within enterprises were supported by small, isolated applications, most of them developed internally. Yet today ERP platforms are not by themselves the answer to all of an organization's needs, especially in times of differentiated and diversified demands among end customers. ERP platforms were integrated with specialized systems for the management of clients (Customer Relationship Management) and vendors (Supplier Relationship Management). They were integrated with Manufacturing Execution Systems for better planning and control of production lines. In order to offer real-time, efficient answers to the management level, ERP systems were integrated with Business Intelligence systems. This paper analyses the advantages of grid computing at this level of integration, communication and interoperability between complex specialized information systems, with a focus on the system architecture and database systems.

  16. Intelligent test integration system

    Science.gov (United States)

    Sztipanovits, J.; Padalkar, S.; Rodriguez-Moscoso, J.; Kawamura, K.; Purves, B.; Williams, R.; Biglari, H.

    1988-01-01

    A new test technology is described which was developed for space system integration. The ultimate purpose of the system is to support the automatic generation of test systems in real time, distributed computing environments. The Intelligent Test Integration System (ITIS) is a knowledge based layer above the traditional test system components which can generate complex test configurations from the specification of test scenarios.

  17. LHCb Conditions Database Operation Assistance Systems

    CERN Multimedia

    Shapoval, Illya

    2012-01-01

    The Conditions Database of the LHCb experiment (CondDB) provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger, reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues: - an extension to the automatic content validation done by the “Oracle Streams” replication technology, to trap cases when the replication was unsuccessful; - an automated distribution process for the S...

  18. Emerging multidisciplinary research across database management systems

    CERN Document Server

    Nica, Anisoara; Varde, Aparna

    2011-01-01

    The database community is exploring more and more multidisciplinary avenues: Data semantics overlaps with ontology management; reasoning tasks venture into the domain of artificial intelligence; and data stream management and information retrieval shake hands, e.g., when processing Web click-streams. These new research avenues become evident, for example, in the topics that doctoral students choose for their dissertations. This paper surveys the emerging multidisciplinary research by doctoral students in database systems and related areas. It is based on the PIKM 2010, which is the 3rd Ph.D. workshop at the International Conference on Information and Knowledge Management (CIKM). The topics addressed include ontology development, data streams, natural language processing, medical databases, green energy, cloud computing, and exploratory search. In addition to core ideas from the workshop, we list some open research questions in these multidisciplinary areas.

  19. Integrated library systems.

    Science.gov (United States)

    Goldstein, C M

    1983-07-01

    The development of integrated library systems is discussed. The four major discussion points are (1) initial efforts; (2) network resources; (3) minicomputer-based systems; and (4) beyond library automation. Four existing systems are cited as examples of current systems.

  20. Rice Annotation Project Database (RAP-DB): an integrative and interactive database for rice genomics.

    Science.gov (United States)

    Sakai, Hiroaki; Lee, Sung Shin; Tanaka, Tsuyoshi; Numa, Hisataka; Kim, Jungsok; Kawahara, Yoshihiro; Wakimoto, Hironobu; Yang, Ching-chia; Iwamoto, Masao; Abe, Takashi; Yamada, Yuko; Muto, Akira; Inokuchi, Hachiro; Ikemura, Toshimichi; Matsumoto, Takashi; Sasaki, Takuji; Itoh, Takeshi

    2013-02-01

    The Rice Annotation Project Database (RAP-DB, http://rapdb.dna.affrc.go.jp/) has been providing a comprehensive set of gene annotations for the genome sequence of rice, Oryza sativa (japonica group) cv. Nipponbare. Since the first release in 2005, RAP-DB has been updated several times along with the genome assembly updates. Here, we present our newest RAP-DB based on the latest genome assembly, Os-Nipponbare-Reference-IRGSP-1.0 (IRGSP-1.0), which was released in 2011. We detected 37,869 loci by mapping transcript and protein sequences of 150 monocot species. To provide plant researchers with highly reliable and up to date rice gene annotations, we have been incorporating literature-based manually curated data, and 1,626 loci currently incorporate literature-based annotation data, including commonly used gene names or gene symbols. Transcriptional activities are shown at the nucleotide level by mapping RNA-Seq reads derived from 27 samples. We also mapped the Illumina reads of a Japanese leading japonica cultivar, Koshihikari, and a Chinese indica cultivar, Guangluai-4, to the genome and show alignments together with the single nucleotide polymorphisms (SNPs) and gene functional annotations through a newly developed browser, Short-Read Assembly Browser (S-RAB). We have developed two satellite databases, Plant Gene Family Database (PGFD) and Integrative Database of Cereal Gene Phylogeny (IDCGP), which display gene family and homologous gene relationships among diverse plant species. RAP-DB and the satellite databases offer simple and user-friendly web interfaces, enabling plant and genome researchers to access the data easily and facilitating a broad range of plant research topics.

  1. Lynx: a database and knowledge extraction engine for integrative medicine.

    Science.gov (United States)

    Sulakhe, Dinanath; Balasubramanian, Sandhya; Xie, Bingqing; Feng, Bo; Taylor, Andrew; Wang, Sheng; Berrocal, Eduardo; Dave, Utpal; Xu, Jinbo; Börnigen, Daniela; Gilliam, T Conrad; Maltsev, Natalia

    2014-01-01

    We have developed Lynx (http://lynx.ci.uchicago.edu)--a web-based database and a knowledge extraction engine, supporting annotation and analysis of experimental data and generation of weighted hypotheses on molecular mechanisms contributing to human phenotypes and disorders of interest. Its underlying knowledge base (LynxKB) integrates various classes of information from >35 public databases and private collections, as well as manually curated data from our group and collaborators. Lynx provides advanced search capabilities and a variety of algorithms for enrichment analysis and network-based gene prioritization to assist the user in extracting meaningful knowledge from LynxKB and experimental data, whereas its service-oriented architecture provides public access to LynxKB and its analytical tools via user-friendly web services and interfaces.

  2. The Center for Integrated Molecular Brain Imaging (Cimbi) database

    DEFF Research Database (Denmark)

    Knudsen, Gitte M.; Jensen, Peter S.; Erritzoe, David

    2016-01-01

    related to the serotonergic transmitter system with its normative data on the serotonergic subtype receptors 5-HT1A, 5-HT1B, 5-HT2A, and 5-HT4 and the 5-HT transporter (5-HTT), but can easily serve other purposes. The Cimbi database and Cimbi biobank were formally established in 2008 with the purpose...... to store the wealth of Cimbi-acquired data in a highly structured and standardized manner in accordance with the regulations issued by the Danish Data Protection Agency as well as to provide a quality-controlled resource for future hypothesis-generating and hypothesis-driven studies. The Cimbi database...... currently comprises a total of 1100 PET and 1000 structural and functional MRI scans and it holds a multitude of additional data, such as genetic and biochemical data, and scores from 17 self-reported questionnaires and from 11 neuropsychological paper/computer tests. The database associated Cimbi biobank...

  3. viruSITE—integrated database for viral genomics

    Science.gov (United States)

    Stano, Matej; Beke, Gabor; Klucar, Lubos

    2016-01-01

    Viruses are the most abundant biological entities and the reservoir of most of the genetic diversity in the Earth's biosphere. Viral genomes are very diverse, generally short in length and compared to other organisms carry only few genes. viruSITE is a novel database which brings together high-value information compiled from various resources. viruSITE covers the whole universe of viruses and focuses on viral genomes, genes and proteins. The database contains information on virus taxonomy, host range, genome features, sequential relatedness as well as the properties and functions of viral genes and proteins. All entries in the database are linked to numerous information resources. The above-mentioned features make viruSITE a comprehensive knowledge hub in the field of viral genomics. The web interface of the database was designed so as to offer an easy-to-navigate, intuitive and user-friendly environment. It provides sophisticated text searching and a taxonomy-based browsing system. viruSITE also allows for an alternative approach based on sequence search. A proprietary genome browser generates a graphical representation of viral genomes. In addition to retrieving and visualising data, users can perform comparative genomics analyses using a variety of tools. Database URL: http://www.virusite.org/ PMID:28025349

  4. Cluster based parallel database management system for data intensive computing

    Institute of Scientific and Technical Information of China (English)

    Jianzhong LI; Wei ZHANG

    2009-01-01

    This paper describes a computer-cluster based parallel database management system (DBMS), InfiniteDB, developed by the authors. InfiniteDB aims at efficiently supporting data intensive computing in response to the rapid growth in database size and the need for high-performance analysis of massive databases. It can be efficiently executed in computing systems composed of thousands of computers, such as cloud computing systems. It supports intra-query, inter-query, intra-operation, inter-operation and pipelined parallelism. It provides effective strategies for managing massive databases, including multiple data declustering methods, declustering-aware algorithms for relational and other database operations, and an adaptive query optimization method. It also provides functions for parallel data warehousing and data mining, a coordinator-wrapper mechanism to support the integration of heterogeneous information resources on the Internet, and fault-tolerant and resilient infrastructures. It has been used in many applications and has proved quite effective for data intensive computing.
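
    One of the declustering methods such a system depends on, hash declustering, is easy to sketch: tuples are assigned to nodes by a hash of the partitioning key, so a relation scan can proceed on all nodes in parallel. The node count and key choice below are assumptions for illustration.

    # Illustrative sketch of hash declustering: tuples are spread across
    # nodes by a hash of the partitioning key, enabling parallel scans.
    NUM_NODES = 4

    def node_for(key):
        return hash(key) % NUM_NODES

    partitions = {n: [] for n in range(NUM_NODES)}
    for order_id in range(1, 21):
        partitions[node_for(order_id)].append(order_id)

    for node, tuples in partitions.items():
        print(f"node {node}: {tuples}")   # each node scans only its fragment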

  5. GDR (Genome Database for Rosaceae): integrated web-database for Rosaceae genomics and genetics data.

    Science.gov (United States)

    Jung, Sook; Staton, Margaret; Lee, Taein; Blenda, Anna; Svancara, Randall; Abbott, Albert; Main, Dorrie

    2008-01-01

    The Genome Database for Rosaceae (GDR) is a central repository of curated and integrated genetics and genomics data of Rosaceae, an economically important family which includes apple, cherry, peach, pear, raspberry, rose and strawberry. GDR contains annotated databases of all publicly available Rosaceae ESTs, the genetically anchored peach physical map, Rosaceae genetic maps and comprehensively annotated markers and traits. The ESTs are assembled to produce unigene sets of each genus and the entire Rosaceae. Other annotations include putative function, microsatellites, open reading frames, single nucleotide polymorphisms, gene ontology terms and anchored map position where applicable. Most of the published Rosaceae genetic maps can be viewed and compared through CMap, the comparative map viewer. The peach physical map can be viewed using WebFPC/WebChrom, and also through our integrated GDR map viewer, which serves as a portal to the combined genetic, transcriptome and physical mapping information. ESTs, BACs, markers and traits can be queried by various categories and the search result sites are linked to the mapping visualization tools. GDR also provides online analysis tools such as a batch BLAST/FASTA server for the GDR datasets, a sequence assembly server and microsatellite and primer detection tools. GDR is available at http://www.rosaceae.org.

  6. System integration report

    Science.gov (United States)

    Badler, N. I.; Korein, J. D.; Meyer, C.; Manoochehri, K.; Rovins, J.; Beale, J.; Barr, B.

    1985-01-01

    Several areas that arise from the system integration issue were examined. Intersystem analysis is discussed as it relates to software development, shared data bases and interfaces between TEMPUS and PLAID, shaded graphics rendering systems, object design (BUILD), the TEMPUS animation system, anthropometric lab integration, ongoing TEMPUS support and maintenance, and the impact of UNIX and local workstations on the OSDS environment.

  7. DemaDb: an integrated dematiaceous fungal genomes database.

    Science.gov (United States)

    Kuan, Chee Sian; Yew, Su Mei; Chan, Chai Ling; Toh, Yue Fen; Lee, Kok Wei; Cheong, Wei-Hien; Yee, Wai-Yan; Hoh, Chee-Choong; Yap, Soon-Joo; Ng, Kee Peng

    2016-01-01

    Many species of dematiaceous fungi are associated with allergic reactions and potentially fatal diseases in humans, especially in tropical climates. Over the past 10 years, we have isolated more than 400 dematiaceous fungi from various clinical samples. In this study, DemaDb, an integrated database, was designed to support the integration and analysis of dematiaceous fungal genomes. A total of 92 072 putative genes and 6527 pathways that were identified in eight dematiaceous fungi (Bipolaris papendorfii UM 226, Daldinia eschscholtzii UM 1400, D. eschscholtzii UM 1020, Pyrenochaeta unguis-hominis UM 256, Ochroconis mirabilis UM 578, Cladosporium sphaerospermum UM 843, Herpotrichiellaceae sp. UM 238 and Pleosporales sp. UM 1110) were deposited in DemaDb. DemaDb includes functional annotations for all predicted gene models in all genomes, such as Gene Ontology, EuKaryotic Orthologous Groups, Kyoto Encyclopedia of Genes and Genomes (KEGG), Pfam and InterProScan. All predicted protein models were further functionally annotated to Carbohydrate-Active enzymes, peptidases, secondary metabolites and virulence factors. The DemaDb Genome Browser enables users to browse and visualize entire genomes with annotation data including gene prediction, structure, orientation and custom feature tracks. The Pathway Browser based on the KEGG pathway database allows users to look into molecular interaction and reaction networks for all KEGG annotated genes. The availability of downloadable files containing assembly, nucleic acid and protein data allows direct retrieval for further downstream work. DemaDb is a useful resource for the fungal research community, especially those involved in genome-scale analysis, functional genomics, genetics and disease studies of dematiaceous fungi. Database URL: http://fungaldb.um.edu.my.

  8. An integrated computational pipeline and database to support whole-genome sequence annotation.

    Science.gov (United States)

    Mungall, C J; Misra, S; Berman, B P; Carlson, J; Frise, E; Harris, N; Marshall, B; Shu, S; Kaminker, J S; Prochnik, S E; Smith, C D; Smith, E; Tupy, J L; Wiel, C; Rubin, G M; Lewis, S E

    2002-01-01

    We describe here our experience in annotating the Drosophila melanogaster genome sequence, in the course of which we developed several new open-source software tools and a database schema to support large-scale genome annotation. We have developed these into an integrated and reusable software system for whole-genome annotation. The key contributions to overall annotation quality are the marshalling of high-quality sequences for alignments and the design of a system with a flexible, adaptable and expandable architecture.

  9. LHCb Conditions database operation assistance systems

    Science.gov (United States)

    Clemencic, M.; Shapoval, I.; Cattaneo, M.; Degaudenzi, H.; Santinelli, R.

    2012-12-01

    The Conditions Database (CondDB) of the LHCb experiment provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger (HLT), reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues. The first system is a CondDB state tracking extension to the Oracle 3D Streams replication technology, to trap cases when the CondDB replication was corrupted. The second is an automated distribution system for the SQLite-based CondDB, providing also smart backup and checkout mechanisms for the CondDB managers and LHCb users respectively. And the third is a system to verify and monitor the internal (CondDB self-consistency) and external (LHCb physics software vs. CondDB) compatibility. The former two systems are used in production in the LHCb experiment and have achieved the desired goal of higher flexibility and robustness for the management and operation of the CondDB. The latter has been fully designed and is currently moving to the implementation stage.

  10. Towards a Component Based Model for Database Systems

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2004-02-01

    Full Text Available Due to their effectiveness in the design and development of software applications and due to their recognized advantages in terms of reusability, Component-Based Software Engineering (CBSE) concepts have been arousing a great deal of interest in recent years. This paper presents and extends a component-based approach to object-oriented database systems (OODB) introduced by us in [1] and [2]. Components are proposed as a new abstraction level for database systems, logical partitions of the schema. In this context, the scope is introduced as an escalated property for transactions. Components are studied from the integrity, consistency, and concurrency control perspective. The main benefits of our proposed component model for OODB are the reusability of the database design, including the access statistics required for a proper query optimization, and a smooth information exchange. The integration of crosscutting concerns into the component database model using aspect-oriented techniques is also discussed. One of the main goals is to define a method for the assessment of component composition capabilities. These capabilities are restricted by the component’s interface and measured in terms of adaptability, degree of compose-ability and acceptability level. The above-mentioned metrics are extended from database components to generic software components. This paper extends and consolidates into one common view the ideas previously presented by us in [1, 2, 3]. [1] Octavian Paul Rotaru, Marian Dobre, Component Aspects in Object Oriented Databases, Proceedings of the International Conference on Software Engineering Research and Practice (SERP’04), Volume II, ISBN 1-932415-29-7, pages 719-725, Las Vegas, NV, USA, June 2004. [2] Octavian Paul Rotaru, Marian Dobre, Mircea Petrescu, Integrity and Consistency Aspects in Component-Oriented Databases, Proceedings of the International Symposium on Innovation in Information and Communication Technology (ISIICT

  11. Development of database systems for safety of repositories for disposal of radioactive wastes

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yeong Hun; Han, Jeong Sang; Shin, Hyeon Jun; Ham, Sang Won; Kim, Hye Seong [Yonsei Univ., Seoul (Korea, Republic of)

    1999-03-15

    In this study, a GSIS is developed to maximize the effectiveness of the database system. For this purpose, data from various fields are spatially related and constructed in the database, which was developed for the site selection and management of a repository for radioactive waste disposal. By constructing an integration system that can link attribute and spatial data, it is possible to evaluate the safety of the repository effectively and economically. The suitability of integrating the database and GSIS is examined by constructing the database in a test district where the site characteristics are similar to those of a repository for radioactive waste disposal.

  12. Integrated Reporting Information System -

    Data.gov (United States)

    Department of Transportation — The Integrated Reporting Information System (IRIS) is a flexible and scalable web-based system that supports post operational analysis and evaluation of the National...

  13. Systems Integration (Fact Sheet)

    Energy Technology Data Exchange (ETDEWEB)

    2011-10-01

    The Systems Integration (SI) subprogram works closely with industry, universities, and the national laboratories to overcome technical barriers to the large-scale deployment of solar technologies. To support these goals, the subprogram invests primarily in four areas: grid integration, technology validation, solar resource assessment, and balance of system development.

  14. Systems Integration (Fact Sheet)

    Energy Technology Data Exchange (ETDEWEB)

    DOE Solar Energy Technologies Program

    2011-10-13

    The Systems Integration (SI) subprogram works closely with industry, universities, and the national laboratories to overcome technical barriers to the large-scale deployment of solar technologies. To support these goals, the subprogram invests primarily in four areas: grid integration, technology validation, solar resource assessment, and balance of system development.

  15. Constructing a knowledge-based database for dermatological integrative medical information.

    Science.gov (United States)

    Shin, Jeeyoung; Jo, Yunju; Bae, Hyunsu; Hong, Moochang; Shin, Minkyu; Kim, Yangseok

    2013-01-01

    Recently, the overuse of steroids and immunosuppressive drugs has produced incurable dermatological health problems. Traditional medical approaches have been studied for alternative solutions. However, accessing relevant information is difficult given the differences in information between western medicine (WM) and traditional medicine (TM). Therefore, an integrated medical information infrastructure must be utilized to bridge western and traditional treatments. In this study, WM and TM information on dermatological issues was collected based on literature searches and internet databases. Additionally, definitions for unified terminology and disease categorization based on individual cases were generated. A searchable database system was also established that may serve as a model system for integrating WM and TM medical information on dermatological conditions. Such a system will benefit researchers and facilitate the best possible medical solutions for patients. The DIMI is freely available online.

  16. Development of Integrated PSA Database and Application Technology

    Energy Technology Data Exchange (ETDEWEB)

    Han, Sang Hoon; Park, Jin Hee; Kim, Seung Hwan; Choi, Sun Yeong; Jung, Woo Sik; Jeong, Kwang Sub; Ha Jae Joo; Yang, Joon Eon; Min Kyung Ran; Kim, Tae Woon

    2005-04-15

    The purpose of this project is to develop 1) a reliability database framework, 2) a methodology for reactor trip and abnormal event analysis, and 3) a prototype PSA information DB system. We already have a part of the reactor trip and component reliability data; in this study, we extend the data collection up to 2002. We construct a pilot reliability database for common cause failure and piping failure data. A reactor trip or a component failure may have an impact on the safety of a nuclear power plant. We perform precursor analysis for such events that occurred in the KSNP and develop a procedure for the precursor analysis. A risk monitor provides a means to trace changes in risk following changes in plant configurations. We develop a methodology incorporating the model of the secondary system related to the reactor trip into the risk monitor model. We develop a prototype PSA information system for the UCN 3 and 4 PSA models, into which information for the PSA is entered, such as PSA reports, analysis reports, thermal-hydraulic analysis results, system notebooks, and so on. We develop a unique coherent BDD method to quantify a fault tree and the fastest fault tree quantification engine, FTREX. We develop quantification software for a full PSA model and a one-top model.
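
    Fault tree quantification of the kind FTREX performs can be illustrated on a toy scale with minimal cut sets and the rare-event approximation. The engine itself uses a BDD-based method; this sketch only shows the kind of number being computed, and the event probabilities are invented.

    # Toy fault tree quantification by minimal cut sets (rare-event approx.).
    basic_events = {"pump_fails": 1e-3, "valve_fails": 5e-4, "power_lost": 2e-4}

    # TOP = (pump_fails AND valve_fails) OR power_lost
    minimal_cut_sets = [{"pump_fails", "valve_fails"}, {"power_lost"}]

    def cut_set_prob(cut_set):
        p = 1.0
        for event in cut_set:
            p *= basic_events[event]
        return p

    top = sum(cut_set_prob(cs) for cs in minimal_cut_sets)
    print(f"top event probability ~ {top:.2e}")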

  17. Integrated Application Software System.

    Science.gov (United States)

    1982-12-01

    spreadsheet to complete the calculation/model. It complements the inclusion of the word processor and database management system in the IASS. [...] Table 0.3: VISICALC Arithmetic & Aggregate Functions: a. Addition; b. Subtraction; c. Multiplication; d. Division; e. [...]

  18. Critical assessment of human metabolic pathway databases: a stepping stone for future integration

    Science.gov (United States)

    2011-01-01

    Background Multiple pathway databases are available that describe the human metabolic network and have proven their usefulness in many applications, ranging from the analysis and interpretation of high-throughput data to their use as a reference repository. However, so far the various human metabolic networks described by these databases have not been systematically compared and contrasted, nor has the extent to which they differ been quantified. For a researcher using these databases for particular analyses of human metabolism, it is crucial to know the extent of the differences in content and their underlying causes. Moreover, the outcomes of such a comparison are important for ongoing integration efforts. Results We compared the genes, EC numbers and reactions of five frequently used human metabolic pathway databases. The overlap is surprisingly low, especially on reaction level, where the databases agree on 3% of the 6968 reactions they have combined. Even for the well-established tricarboxylic acid cycle the databases agree on only 5 out of the 30 reactions in total. We identified the main causes for the lack of overlap. Importantly, the databases are partly complementary. Other explanations include the number of steps a conversion is described in and the number of possible alternative substrates listed. Missing metabolite identifiers and ambiguous names for metabolites also affect the comparison. Conclusions Our results show that each of the five networks compared provides us with a valuable piece of the puzzle of the complete reconstruction of the human metabolic network. To enable integration of the networks, next to a need for standardizing the metabolite names and identifiers, the conceptual differences between the databases should be resolved. Considerable manual intervention is required to reach the ultimate goal of a unified and biologically accurate model for studying the systems biology of human metabolism. Our comparison provides a stepping stone
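
    The headline overlap figure is, in essence, a set computation over reaction identifiers. A toy sketch of that computation follows; the database names and identifiers are invented for illustration.

    # Sketch of the pairwise/overall overlap computation behind the comparison:
    # each database contributes a set of reaction identifiers, and agreement is
    # the fraction of the combined set shared by all databases.
    databases = {
        "db1": {"r1", "r2", "r3", "r4"},
        "db2": {"r2", "r3", "r5"},
        "db3": {"r3", "r6"},
    }

    union = set().union(*databases.values())
    common = set.intersection(*databases.values())
    print(f"combined reactions: {len(union)}")
    print(f"shared by all: {len(common)} ({len(common) / len(union):.0%})")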

  19. Critical assessment of human metabolic pathway databases: a stepping stone for future integration

    Directory of Open Access Journals (Sweden)

    Stobbe Miranda D

    2011-10-01

    Full Text Available Abstract Background Multiple pathway databases are available that describe the human metabolic network and have proven their usefulness in many applications, ranging from the analysis and interpretation of high-throughput data to their use as a reference repository. However, so far the various human metabolic networks described by these databases have not been systematically compared and contrasted, nor has the extent to which they differ been quantified. For a researcher using these databases for particular analyses of human metabolism, it is crucial to know the extent of the differences in content and their underlying causes. Moreover, the outcomes of such a comparison are important for ongoing integration efforts. Results We compared the genes, EC numbers and reactions of five frequently used human metabolic pathway databases. The overlap is surprisingly low, especially on reaction level, where the databases agree on 3% of the 6968 reactions they have combined. Even for the well-established tricarboxylic acid cycle the databases agree on only 5 out of the 30 reactions in total. We identified the main causes for the lack of overlap. Importantly, the databases are partly complementary. Other explanations include the number of steps a conversion is described in and the number of possible alternative substrates listed. Missing metabolite identifiers and ambiguous names for metabolites also affect the comparison. Conclusions Our results show that each of the five networks compared provides us with a valuable piece of the puzzle of the complete reconstruction of the human metabolic network. To enable integration of the networks, next to a need for standardizing the metabolite names and identifiers, the conceptual differences between the databases should be resolved. Considerable manual intervention is required to reach the ultimate goal of a unified and biologically accurate model for studying the systems biology of human metabolism. Our comparison

  20. Channelpedia: an integrative and interactive database for ion channels

    Directory of Open Access Journals (Sweden)

    Rajnish Ranjan

    2011-12-01

    Full Text Available Ion channels are membrane proteins that selectively conduct ions across the cell membrane. The flux of ions through ion channels drives electrical and biochemical processes in cells and plays a critical role in shaping the electrical properties of neurons. During the past three decades, extensive research has been carried out to characterize the molecular, structural and biophysical properties of ion channels. This research has begun to elucidate the role of ion channels in neuronal function and has subsequently led to the development of computational models of ion channel function. Although there have been substantial efforts to consolidate these findings into easily accessible and coherent online resources, a single comprehensive resource is still lacking. The success of these initiatives has been hindered by the sheer diversity of approaches and the variety in data formats. Here, we present Channelpedia (http://www.Channelpedia.net), which is designed to store information related to ion channels and models and is characterized by an efficient information management framework. Composed of a combination of a database and a wiki-like discussion platform, Channelpedia allows researchers to collaborate and synthesize ion channel information from the literature. Equipped to automatically update references, Channelpedia integrates and highlights recent publications with relevant information in the database. It is web based, freely accessible and currently contains 187 annotated ion channels with 45 Hodgkin-Huxley models.

  1. TOMATOMICS: A Web Database for Integrated Omics Information in Tomato

    KAUST Repository

    Kudo, Toru

    2016-11-29

    Solanum lycopersicum (tomato) is an important agronomic crop and a major model fruit-producing plant. To facilitate basic and applied research, comprehensive experimental resources and omics information on tomato are available following their development. Mutant lines and cDNA clones from a dwarf cultivar, Micro-Tom, are two of these genetic resources. Large-scale sequencing data for ESTs and full-length cDNAs from Micro-Tom continue to be gathered. In conjunction with information on the reference genome sequence of another cultivar, Heinz 1706, the Micro-Tom experimental resources have facilitated comprehensive functional analyses. To enhance the efficiency of acquiring omics information for tomato biology, we have integrated the information on the Micro-Tom experimental resources and the Heinz 1706 genome sequence. We have also inferred gene structure by comparison of sequences between the genome of Heinz 1706 and the transcriptome, which are comprised of Micro-Tom full-length cDNAs and Heinz 1706 RNA-seq data stored in the KaFTom and Sequence Read Archive databases. In order to provide large-scale omics information with streamlined connectivity we have developed and maintain a web database TOMATOMICS (http://bioinf.mind.meiji.ac.jp/tomatomics/). In TOMATOMICS, access to the information on the cDNA clone resources, full-length mRNA sequences, gene structures, expression profiles and functional annotations of genes is available through search functions and the genome browser, which has an intuitive graphical interface.

  2. M4FT-16LL080302052-Update to Thermodynamic Database Development and Sorption Database Integration

    Energy Technology Data Exchange (ETDEWEB)

    Zavarin, Mavrik [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Glenn T. Seaborg Inst.. Physical and Life Sciences; Wolery, T. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Akima Infrastructure Services, LLC; Atkins-Duffin, C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Global Security

    2016-08-16

    This progress report (Level 4 Milestone Number M4FT-16LL080302052) summarizes research conducted at Lawrence Livermore National Laboratory (LLNL) within the Argillite Disposal R&D Work Package Number FT-16LL08030205. The focus of this research is the thermodynamic modeling of Engineered Barrier System (EBS) materials and properties and development of thermodynamic databases and models to evaluate the stability of EBS materials and their interactions with fluids at various physico-chemical conditions relevant to subsurface repository environments. The development and implementation of equilibrium thermodynamic models are intended to describe chemical and physical processes such as solubility, sorption, and diffusion.

  3. ASEAN Mineral Database and Information System (AMDIS)

    Science.gov (United States)

    Okubo, Y.; Ohno, T.; Bandibas, J. C.; Wakita, K.; Oki, Y.; Takahashi, Y.

    2014-12-01

    AMDIS was officially launched at the Fourth ASEAN Ministerial Meeting on Minerals on 28 November 2013. In cooperation with the Geological Survey of Japan, the web-based GIS was developed using Free and Open Source Software (FOSS) and the Open Geospatial Consortium (OGC) standards. The system is composed of local databases and a centralized GIS. The local databases, created and updated using the centralized GIS, are accessible from the portal site. The system offers distinct advantages over traditional GIS: global reach, a large number of users, better cross-platform capability, no cost to users or providers, ease of use, and unified updates. By raising the transparency of mineral information for mining companies and the public, AMDIS shows that mineral resources are abundant throughout the ASEAN region; however, many data gaps remain. We understand that such problems occur because of insufficient governance of mineral resources. By mineral governance we mean a concept that enforces and maximizes the capacity and systems of the government institutions that manage the minerals sector. The elements of mineral governance include a) strengthening of the information infrastructure, b) the technological and legal capacity of state-owned mining companies to fully engage with mining sponsors, c) government-led management of mining projects by supporting the project implementation units, d) government capacity in mineral management, such as the control and monitoring of mining operations, and e) facilitation of regional and local development plans and their implementation with the private sector.

  4. Database system selection for marketing strategies support in information systems

    Directory of Open Access Journals (Sweden)

    František Dařena

    2007-01-01

    Full Text Available In today’s dynamically changing environment, marketing plays a significant role. Creating successful marketing strategies requires a large amount of high-quality information of various kinds and data types. A powerful database management system is a necessary condition for supporting the creation of marketing strategies. The paper briefly describes the field of marketing strategies and specifies the features that database systems should provide in connection with supporting these strategies. Major commercial (Oracle, DB2, MS SQL, Sybase) and open-source (PostgreSQL, MySQL, Firebird) databases are then examined from the point of view of accordance with these characteristics, and a comparison is made. The results are useful for deciding on the acquisition of a database system during the specification of an information system's architecture.

  5. Dynamic graph system for a semantic database

    Science.gov (United States)

    Mizell, David

    2015-01-27

    A method and system in a computer system for dynamically providing a graphical representation of a data store of entries via a matrix interface is disclosed. A dynamic graph system provides a matrix interface that exposes to an application program a graphical representation of data stored in a data store such as a semantic database storing triples. To the application program, the matrix interface represents the graph as a sparse adjacency matrix that is stored in compressed form. Each entry of the data store is considered to represent a link between nodes of the graph. Each entry has a first field and a second field identifying the nodes connected by the link and a third field with a value for the link that connects the identified nodes. The first, second, and third fields represent the rows, columns, and elements of the adjacency matrix.
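
    As a hedged illustration of the data structure this abstract describes, the sketch below maps the first and second fields of hypothetical triples to row and column indices and stores the third field's value as a matrix element in compressed sparse form; all node labels and predicate values are invented for the example.

        # Sketch: expose triple-store entries as a sparse adjacency matrix.
        # All triples, node labels and predicates here are hypothetical.
        from scipy.sparse import csr_matrix

        triples = [("alice", "knows", "bob"),
                   ("bob", "knows", "carol"),
                   ("alice", "cites", "carol")]

        # Assign a row/column index to every node of the graph.
        nodes = sorted({t[0] for t in triples} | {t[2] for t in triples})
        index = {n: i for i, n in enumerate(nodes)}

        # Encode each link's value (here, the predicate) as a nonzero element.
        predicates = sorted({t[1] for t in triples})
        pred_id = {p: k + 1 for k, p in enumerate(predicates)}  # 0 = no link

        rows = [index[s] for s, _, _ in triples]
        cols = [index[o] for _, _, o in triples]
        vals = [pred_id[p] for _, p, _ in triples]

        adj = csr_matrix((vals, (rows, cols)), shape=(len(nodes), len(nodes)))
        print(adj.toarray())  # dense view, for illustration only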

  6. 16th East-European Conference on Advances in Databases and Information Systems (ADBIS 2012)

    CERN Document Server

    Härder, Theo; Wrembel, Robert; Advances in Databases and Information Systems

    2013-01-01

    This volume is the second of two from the 16th East-European Conference on Advances in Databases and Information Systems (ADBIS 2012), held on September 18-21, 2012, in Poznań, Poland; the first was published in the LNCS series. This volume includes 27 research contributions, selected out of 90. The contributions cover a wide spectrum of topics in the database and information systems field, including: database foundations and theory, data modeling and database design, business process modeling, query optimization in relational and object databases, materialized view selection algorithms, index data structures, distributed systems, system and data integration, semi-structured data and databases, semantic data management, information retrieval, data mining techniques, data stream processing, trust and reputation in the Internet, and social networks. Thus, the content of this volume covers the research areas from fundamentals of databases, through still-hot research problems (e.g., data mining, XML ...

  7. Function integrated track system

    OpenAIRE

    Hohnecker, Eberhard

    2010-01-01

    The paper discusses a function integrated track system that focuses on the reduction of acoustic emissions from railway lines. It is shown that the combination of an embedded rail system (ERS), a sound absorbing track surface, and an integrated mini sound barrier has significant acoustic advantages compared to a standard ballast superstructure. The acoustic advantages of an embedded rail system are particularly pronounced in the case of railway bridges. Finally, it is shown that a...

  8. Integrated management systems

    CERN Document Server

    Bugdol, Marek

    2015-01-01

    Examining the challenges of integrated management, this book explores the importance and potential benefits of using an integrated approach as a cross-functional concept of management. It covers not only standardized management systems (e.g. International Organization for Standardization), but also models of self-assessment, as well as different types of integration. Furthermore, it demonstrates how processes and systems can be integrated, and how management efficiency can be increased. The major part of this book focuses on management concepts which use integration as a key tool of management processes (e.g. the systematic approach, supply chain management, virtual and network organizations, processes management and total quality management). Case studies, illustrations, and tables are also provided to exemplify and illuminate the content, as well as examples of successful and failed integrations. Providing a particularly useful resource to managers and specialists involved in the improvement of organization...

  9. Integration of reusable systems

    CERN Document Server

    Rubin, Stuart

    2014-01-01

    Software reuse and integration has been described as the process of creating software systems from existing software rather than building software systems from scratch. Whereas reuse solely deals with the artifacts creation, integration focuses on how reusable artifacts interact with the already existing parts of the specified transformation. Currently, most reuse research focuses on creating and integrating adaptable components at development or at compile time. However, with the emergence of ubiquitous computing, reuse technologies that can support adaptation and reconfiguration of architectures and components at runtime are in demand. This edited book includes 15 high quality research papers written by experts in information reuse and integration to cover the most recent advances in the field. These papers are extended versions of the best papers which were presented at IEEE International Conference on Information Reuse and Integration and IEEE International Workshop on Formal Methods Integration, which wa...

  10. Multilingual Database Management System: A Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Nurul H.M. Saad

    2011-01-01

    Full Text Available Problem statement: The use of English as well as Arabic is increasingly evident in international business and finance. This study explored the management of multilingual data in a multilingual system, to cater to Internet users speaking two or more languages. Approach: The proposed method is divided into two parts: the front end, consisting of the Client and Translator components, and the back end, where the management module and the database are located. In this method, a single encoded table is required to store information, and corresponding dictionaries are needed to store the multilingual data. The proposed method is based on the framework presented in previous work, with some modifications to suit the characteristics of the chosen languages. Results: Experimental evaluation was performed on storage requirements, and mathematical analysis was used to show the time of each database operation for both the traditional and the proposed methods. Conclusion/Recommendations: The proposed method was found to perform consistently well in the developed multilingual system.
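
    A minimal sketch of the storage scheme the abstract outlines, a single encoded table plus one dictionary per language, using SQLite; all table and column names are assumptions made for the example.

        import sqlite3

        conn = sqlite3.connect(":memory:")

        # One encoded table holds language-neutral rows...
        conn.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, name_key INTEGER)")
        # ...and one dictionary per supported language maps keys to strings.
        conn.execute("CREATE TABLE dict_en (key INTEGER, text TEXT)")
        conn.execute("CREATE TABLE dict_ar (key INTEGER, text TEXT)")

        conn.execute("INSERT INTO product VALUES (1, 100)")
        conn.execute("INSERT INTO dict_en VALUES (100, 'book')")
        conn.execute("INSERT INTO dict_ar VALUES (100, 'كتاب')")

        # Resolving a row in the requested language is a single join.
        lang = "ar"  # in practice, validate against a fixed list of languages
        row = conn.execute(
            f"SELECT p.id, d.text FROM product p "
            f"JOIN dict_{lang} d ON p.name_key = d.key").fetchone()
        print(row)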

  11. The fundamentals of object-oriented database management systems.

    Science.gov (United States)

    Plateau, D

    1993-01-01

    The purpose of this document is to characterize the two technologies (database and object-oriented technologies) which constitute the foundation of object-oriented database management systems. The O2 Object-Oriented DataBase Management System is then described as an example of this type of system.

  12. ARAMEMNON, a Novel Database for Arabidopsis Integral Membrane Proteins1

    Science.gov (United States)

    Schwacke, Rainer; Schneider, Anja; van der Graaff, Eric; Fischer, Karsten; Catoni, Elisabetta; Desimone, Marcelo; Frommer, Wolf B.; Flügge, Ulf-Ingo; Kunze, Reinhard

    2003-01-01

    A specialized database (DB) for Arabidopsis membrane proteins, ARAMEMNON, was designed that facilitates the interpretation of gene and protein sequence data by integrating features that are presently only available from individual sources. Using several publicly available prediction programs, putative integral membrane proteins were identified among the approximately 25,500 proteins in the Arabidopsis genome DBs. By averaging the predictions from seven programs, approximately 6,500 proteins were classified as transmembrane (TM) candidate proteins. Some 1,800 of these contain at least four TM spans and are possibly linked to transport functions. The ARAMEMNON DB enables direct comparison of the predictions of seven different TM span computation programs and the predictions of subcellular localization by eight signal peptide recognition programs. A special function displays the proteins related to the query and dynamically generates a protein family structure. As a first set of proteins from other organisms, all of the approximately 700 putative membrane proteins were extracted from the genome of the cyanobacterium Synechocystis sp. and incorporated in the ARAMEMNON DB. The ARAMEMNON DB is accessible at the URL http://aramemnon.botanik.uni-koeln.de. PMID:12529511

  13. Nutritional phenotype databases and integrated nutrition: from molecules to populations.

    Science.gov (United States)

    Gibney, Michael J; McNulty, Breige A; Ryan, Miriam F; Walsh, Marianne C

    2014-05-01

    In recent years, there has been a great expansion in the nature of new technologies for the study of all biologic subjects at the molecular and genomic level and these have been applied to the field of human nutrition. The latter has traditionally relied on a mix of epidemiologic studies to generate hypotheses, dietary intervention studies to test these hypotheses, and a variety of experimental approaches to understand the underlying explanatory mechanisms. Both the novel and traditional approaches have begun to carve out separate identities vis-à-vis their own journals, their own international societies, and their own national and international symposia. The present review draws on the advent of large national nutritional phenotype databases and related technological developments to argue the case that there needs to be far more integration of molecular and public health nutrition. This is required to address new joint approaches to such areas as the measurement of food intake, biomarker discovery, and the genetic determinants of nutrient-sensitive genotypes and other areas such as personalized nutrition and the use of new technologies with mass application, such as in dried blood spots to replace venipuncture or portable electronic devices to monitor food intake and phenotype. Future development requires the full integration of these 2 disciplines, which will provide a challenge to both funding agencies and to university training of nutritionists.

  14. Spatial Database Modeling for Indoor Navigation Systems

    Science.gov (United States)

    Gotlib, Dariusz; Gnat, Miłosz

    2013-12-01

    For many years, cartographers have been involved in designing GIS and navigation systems. Most GIS applications use outdoor data, but similar applications are increasingly used inside buildings, so it is important to find a proper model for an indoor spatial database. The development of indoor navigation systems should draw on advanced teleinformation, geoinformatics, geodetic and cartographic knowledge. The authors present the fundamental requirements for an indoor data model for navigation purposes. Reviewing some of the solutions adopted around the world, they emphasize that navigation applications require specific data to present navigation routes correctly. An original solution for an indoor data model, created by the authors on the basis of the BISDM model, is presented; its purpose is to expand the opportunities for use in indoor navigation.

  15. Active In-Database Processing to Support Ambient Assisted Living Systems

    Directory of Open Access Journals (Sweden)

    Wagner O. de Morais

    2014-08-01

    Full Text Available As an alternative to the existing software architectures that underpin the development of smart homes and ambient assisted living (AAL systems, this work presents a database-centric architecture that takes advantage of active databases and in-database processing. Current platforms supporting AAL systems use database management systems (DBMSs exclusively for data storage. Active databases employ database triggers to detect and react to events taking place inside or outside of the database. DBMSs can be extended with stored procedures and functions that enable in-database processing. This means that the data processing is integrated and performed within the DBMS. The feasibility and flexibility of the proposed approach were demonstrated with the implementation of three distinct AAL services. The active database was used to detect bed-exits and to discover common room transitions and deviations during the night. In-database machine learning methods were used to model early night behaviors. Consequently, active in-database processing avoids transferring sensitive data outside the database, and this improves performance, security and privacy. Furthermore, centralizing the computation into the DBMS facilitates code reuse, adaptation and maintenance. These are important system properties that take into account the evolving heterogeneity of users, their needs and the devices that are characteristic of smart homes and AAL systems. Therefore, DBMSs can provide capabilities to address requirements for scalability, security, privacy, dependability and personalization in applications of smart environments in healthcare.
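
    As a hedged sketch of the trigger-based reactivity described above, the following uses SQLite (rather than the unnamed DBMS of the paper) and an invented sensor schema: inserting a bed-sensor reading fires a trigger inside the database, so the event never has to leave the DBMS.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE bed_sensor (ts TEXT, occupied INTEGER);
        CREATE TABLE alerts (ts TEXT, message TEXT);

        -- Active-database trigger: react inside the DBMS as data arrives.
        CREATE TRIGGER bed_exit AFTER INSERT ON bed_sensor
        WHEN NEW.occupied = 0
        BEGIN
            INSERT INTO alerts VALUES (NEW.ts, 'bed exit detected');
        END;
        """)

        conn.execute("INSERT INTO bed_sensor VALUES ('02:13', 1)")
        conn.execute("INSERT INTO bed_sensor VALUES ('02:41', 0)")  # fires trigger
        print(conn.execute("SELECT * FROM alerts").fetchall())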

  16. Dive Data Management System and Database

    Energy Technology Data Exchange (ETDEWEB)

    Gardiner, J.

    1998-05-01

    In 1994 the International Marine Contractors Association (IMCA, formerly AODC), the Health and Safety Executive (HSE) and the United Kingdom Offshore Operators Association (UKOOA) entered into a tri-partite Agreement to create a Dive Data Recording and Management System for offshore dives in the air range on the United Kingdom Continental Shelf (UKCS). The two components of this system were: automatic Dive Data Recording Systems (DDRS) on dive support vessels, to log depth/time and other dive parameters; and a central Dive Data Management System (DDMS) to collate and analyse these data in an industry-wide database. This report summarises the progress of the project over the first two years of operation. It presents the data obtained in the period 1 January 1995 to 31 December 1996, in the form of industry-wide Standard Reports. It comments on the significance of the data, and it records the experience of the participants in implementing and maintaining the offshore Dive Data Recording Systems and the onshore central Dive Data Management System. A key success of the project has been to provide the air-range Diving Supervisor with an accurate, real-time display of the depth and time of every dive. This has enabled the dive and the associated decompression to be managed more effectively by the Supervisor. In the event of an incident, the recorded data are also available to the Dive/Safety Manager, who now has more complete information on which to assess the possible causes of the incident. (author)

  17. Systems Integration Fact Sheet

    Energy Technology Data Exchange (ETDEWEB)

    None

    2016-06-01

    This fact sheet is an overview of the Systems Integration subprogram at the U.S. Department of Energy SunShot Initiative. The Systems Integration subprogram enables the widespread deployment of safe, reliable, and cost-effective solar energy technologies by addressing the associated technical and non-technical challenges. These include timely and cost-effective interconnection procedures, optimal system planning, accurate prediction of solar resources, monitoring and control of solar power, maintaining grid reliability and stability, and many more. To address the challenges associated with interconnecting and integrating hundreds of gigawatts of solar power onto the electricity grid, the Systems Integration program funds research, development, and demonstration projects in four broad, interrelated focus areas: grid performance and reliability, dispatchability, power electronics, and communications.

  18. Integrated inventory information system

    Digital Repository Service at National Institute of Oceanography (India)

    Sarupria, J.S.; Kunte, P.D.

    The nature of oceanographic data and the management of inventory level information are described in Integrated Inventory Information System (IIIS). It is shown how a ROSCOPO (report on observations/samples collected during oceanographic programme...

  19. The Network Configuration of an Object Relational Database Management System

    Science.gov (United States)

    Diaz, Philip; Harris, W. C.

    2000-01-01

    The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.

  20. PlnTFDB: an integrative plant transcription factor database

    Directory of Open Access Journals (Sweden)

    Ruzicic Slobodan

    2007-02-01

    Full Text Available Abstract Background Transcription factors (TFs) are key regulatory proteins that enhance or repress the transcriptional rate of their target genes by binding to specific promoter regions (i.e., cis-acting elements) upon activation or de-activation of upstream signaling cascades. TFs thus constitute master control elements of dynamic transcriptional networks. TFs have fundamental roles in almost all biological processes (development, growth and response to environmental factors), and it is assumed that they play immensely important functions in the evolution of species. In plants, TFs have been employed to manipulate various types of metabolic, developmental and stress response pathways. Cross-species comparison and identification of regulatory modules, and hence TFs, is thought to become increasingly important for the rational design of new plant biomass. Up to now, however, no computational repository is available that provides access to the largely complete sets of transcription factors of sequenced plant genomes. Description PlnTFDB is an integrative plant transcription factor database that provides a web interface to access large (close to complete) sets of transcription factors of several plant species, currently encompassing Arabidopsis thaliana (thale cress), Populus trichocarpa (poplar), Oryza sativa (rice), Chlamydomonas reinhardtii and Ostreococcus tauri. It also provides an access point to its daughter databases of a species-centered representation of transcription factors (OstreoTFDB, ChlamyTFDB, ArabTFDB, PoplarTFDB and RiceTFDB). Information including protein sequences, coding regions, genomic sequences, expressed sequence tags (ESTs), domain architecture and scientific literature is provided for each family. Conclusion We have created lists of putatively complete sets of transcription factors and other transcriptional regulators for five plant genomes. They are publicly available through http://plntfdb.bio.uni-potsdam.de. Further data will be

  1. biochem4j: Integrated and extensible biochemical knowledge through graph databases.

    Science.gov (United States)

    Swainston, Neil; Batista-Navarro, Riza; Carbonell, Pablo; Dobson, Paul D; Dunstan, Mark; Jervis, Adrian J; Vinaixa, Maria; Williams, Alan R; Ananiadou, Sophia; Faulon, Jean-Loup; Mendes, Pedro; Kell, Douglas B; Scrutton, Nigel S; Breitling, Rainer

    2017-01-01

    Biologists and biochemists have at their disposal a number of excellent, publicly available data resources such as UniProt, KEGG, and NCBI Taxonomy, which catalogue biological entities. Despite the usefulness of these resources, they remain fundamentally unconnected. While links may appear between entries across these databases, users are typically only able to follow such links by manual browsing or through specialised workflows. Although many of the resources provide web-service interfaces for computational access, performing federated queries across databases remains a non-trivial but essential activity in interdisciplinary systems and synthetic biology programmes. What is needed are integrated repositories to catalogue both biological entities and, crucially, the relationships between them. Such a resource should be extensible, such that newly discovered relationships (for example, those between novel, synthetic enzymes and non-natural products) can be added over time. With the introduction of graph databases, the barrier to the rapid generation, extension and querying of such a resource has been lowered considerably. With a particular focus on metabolic engineering as an illustrative application domain, biochem4j, freely available at http://biochem4j.org, is introduced to provide an integrated, queryable database that warehouses chemical, reaction, enzyme and taxonomic data from a range of reliable resources. The biochem4j framework establishes a starting point for the flexible integration and exploitation of an ever-wider range of biological data sources, from public databases to laboratory-specific experimental datasets, for the benefit of systems biologists, biosystems engineers and the wider community of molecular biologists and biological chemists.
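
    biochem4j is built on a graph database, so relationship-centred questions become single queries. The sketch below is only illustrative: it assumes a Neo4j-style server and invents node labels, relationship types and credentials, which need not match biochem4j's actual schema.

        # Hypothetical relationship query against a Neo4j-style graph database.
        from neo4j import GraphDatabase

        driver = GraphDatabase.driver("bolt://localhost:7687",
                                      auth=("neo4j", "password"))

        query = """
        MATCH (e:Enzyme)-[:CATALYSES]->(r:Reaction)-[:HAS_PRODUCT]->(c:Chemical)
        WHERE c.name = $product
        RETURN e.name AS enzyme, r.id AS reaction
        """

        with driver.session() as session:
            for record in session.run(query, product="vanillin"):
                print(record["enzyme"], record["reaction"])

        driver.close()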

  2. CycADS: an annotation database system to ease the development and update of BioCyc databases.

    Science.gov (United States)

    Vellozo, Augusto F; Véron, Amélie S; Baa-Puyoulet, Patrice; Huerta-Cepas, Jaime; Cottret, Ludovic; Febvay, Gérard; Calevro, Federica; Rahbé, Yvan; Douglas, Angela E; Gabaldón, Toni; Sagot, Marie-France; Charles, Hubert; Colella, Stefano

    2011-01-01

    In recent years, genomes from an increasing number of organisms have been sequenced, but their annotation remains a time-consuming process. The BioCyc databases offer a framework for the integrated analysis of metabolic networks. The Pathway Tools software suite allows the automated construction of a database starting from an annotated genome, but it requires prior integration of all annotations into a specific summary file or into a GenBank file. To allow the easy creation and update of a BioCyc database starting from the multiple genome annotation resources available over time, we have developed an ad hoc data management system that we called Cyc Annotation Database System (CycADS). CycADS is centred on a specific database model and on a set of Java programs to import, filter and export relevant information. Data from GenBank and other annotation sources (including for example: KAAS, PRIAM, Blast2GO and PhylomeDB) are collected into a database to be subsequently filtered and extracted to generate a complete annotation file. This file is then used to build an enriched BioCyc database using the PathoLogic program of Pathway Tools. The CycADS pipeline for annotation management was used to build the AcypiCyc database for the pea aphid (Acyrthosiphon pisum) whose genome was recently sequenced. The AcypiCyc database webpage also includes, for comparative analyses, two other metabolic reconstruction BioCyc databases generated using CycADS: TricaCyc for Tribolium castaneum and DromeCyc for Drosophila melanogaster. Owing to its flexible design, CycADS offers a powerful software tool for the generation and regular updating of enriched BioCyc databases. The CycADS system is particularly suited for metabolic gene annotation and network reconstruction in newly sequenced genomes. Because of the uniform annotation used for metabolic network reconstruction, CycADS is particularly useful for comparative analysis of the metabolism of different organisms. Database URL: http://www.cycadsys.org.

  3. Software Application for Supporting the Education of Database Systems

    Science.gov (United States)

    Vágner, Anikó

    2015-01-01

    The article introduces an application which supports the education of database systems, particularly the teaching of SQL and PL/SQL in Oracle Database Management System environment. The application has two parts, one is the database schema and its content, and the other is a C# application. The schema is to administrate and store the tasks and the…

  4. Discrete integrable system and its integrable coupling

    Institute of Scientific and Technical Information of China (English)

    LI Zhu

    2009-01-01

    This paper derives a new discrete integrable system based on a discrete isospectral problem. It is shown that the hierarchy is completely integrable in the Liouville sense and possesses a bi-Hamiltonian structure. Finally, integrable couplings of the obtained system are given by means of semi-direct sums of Lie algebras.

  5. An Ontology-Based Approach for Semantic Conflict Resolution in Database Integration

    Institute of Scientific and Technical Information of China (English)

    Qiang Liu; Tao Huang; Shao-Hua Liu; Hua Zhong

    2007-01-01

    An important task in database integration is to resolve data conflicts, on both the schema level and the semantic level; the latter is especially difficult. Some existing ontology-based approaches have been criticized for their lack of domain generality and semantic richness. With the aim of overcoming these limitations, this paper introduces a systematic approach for detecting and resolving various semantic conflicts in heterogeneous databases, which includes two important parts: a semantic conflict representation model based on our classification framework of semantic conflicts, and a methodology for detecting and resolving semantic conflicts based on this model. A system has been developed; experimental evaluations of it indicate that this approach can resolve many semantic conflicts effectively while remaining independent of domains and integration patterns.

  6. Robust and Blind Watermarking of Relational Database Systems

    Directory of Open Access Journals (Sweden)

    A. Al-Haj

    2008-01-01

    Full Text Available Problem statement: Digital multimedia watermarking technology was suggested in the last decade to embed copyright information in digital objects such as images, audio and video. However, the increasing use of relational database systems in many real-life applications created an ever-increasing need for watermarking database systems. As a result, watermarking relational database systems is now emerging as a research area that deals with the legal issue of copyright protection of database systems. Approach: In this study, we proposed an efficient database watermarking algorithm based on inserting binary image watermarks in non-numeric multi-word attributes of selected database tuples. Results: The algorithm is robust, as it resists attempts to remove or degrade the embedded watermark, and it is blind, as it does not require the original database in order to extract the embedded watermark. Conclusion: Experimental results demonstrated the blindness and robustness of the algorithm against common database attacks.
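
    The abstract does not spell out the embedding scheme, so the sketch below shows one generic way such text-attribute watermarking can work, not the authors' algorithm: tuples are selected by a keyed hash of the primary key, and one watermark bit is hidden in the spacing of a multi-word attribute, which keeps extraction blind.

        import hashlib

        SECRET = "owner-key"  # hypothetical secret shared by embed and extract

        def selected(pk: int, fraction: int = 4) -> bool:
            """Pseudo-randomly pick about 1/fraction of tuples, keyed on the secret."""
            digest = hashlib.sha256(f"{SECRET}:{pk}".encode()).digest()
            return digest[0] % fraction == 0

        def embed_bit(text: str, bit: int) -> str:
            """Hide one bit in the first inter-word gap: 1 -> double space."""
            head, _, tail = text.partition(" ")
            return head + ("  " if bit else " ") + tail

        def extract_bit(text: str) -> int:
            return 1 if "  " in text else 0

        pk, attribute = 42, "integrated database management system"
        if selected(pk):
            marked = embed_bit(attribute, 1)
            assert extract_bit(marked) == 1  # blind: original not needed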

  7. Database and applications security integrating information security and data management

    CERN Document Server

    Thuraisingham, Bhavani

    2005-01-01

    This is the first book to provide an in-depth coverage of all the developments, issues and challenges in secure databases and applications. It provides directions for data and application security, including securing emerging applications such as bioinformatics, stream information processing and peer-to-peer computing. Divided into eight sections, each of which focuses on a key concept of secure databases and applications, this book deals with all aspects of technology, including secure relational databases, inference problems, secure object databases, secure distributed databases and emerging

  8. Integrated data acquisition, storage, retrieval and processing using the COMPASS DataBase (CDB)

    Energy Technology Data Exchange (ETDEWEB)

    Urban, J., E-mail: urban@ipp.cas.cz [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Pipek, J.; Hron, M. [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Janky, F.; Papřok, R.; Peterka, M. [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Department of Surface and Plasma Science, Faculty of Mathematics and Physics, Charles University in Prague, V Holešovičkách 2, 180 00 Praha 8 (Czech Republic); Duarte, A.S. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade Técnica de Lisboa, 1049-001 Lisboa (Portugal)

    2014-05-15

    Highlights: • CDB is used as a new data storage solution for the COMPASS tokamak. • The software is light weight, open, fast and easily extensible and scalable. • CDB seamlessly integrates with any data acquisition system. • Rich metadata are stored for physics signals. • Data can be processed automatically, based on dependence rules. - Abstract: We present a complex data handling system for the COMPASS tokamak, operated by IPP ASCR Prague, Czech Republic [1]. The system, called CDB (COMPASS DataBase), integrates different data sources as an assortment of data acquisition hardware and software from different vendors is used. Based on widely available open source technologies wherever possible, CDB is vendor and platform independent and it can be easily scaled and distributed. The data is directly stored and retrieved using a standard NAS (Network Attached Storage), hence independent of the particular technology; the description of the data (the metadata) is recorded in a relational database. Database structure is general and enables the inclusion of multi-dimensional data signals in multiple revisions (no data is overwritten). This design is inherently distributed as the work is off-loaded to the clients. Both NAS and database can be implemented and optimized for fast local access as well as secure remote access. CDB is implemented in Python language; bindings for Java, C/C++, IDL and Matlab are provided. Independent data acquisitions systems as well as nodes managed by FireSignal [2] are all integrated using CDB. An automated data post-processing server is a part of CDB. Based on dependency rules, the server executes, in parallel if possible, prescribed post-processing tasks.
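
    A toy sketch of the division of labour described in the highlights: bulk signal data is written straight to a file store while only metadata (with revisions) goes through a relational database. The paths, table layout and .npy format are assumptions for the example, not CDB's actual design.

        import sqlite3
        import numpy as np
        from pathlib import Path

        NAS = Path("/tmp/nas")  # stand-in for network-attached storage
        NAS.mkdir(parents=True, exist_ok=True)

        meta = sqlite3.connect("/tmp/cdb_meta.sqlite")
        meta.execute("""CREATE TABLE IF NOT EXISTS signal
            (name TEXT, shot INTEGER, revision INTEGER, path TEXT)""")

        def store(name, shot, data, revision=1):
            # Bulk data goes straight to the file store; only metadata hits the DB.
            path = NAS / f"{name}_{shot}_r{revision}.npy"
            np.save(path, data)  # revisions are new files; nothing is overwritten
            meta.execute("INSERT INTO signal VALUES (?, ?, ?, ?)",
                         (name, shot, revision, str(path)))
            meta.commit()

        def load(name, shot):
            row = meta.execute("""SELECT path FROM signal WHERE name=? AND shot=?
                                  ORDER BY revision DESC LIMIT 1""",
                               (name, shot)).fetchone()
            return np.load(row[0])

        store("plasma_current", 4073, np.linspace(0.0, 1.2, 1000))
        print(load("plasma_current", 4073).shape)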

  9. DEVELOPING MULTITHREADED DATABASE APPLICATION USING JAVA TOOLS AND ORACLE DATABASE MANAGEMENT SYSTEM IN INTRANET ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Raied Salman

    2015-11-01

    Full Text Available In many business organizations, database applications are designed and implemented using various DBMSs and programming languages. These applications are used to maintain the organizations' databases. An organization's departments can be located at different sites and connected by an intranet. In such an environment, maintaining database records becomes a complex task that needs to be addressed. In this paper an intranet application is designed and implemented using the object-oriented programming language Java and the object-relational database management system Oracle in a multithreaded operating-system environment.
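
    The paper's implementation uses Java and Oracle; to keep the sketches in this collection in one language, the same one-connection-per-thread pattern is shown below with Python and SQLite. The table name and department list are invented.

        import sqlite3
        import threading

        DB = "/tmp/records.sqlite"
        sqlite3.connect(DB).execute(
            "CREATE TABLE IF NOT EXISTS records (dept TEXT, payload TEXT)")

        def worker(dept: str, n: int) -> None:
            # One connection per thread, mirroring the one-session-per-thread
            # pattern of multithreaded JDBC applications.
            conn = sqlite3.connect(DB, timeout=5.0)
            with conn:  # each thread's batch commits atomically
                conn.executemany("INSERT INTO records VALUES (?, ?)",
                                 [(dept, f"row-{i}") for i in range(n)])
            conn.close()

        threads = [threading.Thread(target=worker, args=(d, 100))
                   for d in ("sales", "hr", "finance")]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        print(sqlite3.connect(DB).execute(
            "SELECT COUNT(*) FROM records").fetchone())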

  10. The "Integrated Library System."

    Science.gov (United States)

    Dowlin, Kenneth E.

    1985-01-01

    Reviews internal and external dimensions of library environment that must be taken into account by library managers when choosing an integrated library system. The selection, acquisition, and implementation stages of Maggie III--a computerized library system sensitive to the internal and external organizational environment--are described. (MBR)

  11. Performance assessment of EMR systems based on post-relational database.

    Science.gov (United States)

    Yu, Hai-Yan; Li, Jing-Song; Zhang, Xiao-Guang; Tian, Yu; Suzuki, Muneou; Araki, Kenji

    2012-08-01

    Post-relational databases provide high performance and are currently widely used in American hospitals. As few hospital information systems (HIS) in either China or Japan are based on post-relational databases, here we introduce a new-generation electronic medical records (EMR) system called Hygeia, which was developed with the post-relational database Caché and the latest platform Ensemble. Utilizing the benefits of a post-relational database, Hygeia is equipped with an "integration" feature that allows all system users to access data, with a fast response time, anywhere and at any time. Performance tests of databases in EMR systems were implemented in both China and Japan. First, a comparison test was conducted between the post-relational database Caché and the relational database Oracle, embedded in the EMR systems of a medium-sized first-class hospital in China. Second, a user terminal test was done on the EMR system Izanami, which is based on the same database, Caché, and operates efficiently at the Miyazaki University Hospital in Japan. The results proved that the post-relational database Caché works faster than the relational database Oracle and showed perfect performance in the real-time EMR system.

  12. Audit Database and Information Tracking System

    Data.gov (United States)

    Social Security Administration — This database contains information about Social Security Administration audits regarding SSA agency performance and compliance. These audits can be requested by both...

  13. Minority Serving Institutions Reporting System Database

    Data.gov (United States)

    Social Security Administration — The database will be used to track SSA's contributions to Minority Serving Institutions such as Historically Black Colleges and Universities (HBCU), Tribal Colleges...

  14. A neuroinformatics database system for disease-oriented neuroimaging research.

    Science.gov (United States)

    Wong, Stephen T C; Hoo, Kent Soo; Cao, Xinhua; Tjandra, Donny; Fu, J C; Dillon, William P

    2004-03-01

    Clinical databases are continually growing and accruing more patient information. One of the challenges for managing this wealth of data is efficient retrieval and analysis of a broad range of image and non-image patient data from diverse data sources. This article describes the design and implementation of a new class of research data warehouse, neuroinformatics database system (NIDS), which will alleviate these problems for clinicians and researchers studying and treating patients with intractable temporal lobe epilepsy. The NIDS is a secured, multi-tier system that enables the user to gather, proofread, analyze, and store data from multiple underlying sources. In addition to data management, the NIDS provides several key functions including image analysis and processing, free text search of patient reports, construction of general queries, and on-line statistical analysis. The establishment of this integrated research database will serve as a foundation for future hypothesis-driven experiments, which could uncover previously unsuspected correlations and perhaps help to identify new and accurate predictors for image diagnosis.

  15. Routing Protocols for Transmitting Large Databases or Multi-databases Systems

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Most knowledgeable people agree that networking and routing technologies have been around for about 25 years. Routing is simultaneously the most complicated function of a network and the most important. Since more than 70% of computer application fields are MIS applications, the challenge in building and using an MIS on a network is developing the means to find, access, and communicate large databases or multi-database systems. Because general databases are not time-continuous and cannot be streamed, reliable and secure quality of service cannot be obtained by deleting unimportant datagrams during database transmission. In this article, we discuss which kind of routing protocol is best suited for transmitting large databases or multi-database systems over networks.

  16. Three dimensional system integration

    CERN Document Server

    Papanikolaou, Antonis; Radojcic, Riko

    2010-01-01

    Three-dimensional (3D) integrated circuit (IC) stacking is the next big step in electronic system integration. It enables packing more functionality, as well as integration of heterogeneous materials, devices, and signals, in the same space (volume). This results in consumer electronics (e.g., mobile, handheld devices) which can run more powerful applications, such as full-length movies and 3D games, with longer battery life. This technology is so promising that it is expected to be a mainstream technology a few years from now, less than 10-15 years from its original conception. To achieve thi

  17. Databases

    Data.gov (United States)

    National Aeronautics and Space Administration — The databases of computational and experimental data from the first Aeroelastic Prediction Workshop are located here. The databases file names tell their contents by...

  18. Integrating Environmental and Human Health Databases in the Great Lakes Basin: Themes, Challenges and Future Directions

    Directory of Open Access Journals (Sweden)

    Kate L. Bassil

    2015-03-01

    Full Text Available Many government, academic and research institutions collect environmental data that are relevant to understanding the relationship between environmental exposures and human health. Integrating these data with health outcome data presents new challenges that are important to consider to improve our effective use of environmental health information. Our objective was to identify the common themes related to the integration of environmental and health data, and suggest ways to address the challenges and make progress toward more effective use of data already collected, to further our understanding of environmental health associations in the Great Lakes region. Environmental and human health databases were identified and reviewed using literature searches and a series of one-on-one and group expert consultations. Databases identified were predominantly environmental stressors databases, with fewer found for health outcomes and human exposure. Nine themes or factors that impact integration were identified: data availability, accessibility, harmonization, stakeholder collaboration, policy and strategic alignment, resource adequacy, environmental health indicators, and data exchange networks. The use and cost effectiveness of data currently collected could be improved by strategic changes to data collection and access systems to provide better opportunities to identify and study environmental exposures that may impact human health.

  19. Selecting a Relational Database Management System for Library Automation Systems.

    Science.gov (United States)

    Shekhel, Alex; O'Brien, Mike

    1989-01-01

    Describes the evaluation of four relational database management systems (RDBMSs) (Informix Turbo, Oracle 6.0 TPS, Unify 2000 and Relational Technology's Ingres 5.0) to determine which is best suited for library automation. The evaluation criteria used to develop a benchmark specifically designed to test RDBMSs for libraries are discussed. (CLB)

  20. Integrated nonthermal treatment system study

    Energy Technology Data Exchange (ETDEWEB)

    Biagi, C.; Bahar, D.; Teheranian, B.; Vetromile, J. [Morrison Knudsen Corp. (United States); Quapp, W.J. [Nuclear Metals (United States); Bechtold, T.; Brown, B.; Schwinkendorf, W. [Lockheed Martin Idaho Technologies Co., Idaho Falls, ID (United States); Swartz, G. [Swartz and Associates (United States)

    1997-01-01

    This report presents the results of a study of nonthermal treatment technologies. The study consisted of a systematic assessment of five nonthermal treatment alternatives. The treatment alternatives consist of widely varying technologies for safely destroying the hazardous organic components, reducing the volume, and preparing for final disposal of the contact-handled mixed low-level waste (MLLW) currently stored in the US Department of Energy complex. The alternatives considered were innovative nonthermal treatments for organic liquids and sludges, process residue, soil and debris. Vacuum desorption or various washing approaches are considered for treatment of soil, residue and debris. Organic destruction methods include mediated electrochemical oxidation, catalytic wet oxidation, and acid digestion. Other methods studied included stabilization technologies and mercury separation of treatment residues. This study is a companion to the integrated thermal treatment study which examined 19 alternatives for thermal treatment of MLLW waste. The quantities and physical and chemical compositions of the input waste are based on the inventory database developed by the US Department of Energy. The Integrated Nonthermal Treatment Systems (INTS) systems were evaluated using the same waste input (2,927 pounds per hour) as the Integrated Thermal Treatment Systems (ITTS). 48 refs., 68 figs., 37 tabs.

  1. Advanced integrated enhanced vision systems

    Science.gov (United States)

    Kerr, J. R.; Luk, Chiu H.; Hammerstrom, Dan; Pavel, Misha

    2003-09-01

    In anticipation of its ultimate role in transport, business and rotary wing aircraft, we clarify the role of Enhanced Vision Systems (EVS): how the output data will be utilized, appropriate architecture for total avionics integration, pilot and control interfaces, and operational utilization. Ground-map (database) correlation is critical, and we suggest that "synthetic vision" is simply a subset of the monitor/guidance interface issue. The core of integrated EVS is its sensor processor. In order to approximate optimal Bayesian multi-sensor fusion and ground correlation functionality in real time, we are developing a neural net approach utilizing human visual pathway and self-organizing, associative-engine processing. In addition to EVS/SVS imagery, outputs will include sensor-based navigation and attitude signals as well as hazard detection. A system architecture is described, encompassing an all-weather sensor suite; advanced processing technology; inertial, GPS and other avionics inputs; and pilot and machine interfaces. Issues of total-system accuracy and integrity are addressed, as well as flight operational aspects relating to both civil certification and military applications in IMC.

  2. Digital microscopy for boosting database integration and analysis in TMA studies.

    Science.gov (United States)

    Krenacs, Tibor; Ficsor, Levente; Varga, Sebestyen Viktor; Angeli, Vivien; Molnar, Bela

    2010-01-01

    The enormous amount of clinical, pathological, and staining data to be linked, analyzed, and correlated in a tissue microarray (TMA) project makes digital slides ideal to be integrated into TMA database systems. With the help of a computer and dedicated software tools, digital slides offer dynamic access to microscopic information at any magnification with easy navigation, annotation, measurement, and archiving features. Advanced slide scanners work both in transmitted light and fluorescent modes to support biomarker testing with immunohistochemistry, immunofluorescence or fluorescence in situ hybridization (FISH). Currently, computer-driven integrated systems are available for creating TMAs, digitalizing TMA slides, linking sample and staining data, and analyzing their results. Digital signals permit image segmentation along color, intensity, and size for automated object quantification where digital slides offer superior imaging features and batch processing. In this chapter, the workflow and the advantages of digital TMA projects are demonstrated through the project-based MIRAX system developed by 3DHISTECH and supported by Zeiss. The enhanced features of digital slides compared with those of still images can boost integration and intelligence in TMA database management systems, offering essential support for high-throughput biomarker testing, for example, in tumor progression/prognosis, drug discovery, and target therapy research.
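
    As a rough illustration of "segmentation along color, intensity, and size", the sketch below thresholds a synthetic grayscale image, labels connected components, and discards objects under a size cutoff; real TMA analysis pipelines are far more elaborate, and every number here is an assumption.

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(0)
        img = rng.random((64, 64))      # stand-in for one TMA core image
        img[10:20, 10:20] += 1.0        # a large "stained" object
        img[40:42, 40:42] += 1.0        # a small object, to be filtered out

        mask = img > 0.9                          # intensity segmentation
        labels, n = ndimage.label(mask)           # connected components
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))

        keep = [i + 1 for i, s in enumerate(sizes) if s >= 50]  # size filter
        objects = np.isin(labels, keep)
        print(f"{n} raw objects, {len(keep)} kept, {int(objects.sum())} pixels")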

  3. Improving microbial genome annotations in an integrated database context.

    Directory of Open Access Journals (Sweden)

    I-Min A Chen

    Full Text Available Effective comparative analysis of microbial genomes requires a consistent and complete view of biological data. Consistency regards the biological coherence of annotations, while completeness regards the extent and coverage of functional characterization for genomes. We have developed tools that allow scientists to assess and improve the consistency and completeness of microbial genome annotations in the context of the Integrated Microbial Genomes (IMG) family of systems. All publicly available microbial genomes are characterized in IMG using different functional annotation and pathway resources, thus providing a comprehensive framework for identifying and resolving annotation discrepancies. A rule-based system for predicting phenotypes in IMG provides a powerful mechanism for validating functional annotations, whereby the phenotypic traits of an organism are inferred based on the presence of certain metabolic reactions and pathways and compared to experimentally observed phenotypes. The IMG family of systems is available at http://img.jgi.doe.gov/.
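
    A toy version of the rule-based phenotype check described above: each rule asserts a phenotype when its required pathways are all annotated, and any disagreement with the experimentally observed phenotypes flags the annotation for curation. The rules and pathway names are invented for the example.

        # Hypothetical rules: phenotype -> pathways whose presence implies it.
        RULES = {
            "motile": {"flagellar assembly"},
            "anaerobe": {"fermentation", "pyruvate metabolism"},
        }

        def predict(annotated: set) -> set:
            return {ph for ph, required in RULES.items() if required <= annotated}

        genome_pathways = {"flagellar assembly", "pyruvate metabolism"}
        observed_phenotypes = {"motile", "anaerobe"}

        predicted = predict(genome_pathways)
        print("predicted:", predicted)
        # Mismatches point at possibly missing or wrong functional annotations.
        print("flag for curation:", observed_phenotypes - predicted)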

  4. TWRS information locator database system administrator's manual

    Energy Technology Data Exchange (ETDEWEB)

    Knutson, B.J., Westinghouse Hanford

    1996-09-13

    This document is a guide for use by the Tank Waste Remediation System (TWRS) Information Locator Database (ILD) System Administrator. The TWRS ILD System is an inventory of information used in the TWRS Systems Engineering process to represent the TWRS Technical Baseline. The inventory is maintained in the form of a relational database developed in Paradox 4.5.

  5. Discrete systems and integrability

    CERN Document Server

    Hietarinta, J; Nijhoff, F W

    2016-01-01

    This first introductory text to discrete integrable systems introduces key notions of integrability from the vantage point of discrete systems, also making connections with the continuous theory where relevant. While treating the material at an elementary level, the book also highlights many recent developments. Topics include: Darboux and Bäcklund transformations; difference equations and special functions; multidimensional consistency of integrable lattice equations; associated linear problems (Lax pairs); connections with Padé approximants and convergence algorithms; singularities and geometry; Hirota's bilinear formalism for lattices; intriguing properties of discrete Painlevé equations; and the novel theory of Lagrangian multiforms. The book builds the material in an organic way, emphasizing interconnections between the various approaches, while the exposition is mostly done through explicit computations on key examples. Written by respected experts in the field, the numerous exercises and the thoroug...

  6. CNA’s Integrated Ship Database: Third Quarter CY 2013 Update

    Science.gov (United States)

    2014-10-01

    CNA's Integrated Ship Database: Third Quarter CY 2013 Update. Gregory N. Suess and Lynette A. McClain, DIS-2014-U-007713-Final, October 2014. In this CNA Interactive Software document, we present the update of our Integrated Ship Database (ISDB) for the third quarter of CY 2013, covering routine updates of source data and changes in database content for the quarter.

  7. Integrating stations from the North America Gravity Database into a local GPS-based land gravity survey

    Science.gov (United States)

    Shoberg, Thomas G.; Stoddard, Paul R.

    2013-01-01

    The ability to augment local gravity surveys with additional gravity stations from easily accessible national databases can greatly increase the areal coverage and spatial resolution of a survey. It is, however, necessary to integrate such data seamlessly with the local survey. One challenge to overcome in integrating data from national databases is that these data are typically of unknown quality. This study presents a procedure for the evaluation and seamless integration of gravity data of unknown quality from a national database with data from a local Global Positioning System (GPS)-based survey. The starting components include the latitude, longitude, elevation and observed gravity at each station location. Interpolated surfaces of the complete Bouguer anomaly are used as a means of quality control and comparison. The result is an integrated dataset of varying quality with many stations having GPS accuracy and other reliable stations of unknown origin, yielding a wider coverage and greater spatial resolution than either survey alone.
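
    A hedged sketch of the quality-control step this abstract relies on: interpolate a complete Bouguer anomaly surface from the trusted GPS-based stations, then flag national-database stations whose values fall far from that surface. Coordinates, anomaly values and the 2 mGal tolerance are all synthetic assumptions.

        import numpy as np
        from scipy.interpolate import griddata

        rng = np.random.default_rng(1)

        # Trusted GPS-based survey: (x, y) locations and Bouguer anomalies (mGal).
        local_xy = rng.uniform(0, 1, size=(200, 2))
        local_cba = 10 * local_xy[:, 0] + 5 * local_xy[:, 1]  # synthetic field

        # Candidate stations of unknown quality from a national database.
        cand_xy = rng.uniform(0.1, 0.9, size=(20, 2))
        cand_cba = 10 * cand_xy[:, 0] + 5 * cand_xy[:, 1]
        cand_cba[3] += 8.0  # plant one bad station

        # Interpolate the trusted surface at the candidate locations and compare.
        expected = griddata(local_xy, local_cba, cand_xy, method="cubic")
        residual = np.abs(cand_cba - expected)

        accepted = cand_xy[residual < 2.0]  # 2 mGal tolerance (assumed)
        print(f"accepted {len(accepted)} of {len(cand_xy)} candidate stations")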

  8. Integrated management systems

    DEFF Research Database (Denmark)

    Jørgensen, Tine Herreborg; Remmen, Arne; Mellado, M. Dolores

    2006-01-01

    Different approaches to integration of management systems (ISO 9001, ISO 14001, OHSAS 18001 and SA 8000) with various levels of ambition have emerged. The tendency toward increased compatibility between these standards has paved the road for discussions of how to understand the different aspects of ...

  9. Integrable and superintegrable systems

    CERN Document Server

    1990-01-01

    Some of the most active practitioners in the field of integrable systems have been asked to describe what they think of as the problems and results which seem to be most interesting and important now and are likely to influence future directions. The papers in this collection, representing their authors' responses, offer a broad panorama of the subject as it enters the 1990's.

  10. Performance comparison of non-relational database systems

    OpenAIRE

    Žlender, Rok

    2011-01-01

    Deciding on which data store to use is one of the most important aspects of every project. Besides the established relational database systems non-relational solutions are gaining in their popularity. Non-relational database systems provide an interesting alternative when we are storing large amount of data or when we are looking for greater flexibility with our data model. Purpose of this thesis is to measure and analyze how chosen non-relational database systems compare against each othe...

  11. Power Systems integration

    Science.gov (United States)

    Brantley, L. W.

    1982-01-01

    Power systems integration in large flexible space structures is discussed with emphasis upon body control. A solar array is discussed as a typical example of spacecraft configuration problems. Information on how electric batteries dominate life-cycle costs is presented in chart form. Information is given on liquid metal droplet generators and collectors, hot spot analysis, power dissipation in solar arrays, solar array protection optimization, and electromagnetic compatibility for a power system platform.

  12. Database Performance Monitoring for the Photovoltaic Systems

    Energy Technology Data Exchange (ETDEWEB)

    Klise, Katherine A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    The Database Performance Monitoring (DPM) software (copyright in process) is being developed at Sandia National Laboratories to perform quality control analysis on time series data. The software loads time-indexed databases (currently CSV format), performs a series of quality control tests defined by the user, and creates reports which include summary statistics, tables, and graphics. DPM can be set up to run on an automated schedule defined by the user. For example, the software can be run once per day to analyze data collected on the previous day. HTML-formatted reports can be sent via email or hosted on a website. To compare performance across several databases, summary statistics and graphics can be gathered in a dashboard view which links to detailed reporting information for each database. The software can be customized for specific applications.
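
    A minimal sketch of the workflow the abstract outlines, loading a time-indexed CSV, running user-defined quality-control tests, and summarizing the failures; the column names and thresholds are assumptions, and the actual DPM software should be consulted for its real interface.

        import io
        import pandas as pd

        # Stand-in for a time-indexed CSV database of PV measurements.
        csv_text = "\n".join([
            "timestamp,dc_power,irradiance",
            "2015-10-01 10:00,1520,810",
            "2015-10-01 10:05,,815",
            "2015-10-01 10:10,-40,790",
            "2015-10-01 10:15,1490,800",
        ])
        df = pd.read_csv(io.StringIO(csv_text), index_col="timestamp",
                         parse_dates=True)

        # User-defined quality-control tests (thresholds are illustrative).
        tests = {
            "missing dc_power": df["dc_power"].isna(),
            "dc_power out of range": (df["dc_power"] < 0) | (df["dc_power"] > 2000),
        }

        report = pd.DataFrame(tests)
        print(report.sum())   # failures per test, for the summary report
        print(df.describe())  # summary statistics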

  13. Security in the CernVM File System and the Frontier Distributed Database Caching System

    CERN Document Server

    Dykstra, David

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently both CVMFS and Frontier have added X509-based integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
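
    Neither system's internals are reproduced here, but the core integrity idea, verifying downloaded content against a secure hash obtained from a trusted, signed catalog rather than from the cache itself, can be sketched as follows; the blob and catalog are invented stand-ins.

        import hashlib

        def fetch_from_cache(object_id: str) -> bytes:
            # Stand-in for a download through an untrusted HTTP proxy cache;
            # it may return stale or tampered bytes, which the check must catch.
            return b"calibration blob v7"

        # The expected digest comes from a digitally signed catalog, not from
        # the cache, so a tampering proxy cannot forge it.
        expected = hashlib.sha256(b"calibration blob v7").hexdigest()

        blob = fetch_from_cache(expected)
        if hashlib.sha256(blob).hexdigest() != expected:
            raise IOError("integrity check failed: discard and refetch")
        print("content verified")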

  14. Security in the CernVM File System and the Frontier Distributed Database Caching System

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, D.; Blomer, J. [CERN

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  15. Development of a Comprehensive Database System for Safety Analyst.

    Science.gov (United States)

    Paz, Alexander; Veeramisti, Naveen; Khanal, Indira; Baker, Justin; de la Fuente-Mella, Hanns

    2015-01-01

    This study addressed barriers associated with the use of Safety Analyst, a state-of-the-art tool that has been developed to assist during the entire Traffic Safety Management process but that is not widely used due to a number of challenges as described in this paper. As part of this study, a comprehensive database system and tools to provide data to multiple traffic safety applications, with a focus on Safety Analyst, were developed. A number of data management tools were developed to extract, collect, transform, integrate, and load the data. The system includes consistency-checking capabilities to ensure the adequate insertion and update of data into the database. This system focused on data from roadways, ramps, intersections, and traffic characteristics for Safety Analyst. To test the proposed system and tools, data from Clark County, which is the largest county in Nevada and includes the cities of Las Vegas, Henderson, Boulder City, and North Las Vegas, was used. The database and Safety Analyst together help identify the sites with the potential for safety improvements. Specifically, this study examined the results from two case studies. The first case study, which identified sites having a potential for safety improvements with respect to fatal and all injury crashes, included all roadway elements and used default and calibrated Safety Performance Functions (SPFs). The second case study identified sites having a potential for safety improvements with respect to fatal and all injury crashes, specifically regarding intersections; it used default and calibrated SPFs as well. Conclusions were developed for the calibration of safety performance functions and the classification of site subtypes. Guidelines were provided about the selection of a particular network screening type or performance measure for network screening.

  16. Development of a Comprehensive Database System for Safety Analyst

    Directory of Open Access Journals (Sweden)

    Alexander Paz

    2015-01-01

Full Text Available This study addressed barriers associated with the use of Safety Analyst, a state-of-the-art tool that has been developed to assist during the entire Traffic Safety Management process but that is not widely used due to a number of challenges as described in this paper. As part of this study, a comprehensive database system and tools to provide data to multiple traffic safety applications, with a focus on Safety Analyst, were developed. A number of data management tools were developed to extract, collect, transform, integrate, and load the data. The system includes consistency-checking capabilities to ensure the adequate insertion and update of data into the database. This system focused on data from roadways, ramps, intersections, and traffic characteristics for Safety Analyst. To test the proposed system and tools, data from Clark County, which is the largest county in Nevada and includes the cities of Las Vegas, Henderson, Boulder City, and North Las Vegas, was used. The database and Safety Analyst together help identify the sites with the potential for safety improvements. Specifically, this study examined the results from two case studies. The first case study, which identified sites having a potential for safety improvements with respect to fatal and all injury crashes, included all roadway elements and used default and calibrated Safety Performance Functions (SPFs). The second case study identified sites having a potential for safety improvements with respect to fatal and all injury crashes, specifically regarding intersections; it used default and calibrated SPFs as well. Conclusions were developed for the calibration of safety performance functions and the classification of site subtypes. Guidelines were provided about the selection of a particular network screening type or performance measure for network screening.

  17. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  18. A database/knowledge structure for a robotics vision system

    Science.gov (United States)

    Dearholt, D. W.; Gonzales, N. N.

    1987-01-01

    Desirable properties of robotics vision database systems are given, and structures which possess properties appropriate for some aspects of such database systems are examined. Included in the structures discussed is a family of networks in which link membership is determined by measures of proximity between pairs of the entities stored in the database. This type of network is shown to have properties which guarantee that the search for a matching feature vector is monotonic. That is, the database can be searched with no backtracking, if there is a feature vector in the database which matches the feature vector of the external entity which is to be identified. The construction of the database is discussed, and the search procedure is presented. A section on the support provided by the database for description of the decision-making processes and the search path is also included.
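    The monotonic, no-backtracking search property described above can be illustrated with a toy proximity network: link each stored feature vector to its nearest neighbours, then greedily move toward the query until no neighbour is closer. The graph construction and data below are assumptions for illustration, not the paper's exact method.

```python
import math

# Toy proximity network: every node links to its two nearest neighbours.
# Greedy descent then finds a match without backtracking.
features = {"a": (0.0, 0.0), "b": (1.0, 0.2), "c": (2.1, 0.1), "d": (3.0, 1.0)}

def two_nearest(name):
    ranked = sorted((math.dist(features[name], features[o]), o)
                    for o in features if o != name)
    return [o for _, o in ranked[:2]]

links = {n: two_nearest(n) for n in features}

def greedy_search(query, start="a"):
    """Move to whichever neighbour is closer to the query; never backtrack."""
    current = start
    while True:
        best = min(links[current], key=lambda n: math.dist(features[n], query))
        if math.dist(features[best], query) >= math.dist(features[current], query):
            return current            # no neighbour improves: stop here
        current = best

print(greedy_search((2.0, 0.0)))      # settles on "c", the closest vector
```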

  19. Development of SRS.php, a Simple Object Access Protocol-based library for data acquisition from integrated biological databases.

    Science.gov (United States)

    Barbosa-Silva, A; Pafilis, E; Ortega, J M; Schneider, R

    2007-12-11

    Data integration has become an important task for biological database providers. The current model for data exchange among different sources simplifies the manner that distinct information is accessed by users. The evolution of data representation from HTML to XML enabled programs, instead of humans, to interact with biological databases. We present here SRS.php, a PHP library that can interact with the data integration Sequence Retrieval System (SRS). The library has been written using SOAP definitions, and permits the programmatic communication through webservices with the SRS. The interactions are possible by invoking the methods described in WSDL by exchanging XML messages. The current functions available in the library have been built to access specific data stored in any of the 90 different databases (such as UNIPROT, KEGG and GO) using the same query syntax format. The inclusion of the described functions in the source of scripts written in PHP enables them as webservice clients to the SRS server. The functions permit one to query the whole content of any SRS database, to list specific records in these databases, to get specific fields from the records, and to link any record among any pair of linked databases. The case study presented exemplifies the library usage to retrieve information regarding registries of a Plant Defense Mechanisms database. The Plant Defense Mechanisms database is currently being developed, and the proposal of SRS.php library usage is to enable the data acquisition for the further warehousing tasks related to its setup and maintenance.
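    The pattern the library implements, reading method definitions from a WSDL file and exchanging XML messages with the SRS server, looks much the same outside PHP. Below is a hedged sketch in Python using the zeep SOAP library; the WSDL URL and the method name are placeholders, not the actual SRS service definition.

```python
from zeep import Client

# Sketch of a SOAP/WSDL client in the style of SRS.php, written in Python
# with zeep. The endpoint and the getEntries method are hypothetical.
client = Client("http://srs.example.org/soap?wsdl")   # placeholder WSDL URL

# Invoking a WSDL-described method exchanges XML messages under the hood,
# e.g. listing records of one database that match a query string.
entries = client.service.getEntries(database="UNIPROT", query="defensin")
print(entries)
```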

  20. Portuguese food composition database quality management system.

    Science.gov (United States)

    Oliveira, L M; Castanheira, I P; Dantas, M A; Porto, A A; Calhau, M A

    2010-11-01

    The harmonisation of food composition databases (FCDB) has been a recognised need among users, producers and stakeholders of food composition data (FCD). To reach harmonisation of FCDBs among the national compiler partners, the European Food Information Resource (EuroFIR) Network of Excellence set up a series of guidelines and quality requirements, together with recommendations to implement quality management systems (QMS) in FCDBs. The Portuguese National Institute of Health (INSA) is the national FCDB compiler in Portugal and is also a EuroFIR partner. INSA's QMS complies with ISO/IEC (International Organization for Standardisation/International Electrotechnical Commission) 17025 requirements. The purpose of this work is to report on the strategy used and progress made for extending INSA's QMS to the Portuguese FCDB in alignment with EuroFIR guidelines. A stepwise approach was used to extend INSA's QMS to the Portuguese FCDB. The approach included selection of reference standards and guides and the collection of relevant quality documents directly or indirectly related to the compilation process; selection of the adequate quality requirements; assessment of adequacy and level of requirement implementation in the current INSA's QMS; implementation of the selected requirements; and EuroFIR's preassessment 'pilot' auditing. The strategy used to design and implement the extension of INSA's QMS to the Portuguese FCDB is reported in this paper. The QMS elements have been established by consensus. ISO/IEC 17025 management requirements (except 4.5) and 5.2 technical requirements, as well as all EuroFIR requirements (including technical guidelines, FCD compilation flowchart and standard operating procedures), have been selected for implementation. The results indicate that the quality management requirements of ISO/IEC 17025 in place in INSA fit the needs for document control, audits, contract review, non-conformity work and corrective actions, and users' (customers

  1. DATABASE STRUCTURE FOR THE INTEGRATION OF RS WITH GIS BASED ON SEMANTIC NETWORK

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

The integration of remote sensing (RS) with geographical information system (GIS) is a hotspot in geographical information science. A good database structure is important to the integration of RS with GIS: it should be beneficial to the complete integration of RS with GIS, able to deal with the disagreement between the resolution of remote sensing images and the precision of GIS data, and also helpful to knowledge discovery and exploitation. In this paper, a database structure storing spatial data based on a semantic network is presented. This database structure has several advantages. Firstly, the spatial data is stored as raster data with a space index, so image processing can be done directly on the GIS data, which is stored hierarchically according to the distinguishing precision. Secondly, simple objects are aggregated into complex ones. Thirdly, because we use the indexing tree to depict the relationship of aggregation and the indexing pictures expressed by 2-D strings to describe the topology structure of the objects, the concepts of surrounding and region are expressed clearly and the semantic content of the landscape can be illustrated well. All the factors that affect the recognition of the objects are depicted in the factor space, which provides a uniform mathematical frame for the fusion of semantic and non-semantic information. Lastly, the object node, knowledge node and indexing node are integrated into one node. This feature enhances the ability of the system in knowledge expression, intelligent inference and association. The application shows that this database structure can benefit the interpretation of remote sensing images with the information of GIS.
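    The closing idea, merging object, knowledge, and index roles into one node and aggregating simple objects into complex ones through an indexing tree, can be made concrete with a small data structure. The class and field names below are assumptions for illustration, not the paper's implementation.

```python
from dataclasses import dataclass, field

# A toy node that merges the object, knowledge, and index roles, with
# aggregation children forming the indexing tree described above.
@dataclass
class Node:
    name: str
    attributes: dict = field(default_factory=dict)   # non-semantic factors
    rules: list = field(default_factory=list)        # attached knowledge
    parts: list = field(default_factory=list)        # aggregation children

building = Node("building", {"spectral_mean": 0.42})
road = Node("road", {"spectral_mean": 0.31})
block = Node("city_block", parts=[building, road],
             rules=["a block surrounds its buildings and roads"])

# Walking the indexing tree from complex object to simple parts.
for part in block.parts:
    print(block.name, "aggregates", part.name)
```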

  2. Multilingual lexicon design tool and database management system for MT

    CERN Document Server

    Barisevičius, G

    2011-01-01

The paper presents the design and development of an English-Lithuanian-English dictionary/lexicon tool and a lexicon database management system for MT. The system is oriented to support two main requirements: to be open to the user, and to describe many more attributes of parts of speech than a regular dictionary, as required for MT. The Java programming language and the MySQL database management system are used to implement the design tool and the lexicon database, respectively. This solution allows the system to be deployed easily on the Internet. The system is able to run on various OSs, such as Windows, Linux, Mac and others where a Java Virtual Machine is supported. Since a modern lexicon database management system is used, several users can access the same database without difficulty.

  3. Systems of integrable derivations

    Directory of Open Access Journals (Sweden)

    Vittoria Bonanzinga

    1994-05-01

Full Text Available Let A be a commutative k-algebra, with k a subring of A. We give the definition of n-dimensional differentiation of A over k, which formally extends the known one of unidimensional differentiation, and we study the group of all n-dimensional differentiations of A over k. In the second part of the work we give some theorems of strong integrability for systems of derivations in terms of n-dimensional differentiations.

  4. Natural Language Interfaces to Database Systems

    Science.gov (United States)

    1988-10-01


  5. Slimplectic Integrators: Variational Integrators for Nonconservative systems

    Science.gov (United States)

    Tsang, David

    2016-05-01

    Symplectic integrators are widely used for long-term integration of conservative astrophysical problems due to their ability to preserve the constants of motion; however, they cannot in general be applied in the presence of nonconservative interactions. Here we present the “slimplectic” integrator, a new type of numerical integrator that shares many of the benefits of traditional symplectic integrators yet is applicable to general nonconservative systems. We utilize a fixed-time-step variational integrator formalism applied to a newly developed principle of stationary nonconservative action (Galley, 2013, Galley et al 2014). As a result, the generalized momenta and energy (Noether current) evolutions are well-tracked. We discuss several example systems, including damped harmonic oscillators, Poynting-Robertson drag, and gravitational radiation reaction, by utilizing our new publicly available code to demonstrate the slimplectic integrator algorithm. Slimplectic integrators are well-suited for integrations of systems where nonconservative effects play an important role in the long-term dynamical evolution. As such they are particularly appropriate for cosmological or celestial N-body dynamics problems where nonconservative interactions, e.g., gas interactions or dissipative tides, can play an important role.

  6. PGSB/MIPS PlantsDB Database Framework for the Integration and Analysis of Plant Genome Data.

    Science.gov (United States)

    Spannagl, Manuel; Nussbaumer, Thomas; Bader, Kai; Gundlach, Heidrun; Mayer, Klaus F X

    2017-01-01

    Plant Genome and Systems Biology (PGSB), formerly Munich Institute for Protein Sequences (MIPS) PlantsDB, is a database framework for the integration and analysis of plant genome data, developed and maintained for more than a decade now. Major components of that framework are genome databases and analysis resources focusing on individual (reference) genomes providing flexible and intuitive access to data. Another main focus is the integration of genomes from both model and crop plants to form a scaffold for comparative genomics, assisted by specialized tools such as the CrowsNest viewer to explore conserved gene order (synteny). Data exchange and integrated search functionality with/over many plant genome databases is provided within the transPLANT project.

  7. PACSY, a relational database management system for protein structure and chemical shift analysis.

    Science.gov (United States)

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L

    2012-10-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.
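    The key-ID linkage between PACSY's table types is what makes combined structural and spectroscopic queries possible. The sketch below shows the idea with a hypothetical two-table simplification in SQLite; PACSY's real schema has six table types and runs on an RDBMS server such as MySQL or PostgreSQL.

```python
import sqlite3

# Hypothetical two-table simplification of PACSY-style key-ID linkage.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE coordinates (key_id INTEGER, atom TEXT, x REAL, y REAL, z REAL);
    CREATE TABLE chemical_shifts (key_id INTEGER, atom TEXT, shift REAL);
    INSERT INTO coordinates VALUES (1, 'CA', 10.1, 4.2, -3.3);
    INSERT INTO chemical_shifts VALUES (1, 'CA', 58.4);
""")

# One join over the shared key ID combines coordinates with chemical shifts.
rows = con.execute("""
    SELECT c.atom, c.x, c.y, c.z, s.shift
    FROM coordinates c
    JOIN chemical_shifts s ON s.key_id = c.key_id AND s.atom = c.atom
""").fetchall()
print(rows)   # [('CA', 10.1, 4.2, -3.3, 58.4)]
```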

  8. Distributed Database Control and Allocation. Volume 3. Distributed Database System Designer’s Handbook.

    Science.gov (United States)

    1983-10-01

See [THEM79, GIFF79] for details. Let us return to a database system model where each logical data item is stored at one DM. In a multiversion database each Write wi[x] produces a new copy (or version) of x, denoted xi. Thus, the value of x is a set of versions. For each

  9. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Document Server

    CERN. Geneva

    2012-01-01

    Non-relational "NoSQL" databases such as Cassandra and CouchDB are best known for their ability to scale to large numbers of clients spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects, is based on traditional SQL databases but also has the same high scalability and wide-area distributability for an important subset of applications. This paper compares the architectures, behavior, performance, and maintainability of the two different approaches and identifies the criteria for choosing which approach to prefer over the other.

  10. The Mouse Genome Database: integration of and access to knowledge about the laboratory mouse.

    Science.gov (United States)

    Blake, Judith A; Bult, Carol J; Eppig, Janan T; Kadin, James A; Richardson, Joel E

    2014-01-01

    The Mouse Genome Database (MGD) (http://www.informatics.jax.org) is the community model organism database resource for the laboratory mouse, a premier animal model for the study of genetic and genomic systems relevant to human biology and disease. MGD maintains a comprehensive catalog of genes, functional RNAs and other genome features as well as heritable phenotypes and quantitative trait loci. The genome feature catalog is generated by the integration of computational and manual genome annotations generated by NCBI, Ensembl and Vega/HAVANA. MGD curates and maintains the comprehensive listing of functional annotations for mouse genes using the Gene Ontology, and MGD curates and integrates comprehensive phenotype annotations including associations of mouse models with human diseases. Recent improvements include integration of the latest mouse genome build (GRCm38), improved access to comparative and functional annotations for mouse genes with expanded representation of comparative vertebrate genomes and new loads of phenotype data from high-throughput phenotyping projects. All MGD resources are freely available to the research community.

  11. Integrated healthcare information systems.

    Science.gov (United States)

    Miller, J

    1995-01-01

When it comes to electronic data processing in healthcare, we offer a guarded, but hopeful, prognosis. To be sure, the age of electronic information processing has hit healthcare. Employers, insurance companies, hospitals, physicians and a host of ancillary service providers are all being ushered into a world of high speed, high tech electronic information. Some are even predicting that the health information business will grow from $20 billion to over $100 billion in a decade. Yet, our industry lags behind other industries in its overall movement to the paperless world. Selecting and installing the most advanced integrated information system isn't a simple task, as we've seen. As in life, compromises can produce less than optimal results. Nevertheless, integrated healthcare systems simply won't achieve their goals without systems designed to support the operation of a continuum of services. That's the reality! It is difficult to read about the wonderful advances in other sectors, while realizing that many trees still fall each year in the name of the health care industry. Yes, there are some outstanding examples of organizations pushing the envelope in a variety of areas. Yet from a very practical standpoint, many (like our physician's office) are still struggling or are on the sidelines wondering what to do. Given the competitive marketplace, organizations without effective systems may not have long to wonder and wait.

  12. Integrated Information Systems

    Directory of Open Access Journals (Sweden)

    Annika Moscati

    2012-11-01

Full Text Available Currently in the field of management, enhancement, territory and cultural heritage analysis, two types of information systems offer significant tools: GIS (Geographic Information System) and AIS (Architectural Information System). The first one manages urban and territorial scale data, the second one administers architectural scale data. For a complete management and analysis of heritage both scales (territorial-urban and architectural) are essential. But despite numerous attempts made in recent years, currently no system is really able to manage them simultaneously. This study aims to create a hybrid system, which is a new interface that allows simultaneous viewing of an AIS, a GIS and a window for the management of spatial queries. Considering the profound differences between the two systems, the ultimate goal is to integrate them by proposing a new Hybrid System (HS) to solve the problem of scale change (from analysis to synthesis) using a new data structure and a new interface. To achieve this goal the study focused mainly on: a) the possibilities of implementation of the two systems; b) spatial analysis and 3D topology.

  13. Pancreatic Expression database: a generic model for the organization, integration and mining of complex cancer datasets

    Directory of Open Access Journals (Sweden)

    Lemoine Nicholas R

    2007-11-01

Full Text Available Abstract Background Pancreatic cancer is the 5th leading cause of cancer death in both males and females. In recent years, a wealth of gene and protein expression studies have been published broadening our understanding of pancreatic cancer biology. Due to the explosive growth in publicly available data from multiple different sources it is becoming increasingly difficult for individual researchers to integrate these into their current research programmes. The Pancreatic Expression database, a generic web-based system, is aiming to close this gap by providing the research community with an open access tool, not only to mine currently available pancreatic cancer data sets but also to include their own data in the database. Description Currently, the database holds 32 datasets comprising 7636 gene expression measurements extracted from 20 different published gene or protein expression studies from various pancreatic cancer types, pancreatic precursor lesions (PanINs) and chronic pancreatitis. The pancreatic data are stored in a data management system based on the BioMart technology alongside the human genome gene and protein annotations, sequence, homologue, SNP and antibody data. Interrogation of the database can be achieved through both a web-based query interface and through web services using combined criteria from pancreatic data (disease stages, regulation, differential expression, expression, platform technology, publication) and/or public data (antibodies, genomic region, gene-related accessions, ontology, expression patterns, multi-species comparisons, protein data, SNPs). Thus, our database enables connections between otherwise disparate data sources and allows relatively simple navigation between all data types and annotations. Conclusion The database structure and content provides a powerful and high-speed data-mining tool for cancer research. It can be used for target discovery i.e. of biomarkers from body fluids, identification and analysis

  14. Composite Materials Design Database and Data Retrieval System Requirements

    Science.gov (United States)

    1991-08-01

technology. Gaining such an understanding will facilitate the eventual development and operation of utilitarian composite materials databases (CMDB) designed... Significant Aspects of Materials Databases. While the components of a CMDB can be mapped to components of other types of databases, some differences stand out and make it difficult to implement an effective CMDB on current Commercial, Off-The-Shelf (COTS) systems or general DBMSs. These are summarized

  15. Metacatalog of Planetary Surface Features for Multicriteria Evaluation of Surface Evolution: the Integrated Planetary Feature Database

    Science.gov (United States)

    Hargitai, Henrik

    2016-10-01

We have created a metacatalog, or catalog of catalogs, of surface features of Mars that also includes the actual data in the catalogs listed. The goal is to make mesoscale surface feature databases available in one place, in a GIS-ready format. The databases can be directly imported into ArcGIS or other GIS platforms, like Google Mars. Some of the catalogs in our database are also ingested into the JMARS platform. All catalogs have been previously published in a peer-reviewed journal, but they may contain updates of the published catalogs. Many of the catalogs are "integrated", i.e., they merge databases or information from various papers on the same topic, including references to each individual feature listed. Where available, we have included shapefiles with polygon or linear features; however, most of the catalogs contain only point data of the features' center points and morphological data. One of the unexpected results of the planetary feature metacatalog is that some features have been described by several papers using different, i.e., conflicting, designations. This shows the need for the development of an identification system suitable for mesoscale (100s m to km sized) features that tracks papers and thus prevents multiple naming of the same feature. The feature database can be used for multicriteria analysis of a terrain, and thus enables easy distribution pattern analysis and the correlation of the distribution of different landforms and features on Mars. Such a catalog makes the scientific evaluation of potential landing sites easier and more effective during the selection process and also supports automated landing site selection. The catalog is accessible at https://planetarydatabase.wordpress.com/.

  16. The Yak genome database: an integrative database for studying yak biology and high-altitude adaption.

    Science.gov (United States)

    Hu, Quanjun; Ma, Tao; Wang, Kun; Xu, Ting; Liu, Jianquan; Qiu, Qiang

    2012-11-07

    The yak (Bos grunniens) is a long-haired bovine that lives at high altitudes and is an important source of milk, meat, fiber and fuel. The recent sequencing, assembly and annotation of its genome are expected to further our understanding of the means by which it has adapted to life at high altitudes and its ecologically important traits. The Yak Genome Database (YGD) is an internet-based resource that provides access to genomic sequence data and predicted functional information concerning the genes and proteins of Bos grunniens. The curated data stored in the YGD includes genome sequences, predicted genes and associated annotations, non-coding RNA sequences, transposable elements, single nucleotide variants, and three-way whole-genome alignments between human, cattle and yak. YGD offers useful searching and data mining tools, including the ability to search for genes by name or using function keywords as well as GBrowse genome browsers and/or BLAST servers, which can be used to visualize genome regions and identify similar sequences. Sequence data from the YGD can also be downloaded to perform local searches. A new yak genome database (YGD) has been developed to facilitate studies on high-altitude adaption and bovine genomics. The database will be continuously updated to incorporate new information such as transcriptome data and population resequencing data. The YGD can be accessed at http://me.lzu.edu.cn/yak.

  17. The Yak genome database: an integrative database for studying yak biology and high-altitude adaption

    Directory of Open Access Journals (Sweden)

    Hu Quanjun

    2012-11-01

Full Text Available Abstract Background The yak (Bos grunniens) is a long-haired bovine that lives at high altitudes and is an important source of milk, meat, fiber and fuel. The recent sequencing, assembly and annotation of its genome are expected to further our understanding of the means by which it has adapted to life at high altitudes and its ecologically important traits. Description The Yak Genome Database (YGD) is an internet-based resource that provides access to genomic sequence data and predicted functional information concerning the genes and proteins of Bos grunniens. The curated data stored in the YGD includes genome sequences, predicted genes and associated annotations, non-coding RNA sequences, transposable elements, single nucleotide variants, and three-way whole-genome alignments between human, cattle and yak. YGD offers useful searching and data mining tools, including the ability to search for genes by name or using function keywords as well as GBrowse genome browsers and/or BLAST servers, which can be used to visualize genome regions and identify similar sequences. Sequence data from the YGD can also be downloaded to perform local searches. Conclusions A new yak genome database (YGD) has been developed to facilitate studies on high-altitude adaption and bovine genomics. The database will be continuously updated to incorporate new information such as transcriptome data and population resequencing data. The YGD can be accessed at http://me.lzu.edu.cn/yak.

  18. Developing a Geological Management Information System: National Important Mining Zone Database

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

Geo-data is a foundation for the prediction and assessment of ore resources, so managing and making full use of those data, including the geography database, geology database, mineral deposits database, aeromagnetics database, gravity database, geochemistry database and remote sensing database, is very significant. We developed the national important mining zone database (NIMZDB) to manage 14 national important mining zone databases to support a new round of ore-deposit prediction. We found that attention should be paid to the following issues: ① data accuracy: integrity, logic consistency, attribute, spatial and time accuracy; ② management of both attribute and spatial data in the same system; ③ transforming data between MapGIS and ArcGIS; ④ data sharing and security; ⑤ data searches that can query both attribute and spatial data. Accuracy of input data is guaranteed, and searching, analyzing and translating data between MapGIS and ArcGIS have been made convenient via the development of a data-checking module and a data-managing module based on MapGIS and ArcGIS. Using ArcSDE, we based data sharing on a client/server system, and attribute and spatial data are also managed in the same system.

  19. The integrated web service and genome database for agricultural plants with biotechnology information

    Science.gov (United States)

    Kim, ChangKug; Park, DongSuk; Seol, YoungJoo; Hahn, JangHo

    2011-01-01

    The National Agricultural Biotechnology Information Center (NABIC) constructed an agricultural biology-based infrastructure and developed a Web based relational database for agricultural plants with biotechnology information. The NABIC has concentrated on functional genomics of major agricultural plants, building an integrated biotechnology database for agro-biotech information that focuses on genomics of major agricultural resources. This genome database provides annotated genome information from 1,039,823 records mapped to rice, Arabidopsis, and Chinese cabbage. PMID:21887015

  20. 7th Asian Conference on Intelligent Information and Database Systems (ACIIDS 2015)

    CERN Document Server

    Nguyen, Ngoc; Batubara, John; New Trends in Intelligent Information and Database Systems

    2015-01-01

    Intelligent information and database systems are two closely related subfields of modern computer science which have been known for over thirty years. They focus on the integration of artificial intelligence and classic database technologies to create the class of next generation information systems. The book focuses on new trends in intelligent information and database systems and discusses topics addressed to the foundations and principles of data, information, and knowledge models, methodologies for intelligent information and database systems analysis, design, and implementation, their validation, maintenance and evolution. They cover a broad spectrum of research topics discussed both from the practical and theoretical points of view such as: intelligent information retrieval, natural language processing, semantic web, social networks, machine learning, knowledge discovery, data mining, uncertainty management and reasoning under uncertainty, intelligent optimization techniques in information systems, secu...

  1. POTENTIAL: A Highly Adaptive Core of Parallel Database System

    Institute of Scientific and Technical Information of China (English)

    文继荣; 陈红; 王珊

    2000-01-01

POTENTIAL is a virtual database machine based on general computing platforms, especially parallel computing platforms. It provides a complete solution to high-performance database systems by a 'virtual processor + virtual data bus + virtual memory' architecture. Virtual processors manage all CPU resources in the system, on which various operations are running. Virtual data bus is responsible for the management of data transmission between associated operations, which forms the hinges of the entire system. Virtual memory provides efficient data storage and buffering mechanisms that conform to data reference behaviors in database systems. The architecture of POTENTIAL is very clear and has many good features, including high efficiency, high scalability, high extensibility, high portability, etc.

  2. SolveDB: Integrating Optimization Problem Solvers Into SQL Databases

    DEFF Research Database (Denmark)

    Siksnys, Laurynas; Pedersen, Torben Bach

    2016-01-01

Many real-world decision problems involve solving optimization problems based on data in an SQL database. Traditionally, solving such problems requires combining a DBMS with optimization software packages for each required class of problems (e.g. linear and constraint programming) -- leading to workflows that are cumbersome, complex, inefficient, and error-prone. In this paper, we present SolveDB - a DBMS for optimization applications. SolveDB supports solvers for different problem classes and offers seamless data management and optimization problem solving in a pure SQL-based setting. This allows for much simpler and more effective solutions of database-based optimization problems. SolveDB is based on the 3-level ANSI/SPARC architecture and allows formulating, solving, and analysing solutions of optimization problems using a single so-called solve query. SolveDB provides (1) an SQL-based syntax...
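    The "cumbersome workflow" the abstract contrasts itself against is easy to picture: pull data out of the DBMS, hand it to a separate solver, and write the answer back by hand; that is the glue SolveDB folds into a single solve query. Below is a minimal sketch of that traditional workflow, with a toy linear program and assumed table names.

```python
import sqlite3
from scipy.optimize import linprog

# The traditional DBMS-plus-solver workflow SolveDB aims to replace:
# extract with SQL, solve externally, write results back. The table and
# the toy LP (maximize profit under a 100-hour budget) are assumptions.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE products (name TEXT, profit REAL, hours REAL, allocation REAL);
    INSERT INTO products VALUES ('A', 3.0, 2.0, NULL), ('B', 5.0, 4.0, NULL);
""")
profit, hours = zip(*con.execute("SELECT profit, hours FROM products"))

# linprog minimizes, hence the negated objective coefficients.
res = linprog(c=[-p for p in profit], A_ub=[list(hours)], b_ub=[100],
              bounds=[(0, None)] * len(profit))

for name, x in zip(("A", "B"), res.x):       # write the solution back
    con.execute("UPDATE products SET allocation = ? WHERE name = ?",
                (float(x), name))
print(con.execute("SELECT * FROM products").fetchall())
```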

  3. TCR industrial system integration strategy

    CERN Document Server

    Bartolomé, R; Sollander, P; Martini, R; Vercoutter, B; Trebulle, M

    1999-01-01

    New turnkey data acquisition systems purchased from industry are being integrated into CERN's Technical Data Server. The short time available for system integration and the large amount of data per system require a standard and modular design. Four different integration layers have been defined in order to easily 'plug in' industrial systems. The first layer allows the integration of the equipment at the digital I/O port or fieldbus (Profibus-DP) level. A second layer permits the integration of PLCs (Siemens S5, S7 and Telemecanique); a third layer integrates equipment drivers. The fourth layer integrates turnkey mimic diagrams in the TCR operator console. The second and third layers use two new event-driven protocols based on TCP/IP. Using this structure, new systems are integrated in the data transmission chain, the layer at which they are integrated depending only on their integration capabilities.

  4. Integrated Strategic Tracking and Recruiting Database (iSTAR) Data Inventory

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Integrated Strategic Tracking and Recruiting Database (iSTAR) Data Inventory contains measured and modeled partnership and contact data. It is comprised of basic...

  5. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Document Server

    Dykstra, David

    2012-01-01

    One of the main attractions of non-relational "NoSQL" databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also has high scalability and wide-area distributability for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  6. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Science.gov (United States)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  7. Database and knowledge base integration--a data mapping method for Arden Syntax knowledge modules.

    Science.gov (United States)

    Johansson, B; Shahsavar, N; Ahlfeldt, H; Wigertz, O

    1996-12-01

One of the most important categories of decision-support systems in medicine is data-driven systems, where the inference engine is linked to a database. It is, therefore, important to find methods that facilitate the implementation of database queries referred to in the knowledge modules. A method is described for linking clinical databases to a knowledge base with Arden Syntax modules. The method is based on a query meta-database including templates for SQL queries, which is maintained by a database administrator. During knowledge module authoring the medical expert refers only to a code in the query meta-database; no knowledge is needed about the database model or the naming of attributes and relations. The method uses standard tools, such as C++ and ODBC, which makes it possible to implement the method on many platforms and to link to different clinical databases in a standardized way.
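    The core of the method is indirection: the knowledge module stores only a query code, and an administrator-maintained meta-database maps that code to an SQL template. Below is a minimal sketch with SQLite in which every table, code, and column name is a hypothetical stand-in.

```python
import sqlite3

# Query meta-database sketch: the knowledge module refers only to the code
# 'LAST_CREATININE'; the template table supplies the actual SQL. All table,
# code, and column names here are hypothetical.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE query_templates (code TEXT PRIMARY KEY, sql TEXT);
    INSERT INTO query_templates VALUES
      ('LAST_CREATININE',
       'SELECT value FROM lab_results WHERE patient_id = ? AND test = ''creatinine'' ORDER BY taken_at DESC LIMIT 1');
    CREATE TABLE lab_results (patient_id INTEGER, test TEXT,
                              value REAL, taken_at TEXT);
    INSERT INTO lab_results VALUES (42, 'creatinine', 1.3, '1996-12-01');
""")

def run_query(code, *params):
    """Resolve a knowledge-module query code to its SQL template and run it."""
    (sql,) = con.execute("SELECT sql FROM query_templates WHERE code = ?",
                         (code,)).fetchone()
    return con.execute(sql, params).fetchall()

print(run_query("LAST_CREATININE", 42))   # [(1.3,)]
```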

  8. Analysis and Design of Soils and Terrain Digital Database (SOTER) Management System Based on Object-Oriented Method

    Institute of Scientific and Technical Information of China (English)

    ZHANG HAITAO; ZHOU YONG; R. V. BIRNIE; A. SIBBALD; REN YI

    2003-01-01

A SOTER management system was developed through iterative analysis, design, programming and testing based on the object-oriented method. The function of attribute database management is inherited and expanded in the new system. The integrity and security of the SOTER database are enhanced. The attribute database management, the spatial database management and the model base are integrated into SOTER based on the component object model (COM), and a graphical user interface (GUI) for Windows is used to interact with clients, making the SOTER database easy to create and maintain and promoting the quantification and automation of soil information applications.

  9. Potentials of Advanced Database Technology for Military Information Systems

    NARCIS (Netherlands)

    Choenni, Sunil; Bruggeman, Ben

    2001-01-01

    Research and development in database technology evolves in several directions, which are not necessarily divergent. A number of these directions might be promising for military information systems as well. In this paper, we discuss the potentials of multi-media databases and data mining. Both direct

  10. An Architecture for Nested Transaction Support on Standard Database Systems

    NARCIS (Netherlands)

    Boertjes, E.M.; Grefen, P.W.P.J.; Vonk, J.; Apers, Peter M.G.

    Many applications dealing with complex processes require database support for nested transactions. Current commercial database systems lack this kind of support, offering flat, non-nested transactions only. This paper presents a three-layer architecture for implementing nested transaction support on

  11. CNA’s Integrated Ship Database: Second Quarter CY 2013 Update

    Science.gov (United States)

    2014-10-01

CNA’s Integrated Ship Database: Second Quarter CY 2013 Update. Gregory N. Suess and Lynette A. McClain. Report DIS-2014-U-007712-Final, October 2014. Contents include the database content for this quarter, the routine update of source data, and changes in the ship...

  12. An uncertain data integration system

    NARCIS (Netherlands)

    Ayat, N.; Afsarmanesh, H.; Akbarinia, R.; Valduriez, P.

    2012-01-01

    Data integration systems offer uniform access to a set of autonomous and heterogeneous data sources. An important task in setting up a data integration system is to match the attributes of the source schemas. In this paper, we propose a data integration system which uses the knowledge implied within

  13. Performance analysis of different database in new internet mapping system

    Science.gov (United States)

    Yao, Xing; Su, Wei; Gao, Shuai

    2017-03-01

In the Mapping System of the New Internet, massive numbers of mapping entries between AIDs and RIDs need to be stored, added, updated, and deleted. In order to deal better with large volumes of mapping-entry updates and query requests, the Mapping System of the New Internet must use a high-performance database. In this paper, we focus on the performance of Redis, SQLite, and MySQL, three typical databases, and the results show that a Mapping System based on different databases can adapt to different needs according to the actual situation.
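    The workload being benchmarked (storing, updating, and resolving AID-to-RID mappings) can be sketched against SQLite, one of the three candidates compared. The schema and identifier formats below are assumptions for illustration.

```python
import sqlite3

# Sketch of the mapping workload: an upsert keeps one RID per AID current,
# and a primary-key lookup resolves it. Schema and IDs are hypothetical.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mapping (aid TEXT PRIMARY KEY, rid TEXT)")

def update_mapping(aid, rid):
    """Insert a new entry, or update the RID when the AID already exists."""
    con.execute("""INSERT INTO mapping VALUES (?, ?)
                   ON CONFLICT(aid) DO UPDATE SET rid = excluded.rid""",
                (aid, rid))

def resolve(aid):
    row = con.execute("SELECT rid FROM mapping WHERE aid = ?", (aid,)).fetchone()
    return row[0] if row else None

update_mapping("aid:host-1", "rid:net-A")
update_mapping("aid:host-1", "rid:net-B")   # the host moved: entry updated
print(resolve("aid:host-1"))                # rid:net-B
```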

  14. Spatial Database Management System of China Geological Survey Extent

    Institute of Scientific and Technical Information of China (English)

    Chen Jianguo; Chen Zhijun; Wang Quanming; Fang Yiping

    2003-01-01

The spatial database management system of China geological survey extent is a social service system. Its aim is to help the government and the general public use the spatial database conveniently, for example for querying, indexing, mapping and product output. The management system has been developed based on the MapGIS 6.x SDK and Visual C++, considering the spatial database contents and structure and the requirements of users. This paper introduces the software structure, the data flow chart and some key techniques of the software development.

  15. The BRENDA enzyme information system-From a database to an expert system.

    Science.gov (United States)

    Schomburg, I; Jeske, L; Ulbrich, M; Placzek, S; Chang, A; Schomburg, D

    2017-04-21

Enzymes, representing the largest and by far most complex group of proteins, play an essential role in all processes of life, including metabolism, gene expression, cell division, the immune system, and others. Their function, also connected to most diseases or stress control, makes them interesting targets for research and applications in biotechnology, medical treatments, or diagnosis. Their functional parameters and other properties are collected, integrated, and made available to the scientific community in the BRaunschweig ENzyme DAtabase (BRENDA). In the last 30 years BRENDA has developed into one of the most highly used biological databases worldwide. The data contents, the process of data acquisition, data integration and control, the ways to access the data, and visualizations provided by the website are described and discussed. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  16. DOE technology information management system database study report

    Energy Technology Data Exchange (ETDEWEB)

    Widing, M.A.; Blodgett, D.W.; Braun, M.D.; Jusko, M.J.; Keisler, J.M.; Love, R.J.; Robinson, G.L. [Argonne National Lab., IL (United States). Decision and Information Sciences Div.

    1994-11-01

To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System.

  17. Structured Query Translation in Peer to Peer Database Sharing Systems

    Directory of Open Access Journals (Sweden)

    Mehedi Masud

    2009-10-01

Full Text Available This paper presents a query translation mechanism between heterogeneous peers in Peer to Peer Database Sharing Systems (PDSSs). A PDSS combines a database management system with P2P functionalities. The local databases on peers are called peer databases. In a PDSS, each peer chooses its own data model and schema and maintains data independently without any global coordinator. One of the problems in such a system is translating queries between peers, taking into account both the schema and data heterogeneity. Query translation is the problem of rewriting a query posed in terms of one peer schema to a query in terms of another peer schema. This paper proposes a query translation mechanism between peers where peers are acquainted in data sharing systems through data-level mappings for sharing data.
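    At its simplest, schema-level query translation is a rewrite of relation and attribute names through a peer-to-peer mapping. The toy sketch below shows only that step; the paper's mechanism also uses data-level mappings, which this example deliberately ignores, and every schema name here is hypothetical.

```python
import re

# Toy schema-level rewrite from peer A's vocabulary into peer B's.
# Real PDSS translation also handles data-level mappings; this does not.
schema_map = {"patients": "persons", "pid": "person_id", "dob": "birth_date"}

def translate(query, mapping):
    """Rewrite relation/attribute names using whole-word substitution."""
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, mapping)) + r")\b")
    return pattern.sub(lambda m: mapping[m.group(1)], query)

q = "SELECT pid, dob FROM patients WHERE dob < '1970-01-01'"
print(translate(q, schema_map))
# SELECT person_id, birth_date FROM persons WHERE birth_date < '1970-01-01'
```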

  18. Development and trial of the drug interaction database system

    Directory of Open Access Journals (Sweden)

    Virasakdi Chongsuvivatwong

    2003-07-01

Full Text Available The drug interaction database system was originally developed at Songklanagarind Hospital. Data sets of drugs available in Songklanagarind Hospital, comprising standard drug names, trade names, group names, and drug interactions, were set up using Microsoft® Access 2000. The computer used was a Pentium III processor running at 450 MHz with 128 MB SDRAM operated by Microsoft® Windows 98. A robust structured query language (SQL) algorithm was chosen for detecting interactions. The functioning of this database system, including speed and accuracy of detection, was tested at Songklanagarind Hospital and Naratiwatrachanagarind Hospital using hypothetical prescriptions. Its use in determining the incidence of drug interactions was also evaluated using a retrospective prescription data file. This study has shown that the database system correctly detected drug interactions from prescriptions. Speed of detection was approximately 1 to 2 seconds depending on the size of the prescription. The database system was of benefit in determining the incidence rate of drug interactions in a hospital.
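    The SQL-based detection the abstract describes can be pictured as a self-join: pair up the drugs on one prescription and look each pair up in an interaction table. The drug names and table layout below are illustrative assumptions, not the hospital system's schema.

```python
import sqlite3

# Sketch of SQL interaction detection: pair the prescription's drugs and
# check each pair against a known-interactions table (hypothetical data).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE interactions (drug_a TEXT, drug_b TEXT, severity TEXT);
    INSERT INTO interactions VALUES ('warfarin', 'aspirin', 'major');
    CREATE TABLE prescription (drug TEXT);
    INSERT INTO prescription VALUES ('warfarin'), ('aspirin'), ('omeprazole');
""")
hits = con.execute("""
    SELECT i.drug_a, i.drug_b, i.severity
    FROM prescription p1
    JOIN prescription p2 ON p1.drug < p2.drug          -- each pair once
    JOIN interactions i ON (i.drug_a = p1.drug AND i.drug_b = p2.drug)
                        OR (i.drug_a = p2.drug AND i.drug_b = p1.drug)
""").fetchall()
print(hits)   # [('warfarin', 'aspirin', 'major')]
```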

  19. Efficient Integrity Checking for Databases with Recursive Views

    DEFF Research Database (Denmark)

    Martinenghi, Davide; Christiansen, Henning

    2005-01-01

    into incremental and optimized tests specialized for given update patterns. These tests may involve the introduction of new views, but for relevant cases of recursion, simplified integrity constraints are obtained that can be checked more efficiently than the original ones and without auxiliary views. Notably...

  20. Health systems integration: state of the evidence

    Directory of Open Access Journals (Sweden)

    Gail D. Armitage

    2009-06-01

    Full Text Available Introduction: Integrated health systems are considered a solution to the challenge of maintaining the accessibility and integrity of healthcare in numerous jurisdictions worldwide. However, decision makers in a Canadian health region indicated they were challenged to find evidence-based information to assist with the planning and implementation of integrated healthcare systems. Methods: A systematic literature review of peer-reviewed literature from health sciences and business databases, and targeted grey literature sources. Results: Despite the large number of articles discussing integration, significant gaps in the research literature exist. There was a lack of high quality, empirical studies providing evidence on how health systems can improve service delivery and population health. No universal definition or concept of integration was found and multiple integration models from both the healthcare and business literature were proposed in the literature. The review also revealed a lack of standardized, validated tools that have been systematically used to evaluate integration outcomes. This makes measuring and comparing the impact of integration on system, provider and patient level challenging. Discussion and conclusion: Healthcare is likely too complex for a one-size-fits-all integration solution. It is important for decision makers and planners to choose a set of complementary models, structures and processes to create an integrated health system that fits the needs of the population across the continuum of care. However, in order to have evidence available, decision makers and planners should include evaluation for accountability purposes and to ensure a better understanding of the effectiveness and impact of health systems integration.

  1. Semantic-JSON: a lightweight web service interface for Semantic Web contents integrating multiple life science databases.

    Science.gov (United States)

    Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro

    2011-07-01

As global cloud frameworks for bioinformatics research databases become huge and heterogeneous, solutions face competing challenges of cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases having 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org.
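    The appeal of a JSON interface over SPARQL, as described above, is that linked-data fragments become plain nested structures any scripting language can walk. The sketch below parses a made-up document shape and follows its links; it does not reproduce the actual Semantic-JSON response format.

```python
import json

# Walking linked records exposed as JSON. The document shape is a made-up
# stand-in; a real client would fetch each fragment over HTTP instead.
payload = json.loads("""
{
  "record": {"id": "gene:0001", "label": "example gene",
             "links": [{"predicate": "expressed_in", "target": "tissue:42"}]}
}
""")

record = payload["record"]
for link in record["links"]:
    # Each link is one semantic relationship of the kind the interface
    # exposes; the target ID would be resolved by a further request.
    print(record["id"], link["predicate"], link["target"])
```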

  2. Advanced Integrated Traction System

    Energy Technology Data Exchange (ETDEWEB)

    Greg Smith; Charles Gough

    2011-08-31

The United States Department of Energy has articulated the compelling need for a commercialized, competitively priced electric traction drive system to accelerate the acceptance of HEVs, PHEVs, and FCVs in the market. The desired end result is a technically and commercially verified integrated ETS (Electric Traction System) product design that can be manufactured and distributed through a broad network of competitive suppliers to all auto manufacturers. The objectives of this FCVT program are to develop advanced technologies for an integrated ETS capable of 55 kW peak power for 18 seconds and 30 kW of continuous power. Additionally, to accommodate a variety of automotive platforms the ETS design should be scalable to 120 kW peak power for 18 seconds and 65 kW of continuous power. The ETS (exclusive of the DC/DC Converter) is to cost no more than $660 (55 kW at $12/kW) to produce in quantities of 100,000 units per year, should have a total weight less than 46 kg, and have a volume less than 16 liters. The cost target for the optional Bi-Directional DC/DC Converter is $375. The goal is to achieve these targets with the use of engine coolant at a nominal temperature of 105 °C. The system efficiency should exceed 90% at 20% of rated torque over 10% to 100% of maximum speed. The nominal operating system voltage is to be 325 V, with consideration for higher voltages. This project investigated a wide range of technologies, including ETS topologies, components, and interconnects. Each technology and its validity for automotive use were verified, and then these technologies were integrated into a high-temperature ETS design that would support a wide variety of applications (fuel cell, hybrids, electrics, and plug-ins). This ETS met all the DOE 2010 objectives of cost, weight, volume and efficiency, and the 2015 objectives for specific power and power density. Additionally a bi-directional converter was developed that provides charging and electric power take-off, which is the first step

  3. Alternative treatment technology information center computer database system

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, D. [Environmental Protection Agency, Edison, NJ (United States)

    1995-10-01

    The Alternative Treatment Technology Information Center (ATTIC) computer database system was developed pursuant to the 1986 Superfund law amendments. It provides up-to-date information on innovative treatment technologies to clean up hazardous waste sites. ATTIC v2.0 provides access to several independent databases as well as a mechanism for retrieving full-text documents of key literature. It can be accessed with a personal computer and modem 24 hours a day, and there are no user fees. ATTIC provides "one-stop shopping" for information on alternative treatment options by accessing several databases: (1) the treatment technology database, which contains abstracts from the literature on all types of treatment technologies, including biological, chemical, physical, and thermal methods; the best literature as viewed by experts is highlighted; (2) the treatability study database, which provides performance information on technologies to remove contaminants from wastewaters and soils, derived from treatability studies; this database is available through ATTIC or separately as a disk that can be mailed to you; (3) the underground storage tank database, which presents information on underground storage tank corrective actions, surface spills, emergency response, and remedial actions; and (4) the oil/chemical spill database, which provides abstracts on treatment and disposal of spilled oil and chemicals. In addition to these separate databases, ATTIC allows immediate access to other disk-based systems such as the Vendor Information System for Innovative Treatment Technologies (VISITT) and the Bioremediation in the Field Search System (BFSS). Users may download these programs to their own PC via a high-speed modem, and may also download entire documents through the ATTIC system. Currently, about fifty publications are available, including Superfund Innovative Technology Evaluation (SITE) program documents.

  4. Development of a Relational Database for Learning Management Systems

    Science.gov (United States)

    Deperlioglu, Omer; Sarpkaya, Yilmaz; Ergun, Ertugrul

    2011-01-01

    In today's world, Web-based distance education systems are of great importance; they are usually known as Learning Management Systems (LMS). In this article, a database design developed to implement a Learning Management System for an educational institution is described. In this sense, developed Learning…

  5. LOWER LEVEL INFERENCE CONTROL IN STATISTICAL DATABASE SYSTEMS

    Energy Technology Data Exchange (ETDEWEB)

    Lipton, D.L.; Wong, H.K.T.

    1984-02-01

    An inference is the process of transforming unclassified data values into confidential data values. Most previous research in inference control has studied the use of statistical aggregates to deduce individual records. However, several other types of inference are also possible. Unknown functional dependencies may be apparent to users who have 'expert' knowledge about the characteristics of a population. Some correlations between attributes may be concluded from 'commonly-known' facts about the world. To counter these threats, security managers should use random sampling of databases of similar populations, as well as expert systems. 'Expert' users of the database system may also form inferences from the variable performance of the user interface: users may observe on-line turn-around time, accounting statistics, the error messages received, and the point at which an interactive protocol sequence fails. From this information one may learn about the frequency distributions of attribute values and the validity of data object names. At the back-end of a database system, improved software engineering practices will reduce opportunities to bypass functional units of the database system. The term 'data object' should be expanded to incorporate those data object types which generate new classes of threats. The security of databases and of database systems must be recognized as separate but related problems. Thus, by increased awareness of lower level inferences, system security managers may effectively nullify the threat they pose.

  6. A survey of commercial object-oriented database management systems

    Science.gov (United States)

    Atkins, John

    1992-01-01

    The object-oriented data model is the culmination of over thirty years of database research. Initially, database research focused on the need to provide information in a consistent and efficient manner to the business community. Early data models such as the hierarchical model and the network model met the goal of consistent and efficient access to data and were substantial improvements over simple file mechanisms for storing and accessing data. However, these models required highly skilled programmers to provide access to the data. Consequently, in the early 70's E.F. Codd, an IBM research computer scientist, proposed a new data model based on the simple mathematical notion of the relation. This model is known as the Relational Model. In the relational model, data is represented in flat tables (or relations) which have no physical or internal links between them. The simplicity of this model fostered the development of powerful but relatively simple query languages that made data directly accessible to the general database user. Except for large, multi-user database systems, a database professional was in general no longer necessary. Database professionals found that traditional data in the form of character data, dates, and numeric data were easily represented and managed via the relational model. Commercial relational database management systems proliferated and performance of relational databases improved dramatically. However, there was a growing community of potential database users whose needs were not met by the relational model. These users needed to store data with data types not available in the relational model and required a far richer modelling environment than the relational model provided. Indeed, the complexity of the objects to be represented in the model mandated a new approach to database technology. The Object-Oriented Model was the result.

  7. Common Systems Integration Lab (CSIL)

    Data.gov (United States)

    Federal Laboratory Consortium — The Common Systems Integration Lab (CSIL) supports the PMA-209 Air Combat Electronics Program Office. CSIL also supports development, test, integration and life cycle...

  8. Towards integrated microliquid handling systems

    NARCIS (Netherlands)

    Elwenspoek, M.; Lammerink, T.S.J.; Miyake, R.; Fluitman, J.H.J.

    1994-01-01

    In this paper we describe components for integrated microliquid handling systems such as fluid injection analysis, and first results of planar integration of components. The components discussed are channels, passive and active valves, actuators for micropumps, micromixers, microflow sensors, optica

  10. Analysis of Cloud-Based Database Systems

    Science.gov (United States)

    2015-06-01

    University San Luis Obispo, 2009. Submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science... The average time for a query to complete on the production system was 136,746 microseconds. On our cloud-based system, the average was 198,875

  11. The cytochrome P450 engineering database: Integration of biochemical properties.

    Science.gov (United States)

    Sirim, Demet; Wagner, Florian; Lisitsa, Andrey; Pleiss, Jürgen

    2009-11-12

    Cytochrome P450 monooxygenases (CYPs) form a vast and diverse enzyme class of particular interest in drug development, with high biotechnological potential. Although very diverse in sequence, they share a common structural fold. For the comprehensive and systematic comparison of protein sequences and structures, the Cytochrome P450 Engineering Database (CYPED) was established. It was built on an extensible data model that enables its functions to be readily enhanced. The new version of the CYPED contains information on the sequences and structures of 8613 and 47 proteins, respectively, which strictly follow Nelson's classification rules for homologous families and superfamilies. To gain biochemical information on substrates and inhibitors, the CYPED was linked to the Cytochrome P450 Knowledgebase (CPK). To overcome differences in the data model and inconsistencies in the content of CYPED and CPK, a metric was established based on sequence similarity to link protein sequences as primary keys. In addition, the annotation of structurally and functionally relevant residues was extended by a reliable prediction of conserved secondary structure elements and by information on the effect of single nucleotide polymorphisms. The online accessible version of the CYPED at http://www.cyped.uni-stuttgart.de provides a valuable tool for the analysis of sequences, structures and their relationships to biochemical properties.
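
    The linking metric the abstract mentions, matching records across CYPED and CPK by sequence similarity when primary keys disagree, can be illustrated with a hedged Python sketch. The scoring function and threshold below are assumptions for demonstration, not the published method.

```python
# Illustrative sketch: link records from two databases by sequence similarity
# when their primary keys differ. The difflib ratio and the 0.95 threshold
# are assumptions for demonstration, not the CYPED/CPK metric.
from difflib import SequenceMatcher

def similarity(seq_a: str, seq_b: str) -> float:
    """Crude global similarity in [0, 1] between two protein sequences."""
    return SequenceMatcher(None, seq_a, seq_b).ratio()

def link_records(cyped: dict[str, str], cpk: dict[str, str], threshold: float = 0.95):
    """Yield (cyped_id, cpk_id) pairs whose sequences look like the same protein."""
    for cid, cseq in cyped.items():
        for kid, kseq in cpk.items():
            if similarity(cseq, kseq) >= threshold:
                yield cid, kid

cyped_demo = {"CYP101A1": "MTTETIQSNANLAPLPPHVPEH"}   # toy fragments
cpk_demo = {"cpk-0007": "MTTETIQSNANLAPLPPHVPEH"}
print(list(link_records(cyped_demo, cpk_demo)))        # [('CYP101A1', 'cpk-0007')]
```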

  12. Integration of Mathematica in the Large Hadron Collider Database

    CERN Document Server

    Beauquis, J

    2007-01-01

    The CERN Large Hadron Collider (LHC) is the major project in particle physics in the world. The particle accelerator is a 27 km ring where many thousands of superconducting magnets keep protons on track. Results from complex measurements of, for example, the magnetic field and the geometry of the main bending and focusing magnets are stored in databases for analysis and quality control. The geometry of the 15 m long main bending magnet, weighing almost 30 tons, has to be controlled within tenths of a millimetre. All measurements are stored in ORACLE databases, organized in two types: raw and derived data. Raw data come from the measurement devices, and derived data describe quality measures calculated from the raw measurements. For example, the transverse position of the beam tube center relative to the theoretical axis of the accelerator is measured along the magnet. This data is used to simulate improvements or to calculate quality criteria, used in the daily quality checks of all produced magnets. The positio...

  13. An Autonomic Framework for Integrating Security and Quality of Service Support in Databases

    Science.gov (United States)

    Alomari, Firas

    2013-01-01

    The back-end databases of multi-tiered applications are a major data security concern for enterprises. The abundance of these systems and the emergence of new and different threats require multiple and overlapping security mechanisms. Therefore, providing multiple and diverse database intrusion detection and prevention systems (IDPS) is a critical…

  15. Strong Ground Motion Database System for the Mexican Seismic Network

    Science.gov (United States)

    Perez-Yanez, C.; Ramirez-Guzman, L.; Ruiz, A. L.; Delgado, R.; Macías, M. A.; Sandoval, H.; Alcántara, L.; Quiroz, A.

    2014-12-01

    A web-based system for strong Mexican ground motion records dissemination and archival is presented. More than 50 years of continuous strong ground motion instrumentation and monitoring in Mexico have provided a fundamental resource -several thousands of accelerograms- for better understanding earthquakes and their effects in the region. Led by the Institute of Engineering (IE) of the National Autonomous University of Mexico (UNAM), the engineering strong ground motion monitoring program at IE relies on a continuously growing network, that at present includes more than 100 free-field stations and provides coverage to the seismic zones in the country. Among the stations, approximately 25% send the observed acceleration to a processing center in Mexico City in real-time, and the rest require manual access, remote or in situ, for later processing and cataloguing. As part of a collaboration agreement between UNAM and the National Center for Disaster Prevention, regarding the construction and operation of a unified seismic network, a web system was developed to allow access to UNAM's engineering strong motion archive and host data from other institutions. The system allows data searches under a relational database schema, following a general structure relying on four databases containing the: 1) free-field stations, 2) epicentral location associated with the strong motion records available, 3) strong motion catalogue, and 4) acceleration files -the core of the system. In order to locate and easily access one or several records of the data bank, the web system presents a variety of parameters that can be involved in a query (seismic event, region boundary, station name or ID, radial distance to source or peak acceleration). This homogeneous platform has been designed to facilitate dissemination and processing of the information worldwide. Each file, in a standard format, contains information regarding the recording instrument, the station, the corresponding earthquake
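
    A hedged sketch of the kind of relational query such a system supports, selecting records within a radial distance of a source, follows. The table layout, column names and the flat-earth distance approximation are illustrative assumptions, not the system's actual schema.

```python
# Illustrative sketch: query a strong-motion catalogue for records within a
# radial distance of an epicenter. Schema and column names are assumptions.
import math
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER, station TEXT, lat REAL, lon REAL, pga REAL)")
conn.executemany("INSERT INTO records VALUES (?, ?, ?, ?, ?)",
                 [(1, "CUP5", 19.33, -99.18, 32.1), (2, "SCT2", 19.39, -99.15, 91.4)])

def km_between(lat1, lon1, lat2, lon2):
    """Rough equirectangular distance in km; adequate for a demo filter."""
    kx = 111.32 * math.cos(math.radians((lat1 + lat2) / 2))
    return math.hypot((lat1 - lat2) * 110.57, (lon1 - lon2) * kx)

epicenter = (19.41, -99.13)  # hypothetical event location
hits = [(rid, sta, pga) for rid, sta, lat, lon, pga in
        conn.execute("SELECT id, station, lat, lon, pga FROM records")
        if km_between(lat, lon, *epicenter) <= 50.0]
print(hits)  # records within 50 km of the epicenter
```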

  16. IMAS-Fish: Integrated MAnagement System to support the sustainability of Greek Fisheries resources. A multidisciplinary web-based database management system: implementation, capabilities, utilization and future prospects for fisheries stakeholde

    Directory of Open Access Journals (Sweden)

    S. KAVADAS

    2013-03-01

    Full Text Available This article describes in detail the “IMAS-Fish” web-based tool implementation technicalities and provides examples of how it can be used for scientific and management purposes, setting new standards in fishery science. “IMAS-Fish” was developed to support the assessment of marine biological resources by: (i) homogenizing all the available datasets under a relational database, (ii) facilitating quality control and data entry, (iii) offering easy access to raw data, (iv) providing processed results through a series of classical and advanced fishery statistics algorithms, and (v) visualizing the results on maps using GIS technology. Available datasets cover, among others: fishery-independent experimental survey data (locations, species, catch compositions, biological data); commercial fishing activities (fishing gear, locations, catch compositions, discards); market sampling data (species, biometry, maturity, ageing); satellite-derived ocean data (sea surface temperature, salinity, wind speed, chlorophyll-a concentrations, photosynthetically active radiation); oceanographic parameters (CTD measurements); official national fishery statistics; fishing fleet registry and VMS data; fishing ports inventory; fishing legislation archive (national and EU); and bathymetry grids. Currently, the homogenized database holds a total of more than 100,000,000 records. The web-based application is accessible through an internet browser and can serve as a valuable tool for all involved stakeholders: fisheries scientists, state officials responsible for management, fishermen cooperatives, academics, students and NGOs.

  18. 77 FR 2521 - Integrated System Power Rates

    Science.gov (United States)

    2012-01-18

    ... Southwestern Power Administration Integrated System Power Rates AGENCY: Southwestern Power Administration, DOE... System pursuant to the Integrated System Rate Schedules which supersede the existing rate schedules... Integrated System pursuant to the following Integrated System Rate Schedules: Rate Schedule P-11,...

  19. Database design for Physical Access Control System for nuclear facilities

    Energy Technology Data Exchange (ETDEWEB)

    Sathishkumar, T., E-mail: satishkumart@igcar.gov.in; Rao, G. Prabhakara, E-mail: prg@igcar.gov.in; Arumugam, P., E-mail: aarmu@igcar.gov.in

    2016-08-15

    Highlights: • Database design needs to be optimized and highly efficient for real-time operation. • It requires a many-to-many mapping between the Employee table and the Doors table. • This mapping typically contains thousands of records and redundant data. • The proposed novel database design reduces the redundancy and provides abstraction. • This design is incorporated with the access control system developed in-house. - Abstract: An RFID (Radio Frequency IDentification) cum biometric based two-level Access Control System (ACS) was designed and developed for providing access to vital areas of nuclear facilities. The system has both hardware [the access controller] and software components [the server application, the database and the web client software]. The proposed database design enables grouping of the employees based on the hierarchy of the organization and grouping of the doors based on Access Zones (AZ). The design also illustrates the mapping between the Employee Groups (EG) and AZ. By following this approach in database design, a higher-level view can be presented to the system administrator, abstracting the inner details of the individual entities and doors. This paper describes the novel approach carried out in designing the database of the ACS.
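
    The redundancy reduction described in the highlights, replacing a direct employee-to-door mapping with employee groups and access zones, can be sketched as below. The table and column names are illustrative assumptions, not the paper's schema.

```python
# Illustrative sketch: instead of a huge many-to-many Employee x Door table,
# map employee groups (EG) to access zones (AZ). Names are assumptions.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, group_id INTEGER);
CREATE TABLE door     (id INTEGER PRIMARY KEY, label TEXT, zone_id INTEGER);
CREATE TABLE eg_az    (group_id INTEGER, zone_id INTEGER);  -- compact mapping
""")
db.executemany("INSERT INTO employee VALUES (?, ?, ?)",
               [(1, "Asha", 10), (2, "Ravi", 20)])
db.executemany("INSERT INTO door VALUES (?, ?, ?)",
               [(1, "Reactor hall", 100), (2, "Canteen", 200)])
db.executemany("INSERT INTO eg_az VALUES (?, ?)", [(10, 100), (10, 200), (20, 200)])

# Which doors may employee 1 open? One join over the small EG-AZ table.
rows = db.execute("""
SELECT e.name, d.label FROM employee e
JOIN eg_az m ON m.group_id = e.group_id
JOIN door d  ON d.zone_id = m.zone_id
WHERE e.id = ?""", (1,)).fetchall()
print(rows)  # [('Asha', 'Reactor hall'), ('Asha', 'Canteen')]
```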

  20. VERDI: A Web Database System for Redshift Surveys

    Science.gov (United States)

    Wirth, G. D.; Patton, D. R.

    The Victoria Explorer for Redshift Databases on the Internet (VERDI) is a Web-based data retrieval system which allows users to access tabular data, images, and spectra of astronomical objects and to perform queries on the underlying database. We developed VERDI for use with the CNOC2 Field Galaxy Redshift Survey, but designed it to be generally applicable to deep galaxy redshift surveys. The software is freely available at http://astrowww.phys.uvic.ca/~cnoc, can easily be reconfigured and customized by the user, and performs well enough to support databases of many thousands of objects.

  1. Curriculum integration of urinary system

    Institute of Scientific and Technical Information of China (English)

    Min WANG; Li WANG; Wen-xie XU

    2015-01-01

    As organ-system-oriented integration of medical education has been carried out in many domestic medical schools for years, an urgent need has emerged to discuss the various problems of integrated medical education. This paper reviews the urinary integrated educational work at Shanghai Jiao Tong University School of Medicine (SJTU-MS) and introduces the contents of the integrated curriculum of the urinary system. We focus on whether to apply the single-cycle or the dual-cycle integration mode, compare vocational medical education with elite medical education, and demonstrate the importance of inter-system integration and of ideal integrated textbooks. Various teaching methods and other issues are also discussed. The future development of integrated medical education is viewed positively.

  2. A Transactional Asynchronous Replication Scheme for Mobile Database Systems

    Institute of Scientific and Technical Information of China (English)

    丁治明; 孟小峰; 王珊

    2002-01-01

    In mobile database systems, mobility of users has a significant impact on data replication. As a result, the various replica control protocols that exist today in traditional distributed and multidatabase environments are no longer suitable. To solve this problem, a new mobile database replication scheme, the Transaction-Level Result-Set Propagation (TLRSP) model, is put forward in this paper. The conflict detection and resolution strategy based on TLRSP is discussed in detail, and the implementation algorithm is proposed. In order to compare the performance of the TLRSP model with that of other mobile replication schemes, we have developed a detailed simulation model. Experimental results show that the TLRSP model provides an efficient support for replicated mobile database systems by reducing reprocessing overhead and maintaining database consistency.
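
    A hedged sketch of transaction-level result-set propagation follows: the mobile client replays only the result set of each committed transaction against the master copy, and a conflict is flagged when a row changed underneath it. The version-number check used here is an illustrative assumption, not necessarily the paper's exact strategy.

```python
# Illustrative sketch of result-set propagation with version-based conflict
# detection; the row-version scheme is an assumption for demonstration.
master = {"row1": {"value": 10, "version": 3}}

def propagate(result_set: list[dict]) -> list[str]:
    """Apply a mobile transaction's result set; return IDs that conflicted."""
    conflicts = []
    for change in result_set:
        row = master.get(change["id"])
        if row is None or row["version"] != change["base_version"]:
            conflicts.append(change["id"])       # master moved on: reconcile
        else:
            row["value"] = change["new_value"]   # fast path: replay the result
            row["version"] += 1
    return conflicts

# The mobile transaction read row1 at version 3 and wrote 42 while disconnected.
print(propagate([{"id": "row1", "base_version": 3, "new_value": 42}]))  # []
print(master)  # row1 now has value 42, version 4
```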

  3. Design and Implementation of a Heterogeneous Distributed Database System

    Institute of Scientific and Technical Information of China (English)

    金志权; 柳诚飞; 等

    1990-01-01

    This paper introduces a heterogeneous distributed database system called the LSZ system, where LSZ is an abbreviation of Li Shizhen, an ancient Chinese medical scientist. The LSZ system adopts clusters as distributed database nodes (or sites); each cluster consists of one or several microcomputers and one server. The paper describes its basic architecture and the prototype implementation, which includes query processing and optimization, the transaction manager and data language translation. The system provides a uniform retrieval and update user interface through the global relational data language GRDL.

  4. Ultra-Structure database design methodology for managing systems biology data and analyses

    Directory of Open Access Journals (Sweden)

    Hemminger Bradley M

    2009-08-01

    Full Text Available Abstract Background Modern, high-throughput biological experiments generate copious, heterogeneous, interconnected data sets. Research is dynamic, with frequently changing protocols, techniques, instruments, and file formats. Because of these factors, systems designed to manage and integrate modern biological data sets often end up as large, unwieldy databases that become difficult to maintain or evolve. The novel rule-based approach of the Ultra-Structure design methodology presents a potential solution to this problem. By representing both data and processes as formal rules within a database, an Ultra-Structure system constitutes a flexible framework that enables users to explicitly store domain knowledge in both a machine- and human-readable form. End users themselves can change the system's capabilities without programmer intervention, simply by altering database contents; no computer code or schemas need be modified. This provides flexibility in adapting to change, and allows integration of disparate, heterogeneous data sets within a small core set of database tables, facilitating joint analysis and visualization without becoming unwieldy. Here, we examine the application of Ultra-Structure to our ongoing research program for the integration of large proteomic and genomic data sets (proteogenomic mapping). Results We transitioned our proteogenomic mapping information system from a traditional entity-relationship design to one based on Ultra-Structure. Our system integrates tandem mass spectrum data, genomic annotation sets, and spectrum/peptide mappings, all within a small, general framework implemented within a standard relational database system. General software procedures driven by user-modifiable rules can perform tasks such as logical deduction and location-based computations. The system is not tied specifically to proteogenomic research, but is rather designed to accommodate virtually any kind of biological research. Conclusion We find
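
    The core Ultra-Structure idea, storing process rules as rows that a small generic engine interprets so that users change behavior by editing data rather than code, can be illustrated with a hedged sketch. The rule fields and the first-match policy below are assumptions for demonstration, not the published ruleform design.

```python
# Illustrative sketch of a rule-driven system: behavior lives in a "rules
# table" and a tiny generic engine interprets it. The fields below are
# assumptions for demonstration, not the Ultra-Structure ruleform schema.
RULES = [
    # (entity_kind, attribute, threshold, action), ordered by precedence
    ("spectrum", "score",  0.95, "accept_mapping"),
    ("spectrum", "score",  0.50, "flag_for_review"),
    ("peptide",  "length", 6,    "reject_short"),
]

def evaluate(kind: str, attrs: dict) -> str:
    """Return the first (highest-precedence) matching rule's action.
    Altering behavior means editing RULES (i.e. database rows), not code."""
    for rule_kind, attribute, threshold, action in RULES:
        if rule_kind == kind and attrs.get(attribute, 0) >= threshold:
            return action
    return "no_action"

print(evaluate("spectrum", {"score": 0.97}))  # 'accept_mapping'
print(evaluate("spectrum", {"score": 0.60}))  # 'flag_for_review'
```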

  5. An Updating System for the Gridded Population Database of China Based on Remote Sensing, GIS and Spatial Database Technologies

    Directory of Open Access Journals (Sweden)

    Xiaohuan Yang

    2009-02-01

    Full Text Available The spatial distribution of population is closely related to land use and land cover (LULC patterns on both regional and global scales. Population can be redistributed onto geo-referenced square grids according to this relation. In the past decades, various approaches to monitoring LULC using remote sensing and Geographic Information Systems (GIS have been developed, which makes it possible for efficient updating of geo-referenced population data. A Spatial Population Updating System (SPUS is developed for updating the gridded population database of China based on remote sensing, GIS and spatial database technologies, with a spatial resolution of 1 km by 1 km. The SPUS can process standard Moderate Resolution Imaging Spectroradiometer (MODIS L1B data integrated with a Pattern Decomposition Method (PDM and an LULC-Conversion Model to obtain patterns of land use and land cover, and provide input parameters for a Population Spatialization Model (PSM. The PSM embedded in SPUS is used for generating 1 km by 1 km gridded population data in each population distribution region based on natural and socio-economic variables. Validation results from finer township-level census data of Yishui County suggest that the gridded population database produced by the SPUS is reliable.
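
    The redistribution step described here, allocating a region's census total across its grid cells in proportion to weights derived from land use and socio-economic variables, can be illustrated with a hedged numpy sketch. The weighting scheme is an assumption for demonstration, not the published Population Spatialization Model.

```python
# Illustrative sketch: spread a region's census population over grid cells
# in proportion to land-use weights. The weights are assumptions, not the
# actual Population Spatialization Model coefficients.
import numpy as np

# Land-use class per cell of a toy 3x3 region: 0=water, 1=farmland, 2=urban
landuse = np.array([[0, 1, 1],
                    [1, 2, 2],
                    [1, 2, 0]])
weight_by_class = {0: 0.0, 1: 1.0, 2: 8.0}  # assumed relative densities

weights = np.vectorize(weight_by_class.get)(landuse).astype(float)
census_total = 54_000  # the region's census population

gridded = census_total * weights / weights.sum()
print(gridded.round())  # per-cell population estimate
print(gridded.sum())    # preserves the census total: 54000.0
```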

  6. Integrated system checkout report

    Energy Technology Data Exchange (ETDEWEB)

    1991-08-14

    The planning and preparation phase of the Integrated Systems Checkout Program (ISCP) was conducted from October 1989 to July 1991. A copy of the ISCP, DOE-WIPP 90--002, is included in this report as an appendix. The final phase of the Checkout was conducted from July 10, 1991, to July 23, 1991. This phase exercised all the procedures and equipment required to receive, emplace, and retrieve contact-handled transuranic (CH TRU) waste-filled dry bins. In addition, abnormal events were introduced to simulate various equipment failures, loose surface radioactive contamination events, and personnel injury. This report provides a detailed summary of each day's activities during this period. Qualification of personnel to safely conduct the tasks identified in the procedures and the abnormal events was verified by observers familiar with the Bin-Scale CH TRU Waste Test requirements. These observers were members of the staffs of Westinghouse WID Engineering, QA, Training, Health Physics, Safety, and SNL. Observers representing a number of DOE departments, the State of New Mexico, and the Defense Nuclear Facilities Safety Board observed those Checkout activities conducted during the period from July 17, 1991, to July 23, 1991. Observer comments described in this report are those obtained from the staff member observers. 1 figs., 1 tab.

  7. Development of the Lymphoma Enterprise Architecture Database: a caBIG Silver level compliant system.

    Science.gov (United States)

    Huang, Taoying; Shenoy, Pareen J; Sinha, Rajni; Graiser, Michael; Bumpers, Kevin W; Flowers, Christopher R

    2009-04-03

    Lymphomas are the fifth most common cancer in United States with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid (caBIG) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system (LEAD), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by National Cancer Institute's Center for Bioinformatics to establish the LEAD platform for data management. The caCORE SDK generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG to the management of clinical and biological data.

  8. ISIS (Inventory and Security Information System): A prototype using the FOCUS 4GL and an ORACLE database

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, J.T.; Beckwith, A.L.; Stewart, C.R.; Kilgore, D.G.; Fortune, R. (Oak Ridge Gaseous Diffusion Plant, TN (USA); Oak Ridge National Lab., TN (USA); Maxima Corp., Oak Ridge, TN (USA))

    1989-01-01

    Of interest in many corporate data processing environments is the ability to use both fourth-generation languages and relational databases to achieve flexible and integrated information systems. Another concern for planning corporate management information systems is the ability to access multiple database software environments with consistent end-user programming tools. A study was conducted for the Pacific Missile Test Center that tested the use of FOCUS 4GL code developed on a PC and ported to a MicroVAX, to access an ORACLE relational database on the MicroVAX. The prototype developed gave insight into the viability of porting code, the development of integrated systems using two different vendors, and the complexities that arise when using information retrieval techniques for hierarchical data structures with relational databases. The experience gained from developing the prototype resulted in a decision to continue prototype development in a single-vendor software environment and stressed the importance of a relational database in developing information systems.

  9. The Database Driven ATLAS Trigger Configuration System

    CERN Document Server

    Martyniuk, Alex; The ATLAS collaboration

    2015-01-01

    This contribution describes the trigger selection configuration system of the ATLAS low- and high-level trigger (HLT) and the upgrades it received in preparation for LHC Run 2. The ATLAS trigger configuration system is responsible for applying the physics selection parameters for the online data taking at both trigger levels and the proper connection of the trigger lines across those levels. Here the low-level trigger consists of the already existing central trigger (CT) and the new Level-1 Topological trigger (L1Topo), which has been added for Run 2. In detail, the tasks of the configuration system during online data taking are: application of the selection criteria (e.g. energy cuts, minimum multiplicities, trigger object correlation) at the three trigger components L1Topo, CT, and HLT; and on-the-fly (e.g. rate-dependent) generation and application of prescale factors to the CT and HLT to adjust the trigger rates to the data taking conditions, such as falling luminosity or rate spikes in the detector readout ...

  10. Customizable neuroinformatics database system: XooNIps and its application to the pupil platform.

    Science.gov (United States)

    Yamaji, Kazutsuna; Sakai, Hiroyuki; Okumura, Yoshihiro; Usui, Shiro

    2007-07-01

    The developing field of neuroinformatics includes technologies for the collection and sharing of neuro-related digital resources. These resources will be of increasing value for understanding the brain. Developing a database system to integrate these disparate resources is necessary to make full use of them. This study proposes a base database system termed XooNIps that utilizes the content management system called XOOPS. XooNIps is designed for developing databases in different research fields through customization of the option menu. In a XooNIps-based database, digital resources are stored according to their respective categories, e.g., research articles, experimental data, mathematical models, stimulations, each associated with their related metadata. Several types of user authorization are supported for secure operations. In addition to the directory and keyword searches within a certain database, XooNIps searches simultaneously across other XooNIps-based databases on the Internet. Reviewing systems for user registration and for data submission are incorporated to impose quality control. Furthermore, XOOPS modules containing news, forums, schedules, blogs and other information can be combined to enhance XooNIps functionality. These features provide better scalability, extensibility, and customizability to the general neuroinformatics community. The application of this system to data, models, and other information related to human pupils is described here.

  11. GDR (Genome Database for Rosaceae: integrated web resources for Rosaceae genomics and genetics research

    Directory of Open Access Journals (Sweden)

    Ficklin Stephen

    2004-09-01

    Full Text Available Abstract Background Peach is being developed as a model organism for Rosaceae, an economically important family that includes fruits and ornamental plants such as apple, pear, strawberry, cherry, almond and rose. The genomics and genetics data of peach can play a significant role in the gene discovery and the genetic understanding of related species. The effective utilization of these peach resources, however, requires the development of an integrated and centralized database with associated analysis tools. Description The Genome Database for Rosaceae (GDR) is a curated and integrated web-based relational database. GDR contains comprehensive data of the genetically anchored peach physical map, an annotated peach EST database, Rosaceae maps and markers and all publicly available Rosaceae sequences. Annotations of ESTs include contig assembly, putative function, simple sequence repeats, and anchored position to the peach physical map where applicable. Our integrated map viewer provides a graphical interface to the genetic, transcriptome and physical mapping information. ESTs, BACs and markers can be queried by various categories, and the search result sites are linked to the integrated map viewer or to the WebFPC physical map sites. In addition to browsing and querying the database, users can compare their sequences with the annotated GDR sequences via a dedicated sequence similarity server running either the BLAST or FASTA algorithm. To demonstrate the utility of the integrated and fully annotated database and analysis tools, we describe a case study where we anchored Rosaceae sequences to the peach physical and genetic map by sequence similarity. Conclusions The GDR has been initiated to meet the major deficiency in Rosaceae genomics and genetics research, namely a centralized web database and bioinformatics tools for data storage, analysis and exchange. GDR can be accessed at http://www.genome.clemson.edu/gdr/.

  12. Intelligent high-speed cutting database system development

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    In this paper, the components of a high-speed cutting system are analyzed first. The component variables of the high-speed cutting system are classified into four types: uncontrolled variables, process variables, control variables, and output variables. The relationships and interactions of these variables are discussed. Then, by analyzing and comparing frequently used intelligent reasoning methods, hybrid reasoning is employed to build the high-speed cutting database system, and the data structures of the high-speed cutting case base and databases are determined. Finally, the component parts and working process of the high-speed cutting database system based on hybrid reasoning are presented.
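
    A hedged sketch of the hybrid reasoning the abstract mentions follows: try case-based retrieval of a similar machining case first, and fall back to simple rules when no stored case is close enough. The features, distance metric and rules are illustrative assumptions, not the paper's actual reasoning scheme.

```python
# Illustrative hybrid reasoning: case-based retrieval with a rule-based
# fallback for recommending cutting parameters. All values are assumptions.
CASES = [  # (workpiece hardness HRC, tool diameter mm) -> spindle speed rpm
    {"hardness": 30, "diameter": 10, "speed": 12000},
    {"hardness": 50, "diameter": 10, "speed": 8000},
]

def distance(a, b):
    """Normalized feature distance between two machining cases."""
    return abs(a["hardness"] - b["hardness"]) / 60 + abs(a["diameter"] - b["diameter"]) / 20

def recommend(query, max_dist=0.15):
    # 1) case-based: reuse the closest stored case if it is close enough
    best = min(CASES, key=lambda c: distance(c, query))
    if distance(best, query) <= max_dist:
        return best["speed"], "case-based"
    # 2) rule-based fallback: harder material means lower speed (toy rule)
    return 15000 - 150 * query["hardness"], "rule-based"

print(recommend({"hardness": 32, "diameter": 10}))  # reuses the 30-HRC case
print(recommend({"hardness": 45, "diameter": 25}))  # falls back to the rule
```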

  13. Object-Oriented Approach to Integrating Database Semantics. Volume 4.

    Science.gov (United States)

    1987-12-01

    …to minimize the use of the communication system, and to address the data representation and language translation problems…

  14. Integrating RFID technique to design mobile handheld inventory management system

    Science.gov (United States)

    Huang, Yo-Ping; Yen, Wei; Chen, Shih-Chung

    2008-04-01

    An RFID-based mobile handheld inventory management system is proposed in this paper. Differing from the manual inventory management method, the proposed system works on the personal digital assistant (PDA) with an RFID reader. The system identifies electronic tags on the properties and checks the property information in the back-end database server through a ubiquitous wireless network. The system also provides a set of functions to manage the back-end inventory database and assigns different levels of access privilege according to various user categories. In the back-end database server, to prevent improper or illegal accesses, the server not only stores the inventory database and user privilege information, but also keeps track of the user activities in the server including the login and logout time and location, the records of database accessing, and every modification of the tables. Some experimental results are presented to verify the applicability of the integrated RFID-based mobile handheld inventory management system.
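
    A hedged sketch of the read-and-check loop such a handheld client performs follows: read a tag ID, look it up in the back-end inventory table, and log the access for the audit trail the abstract describes. The schema and the reader stub are illustrative assumptions.

```python
# Illustrative sketch of the handheld's check-in loop: read an RFID tag,
# verify it against the inventory table, and audit the access. The schema
# and the read_tag() stub are assumptions for demonstration.
import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE inventory (tag_id TEXT PRIMARY KEY, item TEXT, location TEXT);
CREATE TABLE audit_log (tag_id TEXT, user TEXT, seen_at TEXT);
""")
db.execute("INSERT INTO inventory VALUES ('E2000017', 'Oscilloscope', 'Lab-3')")

def read_tag() -> str:
    """Stand-in for the PDA's RFID reader driver (hypothetical)."""
    return "E2000017"

def check_item(user: str):
    tag = read_tag()
    row = db.execute("SELECT item, location FROM inventory WHERE tag_id = ?",
                     (tag,)).fetchone()
    db.execute("INSERT INTO audit_log VALUES (?, ?, ?)",
               (tag, user, datetime.now(timezone.utc).isoformat()))
    return row if row else ("UNKNOWN TAG", None)

print(check_item("auditor01"))  # ('Oscilloscope', 'Lab-3')
```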

  15. Flybrain neuron database: a comprehensive database system of the Drosophila brain neurons.

    Science.gov (United States)

    Shinomiya, Kazunori; Matsuda, Keiji; Oishi, Takao; Otsuna, Hideo; Ito, Kei

    2011-04-01

    The long history of neuroscience has accumulated information about numerous types of neurons in the brain of various organisms. Because such neurons have been reported in diverse publications without controlled format, it is not easy to keep track of all the known neurons in a particular nervous system. To address this issue we constructed an online database called Flybrain Neuron Database (Flybrain NDB), which serves as a platform to collect and provide information about all the types of neurons published so far in the brain of Drosophila melanogaster. Projection patterns of the identified neurons in diverse areas of the brain were recorded in a unified format, with text-based descriptions as well as images and movies wherever possible. In some cases projection sites and the distribution of the post- and presynaptic sites were determined with greater detail than described in the original publication. Information about the labeling patterns of various antibodies and expression driver strains to visualize identified neurons are provided as a separate sub-database. We also implemented a novel visualization tool with which users can interactively examine three-dimensional reconstruction of the confocal serial section images with desired viewing angles and cross sections. Comprehensive collection and versatile search function of the anatomical information reported in diverse publications make it possible to analyze possible connectivity between different brain regions. We analyzed the preferential connectivity among optic lobe layers and the plausible olfactory sensory map in the lateral horn to show the usefulness of such a database.

  16. The Eruption Forecasting Information System (EFIS) database project

    Science.gov (United States)

    Ogburn, Sarah; Harpel, Chris; Pesicek, Jeremy; Wellik, Jay; Pallister, John; Wright, Heather

    2016-04-01

    The Eruption Forecasting Information System (EFIS) project is a new initiative of the U.S. Geological Survey-USAID Volcano Disaster Assistance Program (VDAP) with the goal of enhancing VDAP's ability to forecast the outcome of volcanic unrest. The EFIS project seeks to: (1) move away from relying on collective memory toward probability estimation using databases; (2) create databases useful for pattern recognition and for answering common VDAP questions, e.g. how commonly does unrest lead to eruption? how commonly do phreatic eruptions portend magmatic eruptions, and what is the range of antecedence times?; (3) create generic probabilistic event trees using global data for different volcano 'types'; (4) create background, volcano-specific probabilistic event trees for frequently active or particularly hazardous volcanoes in advance of a crisis; and (5) quantify and communicate uncertainty in probabilities. A major component of the project is the global EFIS relational database, which contains multiple modules designed to aid in the construction of probabilistic event trees and to answer common questions that arise during volcanic crises. The primary module contains chronologies of volcanic unrest, including the timing of phreatic eruptions, column heights, eruptive products, etc., and will be initially populated using chronicles of eruptive activity from Alaskan volcanic eruptions in the GeoDIVA database (Cameron et al. 2013). This database module allows us to query across other global databases such as the WOVOdat database of monitoring data and the Smithsonian Institution's Global Volcanism Program (GVP) database of eruptive histories and volcano information. The EFIS database is in the early stages of development and population; thus, this contribution also serves as a request for feedback from the community.
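
    The database-backed probability estimation described here amounts to counting outcomes in the chronology tables. The hedged sketch below estimates a branch probability such as P(eruption | unrest) as a binomial proportion with an uncertainty interval; the counts and the choice of a Wilson interval are illustrative assumptions, not the project's method.

```python
# Illustrative sketch: estimate an event-tree branch probability, e.g.
# P(eruption | unrest), from database counts, with a Wilson 95% interval
# to quantify uncertainty. The counts below are made up for demonstration.
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return center - half, center + half

unrest_episodes = 120   # hypothetical count from the chronology module
led_to_eruption = 45    # hypothetical count of unrest episodes that erupted
p = led_to_eruption / unrest_episodes
lo, hi = wilson_interval(led_to_eruption, unrest_episodes)
print(f"P(eruption | unrest) = {p:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```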

  17. Integrating systems biology models and biomedical ontologies

    Directory of Open Access Journals (Sweden)

    de Bono Bernard

    2011-08-01

    Full Text Available Abstract Background Systems biology is an approach to biology that emphasizes the structure and dynamic behavior of biological systems and the interactions that occur within them. To succeed, systems biology crucially depends on the accessibility and integration of data across domains and levels of granularity. Biomedical ontologies were developed to facilitate such an integration of data and are often used to annotate biosimulation models in systems biology. Results We provide a framework to integrate representations of in silico systems biology with those of in vivo biology as described by biomedical ontologies and demonstrate this framework using the Systems Biology Markup Language. We developed the SBML Harvester software that automatically converts annotated SBML models into OWL and we apply our software to those biosimulation models that are contained in the BioModels Database. We utilize the resulting knowledge base for complex biological queries that can bridge levels of granularity, verify models based on the biological phenomenon they represent and provide a means to establish a basic qualitative layer on which to express the semantics of biosimulation models. Conclusions We establish an information flow between biomedical ontologies and biosimulation models and we demonstrate that the integration of annotated biosimulation models and biomedical ontologies enables the verification of models as well as expressive queries. Establishing a bi-directional information flow between systems biology and biomedical ontologies has the potential to enable large-scale analyses of biological systems that span levels of granularity from molecules to organisms.

  19. System design and integration analysis for the Integrated Booking System (IBS)

    Energy Technology Data Exchange (ETDEWEB)

    Truett, L.F.; Wheeler, V.V.; Grubb, J.W. [Oak Ridge National Laboratory, TN (United States); Grubb, J.W.; Faby, E.Z. [Univ. of Tennessee, Knoxville, TN (United States)

    1995-11-01

    In accordance with tasking for the Military Traffic Management Command (MTMC), the Oak Ridge National Laboratory (ORNL) investigated design and integration issues and identified specific options for MTMC's Integrated Booking System (IBS). Three system designs are described: the single-server, stand-alone IBS; the area-based IBS; and the fully-integrated IBS. Because of the functional and technical requirements of IBS and because of the MTMC strategy of sharing resources, ORNL recommends the fully-integrated design. This option uses the excess computing resources provided through the architectural components of the Integrated Cargo Database (ICDB) and provides visibility over the cargo record from initial request through final delivery.

  20. An Expert System Helps Students Learn Database Design

    Science.gov (United States)

    Post, Gerald V.; Whisenand, Thomas G.

    2005-01-01

    Teaching and learning database design is difficult for both instructors and students. Students need to solve many problems with feedback and corrections. A Web-based specialized expert system was created to enable students to create designs online and receive immediate feedback. An experiment testing the system shows that it significantly enhances…

  1. ADVICE--Educational System for Teaching Database Courses

    Science.gov (United States)

    Cvetanovic, M.; Radivojevic, Z.; Blagojevic, V.; Bojovic, M.

    2011-01-01

    This paper presents a Web-based educational system, ADVICE, that helps students to bridge the gap between database management system (DBMS) theory and practice. The usage of ADVICE is presented through a set of laboratory exercises developed to teach students conceptual and logical modeling, SQL, formal query languages, and normalization. While…

  2. Research of database-based modeling for mining management system

    Institute of Scientific and Technical Information of China (English)

    WU Hai-feng; JIN Zhi-xin; BAI Xi-jun

    2005-01-01

    This paper puts forward a method to construct simulation models automatically with database-based automatic modeling (DBAM) for mining systems. A standard simulation model linked with an open-cut automobile dispatch system was designed, the relationships among its components were analyzed, and a model maker was designed to realize automatic generation of new model programs.

  4. Research on the J2EE-based product database management system

    Institute of Scientific and Technical Information of China (English)

    LIN Lin; YAO Yu; ZHONG Shi-sheng

    2007-01-01

    The basic framework and design idea of a J2EE-based Product Data Management (PDM) system are presented. This paper adopts object-oriented technology to realize the database design and builds the information model of the PDM system. The key technologies for integrating the PDM and CAD systems are discussed, the heterogeneous interface characteristics between the CAD and PDM systems are analyzed, and finally the integration mode of the PDM and CAD systems is given. Using these technologies, the integration of the PDM and CAD systems is realized and the consistency of data between them is maintained. Finally, the Product Data Management system was developed and tested on the development process of a hydraulic generator; it runs stably and safely.

  5. CardioTF, a database of deconstructing transcriptional circuits in the heart system

    Directory of Open Access Journals (Sweden)

    Yisong Zhen

    2016-08-01

    Full Text Available Background: Information on cardiovascular gene transcription is fragmented and far behind the present requirements of the systems biology field. To create a comprehensive source of data for cardiovascular gene regulation and to facilitate a deeper understanding of genomic data, the CardioTF database was constructed. The purpose of this database is to collate information on cardiovascular transcription factors (TFs), position weight matrices (PWMs), and enhancer sequences discovered using the ChIP-seq method. Methods: The Naïve-Bayes algorithm was used to classify literature and identify all PubMed abstracts on cardiovascular development. The natural language learning tool GNAT was then used to identify corresponding gene names embedded within these abstracts. Local Perl scripts were used to integrate and dump data from public databases into the MariaDB management system (MySQL). In-house R scripts were written to analyze and visualize the results. Results: Known cardiovascular TFs from humans and human homologs from fly, Ciona, zebrafish, frog, chicken, and mouse were identified and deposited in the database. PWMs from the Jaspar, hPDI, and UniPROBE databases were deposited in the database and can be retrieved using their corresponding TF names. Gene enhancer regions from various sources of ChIP-seq data were deposited into the database and can be visualized by graphical output. Besides biocuration, mouse homologs of the 81 core cardiac TFs were selected using a Naïve-Bayes approach and then by intersecting four independent data sources: RNA profiling, expert annotation, PubMed abstracts and phenotype. Discussion: The CardioTF database can be used as a portal to construct the transcriptional network of cardiac development. Availability and Implementation: Database URL: http://www.cardiosignal.org/database/cardiotf.html.
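
    The literature-triage step, a Naïve-Bayes classifier that flags PubMed abstracts about cardiovascular development, can be sketched as below. The toy training set and the use of scikit-learn are illustrative assumptions; the abstract names only the algorithm, not the tooling.

```python
# Illustrative sketch: Naive-Bayes triage of abstracts, as used to select
# cardiovascular-development literature. The training texts are toy
# examples; scikit-learn stands in for whatever tooling the authors used.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "Nkx2-5 and GATA4 regulate cardiac morphogenesis in the embryonic heart",
    "Tbx5 enhancer activity during cardiomyocyte differentiation",
    "Soil bacteria community profiling by 16S rRNA sequencing",
    "A new compiler optimization for sparse matrix kernels",
]
labels = ["cardio", "cardio", "other", "other"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_texts, labels)

query = "Hand2 drives transcription in the developing heart ventricle"
print(clf.predict([query])[0])           # expected: 'cardio'
print(clf.predict_proba([query]).max())  # classifier confidence
```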

  6. Development of the Lymphoma Enterprise Architecture Database: A caBIG(TM Silver Level Compliant System

    Directory of Open Access Journals (Sweden)

    Taoying Huang

    2009-04-01

    Full Text Available Lymphomas are the fifth most common cancer in the United States with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid™ (caBIG™) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system™ (LEAD™), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by the National Cancer Institute's Center for Bioinformatics to establish the LEAD™ platform for data management. The caCORE SDK generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD™ could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG™ can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG™ to the management of clinical and biological data.

  8. Development of the Lymphoma Enterprise Architecture Database: A caBIG™ Silver Level Compliant System

    Science.gov (United States)

    Huang, Taoying; Shenoy, Pareen J.; Sinha, Rajni; Graiser, Michael; Bumpers, Kevin W.; Flowers, Christopher R.

    2009-01-01

    Lymphomas are the fifth most common cancer in the United States, with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid™ (caBIG™) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system™ (LEAD™), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by the National Cancer Institute's Center for Bioinformatics to establish the LEAD™ platform for data management. The caCORE SDK generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD™ could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG™ can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG™ to the management of clinical and biological data. PMID:19492074

  9. Content-based image database system for epilepsy.

    Science.gov (United States)

    Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid; Fotouhi, Farshad; Elisevich, Kost

    2005-09-01

    We have designed and implemented a human brain multi-modality database system with content-based image management, navigation and retrieval support for epilepsy. The system consists of several modules including a database backbone, brain structure identification and localization, segmentation, registration, visual feature extraction, clustering/classification and query modules. Our newly developed anatomical landmark localization and brain structure identification method facilitates navigation through image data and extracts useful information for the segmentation, registration and query modules. The database stores T1-, T2-weighted and FLAIR MRI and ictal/interictal SPECT modalities with associated clinical data. We confine the visual feature extractors within anatomical structures to support semantically rich content-based procedures. The proposed system serves as a research tool to evaluate a vast number of hypotheses regarding the condition, such as resection of the hippocampus with a relatively small volume and high average signal intensity on FLAIR. Once the database is populated, using data mining tools, partially invisible correlations between different modalities of data, modeled in the database schema, can be discovered. The design and implementation aspects of the proposed system are the main focus of this paper.

  10. Optics Toolbox: An Intelligent Relational Database System For Optical Designers

    Science.gov (United States)

    Weller, Scott W.; Hopkins, Robert E.

    1986-12-01

    Optical designers were among the first to use the computer as an engineering tool. Powerful programs have been written to do ray-trace analysis, third-order layout, and optimization. However, newer computing techniques such as database management and expert systems have not been adopted by the optical design community. For the purpose of this discussion we will define a relational database system as a database which allows the user to specify his requirements using logical relations. For example, to search for all lenses in a lens database with an F/number less than two and a half field of view near 28 degrees, you might enter a query of the form: FNO < 2 AND HFOV ~ 28. We will define an expert system as a program whose rules are written in an English-like language and are easily modified by the user. An example rule is: IF require microscope objective in air AND require NA > 0.9 THEN suggest the use of an oil immersion objective. The heart of the expert system is the rule interpreter, sometimes called an inference engine, which reads the rules and forms conclusions based on them. The use of a relational database system containing lens prototypes seems to be a viable prospect. However, it is not clear that expert systems have a place in optical design. In domains such as medical diagnosis and petrology, expert systems are flourishing. These domains are quite different from optical design, however, because optical design is a creative process, and the rules are difficult to write down. We do think that an expert system is feasible in the area of first-order layout, which is sufficiently diagnostic in nature to permit useful rules to be written. This first-order expert would emulate an expert designer as he interacted with a customer for the first time: asking the right questions, forming conclusions, and making suggestions. With these objectives in mind, we have developed the Optics Toolbox. Optics Toolbox is actually two programs in one: it is a powerful relational database system with twenty-one search parameters, four search modes, and multi-database search capability…
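    As an aside for readers unfamiliar with this query style, the following minimal sketch expresses the same search ("F/number less than two, half field of view near 28 degrees") against a relational store, here using Python's built-in sqlite3; the lenses table, its columns and the sample rows are hypothetical, not the Optics Toolbox schema.

      # Hypothetical relational lens search, illustrating the abstract's example query
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE lenses (name TEXT, fno REAL, hfov REAL)")
      conn.executemany(
          "INSERT INTO lenses VALUES (?, ?, ?)",
          [("double_gauss", 1.8, 21.0), ("wide_angle", 2.8, 32.0), ("fast_prime", 1.4, 27.5)],
      )
      # F/number < 2 and half field of view within 2 degrees of 28
      rows = conn.execute(
          "SELECT name, fno, hfov FROM lenses WHERE fno < 2 AND ABS(hfov - 28) <= 2"
      ).fetchall()
      print(rows)  # -> [('fast_prime', 1.4, 27.5)]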

  11. Managing Consistency Anomalies in Distributed Integrated Databases with Relaxed ACID Properties

    DEFF Research Database (Denmark)

    Frank, Lars; Ulslev Pedersen, Rasmus

    2014-01-01

    In central databases the consistency of data is normally implemented by using the ACID (Atomicity, Consistency, Isolation and Durability) properties of a DBMS (Database Management System). This is not possible if distributed and/or mobile databases are involved and the availability of data also has to be optimized. Therefore, in this paper we use so-called relaxed ACID properties across different locations. The objective of designing relaxed ACID properties across different database locations is that users can trust the data they use even if the distributed database is temporarily inconsistent. In the classical definition, if the database is consistent both when a transaction starts and after it has been committed and completed, the execution has the consistency property. This definition of the consistency property is not useful in distributed databases with relaxed ACID properties because such a database is almost always inconsistent. In the following, we will use the concept Consistency…
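    To make the "almost always inconsistent" point concrete, here is a minimal Python sketch (not from the paper) of locations that commit locally and propagate updates asynchronously; replicas disagree until the propagation queue drains, which is exactly the window that relaxed ACID properties must make trustworthy.

      # Toy model: local commits with asynchronous propagation between locations.
      from collections import deque

      class Location:
          def __init__(self, name):
              self.name, self.data, self.outbox = name, {}, deque()

          def local_commit(self, key, value):
              self.data[key] = value              # commit locally: low latency, high availability
              self.outbox.append((key, value))    # propagate to other locations later

          def propagate_to(self, other):
              while self.outbox:
                  key, value = self.outbox.popleft()
                  other.data[key] = value

      site_a, site_b = Location("A"), Location("B")
      site_a.local_commit("stock:42", 7)
      print(site_a.data == site_b.data)  # False: temporarily inconsistent
      site_a.propagate_to(site_b)
      print(site_a.data == site_b.data)  # True: consistent once all updates are applied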

  12. Study on Mandatory Access Control in a Secure Database Management System

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper proposes a security policy model for mandatory access control in a class B1 database management system whose labeling granularity is the tuple. The relation-hierarchical data model is extended to a multilevel relation-hierarchical data model. Based on the multilevel relation-hierarchical data model, the concept of upper-lower layer relational integrity is presented after we analyze and eliminate the covert channels caused by database integrity. Two SQL statements are extended to process polyinstantiation in the multilevel secure environment. The system is based on the multilevel relation-hierarchical data model and is capable of integratively storing and manipulating multilevel complicated objects (e.g., multilevel spatial data) and multilevel conventional data (e.g., integers, real numbers and character strings).

  13. Current trends and new challenges of databases and web applications for systems driven biological research

    Directory of Open Access Journals (Sweden)

    Pradeep Kumar Sreenivasaiah

    2010-12-01

    The dynamic and rapidly evolving nature of systems-driven research imposes special requirements on the technology, approach, design and architecture of the computational infrastructure, including databases and web applications. Several solutions have been proposed to meet these expectations, and novel methods have been developed to address the persisting problems of data integration. It is important for researchers to understand the different technologies and approaches. Having familiarized themselves with the pros and cons of the existing technologies, researchers can exploit their capabilities to the maximum potential for integrating data. In this review we discuss the architecture, design and key technologies underlying some of the prominent databases (DBs) and web applications. We mention their roles in the integration of biological data and investigate some of the emerging design concepts and computational technologies that are likely to have a key role in the future of systems-driven biomedical research.

  14. Understanding Patterns for System of Systems Integration

    DEFF Research Database (Denmark)

    Kazman, Rick; Schmid, Klaus; Nielsen, Claus Ballegård

    2013-01-01

    Architecting systems of systems is well known to be a formidable challenge. A major aspect in this is defining the integration among the systems that constitute the system of systems. In this paper, we aim to support the SoS architect by systematically developing a way to characterize system-of-systems integration patterns. These characteristics at the same time support the architecting process by highlighting important issues a SoS architect needs to consider. We discuss the consolidated template and illustrate it with an example pattern. We also discuss the integration of this novel pattern…

  15. A Deep Web Data Integration System for Job Search

    Institute of Scientific and Technical Information of China (English)

    LIU Wei; LI Xian; LING Yanyan; ZHANG Xiaoyu; MENG Xiaofeng

    2006-01-01

    With the rapid development of the Web, more and more Web databases are available for users to access. At the same time, job seekers often have difficulty first finding the right sources and then querying over them, so providing an integrated job search system over Web databases has become a Web application in high demand. Based on this consideration, we build a deep Web data integration system that supports unified access for users to multiple job Web sites, acting as a job meta-search engine. In this paper, the architecture of the system is given first, and then the key components of the system are introduced.

  16. Integrated systems innovations and applications

    CERN Document Server

    2015-01-01

    This book presents the results of discussions and presentations from the latest ISDT event (2014), which was dedicated to the 94th birthday anniversary of Prof. Lotfi A. Zadeh, father of fuzzy logic. The book consists of three main chapters, namely: Chapter 1, Integrated Systems Design; Chapter 2, Knowledge, Competence and Business Process Management; and Chapter 3, Integrated Systems Technologies. Each article presents novel and scientific research results with respect to the target goal of improving our common understanding of KT integration.

  17. Generating Shifting Workloads to Benchmark Adaptability in Relational Database Systems

    Science.gov (United States)

    Rabl, Tilmann; Lang, Andreas; Hackl, Thomas; Sick, Bernhard; Kosch, Harald

    A large body of research concerns the adaptability of database systems. Many commercial systems already contain autonomic processes that adapt configurations as well as data structures and data organization. Yet there is virtually no possibility for a fair measurement of the quality of such optimizations. While standard benchmarks have been developed that simulate real-world database applications very precisely, none of them considers variations in workloads produced by human factors. Today's benchmarks test the performance of database systems by measuring peak performance on homogeneous request streams. Nevertheless, in systems with user interaction, access patterns are constantly shifting. We present a benchmark that simulates a web information system with interaction of large user groups. It is based on the analysis of a real online eLearning management system with 15,000 users. The benchmark considers the temporal dependency of user interaction. Its main focus is to measure the adaptability of a database management system according to shifting workloads. We give details on our design approach, which uses sophisticated pattern analysis and data mining techniques.
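    As an illustration of the core idea, the sketch below generates a request stream whose mix of query types drifts over simulated hours, in contrast to the homogeneous streams of standard benchmarks; the query types and weight curves are invented, not those of the eLearning system analyzed in the paper.

      # Toy generator of a shifting workload: the query-type mix drifts over time.
      import random

      QUERY_TYPES = ["browse_course", "search", "submit_quiz", "admin_report"]

      def weights(hour):
          # Invented drift: morning users mostly browse; evening users mostly submit quizzes.
          t = (hour % 24) / 24.0
          return [1.0 - t, 0.3, t, 0.1]

      def workload(hours, per_hour=5, seed=1):
          rng = random.Random(seed)
          for hour in range(hours):
              for _ in range(per_hour):
                  yield hour, rng.choices(QUERY_TYPES, weights(hour))[0]

      for hour, query in workload(3):
          print(hour, query)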

  18. Tuneable planar integrated optical systems.

    Science.gov (United States)

    Amberg, M; Oeder, A; Sinzinger, S; Hands, P J W; Love, G D

    2007-08-20

    Planar integrated free-space optical systems are well suited for a variety of applications, such as optical interconnects and security devices. Here, we demonstrate for the first time dynamic functionality of such microoptical systems by the integration of adaptive liquid-crystal devices.

  19. An Interoperable Cartographic Database

    OpenAIRE

    Slobodanka Ključanin; Zdravko Galić

    2007-01-01

    The concept of producing a prototype of an interoperable cartographic database is explored in this paper, including the possibilities of integrating different geospatial data into the database management system and visualizing them on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relational database, spatial analysis, and definition and visualization of the database content in the form of a map on the Internet.

  20. The design and implementation of pedagogical software for multi-backend/multi-lingual database system.

    OpenAIRE

    Little, Craig W.

    1987-01-01

    Traditionally, courses in database systems do not use pedagogical software for the purpose of teaching the database systems themselves, despite the progress made in modern database architecture. In this thesis, we present a working document to assist in the instruction of two new database systems: the Multi-Backend Database System (MBDS) and the Multi-Lingual Database System (MLDS). The course of instruction describes the creation…

  1. Tailored patient information using a database system: Increasing patient compliance in a day surgery setting

    DEFF Research Database (Denmark)

    Grode, Jesper Nicolai Riis; Grode, Louise; Steinsøe, Ulla

    2013-01-01

    The hospital is responsible for providing patients with accurate information enabling them to prepare for surgery and rehabilitation. Often patients are overloaded with uncoordinated information, letters and leaflets. The contribution of this project is a database system enabling health professionals to tailor patient information; a system was established to support these requirements. A relational database system holds all information pieces in a granular, structured form. Each individual piece of information can be joined with other pieces, thus supporting the tailoring of information. A web service layer caters for integration with output systems/media (word processing engines, web, mobile apps, and information kiosks). To lower the adoption bar of the system, an MS Word user interface was integrated with the web service layer; information can now quickly be categorised and grouped according to purpose of use, and users can quickly set up information…

  2. Bibliographic Retrieval System (BARS) -- Sandia Shock Compression (SSC) Database / Shock Physics Index (SPHINX) Database. Volume 3, UNIX Version Systems Guide

    Energy Technology Data Exchange (ETDEWEB)

    von Laven, G.M. [Advanced Software Engineering, Madison, AL (United States); Herrmann, W. [Sandia National Labs., Albuquerque, NM (United States)

    1993-09-01

    The Bibliographic Retrieval System (BARS) is a database management system specially designed to store and retrieve bibliographic references and track documents. The system uses INGRES to manage this database and user interface. It uses forms for journal articles, books, conference proceedings, theses, technical reports, letters, memos, visual aids, as well as a miscellaneous form which can be used for data sets or any other material which can be assigned an access or file number. Sorted output resulting from flexible BOOLEAN searches can be printed or saved in files which can be inserted in reference lists for use with word processors.

  3. Period Integrals and Tautological Systems

    CERN Document Server

    Lian, Bong H; Yau, Shing-Tung

    2011-01-01

    We study period integrals of CY hypersurfaces in a partial flag variety. We construct a holonomic system of differential equations which govern the period integrals. By means of representation theory, a set of generators of the system can be described explicitly. The results are also generalized to CY complete intersections. The construction of these new systems of differential equations has led us to the notion of a tautological system.
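    For orientation, a standard one-parameter example of the objects involved (not drawn from this paper) is the period integral of a family of Calabi-Yau hypersurfaces together with the classical Picard-Fuchs operator that annihilates it, here for the quintic threefold family:

      % Period integral of a holomorphic top form over a cycle, and the classical
      % Picard-Fuchs operator for the quintic family (a textbook example).
      \[
        \Pi(z) = \int_{\gamma} \Omega_z , \qquad \theta = z\frac{d}{dz},
      \]
      \[
        \Bigl[\theta^4 - 5z\,(5\theta+1)(5\theta+2)(5\theta+3)(5\theta+4)\Bigr]\,\Pi(z) = 0 .
      \]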

  4. ViralORFeome: an integrated database to generate a versatile collection of viral ORFs.

    Science.gov (United States)

    Pellet, J; Tafforeau, L; Lucas-Hourani, M; Navratil, V; Meyniel, L; Achaz, G; Guironnet-Paquet, A; Aublin-Gex, A; Caignard, G; Cassonnet, P; Chaboud, A; Chantier, T; Deloire, A; Demeret, C; Le Breton, M; Neveu, G; Jacotot, L; Vaglio, P; Delmotte, S; Gautier, C; Combet, C; Deleage, G; Favre, M; Tangy, F; Jacob, Y; Andre, P; Lotteau, V; Rabourdin-Combe, C; Vidalain, P O

    2010-01-01

    Large collections of protein-encoding open reading frames (ORFs) established in a versatile recombination-based cloning system have been instrumental to study protein functions in high-throughput assays. Such 'ORFeome' resources have been developed for several organisms but in virology, plasmid collections covering a significant fraction of the virosphere are still needed. In this perspective, we present ViralORFeome 1.0 (http://www.viralorfeome.com), an open-access database and management system that provides an integrated set of bioinformatic tools to clone viral ORFs in the Gateway® system. ViralORFeome provides a convenient interface to navigate through virus genome sequences, to design ORF-specific cloning primers, to validate the sequence of generated constructs and to browse established collections of virus ORFs. Most importantly, ViralORFeome has been designed to manage all possible variants or mutants of a given ORF so that the cloning procedure can be applied to any emerging virus strain. A subset of plasmid constructs generated with ViralORFeome platform has been tested with success for heterologous protein expression in different expression systems at proteome scale. ViralORFeome should provide our community with a framework to establish a large collection of virus ORF clones, an instrumental resource to determine functions, activities and binding partners of viral proteins.
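    To make the primer-design step concrete, here is a hedged Python sketch of the usual recipe for Gateway cloning primers: an ORF-specific stretch is prefixed with attB recombination adapters. The attB1/attB2 sequences below are the commonly published ones and the helper function is illustrative, not ViralORFeome's actual code; verify the adapters against your cloning kit's manual.

      # Illustrative Gateway primer design: attB adapters + ORF-specific sequence.
      ATTB1 = "GGGGACAAGTTTGTACAAAAAAGCAGGCT"   # forward adapter (commonly published attB1)
      ATTB2 = "GGGGACCACTTTGTACAAGAAAGCTGGGT"   # reverse adapter (commonly published attB2)

      def complement(base):
          return {"A": "T", "T": "A", "G": "C", "C": "G"}[base]

      def reverse_complement(seq):
          return "".join(complement(b) for b in reversed(seq))

      def gateway_primers(orf, specific_len=20):
          forward = ATTB1 + orf[:specific_len]
          reverse = ATTB2 + reverse_complement(orf[-specific_len:])
          return forward, reverse

      fwd, rev = gateway_primers("ATGGCTAGCAAGGGCGAGGAGCTGTTCACCGGGGTGGTGCCCTAA")
      print(fwd)
      print(rev)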

  5. Deep Time Data Infrastructure: Integrating Our Current Geologic and Biologic Databases

    Science.gov (United States)

    Kolankowski, S. M.; Fox, P. A.; Ma, X.; Prabhu, A.

    2016-12-01

    As our knowledge of Earth's geologic and mineralogical history grows, we require more efficient methods of sharing immense amounts of data. Databases across numerous disciplines have been utilized to offer extensive information on very specific epochs of Earth's history up to its current state, e.g., the fossil record, rock compositions, proteins, etc. These databases could be a powerful force in identifying previously unseen correlations, such as relationships between minerals and proteins. Creating a unifying site that provides a portal to these databases will aid our ability as a collaborative scientific community to utilize our findings more effectively. The Deep-Time Data Infrastructure (DTDI) is currently being defined as part of a larger effort to accomplish this goal. DTDI will not be a new database, but an integration of existing resources. Current geologic and related databases were identified, documentation of their schemas was established, and the work will be presented as a stage-by-stage progression. Through conceptual modeling focused on variables from their combined records, we will determine the best way to integrate these databases using common factors. The Deep-Time Data Infrastructure will allow geoscientists to bridge gaps in data and further our understanding of Earth's history.

  6. CTDB: An Integrated Chickpea Transcriptome Database for Functional and Applied Genomics.

    Directory of Open Access Journals (Sweden)

    Mohit Verma

    Chickpea is an important grain legume used as a rich source of protein in the human diet. The narrow genetic diversity and limited availability of genomic resources are the major constraints in implementing breeding strategies and biotechnological interventions for genetic enhancement of chickpea. We developed an integrated Chickpea Transcriptome Database (CTDB), which provides a comprehensive web interface for visualization and easy retrieval of transcriptome data in chickpea. The database features many tools for similarity search, functional annotation (putative function, PFAM domain and gene ontology) search and comparative gene expression analysis. The current release of CTDB (v2.0) hosts transcriptome datasets with high quality functional annotation from cultivated (desi and kabuli types) and wild chickpea. A catalog of transcription factor families and their expression profiles in chickpea are available in the database. The gene expression data have been integrated to study the expression profiles of chickpea transcripts in major tissues/organs and various stages of flower development. Utilities such as similarity search, ortholog identification and comparative gene expression have also been implemented in the database to facilitate comparative genomic studies among different legumes and Arabidopsis. Furthermore, the CTDB represents a resource for the discovery of functional molecular markers (microsatellites and single nucleotide polymorphisms) between different chickpea types. We anticipate that the integrated information content of this database will accelerate functional and applied genomic research for the improvement of chickpea. The CTDB web service is freely available at http://nipgr.res.in/ctdb.html.

  7. KaBOB: ontology-based semantic integration of biomedical databases.

    Science.gov (United States)

    Livingston, Kevin M; Bada, Michael; Baumgartner, William A; Hunter, Lawrence E

    2015-04-23

    The ability to query many independent biological databases using a common ontology-based semantic model would facilitate deeper integration and more effective utilization of these diverse and rapidly growing resources. Despite ongoing work moving toward shared data formats and linked identifiers, significant problems persist in semantic data integration in order to establish shared identity and shared meaning across heterogeneous biomedical data sources. We present five processes for semantic data integration that, when applied collectively, solve seven key problems. These processes include making explicit the differences between biomedical concepts and database records, aggregating sets of identifiers denoting the same biomedical concepts across data sources, and using declaratively represented forward-chaining rules to take information that is variably represented in source databases and integrate it into a consistent biomedical representation. We demonstrate these processes and solutions by presenting KaBOB (the Knowledge Base Of Biomedicine), a knowledge base of semantically integrated data from 18 prominent biomedical databases using common representations grounded in Open Biomedical Ontologies. An instance of KaBOB with data about humans and seven major model organisms can be built using on the order of 500 million RDF triples. All source code for building KaBOB is available under an open-source license. KaBOB is an integrated knowledge base of biomedical data representationally based in prominent, actively maintained Open Biomedical Ontologies, thus enabling queries of the underlying data in terms of biomedical concepts (e.g., genes and gene products, interactions and processes) rather than features of source-specific data schemas or file formats. KaBOB resolves many of the issues that routinely plague biomedical researchers intending to work with data from multiple data sources, and provides a platform for ongoing data integration and development.
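    The forward-chaining idea can be illustrated with a toy example: declaratively represented rules read variably represented source facts and add triples in a shared vocabulary. The predicates and identifiers below are invented for illustration and are not KaBOB's actual representation.

      # Toy forward chaining over triples: rules run until no new facts appear.
      FACTS = {
          ("uniprot:P04637", "uniprot:encodedBy", "hgnc:11998"),
          ("hgnc:11998", "hgnc:symbol", "TP53"),
      }

      # rule: (?p encodedBy ?g) => (?g bio:hasProduct ?p)
      def rule_has_product(facts):
          new = set()
          for s, p, o in facts:
              if p == "uniprot:encodedBy":
                  new.add((o, "bio:hasProduct", s))
          return new

      def forward_chain(facts, rules):
          facts = set(facts)
          while True:
              inferred = set().union(*(rule(facts) for rule in rules)) - facts
              if not inferred:
                  return facts
              facts |= inferred

      for triple in sorted(forward_chain(FACTS, [rule_has_product])):
          print(triple)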

  8. Converting an integrated hospital formulary into an object-oriented database representation.

    Science.gov (United States)

    Gu, H; Liu, L M; Halper, M; Geller, J; Perl, Y

    1998-01-01

    Controlled Medical Vocabularies (CMVs) have proven to be extremely useful in their support of the tasks of information sharing and integration, communication among various software applications, and decision support. Modeling a CMV as an Object-Oriented Database (OODB) provides additional benefits such as increased support for vocabulary comprehension and flexible access. In this paper, we describe the process of modeling and converting an existing integrated hospital formulary (i.e., set of pharmacological concepts) into an equivalent OODB representation, which, in general, we refer to as an Object-Oriented Healthcare Vocabulary Repository (OOHVR). The source for our example OOHVR is a formulary provided by the Connecticut Healthcare Research and Education Foundation (CHREF). Utilizing this source formulary together with the semantic hierarchy composed of major and minor drug classes defined as part of the National Drug Code (NDC) directory, we constructed a CMV that was eventually converted into its OOHVR form (the CHREF-OOHVR). The actual conversion step was carried out automatically by a program, called the OOHVR Generator, that we have developed. At present, the CHREF-OOHVR is running on top of ONTOS, a commercial OODB management system, and is accessible on the Web.

  9. Towards Platform Independent Database Modelling in Enterprise Systems

    OpenAIRE

    Ellison, Martyn Holland; Calinescu, Radu; Paige, Richard F.

    2016-01-01

    Enterprise software systems are prevalent in many organisations; typically they are data-intensive and manage customer, sales, or other important data. When an enterprise system needs to be modernised or migrated (e.g. to the cloud), it is necessary to understand the structure of this data and how it is used. We have developed a tool-supported approach to model database structure, query patterns, and growth patterns. Compared to existing work, our tool offers increased system support and extensibility…

  10. Smart systems integration and simulation

    CERN Document Server

    Poncino, Massimo; Pravadelli, Graziano

    2016-01-01

    This book presents new methods and tools for the integration and simulation of smart devices. The design approach described in this book explicitly accounts for integration of Smart Systems components and subsystems as a specific constraint. It includes methodologies and EDA tools to enable multi-disciplinary and multi-scale modeling and design, simulation of multi-domain systems, subsystems and components at all levels of abstraction, system integration and exploration for optimization of functional and non-functional metrics. By covering theoretical and practical aspects of smart device design, this book targets people who are working and studying on hardware/software modelling, component integration and simulation in different roles (system integrators, designers, developers, researchers, teachers, students etc.). In particular, it is a good introduction for people interested in managing heterogeneous components in an efficient and effective way across different domains and different abstraction levels.

  11. A comprehensive database and analysis framework to incorporate multiscale data types and enable integrated analysis of bioactive polyphenols.

    Science.gov (United States)

    Pasinetti, Giulio M; Ho, Lap; Cheng, Haoxiang; Wang, Jun; Simon, James E; Wu, Qing-Li; Zhao, Danyue; Carry, Eileen; Ferruzzi, Mario G; Faith, Jeremiah; Valcarcel, Breanna; Hao, Ke

    2017-06-30

    The development of a given botanical preparation for eventual clinical application requires extensive, detailed characterization of the chemical composition, as well as the biological availability, biological activity and safety profiles of the botanical. These issues are typically addressed using diverse experimental protocols and model systems. Based on this consideration, in this study we established a comprehensive database and analysis framework for the collection, collation and integrative analysis of diverse, multiscale data sets. Using this framework, we conducted an integrative analysis of heterogeneous data from in vivo and in vitro investigations of a complex bioactive dietary polyphenol-rich preparation (BDPP) and built an integrated network linking datasets generated from this multitude of diverse experimental paradigms. We established a comprehensive database and analysis framework as well as a systematic and logical means to catalogue and collate the diverse array of information gathered, which is securely stored and added to in a standardized manner to enable fast query. We demonstrated the utility of the database in (1) a statistical ranking scheme to prioritize responses to treatments and (2) in-depth reconstruction of functionality studies. By examination of these datasets, the system allows analytical querying of heterogeneous data and access to information related to interactions, mechanisms of action, functions, etc., which ultimately provides a global overview of complex biological responses. Collectively, we present an integrative analysis framework that leads to novel insights on the biological activities of a complex botanical such as BDPP, based on data-driven characterization of interactions between BDPP-derived phenolic metabolites, their mechanisms of action, as well as synergism and/or potential cancellation of biological functions. Our integrative analytical approach provides novel means for systematic integrative analysis…

  12. SymbioGenomesDB: a database for the integration and access to knowledge on host-symbiont relationships.

    Science.gov (United States)

    Reyes-Prieto, Mariana; Vargas-Chávez, Carlos; Latorre, Amparo; Moya, Andrés

    2015-01-01

    Symbiotic relationships occur naturally throughout the tree of life, either in a commensal, mutualistic or pathogenic manner. The genomes of multiple organisms involved in symbiosis are rapidly being sequenced and becoming available, especially those from the microbial world. Currently, there are numerous databases that offer information on specific organisms or models, but none offer a global understanding of relationships between organisms, their interactions and capabilities within their niche, as well as their role as part of a system, in this case, their role in symbiosis. We have developed SymbioGenomesDB as a community database resource for laboratories which intend to investigate and use information on the genetics and the genomics of organisms involved in these relationships. The ultimate goal of SymbioGenomesDB is to host and support the growing and vast symbiont-host relationship information, to uncover the genetic basis of such associations. SymbioGenomesDB maintains a comprehensive organization of information on genomes of symbionts from diverse hosts throughout the Tree of Life, including their sequences, their metadata and their genomic features. This catalog of relationships was generated using computational tools, custom R scripts and manual integration of data available in the public literature. As a highly curated and comprehensive systems database, SymbioGenomesDB provides web access to all the information on symbiotic organisms, their features and links to the central database NCBI. Three different tools can be found within the database to explore symbiosis-related organisms, their genes and their genomes. Also, we offer an orthology search for one or multiple genes in one or multiple organisms within symbiotic relationships, and every table, graph and output file is downloadable and easy to parse for further analysis. The robust SymbioGenomesDB will be constantly updated to cope with all the data being generated and included in major databases.

  13. Ontological Enrichment of the Genes-to-Systems Breast Cancer Database

    Science.gov (United States)

    Viti, Federica; Mosca, Ettore; Merelli, Ivan; Calabria, Andrea; Alfieri, Roberta; Milanesi, Luciano

    Breast cancer research needs the development of specific and suitable tools to appropriately manage biomolecular knowledge. The presented work deals with the integrative storage of breast cancer related biological data, in order to promote a systems biology approach to this network disease. To increase data standardization and resource integration, annotations maintained in the Genes-to-Systems Breast Cancer (G2SBC) database are associated with ontological terms, which provide a hierarchical structure to organize data, enabling more effective queries, statistical analysis and semantic web searching. The ontologies used, which cover all levels of the molecular environment, from genes to systems, are among the best known and most widely used bioinformatics resources. In the G2SBC database, ontology terms both provide a semantic layer to improve data storage, accessibility and analysis and represent a user-friendly instrument to identify relations among biological components.

  14. LmSmdB: an integrated database for metabolic and gene regulatory network in Leishmania major and Schistosoma mansoni.

    Science.gov (United States)

    Patel, Priyanka; Mandlik, Vineetha; Singh, Shailza

    2016-03-01

    A database that integrates all the information required for biological processing in a single platform is essential. We have attempted to create one such integrated database that can be a one-stop shop for the essential features required to fetch valuable results. LmSmdB (L. major and S. mansoni database) is an integrated database that accounts for the biological networks and regulatory pathways computationally determined by integrating the knowledge of the genome sequences of the mentioned organisms. It is the first database of its kind that, together with network design, shows the simulation pattern of the product. This database intends to create a comprehensive canopy for the regulation of lipid metabolism reactions in the parasite by integrating the transcription factors, regulatory genes and the protein products controlled by the transcription factors, and hence describes how the metabolism operates at the genetic level.

  15. An integrated Korean biodiversity and genetic information retrieval system.

    Science.gov (United States)

    Lim, Jeongheui; Bhak, Jong; Oh, Hee-Mock; Kim, Chang-Bae; Park, Yong-Ha; Paek, Woon Kee

    2008-12-12

    On-line biodiversity information databases are growing quickly and being integrated into general bioinformatics systems due to the advances of fast gene sequencing technologies and the Internet. These can reduce the cost and effort of performing biodiversity surveys and genetic searches, which allows scientists to spend more time researching and less time collecting and maintaining data. This will increase the rate of knowledge build-up and improve conservation. The biodiversity databases in Korea have been scattered among several institutes and local natural history museums with incompatible data types. Therefore, a comprehensive database and a nationwide web portal for biodiversity information are necessary in order to integrate diverse information resources, including molecular and genomic databases. The Korean Natural History Research Information System (NARIS) was built and serviced as the central biodiversity information system to collect and integrate the biodiversity data of various institutes and natural history museums in Korea. This database aims to be an integrated resource that contains additional biological information, such as genome sequences and molecular level diversity. Currently, twelve institutes and museums in Korea are integrated via the DiGIR (Distributed Generic Information Retrieval) protocol, with Darwin Core 2.0 format as its metadata standard for data exchange. Data quality control and statistical analysis functions have been implemented. In particular, integrating molecular and genetic information from the National Center for Biotechnology Information (NCBI) databases with NARIS was recently accomplished. NARIS can also be extended to accommodate other institutes abroad, and the whole system can be exported to establish local biodiversity management servers. A Korean data portal, NARIS, has been developed to efficiently manage and utilize biodiversity data, which includes genetic resources. NARIS aims to be integral in maximizing…

  16. Establishment of an integrative multi-omics expression database CKDdb in the context of chronic kidney disease (CKD)

    Science.gov (United States)

    Fernandes, Marco; Husi, Holger

    2017-01-01

    Complex human traits such as chronic kidney disease (CKD) are a major health and financial burden in modern societies. Currently, CKD onset and progression are still not fully understood at the molecular level. Meanwhile, the prolific use of high-throughput omic technologies in disease biomarker discovery studies has yielded a vast amount of disjointed data that cannot be easily collated. Therefore, we aimed to develop a molecule-centric database featuring CKD-related experiments from available literature publications. We established the Chronic Kidney Disease database CKDdb, an integrated and clustered information resource that covers multi-omic studies (microRNAs, genomics, peptidomics, proteomics and metabolomics) of CKD and related disorders by performing literature data mining and manual curation. The CKDdb database contains differential expression data from 49395 molecule entries (redundant), of which 16885 are unique molecules (non-redundant), from 377 manually curated studies of 230 publications. This database was intentionally built to allow disease pathway analysis through a systems approach in order to yield biological meaning by integrating all existing information, and therefore has the potential to unravel and gain an in-depth understanding of the key molecular events that modulate CKD pathogenesis. PMID:28079125

  17. Secure integrated circuits and systems

    CERN Document Server

    Verbauwhede, Ingrid MR

    2010-01-01

    On any advanced integrated circuit or 'system-on-chip' there is a need for security. In many applications the actual implementation has become the weakest link in security rather than the algorithms or protocols. The purpose of the book is to give the integrated circuits and systems designer an insight into the basics of security and cryptography from the implementation point of view. As a designer of integrated circuits and systems it is important to know both the state-of-the-art attacks as well as the countermeasures. Optimizing for security is different from optimizing for speed, area, …

  18. Database Management System Construction for the Evaluation Results of Intensive Land Use in the Development Areas of Hunan Province

    Institute of Scientific and Technical Information of China (English)

    Mingliang LIU

    2013-01-01

    Using spatial data integration and database technology, and analyzing and integrating the assessment results of all the development zones in Hunan Province at different times, this paper constructs a database and management system for the results of evaluating intensive land use in development zones, thus formulating "one map" of Hunan development zones and realizing integrated management and application of the assessment results of all development zones at and above the provincial level at any time. Practice has shown that the system works well and holds promise in land management for land management departments and development zones.

  19. An integrated system for genetic analysis

    Directory of Open Access Journals (Sweden)

    Duan Xiao

    2006-04-01

    Background: Large-scale genetic mapping projects require data management systems that can handle complex phenotypes and detect and correct high-throughput genotyping errors, yet are easy to use. Description: We have developed an Integrated Genotyping System (IGS) to meet this need. IGS securely stores, edits and analyses genotype and phenotype data. It stores information about DNA samples, plates, primers, markers and genotypes generated by a genotyping laboratory. Data are structured so that statistical genetic analysis of both case-control and pedigree data is straightforward. Conclusion: IGS can model complex phenotypes and contain genotypes from whole genome association studies. The database makes it possible to integrate genetic analysis with data curation. The IGS web site http://bioinformatics.well.ox.ac.uk/project-igs.shtml contains further information.
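    One classic example of the genotyping-error detection such systems perform is a Mendelian-consistency check in pedigree data; the sketch below is illustrative only and is not IGS code.

      # Illustrative Mendelian-consistency check for pedigree genotype data.
      from itertools import product

      def consistent(child, mother, father):
          """Genotypes are pairs of alleles, e.g. ('A', 'G')."""
          possible = {tuple(sorted(pair)) for pair in product(mother, father)}
          return tuple(sorted(child)) in possible

      print(consistent(("A", "G"), ("A", "A"), ("G", "G")))  # True
      print(consistent(("G", "G"), ("A", "A"), ("A", "G")))  # False -> likely a genotyping error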

  20. STINGRAY: system for integrated genomic resources and analysis

    OpenAIRE

    Wagner, Glauber; Jardim, Rodrigo; Tschoeke, Diogo A; Loureiro, Daniel R.; Ocaña, Kary ACS; Ribeiro, Antonio CB; Vanessa E. Emmel; Probst, Christian M.; Pitaluga, André N; Grisard, Edmundo C; Cavalcanti, Maria C; Campos, Maria LM; Mattoso, Marta; Dávila, Alberto MR

    2014-01-01

    Background: The STINGRAY system has been conceived to ease the tasks of integrating, analyzing, annotating and presenting genomic and expression data from Sanger and Next Generation Sequencing (NGS) platforms. Findings: STINGRAY includes: (a) a complete and integrated workflow (more than 20 bioinformatics tools) ranging from functional annotation to phylogeny; (b) a MySQL database schema, suitable for data integration and user access control; and (c) a user-friendly graphical web-based interface…

  1. 9th Asian Conference on Intelligent Information and Database Systems

    CERN Document Server

    Nguyen, Ngoc; Shirai, Kiyoaki

    2017-01-01

    This book presents recent research in intelligent information and database systems. The carefully selected contributions were initially accepted for presentation as posters at the 9th Asian Conference on Intelligent Information and Database Systems (ACIIDS 2017) held from 3 to 5 April 2017 in Kanazawa, Japan. While the contributions are of an advanced scientific level, several are accessible for non-expert readers. The book brings together 47 chapters divided into six main parts: • Part I. From Machine Learning to Data Mining. • Part II. Big Data and Collaborative Decision Support Systems, • Part III. Computer Vision Analysis, Detection, Tracking and Recognition, • Part IV. Data-Intensive Text Processing, • Part V. Innovations in Web and Internet Technologies, and • Part VI. New Methods and Applications in Information and Software Engineering. The book is an excellent resource for researchers and those working in algorithmics, artificial and computational intelligence, collaborative systems, decision support…

  2. 8th Asian Conference on Intelligent Information and Database Systems

    CERN Document Server

    Madeyski, Lech; Nguyen, Ngoc

    2016-01-01

    The objective of this book is to contribute to the development of the intelligent information and database systems with the essentials of current knowledge, experience and know-how. The book contains a selection of 40 chapters based on original research presented as posters during the 8th Asian Conference on Intelligent Information and Database Systems (ACIIDS 2016) held on 14–16 March 2016 in Da Nang, Vietnam. The papers to some extent reflect the achievements of scientific teams from 17 countries in five continents. The volume is divided into six parts: (a) Computational Intelligence in Data Mining and Machine Learning, (b) Ontologies, Social Networks and Recommendation Systems, (c) Web Services, Cloud Computing, Security and Intelligent Internet Systems, (d) Knowledge Management and Language Processing, (e) Image, Video, Motion Analysis and Recognition, and (f) Advanced Computing Applications and Technologies. The book is an excellent resource for researchers, those working in artificial intelligence, mu...

  3. Structure design and establishment of database application system for alien species in Shandong Province, China

    Institute of Scientific and Technical Information of China (English)

    GUO Wei-hua; LIU Heng; DU Ning; ZHANG Xin-shi; WANG Ren-qing

    2007-01-01

    This paper presents a case study on the structural design and establishment of a database application system for alien species in Shandong Province, integrating Geographic Information System (GIS), computer network, and database technologies in the research of alien species. The modules of the alien species database, including classified data input, statistics and analysis, species pictures and distribution maps, and data output, were developed with Visual Studio .NET 2003 and Microsoft SQL Server 2000. The alien species information contains the information of classification, species distinguishing characteristics, biological characteristics, original area, distribution area, the entering fashion and route, invasion time, invasion reason, interaction with endemic species, growth state, danger state and spatial information, i.e., distribution maps. On these bases, several modules including application, checking, modifying, printing, adding and returning modules were developed. Furthermore, through the establishment of index tables and index maps, data such as pictures, text and GIS map data can also be queried spatially. This research established a technological platform for sharing scientific information on alien species in Shandong Province, offering a basis for dynamic querying of alien species, early-warning technology for prevention, and a fast-reaction system. The database application system is practical, has a friendly user interface, and is convenient to use. It can supply full and accurate information inquiry services on alien species for users and provides functions for dynamically managing the database for the administrator.

  4. lexiDB: a scalable corpus database management system

    OpenAIRE

    Coole, Matt; Rayson, Paul Edward; Mariani, John Amedeo

    2016-01-01

    lexiDB is a scalable corpus database management system designed to fulfill corpus linguistics retrieval queries on multi-billion-word multiply-annotated corpora. It is based on a distributed architecture that allows the system to scale out to support ever larger text collections. This paper presents an overview of the architecture behind lexiDB as well as a demonstration of its functionality. We present lexiDB's performance metrics based on the AWS (Amazon Web Services) infrastructure with two…

  5. Haantjes Manifolds and Integrable Systems

    CERN Document Server

    Tempesta, Piergiulio

    2014-01-01

    A general theory of integrable systems is proposed, based on the theory of Haantjes manifolds. We introduce the notion of a symplectic-Haantjes manifold (or $\omega\mathcal{H}$ manifold) as the natural setting where the notion of integrability can be formulated. We propose an approach to the separation of variables for classical systems, related to the geometry of Haantjes manifolds. A special class of coordinates, called Darboux-Haantjes coordinates, will be constructed from the Haantjes structure associated with an integrable system. They enable the additive separation of variables of the Hamilton-Jacobi equation. We also present an application of our approach to the study of some finite-dimensional integrable models, such as the Hénon-Heiles systems and a stationary reduction of the KdV hierarchy.
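    For orientation, the central object of this theory is the Haantjes tensor of a (1,1)-tensor field A; its standard definition is stated here for the reader's convenience, not quoted from the paper:

      % Haantjes tensor of a (1,1)-tensor field A; [.,.] is the Lie bracket of
      % vector fields. A is a Haantjes operator when this tensor vanishes for all X, Y.
      \[
        \mathcal{H}_A(X,Y) \;=\; A^2[X,Y] \;+\; [AX,AY] \;-\; A\bigl([AX,Y] + [X,AY]\bigr).
      \]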

  6. Integrated Risk Information System (IRIS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — EPA's Integrated Risk Information System (IRIS) is a compilation of electronic reports on specific substances found in the environment and their potential to cause...

  7. An integrated scheduling and program management system

    Science.gov (United States)

    Porter, D.; Gibson, J. D.; Williams, G. G.

    2012-09-01

    An integrated scheduling and program management system is being developed for the MMT Observatory (MMTO), Arizona, USA. A systems engineering approach is used to combine existing and new relational databases, spreadsheets, file storage systems, and web-based user interfaces into a single unified system. An overview of software design, data management, user interfaces, and techniques for performance assessment is presented. Goals of this system include streamlined data management and an optimized user experience. The MMTO has over a dozen different telescope configurations, including three secondary mirrors and a wide range of observing instruments. Scheduling is complex for the varying telescope configurations, limited available observing time, and appropriate astronomical conditions (e.g., lunar phase) for each science project. Scheduled telescope configurations can be used to perform safety checks of the actual configuration during telescope operations. Programmatic information is automatically input into nightly telescope operator (TO) logs by the system. The TOs provide additional information into the system on telescope usage, observing conditions (e.g., weather conditions), and observatory closure (e.g., from instrument malfunction or inclement weather). All of this information is synthesized to assess telescope and observatory performance. Web interfaces to the system can be used by observers to submit information, such as travel plans, instrumentation requirements, and observing catalogs. A service request (SR) (i.e., trouble report) system has also been developed for tracking operational issues. The specific needs of the MMTO have been met through in-house software development of this integrated scheduling and program management system.

  8. VaProS: a database-integration approach for protein/genome information retrieval

    KAUST Repository

    Gojobori, Takashi

    2016-12-24

    Life science research now heavily relies on all sorts of databases for genome sequences, transcription, protein three-dimensional (3D) structures, protein–protein interactions, phenotypes and so forth. The knowledge accumulated by all the omics research is so vast that a computer-aided search of data is now a prerequisite for starting a new study. In addition, a combinatory search throughout these databases has a chance to extract new ideas and new hypotheses that can be examined by wet-lab experiments. By virtually integrating the related databases on the Internet, we have built a new web application that facilitates life science researchers for retrieving experts’ knowledge stored in the databases and for building a new hypothesis of the research target. This web application, named VaProS, puts stress on the interconnection between the functional information of genome sequences and protein 3D structures, such as structural effect of the gene mutation. In this manuscript, we present the notion of VaProS, the databases and tools that can be accessed without any knowledge of database locations and data formats, and the power of search exemplified in quest of the molecular mechanisms of lysosomal storage disease. VaProS can be freely accessed at http://p4d-info.nig.ac.jp/vapros/.

  9. Spatial variation of volcanic rock geochemistry in the Virunga Volcanic Province: Statistical analysis of an integrated database

    Science.gov (United States)

    Barette, Florian; Poppe, Sam; Smets, Benoît; Benbakkar, Mhammed; Kervyn, Matthieu

    2017-10-01

    We present an integrated, spatially-explicit database of existing geochemical major-element analyses available from (post-) colonial scientific reports, PhD Theses and international publications for the Virunga Volcanic Province, located in the western branch of the East African Rift System. This volcanic province is characterised by alkaline volcanism, including silica-undersaturated, alkaline and potassic lavas. The database contains a total of 908 geochemical analyses of eruptive rocks for the entire volcanic province with a localisation for most samples. A preliminary analysis of the overall consistency of the database, using statistical techniques on sets of geochemical analyses with contrasted analytical methods or dates, demonstrates that the database is consistent. We applied a principal component analysis and cluster analysis on whole-rock major element compositions included in the database to study the spatial variation of the chemical composition of eruptive products in the Virunga Volcanic Province. These statistical analyses identify spatially distributed clusters of eruptive products. The known geochemical contrasts are highlighted by the spatial analysis, such as the unique geochemical signature of Nyiragongo lavas compared to other Virunga lavas, the geochemical heterogeneity of the Bulengo area, and the trachyte flows of Karisimbi volcano. Most importantly, we identified separate clusters of eruptive products which originate from primitive magmatic sources. These lavas of primitive composition are preferentially located along NE-SW inherited rift structures, often at distance from the central Virunga volcanoes. Our results illustrate the relevance of a spatial analysis on integrated geochemical data for a volcanic province, as a complement to classical petrological investigations. This approach indeed helps to characterise geochemical variations within a complex of magmatic systems and to identify specific petrologic and geochemical investigations
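    The statistical pipeline described (PCA followed by clustering of whole-rock major-element compositions) can be sketched in a few lines of Python with scikit-learn; the tiny array of oxide weight percentages below is fabricated for illustration and is not Virunga data.

      # Illustrative PCA + clustering of major-element compositions.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler
      from sklearn.cluster import KMeans

      # columns: SiO2, TiO2, Al2O3, MgO, K2O (wt%); rows: rock samples (invented)
      X = np.array([
          [45.1, 3.2, 14.0, 8.5, 2.1],
          [46.0, 3.0, 14.5, 7.9, 2.4],
          [55.2, 1.1, 17.8, 2.2, 4.8],
          [56.0, 1.0, 18.1, 2.0, 5.1],
      ])

      scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
      print(labels)  # e.g. [0 0 1 1]: the two compositional groups separate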

  10. Development of a DLG Database-Loading and Integration System Based on an Open-Source GIS Class Library

    Institute of Scientific and Technical Information of China (English)

    高国勇

    2014-01-01

    The paper introduces the features and advantages of open-source GIS software and, in view of the actual conditions of DLG production, uses C# on the AutoCAD platform to implement a data-loading and integration system based on the open-source GIS class library NTS, describing the implementation of the key modules.

  11. Dataspace: an automated visualization system for large databases

    Science.gov (United States)

    Petajan, Eric D.; Jean, Yves D.; Lieuwen, Dan; Anupam, Vinod

    1997-04-01

    DataSpace is a multi-platform software system for easily visualizing relational databases using a set of flexible 3D graphics tools. Typically, five attributes are selected for a given visualization session, where two of the attributes are used to generate 2D plots and the other three attributes are used to position the 2D plots in a regular 3D lattice. Mouse-based 3D navigation with constraints allows the user to see the 'forest and the trees' without getting 'lost in space'. DataSpace uses the Standard Query Language to allow connection to popular database systems. DataSpace also incorporates a variety of additional tools, e.g., aggregation, data 'drill down', multidimensional scaling, variable transparency, query by example, and display of graphics from external applications. Labeling of node contents is automatic. 3D stroke fonts are used to provide readable yet scalable text in a 3D environment. Since interactive 3D navigation is essential to DataSpace, we have incorporated several methods for adaptively reducing graphical detail without losing information when the host machine is overloaded. DataSpace has been used to visualize databases containing over 1 million records with interactive performance. In particular, large databases containing stock price information and telecommunications customer profiles have been analyzed using DataSpace.
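    DataSpace's core layout idea (three of the five selected attributes index a regular 3D lattice of cells, while the other two feed the 2D plot drawn inside each cell) can be sketched as follows; the binning scheme and the sample records are invented for illustration.

      # Illustrative mapping of records into a 3D lattice of 2D-plot cells.
      def lattice_layout(records, lattice_keys, plot_keys, bins=4):
          def bin_of(value, lo, hi):
              return min(int((value - lo) / (hi - lo + 1e-9) * bins), bins - 1)

          ranges = {k: (min(r[k] for r in records), max(r[k] for r in records))
                    for k in lattice_keys}
          cells = {}
          for r in records:
              cell = tuple(bin_of(r[k], *ranges[k]) for k in lattice_keys)  # 3D cell index
              cells.setdefault(cell, []).append(tuple(r[k] for r in [r] for k in plot_keys))
          return cells

      records = [
          {"price": 10, "volume": 5, "year": 1995, "month": 1, "region": 2},
          {"price": 40, "volume": 9, "year": 1996, "month": 6, "region": 7},
      ]
      print(lattice_layout(records, ["year", "month", "region"], ["price", "volume"]))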

  12. A HYBRID INTRUSION PREVENTION SYSTEM (HIPS) FOR WEB DATABASE SECURITY

    Directory of Open Access Journals (Sweden)

    Eslam Mohsin Hassib

    2010-07-01

    Web database security is a challenging issue that should be taken into consideration when designing and building business-oriented web applications. Such applications usually include critical processes, for example in electronic-commerce web applications that involve money transfer via Visa or MasterCard. Security is also a critical issue in other web-based applications, such as sites for military weapons companies and the national security of countries. The main contribution of this paper is to introduce a new web database security model that combines a triple system: (i) the Host Identity Protocol (HIP) in a new authentication method called DSUC (Data Security Unique Code), (ii) strong filtering rules that detect intruders with high accuracy, and (iii) a real-time monitoring system that employs the Uncertainty Degree Model (UDM) using fuzzy set theory. It is shown that the combination of these three powerful security components results in a very strong security model. Accordingly, the proposed web database security model has the ability to detect and provide real-time prevention of intruder access with high precision. Experimental results have shown that the proposed model introduces satisfactory web database protection levels, which in some cases reach the detection and prevention of more than 93% of intruders.
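    As a rough illustration of the fuzzy uncertainty-degree idea, the sketch below scores a request by fuzzy membership in suspicious patterns and blocks it above a threshold; the membership functions, weights and threshold are invented, not those of the proposed UDM.

      # Toy fuzzy scoring of a web database request (illustrative only).
      def ramp(x, lo, hi):
          """Piecewise-linear membership rising from 0 at lo to 1 at hi."""
          return max(0.0, min(1.0, (x - lo) / (hi - lo)))

      def uncertainty_degree(req):
          mu_rate = ramp(req["requests_per_min"], 30, 300)       # unusually chatty client
          mu_night = 1.0 if req["hour"] in range(0, 5) else 0.0  # off-hours access
          mu_fail = ramp(req["failed_logins"], 1, 5)             # repeated login failures
          return max(mu_rate, 0.5 * mu_night + 0.5 * mu_fail)

      req = {"requests_per_min": 120, "hour": 3, "failed_logins": 4}
      degree = uncertainty_degree(req)
      print(degree, "block" if degree > 0.6 else "allow")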

  13. NMO-DBr: the Brazilian Neuromyelitis Optica Database System

    Directory of Open Access Journals (Sweden)

    Marco A. Lana-Peixoto

    2011-08-01

    Full Text Available OBJECTIVE: To present the Brazilian Neuromyelitis Optica Database System (NMO-DBr), a database system which collects, stores, retrieves, and analyzes information from patients with NMO and NMO-related disorders. METHOD: NMO-DBr uses Flux, a LIMS (Laboratory Information Management System), for data management. We used information from medical records of patients with NMO spectrum disorders and NMO variants, the latter defined by the presence of neurological symptoms associated with typical lesions on brain magnetic resonance imaging (MRI) or aquaporin-4 antibody seropositivity. RESULTS: NMO-DBr contains data related to patient identification, symptoms, associated conditions, index events, recurrences, family history, visual and spinal cord evaluation, disability, cerebrospinal fluid and blood tests, MRI, optical coherence tomography, diagnosis and treatment. It guarantees confidentiality, performs cross-checking and statistical analysis. CONCLUSION: NMO-DBr is a tool which guides professionals to take the history, record and analyze information, making medical practice more consistent and improving research in the area.

  14. MAGIC Database and Interfaces: An Integrated Package for Gene Discovery and Expression

    Directory of Open Access Journals (Sweden)

    Lee H. Pratt

    2006-03-01

    Full Text Available The rapidly increasing rate at which biological data is being produced requires a corresponding growth in relational databases and associated tools that can help laboratories contend with that data. With this need in mind, we describe here a Modular Approach to a Genomic, Integrated and Comprehensive (MAGIC) Database. This Oracle 9i database derives from an initial focus in our laboratory on gene discovery via production and analysis of expressed sequence tags (ESTs), and subsequently on gene expression as assessed by both EST clustering and microarrays. The MAGIC Gene Discovery portion of the database focuses on information derived from DNA sequences and on its biological relevance. In addition to MAGIC SEQ-LIMS, which is designed to support activities in the laboratory, it contains several additional subschemas. The latter include MAGIC Admin for database administration, MAGIC Sequence for sequence processing as well as sequence and clone attributes, MAGIC Cluster for the results of EST clustering, MAGIC Polymorphism in support of microsatellite and single-nucleotide-polymorphism discovery, and MAGIC Annotation for electronic annotation by BLAST and BLAT. The MAGIC Microarray portion is a MIAME-compliant database with two components at present. These are MAGIC Array-LIMS, which makes possible remote entry of all information into the database, and MAGIC Array Analysis, which provides data mining and visualization. Because all aspects of interaction with the MAGIC Database are via a web browser, it is ideally suited not only for individual research laboratories but also for core facilities that serve clients at any distance.

  15. Improved Integrity Constraints Checking in Distributed Databases by Exploiting Local Checking

    Institute of Scientific and Technical Information of China (English)

    Ali A.Alwan; Hamidah Ibrahim; Nur Izura Udzir

    2009-01-01

    Most previous studies of integrity constraint checking in distributed databases derive simplified forms of the initial integrity constraints with the sufficiency property, since a sufficient test is known to be cheaper than the complete test of the initial integrity constraint: it involves less data to be transferred across the network and can always be evaluated at the target site (a single site). These studies are limited, however, in that they depend strictly on the assumption that an update operation is executed at the site where the relation specified in the update operation is located, which is not always true. When that assumption fails, the sufficient test, proven in previous work to be a local test, is no longer appropriate. This paper proposes an approach to checking integrity constraints in a distributed database that utilizes as much as possible the local information stored at the target site. The proposed approach derives support tests as an alternative to the existing complete and sufficient tests proposed by previous researchers, with the intention of increasing the amount of local checking regardless of the location of the submitted update operation. Several analyses have been performed to evaluate the proposed approach, and the results show that support tests can benefit the distributed database, where local constraint checking can be achieved.
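
    The distinction between the two kinds of tests can be made concrete. In the sketch below (hypothetical fragment and function names, not from the paper), a sufficient test for a referential constraint succeeds using only data stored at the target site, while the complete test may still require a remote lookup:

        def sufficient_test(fk_value, local_parent_fragment):
            """If the key is found locally, the constraint surely holds;
            a False result is inconclusive rather than a violation."""
            return fk_value in local_parent_fragment

        def complete_test(fk_value, local_parent_fragment, remote_lookup):
            """Definitive check: falls back to a network round trip when
            the local fragment cannot decide."""
            if fk_value in local_parent_fragment:
                return True
            return remote_lookup(fk_value)

        local_fragment = {"K10", "K27"}
        print(sufficient_test("K27", local_fragment))   # True: no network traffic
        print(complete_test("K99", local_fragment,
                            lambda k: False))           # remote site consulted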

  16. Voice integrated systems

    Science.gov (United States)

    Curran, P. Mike

    1977-01-01

    The program at the Naval Air Development Center was initiated to determine the desirability of interactive voice systems for use in airborne weapon system crew stations. A voice recognition and synthesis system (VRAS) was developed and incorporated into a human centrifuge. The speech recognition aspect of VRAS was developed using a voice command system (VCS) developed by Scope Electronics. The speech synthesis capability was supplied by a Votrax VS-5 speech synthesis unit built by Vocal Interface. The effects of simulated flight on automatic speech recognition were determined by repeated trials in the VRAS-equipped centrifuge, establishing the relationship between vibration, G, O2 mask use, mission duration, and cockpit temperature on the one hand and voice quality on the other. The results showed that: (1) voice quality degrades after 0.5 hours with an O2 mask; (2) voice quality degrades under high vibration; and (3) voice quality degrades under high levels of G. The voice quality studies are summarized. These results were obtained against a baseline of 80 percent recognition accuracy with VCS.

  17. Construction of an ortholog database using the semantic web technology for integrative analysis of genomic data.

    Science.gov (United States)

    Chiba, Hirokazu; Nishide, Hiroyo; Uchiyama, Ikuo

    2015-01-01

    Recently, various types of biological data, including genomic sequences, have been rapidly accumulating. To discover biological knowledge from such growing heterogeneous data, a flexible framework for data integration is necessary. Ortholog information is a central resource for interlinking corresponding genes among different organisms, and the Semantic Web provides a key technology for the flexible integration of heterogeneous data. We have constructed an ortholog database using the Semantic Web technology, aiming at the integration of numerous genomic data and various types of biological information. To formalize the structure of the ortholog information in the Semantic Web, we have constructed the Ortholog Ontology (OrthO). While the OrthO is a compact ontology for general use, it is designed to be extended to the description of database-specific concepts. On the basis of OrthO, we described the ortholog information from our Microbial Genome Database for Comparative Analysis (MBGD) in the form of Resource Description Framework (RDF) and made it available through the SPARQL endpoint, which accepts arbitrary queries specified by users. In this framework based on the OrthO, the biological data of different organisms can be integrated using the ortholog information as a hub. Besides, the ortholog information from different data sources can be compared with each other using the OrthO as a shared ontology. Here we show some examples demonstrating that the ortholog information described in RDF can be used to link various biological data such as taxonomy information and Gene Ontology. Thus, the ortholog database using the Semantic Web technology can contribute to biological knowledge discovery through integrative data analysis.
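
    As a rough illustration of the kind of query such an endpoint accepts (the endpoint URL, prefix, and property names below are hypothetical, not taken from the article), ortholog groups could be retrieved from Python with SPARQLWrapper:

        from SPARQLWrapper import SPARQLWrapper, JSON

        # Hypothetical endpoint and vocabulary; the real MBGD endpoint and
        # OrthO term names may differ.
        endpoint = SPARQLWrapper("http://example.org/sparql")
        endpoint.setQuery("""
            PREFIX ortho: <http://example.org/ontology/ortho#>
            SELECT ?group ?gene WHERE {
                ?group a ortho:OrthologGroup ;
                       ortho:member ?gene .
            } LIMIT 10
        """)
        endpoint.setReturnFormat(JSON)
        results = endpoint.query().convert()
        for row in results["results"]["bindings"]:
            print(row["group"]["value"], row["gene"]["value"])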

  18. Overview of the Integrated Genomic Data system (IGD)

    Energy Technology Data Exchange (ETDEWEB)

    Hagstrom, R.; Overbeek, R.; Price, M. (Argonne National Lab., IL (United States)); Micheals, G.S.; Taylor, R. (National Insts. of Health, Bethesda, MD (United States). Div. of Computer Resources and Technology)

    1992-01-01

    In previous work, we developed a database system to support analysis of the E. coli genome. That system provided a pidgin-English query facility, rudimentary pattern-matching capabilities, and the ability to rapidly extract answers to a wide variety of questions about the organization of the E. coli genome. To enable the comparative analysis of the genomes from different species, we have designed and implemented a new prototype database system, called the Integrated Genomic Database (IGD). IGD extends our earlier effort by incorporating a set of curator's tools that facilitate the incorporation of physical and genetic data, together with the results of genome organization analysis, into a common database system. Additional tools for extracting, manipulating, and analyzing data are planned.

  20. State analysis requirements database for engineering complex embedded systems

    Science.gov (United States)

    Bennett, Matthew B.; Rasmussen, Robert D.; Ingham, Michel D.

    2004-01-01

    It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering tool called the State Analysis Database, which provides a tool for capturing system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using the State Analysis Database.

  2. SjTPdb: integrated transcriptome and proteome database and analysis platform for Schistosoma japonicum

    Directory of Open Access Journals (Sweden)

    Wang Zhi-Qin

    2008-06-01

    Full Text Available Abstract Background Schistosoma japonicum is one of the three major blood fluke species, the etiological agents of schistosomiasis which remains a serious public health problem with an estimated 200 million people infected in 76 countries. In recent years, enormous amounts of both transcriptomic and proteomic data of schistosomes have become available, providing information on gene expression profiles for developmental stages and tissues of S. japonicum. Here, we establish a public searchable database, termed SjTPdb, with integrated transcriptomic and proteomic data of S. japonicum, to enable more efficient access and utility of these data and to facilitate the study of schistosome biology, physiology and evolution. Description All the available ESTs, EST clusters, and the proteomic dataset of S. japonicum are deposited in SjTPdb. The core of the database is the 8,420 S. japonicum proteins translated from the EST clusters, which are well annotated for sequence similarity, structural features, functional ontology, genomic variations and expression patterns across developmental stages and tissues including the tegument and eggshell of this flatworm. The data can be queried by simple text search, BLAST search, search based on developmental stage of the life cycle, and an integrated search for more specific information. A PHP-based web interface allows users to browse and query SjTPdb, and moreover to switch to external databases by the following embedded links. Conclusion SjTPdb is the first schistosome database with detailed annotations for schistosome proteins. It is also the first integrated database of both transcriptome and proteome of S. japonicum, providing a comprehensive data resource and research platform to facilitate functional genomics of schistosome. SjTPdb is available from URL: http://function.chgc.sh.cn/sj-proteome/index.htm.

  3. CancerHSP: anticancer herbs database of systems pharmacology

    OpenAIRE

    Weiyang Tao; Bohui Li; Shuo Gao; Yaofei Bai; Piar Ali Shar; Wenjuan Zhang; Zihu Guo; Ke Sun; Yingxue Fu; Chao Huang; Chunli Zheng; Jiexin Mu; Tianli Pei; Yuan Wang; Yan Li

    2015-01-01

    The numerous natural products and their bioactivity potentially afford an extraordinary resource for new drug discovery and have been employed in cancer treatment. However, the underlying pharmacological mechanisms of most natural anticancer compounds remain elusive, which has become one of the major obstacles in developing novel effective anticancer agents. Here, to address these unmet needs, we developed an anticancer herbs database of systems pharmacology (CancerHSP), which records antican...

  4. Soft Biometrics Database: A Benchmark For Keystroke Dynamics Biometric Systems

    OpenAIRE

    Syed Idrus, Syed Zulkarnain; Cherrier, Estelle; Rosenberger, Christophe; Bours, Patrick

    2013-01-01

    International audience; Among all the existing biometric modalities, authentication systems based on keystroke dynamics are particularly interesting for usability reasons. Many researchers have proposed algorithms over the last decades to increase the efficiency of this biometric modality. Proposed in this paper is a benchmark testing suite composed of a database containing multiple kinds of data (keystroke dynamics templates, soft biometric traits ...), which will be made available for the research com...

  5. Photo-z-SQL: integrated, flexible photometric redshift computation in a database

    CERN Document Server

    Beck, Róbert; Budavári, Tamás; Szalay, Alexander S; Csabai, István

    2016-01-01

    We present a flexible template-based photometric redshift estimation framework, implemented in C#, that can be seamlessly integrated into a SQL database (or DB) server and executed on-demand in SQL. The DB integration eliminates the need to move large photometric datasets outside a database for redshift estimation, and utilizes the computational capabilities of DB hardware. The code is able to perform both maximum likelihood and Bayesian estimation, and can handle inputs of variable photometric filter sets and corresponding broad-band magnitudes. It is possible to take into account the full covariance matrix between filters, and filter zero points can be empirically calibrated using measurements with given redshifts. The list of spectral templates and the prior can be specified flexibly, and the expensive synthetic magnitude computations are done via lazy evaluation, coupled with a caching of results. Parallel execution is fully supported. For large upcoming photometric surveys such as the LSST, the ability t...
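
    The template-fitting idea can be sketched independently of the database integration. In the toy maximum-likelihood estimator below (a minimal sketch with invented arrays; the actual Photo-z-SQL code adds priors, covariances, and caching), the best redshift minimizes chi-square over a template and redshift grid, with the template amplitude fitted analytically:

        import numpy as np

        def photo_z_ml(obs_flux, obs_err, template_fluxes, z_grid):
            """template_fluxes[t][i] holds the synthetic broad-band fluxes of
            template t at redshift z_grid[i] (hypothetical precomputed grid)."""
            best_chi2, best_z = np.inf, None
            for grid in template_fluxes:
                for i, z in enumerate(z_grid):
                    model = grid[i]
                    # Analytic best-fit amplitude of the template
                    a = np.sum(obs_flux * model / obs_err**2) / np.sum(model**2 / obs_err**2)
                    chi2 = np.sum(((obs_flux - a * model) / obs_err) ** 2)
                    if chi2 < best_chi2:
                        best_chi2, best_z = chi2, z
            return best_z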

  6. Integrated Satellite-HAP Systems

    DEFF Research Database (Denmark)

    Cianca, Ernestina; De Sanctis, Mauro; De Luise, Aldo

    2005-01-01

    Thus far, high-altitude platform (HAP)-based systems have been mainly conceived as an alternative to satellites for complementing the terrestrial network. This article aims to show that HAP should no longer be seen as a competitor technology by investors of satellites, but as a key element for an efficient hybrid terrestrial-satellite communication system. Two integrated HAP-satellite scenarios are presented, in which the HAP is used to overcome some of the shortcomings of satellite-based communications. Moreover, it is shown that the integration of HAPs with satellite systems can be used...

  7. Integrated Building Management System (IBMS)

    Energy Technology Data Exchange (ETDEWEB)

    Anita Lewis

    2012-07-01

    This project provides a combination of software and services that more easily and cost-effectively help to achieve optimized building performance and energy efficiency. Featuring an open-platform, cloud-hosted application suite and an intuitive user experience, this solution simplifies a traditionally very complex process by collecting data from disparate building systems and creating a single, integrated view of building and system performance. The Fault Detection and Diagnostics algorithms developed within the IBMS have been designed and tested as an integrated component of the control algorithms running the equipment being monitored. The algorithms identify the normal control behaviors of the equipment without interfering with the equipment control sequences. The algorithms also work without interfering with any cooperative control sequences operating between different pieces of equipment or building systems. In this manner the FDD algorithms create an integrated building management system.

  8. Computerized database management system for breast cancer patients.

    Science.gov (United States)

    Sim, Kok Swee; Chong, Sze Siang; Tso, Chih Ping; Nia, Mohsen Esmaeili; Chong, Aun Kee; Abbas, Siti Fathimah

    2014-01-01

    Data analysis based on breast cancer risk factors such as age, race, breastfeeding, hormone replacement therapy, family history, and obesity was conducted on breast cancer patients using a new enhanced computerized database management system. MySQL (My Structured Query Language) is selected as the database management system to store the patient data collected from hospitals in Malaysia. An automatic calculation tool is embedded in this system to assist the data analysis. The results are plotted automatically, and a user-friendly graphical user interface is developed that can control the MySQL database. Case studies show that the breast cancer incidence rate is highest among Malay women, followed by Chinese and Indian women. The peak age for breast cancer incidence is from 50 to 59 years old. Results suggest that the chance of developing breast cancer increases in older women and is reduced by breastfeeding practice. Weight status might also affect breast cancer risk. Additional studies are needed to confirm these findings.
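
    The kind of automatic calculation the system embeds can be imitated with plain SQL aggregation. The sketch below uses an invented schema and sample rows (sqlite3 standing in for the MySQL back end described):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE patients (
            id INTEGER PRIMARY KEY, age INTEGER, race TEXT,
            breastfeeding INTEGER, family_history INTEGER)""")
        conn.executemany(
            "INSERT INTO patients (age, race, breastfeeding, family_history) "
            "VALUES (?, ?, ?, ?)",
            [(55, "Malay", 1, 0), (62, "Chinese", 0, 1), (48, "Indian", 1, 0)])

        # Incidence counts per ethnic group and 10-year age bracket
        for row in conn.execute("""
                SELECT race, (age / 10) * 10 AS bracket, COUNT(*)
                FROM patients GROUP BY race, bracket ORDER BY 3 DESC"""):
            print(row)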

  9. Quantization of noncommutative completely integrable Hamiltonian systems

    CERN Document Server

    Giachetta, G; Sardanashvily, G

    2007-01-01

    Integrals of motion of a Hamiltonian system need not be commutative. The classical Mishchenko-Fomenko theorem enables one to quantize a noncommutative completely integrable Hamiltonian system around its invariant submanifold as an abelian completely integrable Hamiltonian system.

  10. Integrated monitoring and surveillance system demonstration project

    Energy Technology Data Exchange (ETDEWEB)

    Aumeier, S.E.; Walters, G. [Argonne National Lab., Idaho Falls, ID (United States); Kotter, D.; Walrath, W.M.; Zamecnik, R.J. [Lockheed-Martin Idaho Technologies Company, Idaho Falls, ID (United States)

    1997-07-01

    We present a summary of efforts associated with the installation of an integrated system for the surveillance and monitoring of stabilized plutonium metals and oxides in long-term storage. The product of this effort will include a Pu storage requirements document, baseline integrated monitoring and surveillance system (IMSS) prototype and test bed that will be installed in the Fuel Manufacturing Facility (FMF) nuclear material vault at Argonne National Laboratory - West (ANL-W), and a Pu tracking database including data analysis capabilities. The prototype will be based on a minimal set of vault and package monitoring requirements as derived from applicable DOE documentation and guidelines, detailed in the requirements document, including DOE-STD-3013-96. The use of standardized requirements will aid individual sites in the selection of sensors that best suit their needs while the prototype IMSS, located at ANL-W, will be used as a test bed to compare and contrast sensor performance against a baseline integrated system (the IMSS), demonstrate system capabilities, evaluate potential technology gaps, and test new hardware and software designs using various storage configurations. With efforts currently underway to repackage and store a substantial quantity of plutonium and plutonium-bearing material within the DOE complex, this is an opportune time to undertake such a project. 4 refs.

  11. The Future of Asset Management for Human Space Exploration: Supply Classification and an Integrated Database

    Science.gov (United States)

    Shull, Sarah A.; Gralla, Erica L.; deWeck, Olivier L.; Shishko, Robert

    2006-01-01

    One of the major logistical challenges in human space exploration is asset management. This paper presents observations on the practice of asset management in support of human space flight to date and discusses a functional-based supply classification and a framework for an integrated database that could be used to improve asset management and logistics for human missions to the Moon, Mars and beyond.

  12. DKIST facility management system integration

    Science.gov (United States)

    White, Charles R.; Phelps, LeEllen

    2016-07-01

    The Daniel K. Inouye Solar Telescope (DKIST) Observatory is under construction at Haleakalā, Maui, Hawai'i. When complete, the DKIST will be the largest solar telescope in the world. The Facility Management System (FMS) is a subsystem of the high-level Facility Control System (FCS) and directly controls the Facility Thermal System (FTS). The FMS receives operational mode information from the FCS while making process data available to the FCS and includes hardware and software to integrate and control all aspects of the FTS including the Carousel Cooling System, the Telescope Chamber Environmental Control Systems, and the Temperature Monitoring System. In addition it will integrate the Power Energy Management System and several service systems such as heating, ventilation, and air conditioning (HVAC), the Domestic Water Distribution System, and the Vacuum System. All of these subsystems must operate in coordination to provide the best possible observing conditions and overall building management. Further, the FMS must actively react to varying weather conditions and observational requirements. The physical impact of the facility must not interfere with neighboring installations while operating in a very environmentally and culturally sensitive area. The FMS system will be comprised of five Programmable Automation Controllers (PACs). We present a pre-build overview of the functional plan to integrate all of the FMS subsystems.

  13. First Integrals and Integral Invariants of Relativistic Birkhoffian Systems

    Institute of Scientific and Technical Information of China (English)

    LUO Shao-Kai

    2003-01-01

    For a relativistic Birkhoffian system, the first integrals and the construction of integral invariants are studied. Firstly, the cyclic integrals and the generalized energy integral of the system are found by using the perfect differential method. Secondly, the equations of nonsimultaneous variation of the system are established by using the relation between the simultaneous variation and the nonsimultaneous variation. Thirdly, the relation between the first integral and the integral invariant of the system is studied, and it is proved that, using a first integral, we can construct an integral invariant of the system. Finally, the relation between relativistic Birkhoffian dynamics and relativistic Hamiltonian dynamics is discussed, and the first integrals and the integral invariants of the relativistic Hamiltonian system are obtained. Two examples are given to illustrate the application of the results.

  14. Development of database and searching system for tool grinding

    Directory of Open Access Journals (Sweden)

    J.Y. Chen

    2008-02-01

    Full Text Available Purpose: To achieve the goal of saving time on tool grinding and design, an efficient method of developing a data management and searching system for standard cutting tools is proposed in this study. Design/methodology/approach: First, tool grinding software with an open architecture was employed to design and plan grinding processes for seven types of tools. According to the characteristics of the tools (e.g., type, diameter, radius and so on), 4,802 tool records were established in the relational database. Then, SQL syntax was utilized to write the searching algorithms, and the human-machine interfaces of the searching system for the tool database were developed in C++ Builder. Findings: For grinding a two-flute square end mill, half of the time spent on tool design and on changing the production line for grinding other types of tools can be saved by means of our system. More specifically, the efficiency in terms of the approach and retract time was improved by up to 40%, and an improvement of approximately 10.6% in the overall machining time was achieved. Research limitations/implications: The tool database used in this study only includes some specific tools such as the square end mill. Step drills, taper tools, and special tools can also be taken into account in the database in future research. Practical implications: Most commercial tool grinding software is modular in design and uses tool shapes to construct the CAM interface, and some limitations on tool design are undesirable for customers. In contrast, employing not only the grinding processes to construct the grinding path of tools but also the searching system combined with the grinding software gives more flexibility for designing new tools. Originality/value: A novel tool database and searching system is presented for tool grinding. Using this system can save time and provide more convenience in designing tools and grinding. In other words, the
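
    A parameterized lookup of the kind the searching system performs is easy to express in SQL. The schema and values below are invented for illustration; the actual layout of the 4,802-record database is not given in the abstract:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE tools (
            id INTEGER PRIMARY KEY, type TEXT,
            diameter REAL, radius REAL, flutes INTEGER)""")
        conn.execute("INSERT INTO tools (type, diameter, radius, flutes) "
                     "VALUES ('square end mill', 10.0, 0.0, 2)")

        def search_tools(conn, tool_type, d_min, d_max):
            """Search by tool type and diameter range, as a grinding-process
            planner might before reusing an existing design."""
            return conn.execute(
                "SELECT id, type, diameter FROM tools "
                "WHERE type = ? AND diameter BETWEEN ? AND ?",
                (tool_type, d_min, d_max)).fetchall()

        print(search_tools(conn, "square end mill", 8.0, 12.0))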

  15. The Integrated Information System for Natural Disaster Mitigation

    Directory of Open Access Journals (Sweden)

    Junxiu Wu

    2007-08-01

    Full Text Available Supported by the World Bank, the Integrated Information System for Natural Disaster Mitigation (ISNDM), including the operational service system and network telecommunication system, has been in development for three years in the Center of Disaster Reduction, Chinese Academy of Sciences, based on the platform of the GIS software ArcView. It has five main modules: disaster background information, socio-economic information, disaster-induced factors database, disaster scenarios database, and disaster assessment. ISNDM has several significant functions, which include information collection, information processing, data storage, and information distribution. It is a simple but comprehensive demonstration system for our national center for natural disaster reduction.

  16. Automated granularity to integrate digital information: the "Antarctic Treaty Searchable Database" case study

    Directory of Open Access Journals (Sweden)

    Paul Arthur Berkman

    2006-06-01

    Full Text Available Access to information is necessary, but not sufficient in our digital era. The challenge is to objectively integrate digital resources based on user-defined objectives for the purpose of discovering information relationships that facilitate interpretations and decision making. The Antarctic Treaty Searchable Database (http://aspire.nvi.net), now in its sixth edition, provides an example of digital integration based on the automated generation of information granules that can be dynamically combined to reveal objective relationships within and between digital information resources. This case study further demonstrates that automated granularity and dynamic integration can be accomplished simply by utilizing the inherent structure of the digital information resources. Such information integration is relevant to library and archival programs that require long-term preservation of authentic digital resources.

  17. Development and implementation of a custom integrated database with dashboards to assist with hematopathology specimen triage and traffic

    Directory of Open Access Journals (Sweden)

    Elizabeth M Azzato

    2014-01-01

    Full Text Available Background: At some institutions, including ours, bone marrow aspirate specimen triage is complex, with hematopathology triage decisions that need to be communicated to downstream ancillary testing laboratories and many specimen aliquot transfers that are handled outside of the laboratory information system (LIS). We developed a custom integrated database with dashboards to facilitate and streamline this workflow. Methods: We developed user-specific dashboards that allow entry of specimen information by technologists in the hematology laboratory, with custom scripting to present relevant information to the hematopathology service and ancillary laboratories, and that allow communication of triage decisions from the hematopathology service to other laboratories. These dashboards are web-accessible on the local intranet and accessible from behind the hospital firewall on a computer or tablet. Secure user access and group rights ensure that relevant users can edit or access appropriate records. Results: After database and dashboard design, two-stage beta-testing and user education were performed, with the first stage focusing on technologist specimen entry and the second on downstream users. Commonly encountered issues and user functionality requests were resolved with database and dashboard redesign. Final implementation occurred within 6 months of initial design; users report improved triage efficiency and a reduced need for interlaboratory communications. Conclusions: We successfully developed and implemented a custom database with dashboards that facilitates and streamlines our hematopathology bone marrow aspirate triage. This provides an example of a possible solution to specimen communications and traffic that are outside the purview of a standard LIS.

  18. Low Quality Image Retrieval System For Generic Databases

    Directory of Open Access Journals (Sweden)

    W.A.D.N. Wijesekera

    2015-08-01

    Full Text Available Abstract Content-Based Image Retrieval (CBIR) systems have become the trend in image retrieval technologies, as index- or annotation-based retrieval algorithms give less efficient results under heavy image usage. These CBIR systems are mostly developed assuming the availability of high- or normal-quality images. The high prevalence of low-quality images in databases, due to capture equipment of varying quality and the different environmental conditions under which photos are captured, has opened up a new path in image retrieval research. Only a few algorithms have been developed for low-quality-image-based retrieval, and they have been applied only to specific domains. A low-quality-image-based retrieval algorithm that achieves a considerable accuracy level on a generic database across different industries remains an unsolved problem. Through this study an algorithm has been developed to address these gaps. Using images with inappropriate brightness and compressed images as low-quality images, the proposed algorithm is tested on a generic database that includes many categories of data instead of a specific domain. The new algorithm gives better precision and recall values when the images are clustered into the most appropriate number of clusters, which changes according to the quality level of the image. As the quality of the image decreases, the accuracy of the algorithm also tends to be reduced, leaving space for further improvement.

  19. The GEISA Spectroscopic Database System in its latest Edition

    Science.gov (United States)

    Jacquinet-Husson, N.; Crépeau, L.; Capelle, V.; Scott, N. A.; Armante, R.; Chédin, A.

    2009-04-01

    GEISA (Gestion et Etude des Informations Spectroscopiques Atmosphériques: Management and Study of Spectroscopic Information)[1] is a computer-accessible spectroscopic database system, designed to facilitate accurate forward planetary radiative transfer calculations using a line-by-line and layer-by-layer approach. It was initiated in 1976. Currently, GEISA is involved in activities related to the assessment of the capabilities of IASI (Infrared Atmospheric Sounding Interferometer, on board the METOP European satellite; http://earth-sciences.cnes.fr/IASI/) through the GEISA/IASI database[2] derived from GEISA. Since the Metop (http://www.eumetsat.int) launch (October 19th 2006), GEISA/IASI has been the reference spectroscopic database for the validation of level-1 IASI data, using the 4A radiative transfer model[3] (4A/LMD, http://ara.lmd.polytechnique.fr; 4A/OP co-developed by LMD and Noveltis with the support of CNES). GEISA is also involved in planetary research, e.g., modelling of Titan's atmosphere, in comparison with observations performed by Voyager (http://voyager.jpl.nasa.gov/) or by ground-based telescopes, and by the instruments on board the Cassini-Huygens mission (http://www.esa.int/SPECIALS/Cassini-Huygens/index.html). The updated 2008 edition of GEISA (GEISA-08), a system comprising three independent sub-databases devoted, respectively, to line transition parameters, infrared and ultraviolet/visible absorption cross-sections, and microphysical and optical properties of atmospheric aerosols, will be described. Spectroscopic parameter quality requirements will be discussed in the context of comparisons between observed or simulated spectra of the Earth's and other planetary atmospheres. GEISA is implemented on the CNES/CNRS Ether Products and Services Centre web site (http://ether.ipsl.jussieu.fr), where all archived spectroscopic data can be handled through general and user-friendly associated management software facilities. More than 350 researchers are

  20. The Systems Librarian: Re-Integrating the "Integrated" Library System

    Science.gov (United States)

    Breeding, Marshall

    2005-01-01

    This article discusses the current environment of the ILS (Integrated Library System) plus add-ons tailored for electronic content and its future. It suggests that while the ILS may be mature, the supplemental products are not and the linkages among them are even less so. On the technical front, the recent interest in Web services gives reason to…

  1. Adaptive Tuning Algorithm for Performance tuning of Database Management System

    CERN Document Server

    Rodd, S F

    2010-01-01

    Performance tuning of Database Management Systems (DBMS) is both complex and challenging, as it involves identifying and altering several key performance tuning parameters. The quality of tuning and the extent of performance enhancement achieved greatly depend on the skill and experience of the Database Administrator (DBA). The ability of neural networks to adapt to dynamically changing inputs, and to learn, makes them ideal candidates for the tuning task. In this paper, a novel tuning algorithm based on neural-network-estimated tuning parameters is presented. The key performance indicators are proactively monitored and fed as input to the neural network, and the trained network estimates suitable sizes for the buffer cache, shared pool and redo log buffer. The tuner alters these tuning parameters to the estimated values using a rate-change computing algorithm. The preliminary results show that the proposed method is effective in improving the query response tim...
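
    The estimation step can be sketched with a tiny regression network. Everything below is invented for illustration (the indicators, targets, and network size are not from the paper); it only shows the shape of the idea, i.e. mapping monitored indicators to a suggested buffer size:

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical training data: each row holds three monitored indicators
        # (e.g. cache hit ratio, wait ratio, load); the target is a size in MB.
        X = rng.random((200, 3))
        y = X @ np.array([[400.0], [150.0], [80.0]]) + 64.0
        y_scaled = y / 700.0  # scale targets roughly into [0, 1] for training

        # One-hidden-layer network trained by batch gradient descent.
        W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
        W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
        lr = 0.05
        for _ in range(2000):
            h = np.tanh(X @ W1 + b1)
            err = (h @ W2 + b2) - y_scaled
            gW2 = h.T @ err / len(X); gb2 = err.mean(0)
            gh = (err @ W2.T) * (1.0 - h**2)
            gW1 = X.T @ gh / len(X); gb1 = gh.mean(0)
            W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

        def estimate_buffer_mb(indicators):
            """Suggest a buffer-cache size (MB) from current indicators."""
            h = np.tanh(indicators @ W1 + b1)
            return ((h @ W2 + b2) * 700.0).item()

        print(estimate_buffer_mb(np.array([0.5, 0.2, 0.9])))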

  2. Integrating multiple genome annotation databases improves the interpretation of microarray gene expression data

    Directory of Open Access Journals (Sweden)

    Kennedy Breandan

    2010-01-01

    Full Text Available Abstract Background The Affymetrix GeneChip is a widely used gene expression profiling platform. Since the chips were originally designed, the genome databases and gene definitions have been considerably updated. Thus, more accurate interpretation of microarray data requires parallel updating of the specificity of GeneChip probes. We propose a new probe remapping protocol, using the zebrafish GeneChips as an example, by removing nonspecific probes, and grouping the probes into transcript level probe sets using an integrated zebrafish genome annotation. This genome annotation is based on combining transcript information from multiple databases. This new remapping protocol, especially the new genome annotation, is shown here to be an important factor in improving the interpretation of gene expression microarray data. Results Transcript data from the RefSeq, GenBank and Ensembl databases were downloaded from the UCSC genome browser, and integrated to generate a combined zebrafish genome annotation. Affymetrix probes were filtered and remapped according to the new annotation. The influence of transcript collection and gene definition methods was tested using two microarray data sets. Compared to remapping using a single database, this new remapping protocol results in up to 20% more probes being retained in the remapping, leading to approximately 1,000 more genes being detected. The differentially expressed gene lists are consequently increased by up to 30%. We are also able to detect up to three times more alternative splicing events. A small number of the bioinformatics predictions were confirmed using real-time PCR validation. Conclusions By combining gene definitions from multiple databases, it is possible to greatly increase the numbers of genes and splice variants that can be detected in microarray gene expression experiments.
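
    The core of the remapping protocol, dropping nonspecific probes and regrouping the rest by transcript, reduces to a few lines. The alignments below are invented placeholders for the real probe-to-annotation mapping:

        from collections import defaultdict

        # Hypothetical probe-to-transcript hits; in the actual protocol these
        # come from matching probe sequences against the merged annotation.
        alignments = [
            ("probe_1", ["ENSDART0001"]),                  # specific: kept
            ("probe_2", ["ENSDART0001", "ENSDART0042"]),   # ambiguous: removed
            ("probe_3", ["ENSDART0042"]),
        ]

        probe_sets = defaultdict(list)
        for probe, hits in alignments:
            if len(hits) == 1:              # keep only uniquely mapping probes
                probe_sets[hits[0]].append(probe)

        print(dict(probe_sets))  # transcript-level probe sets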

  3. DaVIE: Database for the Visualization and Integration of Epigenetic data.

    Directory of Open Access Journals (Sweden)

    Anthony Peter Fejes

    2014-09-01

    Full Text Available One of the challenges in the analysis of large data sets, particularly in a population-based setting, is the ability to perform comparisons across projects. This has to be done in such a way that the integrity of each individual project is maintained, while ensuring that the data are comparable across projects. These issues are beginning to be observed in human DNA methylation studies, as the Illumina 450k platform and next-generation sequencing-based assays grow in popularity and decrease in price. This increase in productivity is enabling new insights into epigenetics, but also requires the development of pipelines and software capable of handling the large volumes of data. The specific problems inherent in creating a platform for the storage, comparison, integration and visualization of DNA methylation data include data storage, algorithm efficiency and the ability to interpret the results to derive biological meaning from them. Databases provide a ready-made solution to these issues, but as yet no tools exist that leverage these advantages while providing an intuitive user interface for interpreting results in a genomic context. We have addressed this void by integrating a database to store DNA methylation data with a web interface to query and visualize the database, and a set of libraries for more complex analysis. The resulting platform is called DaVIE: Database for the Visualization and Integration of Epigenetic data. DaVIE can use data culled from a variety of sources, and the web interface includes the ability to group samples by sub-type, compare multiple projects and visualize genomic features in relation to sites of interest. We have used DaVIE to identify patterns of DNA methylation in specific projects and across different projects, identify outlier samples, and cross-check differentially methylated CpG sites identified in specific projects across large numbers of samples.

  4. A Novel Database Design for Student Information System

    Directory of Open Access Journals (Sweden)

    Noraziah Ahmad

    2010-01-01

    Full Text Available Problem statement: A new system was designed, in which necessary and alternative solutions were given for the different problems and the most feasible solution was selected. Approach: This study presents the database design for a student information system. Computerization of a system means changing it from a manual to a computer-based system, to automate the work and to provide efficiency, accuracy, timeliness, security and economy. Results: After undertaking an in-depth examination of Ayub Medical College's (AMC) existing manual student information system and analyzing its shortcomings, it was found necessary to remove its deficiencies and provide a suitable solution to the presently encountered problems. Conclusion: The proposed design can help the management to exercise effective and timely decision making.
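
    A minimal relational design of the kind the paper argues for might look as follows (the tables and columns are invented for illustration; the paper's actual schema is not reproduced in the abstract):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE student (
                reg_no   TEXT PRIMARY KEY,
                name     TEXT NOT NULL,
                admitted DATE
            );
            CREATE TABLE enrollment (
                reg_no  TEXT REFERENCES student(reg_no),
                course  TEXT,
                session TEXT,
                grade   TEXT,
                PRIMARY KEY (reg_no, course, session)
            );
        """)
        conn.execute("INSERT INTO student VALUES ('AMC-001', 'A. Khan', '2009-09-01')")
        conn.execute("INSERT INTO enrollment VALUES ('AMC-001', 'Anatomy', '2009-10', 'A')")
        for row in conn.execute(
                "SELECT s.name, e.course, e.grade "
                "FROM student s JOIN enrollment e USING (reg_no)"):
            print(row)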

  5. Technical report on implementation of reactor internal 3D modeling and visual database system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yeun Seung; Eom, Young Sam; Lee, Suk Hee; Ryu, Seung Hyun [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1996-06-01

    This report describes a prototype of a reactor internals 3D modeling and visual database (VDB) system for NSSS design quality improvement. To improve NSSS design quality, several integrated computer-aided engineering systems of nations with developed nuclear industries were studied, such as Mitsubishi's NUWINGS (Japan), AECL's CANDID (Canada) and Duke Power's PASCE (USA). On the basis of these studies, the strategy for an NSSS design improvement system was extracted and the detailed work scope was implemented as follows: 3D modeling of the reactor internals was implemented using a parametric solid modeler; a prototype system for design document computerization and its database was suggested; and a walk-through simulation integrated with the 3D models and the VDB was accomplished. The major effects of an NSSS design quality improvement system using 3D modeling and a VDB are plant design optimization by simulation, improved reliability through a single design database system, and engineering cost reduction through improved productivity and efficiency. For applying the VDB to the full scope of NSSS system design, 3D models of the reactor coolant system and of the nuclear fuel assembly and fuel rod are attached as appendices. 2 tabs., 31 figs., 7 refs. (Author)

  6. Implementation of integrated management system

    Energy Technology Data Exchange (ETDEWEB)

    Gaspar Junior, Joao Carlos A.; Fonseca, Victor Zidan da [Industrias Nucleares do Brasil (INB-RJ) Resende, RJ (Brazil)]. E-mail: joaojunior@inb.gov.br; victorfonseca@inb.gov.br

    2007-07-01

    Nowadays there exist quality assurance, environmental, and occupational health and safety management standards such as ISO 9001, ISO 14001 and OHSAS 18001, and others that may yet be created. When implemented and certified, these standards guarantee a record system, quality assurance, document control, operational control, definition of responsibilities, training, emergency preparedness and response, monitoring, internal audits, corrective action, continual improvement, prevention of pollution, written procedures, cost reduction, impact assessment, risk assessment, and compliance with standards, decrees and legal requirements of municipal, state, federal and local scope. When applied in isolation, these procedures and systems produce multiple management systems and bureaucracy. An integrated management system reduces bureaucracy, the excess, storage and conflict of documents, and eases the implementation of other standards in the future. The Integrated Management System (IMS) will be implemented in 2007. INB created a management group for the implementation; this group decides on planning, work, policy and communication. Legal requirements were surveyed, and internal audits, pre-audits and audits were carried out. INB is partially in accordance with the ISO 14001 and OHSAS 18001 standards, but very soon it will be totally in accordance with these norms. Many studies and works were contracted to deal with the legal requirements. This work intends to show the implementation process of ISO 14001, OHSAS 18001 and the Integrated Management System at INB. (author)

  7. G-InforBIO: integrated system for microbial genomics

    Directory of Open Access Journals (Sweden)

    Abe Takashi

    2006-08-01

    Full Text Available Abstract Background Genome databases contain diverse kinds of information, including gene annotations and nucleotide and amino acid sequences. It is not easy to integrate such information for genomic study. There are few tools for integrated analyses of genomic data; therefore, we developed software that enables users to handle, manipulate, and analyze genome data with a variety of sequence analysis programs. Results The G-InforBIO system is a novel tool for genome data management and sequence analysis. The system can import genome data encoded as eXtensible Markup Language documents or as formatted text documents, including annotations and sequences, from the DNA Data Bank of Japan and GenBank flat files. The genome database is constructed automatically after importing, and the database can be exported as documents formatted with eXtensible Markup Language or as tab-delimited text. Users can retrieve data from the database by keyword searches, edit annotation data of genes, and process data with G-InforBIO. In addition, information in the G-InforBIO database can be analyzed seamlessly with nine different software programs, including programs for clustering and homology analyses. Conclusion The G-InforBIO system simplifies genome analyses by integrating several available software programs to allow efficient handling and manipulation of genome data. G-InforBIO is freely available from the download site.

  8. Human Ageing Genomic Resources: integrated databases and tools for the biology and genetics of ageing.

    Science.gov (United States)

    Tacutu, Robi; Craig, Thomas; Budovsky, Arie; Wuttke, Daniel; Lehmann, Gilad; Taranukha, Dmitri; Costa, Joana; Fraifeld, Vadim E; de Magalhães, João Pedro

    2013-01-01

    The Human Ageing Genomic Resources (HAGR, http://genomics.senescence.info) is a freely available online collection of research databases and tools for the biology and genetics of ageing. HAGR features now several databases with high-quality manually curated data: (i) GenAge, a database of genes associated with ageing in humans and model organisms; (ii) AnAge, an extensive collection of longevity records and complementary traits for >4000 vertebrate species; and (iii) GenDR, a newly incorporated database, containing both gene mutations that interfere with dietary restriction-mediated lifespan extension and consistent gene expression changes induced by dietary restriction. Since its creation about 10 years ago, major efforts have been undertaken to maintain the quality of data in HAGR, while further continuing to develop, improve and extend it. This article briefly describes the content of HAGR and details the major updates since its previous publications, in terms of both structure and content. The completely redesigned interface, more intuitive and more integrative of HAGR resources, is also presented. Altogether, we hope that through its improvements, the current version of HAGR will continue to provide users with the most comprehensive and accessible resources available today in the field of biogerontology.

  10. LegumeIP: an integrative database for comparative genomics and transcriptomics of model legumes.

    Science.gov (United States)

    Li, Jun; Dai, Xinbin; Liu, Tingsong; Zhao, Patrick Xuechun

    2012-01-01

    Legumes play a vital role in maintaining the nitrogen cycle of the biosphere. They conduct symbiotic nitrogen fixation through endosymbiotic relationships with bacteria in root nodules. However, this and other characteristics of legumes, including mycorrhization, compound leaf development and profuse secondary metabolism, are absent in the typical model plant Arabidopsis thaliana. We present LegumeIP (http://plantgrn.noble.org/LegumeIP/), an integrative database for comparative genomics and transcriptomics of model legumes, for studying gene function and genome evolution in legumes. LegumeIP compiles gene and gene family information, syntenic and phylogenetic context and tissue-specific transcriptomic profiles. The database holds the genomic sequences of three model legumes, Medicago truncatula, Glycine max and Lotus japonicus, plus two reference plant species, A. thaliana and Populus trichocarpa, with annotations based on UniProt, InterProScan, Gene Ontology and the Kyoto Encyclopedia of Genes and Genomes databases. LegumeIP also contains large-scale microarray and RNA-Seq-based gene expression data. Our new database is capable of systematic synteny analysis across M. truncatula, G. max, L. japonicus and A. thaliana, as well as construction and phylogenetic analysis of gene families across the five hosted species. Finally, LegumeIP provides comprehensive search and visualization tools that enable flexible queries based on gene annotation, gene family, synteny and relative gene expression.

  11. Development of an integrative database with 499 novel microsatellite markers for Macaca fascicularis

    Directory of Open Access Journals (Sweden)

    Higashino Atsunori

    2009-06-01

    Full Text Available Abstract Background Cynomolgus macaques (Macaca fascicularis) are a valuable resource for linkage studies of genetic disorders, but their microsatellite markers are not sufficient. In genetic studies, a prerequisite for mapping genes is the development of a genome-wide set of microsatellite markers in target organisms. A whole genome sequence and its annotation also facilitate identification of markers for causative mutations. The aim of this study is to establish hundreds of microsatellite markers and to develop an integrative cynomolgus macaque genome database with a variety of datasets, including marker and gene information, that will be useful for further genetic analyses in this species. Results We investigated the level of polymorphism in cynomolgus monkeys for 671 microsatellite markers that are covered by our established Bacterial Artificial Chromosome (BAC) clones. Four hundred and ninety-nine (74.4%) of the markers were found to be polymorphic using standard PCR analysis. The average number of alleles and average expected heterozygosity at these polymorphic loci in ten cynomolgus macaques were 8.20 and 0.75, respectively. Conclusion BAC clones and novel microsatellite markers were assigned to the rhesus genome sequence and linked with our cynomolgus macaque cDNA database (QFbase). Our novel microsatellite marker set and genomic database will be valuable integrative resources in analyzing genetic disorders in cynomolgus macaques.

  13. Integrating protein structures and precomputed genealogies in the Magnum database: Examples with cellular retinoid binding proteins

    Science.gov (United States)

    Bradley, Michael E; Benner, Steven A

    2006-01-01

    Background When accurate models for the divergent evolution of protein sequences are integrated with complementary biological information, such as folded protein structures, analyses of the combined data often lead to new hypotheses about molecular physiology. This represents an excellent example of how bioinformatics can be used to guide experimental research. However, progress in this direction has been slowed by the lack of a publicly available resource suitable for general use. Results The precomputed Magnum database offers a solution to this problem for ca. 1,800 full-length protein families with at least one crystal structure. The Magnum deliverables include 1) multiple sequence alignments, 2) mapping of alignment sites to crystal structure sites, 3) phylogenetic trees, 4) inferred ancestral sequences at internal tree nodes, and 5) amino acid replacements along tree branches. Comprehensive evaluations revealed that the automated procedures used to construct Magnum produced accurate models of how proteins divergently evolve, or genealogies, and correctly integrated these with the structural data. To demonstrate Magnum's capabilities, we asked for amino acid replacements requiring three nucleotide substitutions, located at internal protein structure sites, and occurring on short phylogenetic tree branches. In the cellular retinoid binding protein family a site that potentially modulates ligand binding affinity was discovered. Recruitment of cellular retinol binding protein to function as a lens crystallin in the diurnal gecko afforded another opportunity to showcase the predictive value of a browsable database containing branch replacement patterns integrated with protein structures. Conclusion We integrated two areas of protein science, evolution and structure, on a large scale and created a precomputed database, known as Magnum, which is the first freely available resource of its kind. Magnum provides evolutionary and structural bioinformatics resources that

  14. Geometric transitions and integrable systems

    NARCIS (Netherlands)

    Diaconescu, D.-E.; Dijkgraaf, R.H.; Donagi, R.; Hofman, C.; Pantev, T.

    2006-01-01

    We consider B-model large N duality for a new class of noncompact Calabi-Yau spaces modeled on the neighborhood of a ruled surface in a Calabi-Yau threefold. The closed string side of the transition is governed at genus zero by an A(1) Hitchin integrable system on a genus g Riemann surface Sigma. Th

  15. Human-System task integration

    NARCIS (Netherlands)

    Schraagen, J.M.C.

    2005-01-01

    The Dutch Ministry of Defence research programme Human-System Task Integration aims at acquiring knowledge for the optimal cooperation between human and computer, under the following constraints: freedom of choice in decisions to automate and multiple, dynamic task distributions. This paper describe

  16. Integrated Systems Health Management for Intelligent Systems

    Science.gov (United States)

    Figueroa, Fernando; Melcher, Kevin

    2011-01-01

    The implementation of an integrated system health management (ISHM) capability is fundamentally linked to the management of data, information, and knowledge (DIaK) with the purposeful objective of determining the health of a system. It is akin to having a team of experts who are all individually and collectively observing and analyzing a complex system, and communicating effectively with each other in order to arrive at an accurate and reliable assessment of its health. In this paper, concepts, procedures, and approaches are presented as a foundation for implementing an intelligent systems-relevant ISHM capability. The capability stresses integration of DIaK from all elements of a system. Both ground-based (remote) and on-board ISHM capabilities are compared and contrasted. The information presented is the result of many years of research, development, and maturation of technologies, and of prototype implementations in operational systems.

  17. Jacobi fields of completely integrable Hamiltonian systems

    Energy Technology Data Exchange (ETDEWEB)

    Giachetta, G.; Mangiarotti, L.; Sardanashvily, G

    2003-03-31

    We show that Jacobi fields of a completely integrable Hamiltonian system of m degrees of freedom make up an extended completely integrable system of 2m degrees of freedom, where m additional first integrals characterize a relative motion.

  18. Semantic Integration of Information Systems

    Directory of Open Access Journals (Sweden)

    Anna Lisa Guido

    2010-01-01

    Full Text Available In previous years an Information System managed only information inside the company; today a company may search and manage information of other companies. In this scenario the problem of communication between Information Systems is of the highest importance. Up to the present moment, several types of integration have been used, but all were founded on an agreement (about the data to share and the exchange format) between the interested Information Systems. Today, thanks to new technologies, it is possible for an Information System to use data of another Information System without a previous agreement. The problem is that, often, the Information Systems may refer to the same data but with different names. In this paper we present a methodology that, using ontology, and thus the intrinsic semantics of each datum of the Information System, allows the creation of a global ontology useful to enable semantic communication between Information Systems.

  19. Advancing Exposure Science through Chemical Data Curation and Integration in the Comparative Toxicogenomics Database

    Science.gov (United States)

    Grondin, Cynthia J.; Davis, Allan Peter; Wiegers, Thomas C.; King, Benjamin L.; Wiegers, Jolene A.; Reif, David M.; Hoppin, Jane A.; Mattingly, Carolyn J.

    2016-01-01

    Background: Exposure science studies the interactions and outcomes between environmental stressors and human or ecological receptors. To augment its role in understanding human health and the exposome, we aimed to centralize and integrate exposure science data into the broader biological framework of the Comparative Toxicogenomics Database (CTD), a public resource that promotes understanding of environmental chemicals and their effects on human health. Objectives: We integrated exposure data within the CTD to provide a centralized, freely available resource that facilitates identification of connections between real-world exposures, chemicals, genes/proteins, diseases, biological processes, and molecular pathways. Methods: We developed a manual curation paradigm that captures exposure data from the scientific literature using controlled vocabularies and free text within the context of four primary exposure concepts: stressor, receptor, exposure event, and exposure outcome. Using data from the Agricultural Health Study, we have illustrated the benefits of both centralization and integration of exposure information with CTD core data. Results: We have described our curation process, demonstrated how exposure data can be accessed and analyzed in the CTD, and shown how this integration provides a broad biological context for exposure data to promote mechanistic understanding of environmental influences on human health. Conclusions: Curation and integration of exposure data within the CTD provides researchers with new opportunities to correlate exposures with human health outcomes, to identify underlying potential molecular mechanisms, and to improve understanding about the exposome. Citation: Grondin CJ, Davis AP, Wiegers TC, King BL, Wiegers JA, Reif DM, Hoppin JA, Mattingly CJ. 2016. Advancing exposure science through chemical data curation and integration in the Comparative Toxicogenomics Database. Environ Health Perspect 124:1592–1599; http://dx.doi.org/10

  20. Designing an efficient electroencephalography system using database with embedded images management approach.

    Science.gov (United States)

    Yu, Tzu-Yi; Ho, Hsu-Hua

    2014-01-01

    Many diseases associated with mental deterioration among aged patients can be effectively treated using neurological treatments. Research shows that electroencephalography (EEG) can be used as an independent prognostic indicator of morbidity and mortality. Unfortunately, EEG data are typically inaccessible to modern software. It is therefore important to design a comprehensive approach to integrate EEG results into institutional medical systems. A customized EEG system utilizing a database management approach was designed to bridge the gap between the commercial EEG software and hospital data management platforms. Practical and useful medical findings are discussed based on statistical analysis of large amounts of EEG data. © 2013 Published by Elsevier Ltd.

  1. 78 FR 62616 - Integrated System Power Rates

    Science.gov (United States)

    2013-10-22

    ... Southwestern Power Administration Integrated System Power Rates AGENCY: Southwestern Power Administration, DOE... Integrated System pursuant to the Integrated System Rate Schedules to supersede the existing rate schedules... into effect on an interim basis, increases the power rates for the Integrated System pursuant to...

  2. A Replication Protocol for Real Time database System

    Directory of Open Access Journals (Sweden)

    Ashish Srivastava

    2012-06-01

    Full Text Available Database replication protocols for real-time systems based on a certification approach are usually the best ones for achieving good performance. The weak voting approach achieves a slightly longer transaction completion time, but with a lower abortion rate. So, both techniques can be considered the best ones for replication when performance is a must, and both of them take advantage of the properties provided by atomic broadcast. We propose a new database replication strategy that shares many characteristics with such previous strategies. It is also based on totally ordering the application of writesets, but uses only an unordered reliable broadcast instead of an atomic broadcast. Additionally, the writesets of transactions that are aborted in the final validation phase, or in the verification phase incorporated into the new system, are not broadcast in our strategy. Thus, this new approach reduces the communication traffic and also achieves a good transaction response time (even shorter than that of previous strategies with only a validation phase, in some system configurations).
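    As a rough illustration of the certification step that such protocols share, here is a minimal single-process sketch: writesets are applied in a total order, and a transaction commits only if no concurrently committed transaction wrote an overlapping key set. The class and field names are invented, and the broadcast machinery (the part the paper actually changes) is omitted entirely.

```python
# Minimal sketch of certification-based replication, assuming writesets
# already arrive in a total order. Illustrative only, not the paper's protocol.

class Replica:
    def __init__(self):
        self.committed = []          # list of (commit_seq, writeset_keys)
        self.seq = 0                 # current commit sequence number

    def certify(self, start_seq, writeset_keys):
        """Abort if any transaction committed after start_seq wrote the same keys."""
        for commit_seq, keys in self.committed:
            if commit_seq > start_seq and keys & writeset_keys:
                return False
        return True

    def deliver(self, start_seq, writeset_keys):
        keys = set(writeset_keys)
        if self.certify(start_seq, keys):
            self.seq += 1
            self.committed.append((self.seq, keys))
            return "commit"
        return "abort"

r = Replica()
print(r.deliver(0, {"x"}))   # commit
print(r.deliver(0, {"x"}))   # abort: conflicts with the concurrent commit above
print(r.deliver(1, {"y"}))   # commit
```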

  3. Compilation of 3D soil texture dataset applying indicator kriging method, integrating soil- and agrogeological databases

    Directory of Open Access Journals (Sweden)

    Zsófia Bakacsi

    2012-06-01

    Full Text Available In the frame of the WateRisk Project (2009-2011) a hydrological model was developed for flood risk analysis, demanding the spatial distribution of soil physical properties. 3D, regional-scale spatial datasets were elaborated for pilot areas, based on the thematic harmonization, horizontal and vertical fitting, and interpolation of soil physical parameters originating from two different databases. The profile dataset of the Digital Kreybig Soil Information System is owned by the Research Institute for Soil Science and Agricultural Chemistry; the Shallow Boring Database is managed by the Hungarian Geological Institute. The resultant databases describe the physical properties by texture classes for each of the soil layers (in 10 cm steps down to 1 m depth) and geological formations (in 50 cm steps below 1 m down to the groundwater table depth).

  4. Integrated Information System for reserving rooms in Hotels

    Directory of Open Access Journals (Sweden)

    Dr. Safarini Osama

    2011-10-01

    Full Text Available It is very important to build new, modern, flexible, dynamic, effective, compatible and reusable information systems, including databases, to help handle different processes and serve the many parties around them. One such process is managing room reservations for groups and individuals, to meet essential needs in hotels and to be integrated with the accounting system. This system provides many tools that can be used in decision making.

  5. Integrated Large-Scale Environmental Information Systems: A Short Survey

    OpenAIRE

    Kolios, Stavros; Maurodimou, Olga; Stylios, Chrysostomos

    2013-01-01

    The installation and operation of instrument/sensor networks has great importance in monitoring the physical environment from local to global scale. Nowadays, such networks comprise vital parts of integrated information systems that are called Environmental Information Systems (EIS). Such systems provide real time monitoring, forecasts and interesting conclusions extracted from the collected data sets that are stored in huge databases. T...

  6. Record Linkage system in a complex relational database - MINPHIS example.

    Science.gov (United States)

    Achimugu, Philip; Soriyan, Abimbola; Oluwagbemi, Oluwatolani; Ajayi, Anu

    2010-01-01

    In the health sector, record linkage is of paramount importance, as clinical data can be distributed across different data repositories, leading to duplication. Record linkage is the process of tracking duplicate records that actually refer to the same entity. This paper proposes a fast and efficient method for duplicate detection within the healthcare domain. The first step is to standardize the data in the database using SQL. The second is to match similar pair records, and the third is to organize records into match and non-match status. The system was developed in Unified Modeling Language and Java. In a batch analysis of 31,177 "supposedly" distinct identities, our method isolates 25,117 true unique records and 6,060 suspected duplicates, using a healthcare system called MINPHIS (Made in Nigeria Primary Healthcare Information System) as the test bed.
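    A minimal sketch of those three steps on toy records follows. MINPHIS itself standardizes in SQL and is implemented in Java, so everything below is illustrative: the field names, the `similar` scoring function, and the 0.6 threshold are all assumptions.

```python
# Sketch of the three-step duplicate detection described above.
import itertools

def standardize(rec):
    # Step 1: normalization, normally done in SQL (trim, case-fold).
    return {k: str(v).strip().lower() for k, v in rec.items()}

def similar(a, b):
    # Step 2: crude field-agreement score; real systems use phonetic
    # codes and edit distances.
    return sum(a[k] == b[k] for k in a) / len(a)

def link(records, threshold=0.6):
    # Step 3: organize pairs into match / non-match status.
    recs = [standardize(r) for r in records]
    return [(i, j)
            for (i, a), (j, b) in itertools.combinations(enumerate(recs), 2)
            if similar(a, b) >= threshold]

patients = [
    {"name": "Ada Obi", "dob": "1980-02-01", "ward": "A"},
    {"name": "ADA OBI ", "dob": "1980-02-01", "ward": "B"},
    {"name": "Chinedu Eze", "dob": "1975-11-23", "ward": "A"},
]
print(link(patients))   # [(0, 1)] -- a suspected duplicate pair
```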

  7. microPIR: an integrated database of microRNA target sites within human promoter sequences.

    Directory of Open Access Journals (Sweden)

    Jittima Piriyapongsa

    Full Text Available BACKGROUND: microRNAs are generally understood to regulate gene expression through binding to target sequences within 3'-UTRs of mRNAs. Therefore, computational prediction of target sites is usually restricted to these gene regions. Recent experimental studies though have suggested that microRNAs may alternatively modulate gene expression by interacting with promoters. A database of potential microRNA target sites in promoters would stimulate research in this field leading to more understanding of complex microRNA regulatory mechanism. METHODOLOGY: We developed a database hosting predicted microRNA target sites located within human promoter sequences and their associated genomic features, called microPIR (microRNA-Promoter Interaction Resource). microRNA seed sequences were used to identify perfect complementary matching sequences in the human promoters and the potential target sites were predicted using the RNAhybrid program. >15 million target sites were identified which are located within 5000 bp upstream of all human genes, on both sense and antisense strands. The experimentally confirmed argonaute (AGO) binding sites and EST expression data including the sequence conservation across vertebrate species of each predicted target are presented for researchers to appraise the quality of predicted target sites. The microPIR database integrates various annotated genomic sequence databases, e.g. repetitive elements, transcription factor binding sites, CpG islands, and SNPs, offering users the facility to extensively explore relationships among target sites and other genomic features. Furthermore, functional information of target genes including gene ontologies, KEGG pathways, and OMIM associations are provided. The built-in genome browser of microPIR provides a comprehensive view of multidimensional genomic data. Finally, microPIR incorporates a PCR primer design module to facilitate experimental validation. CONCLUSIONS: The proposed micro
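    The core matching step in the Methodology (a perfect complement of the 7-nucleotide miRNA seed inside a promoter) can be sketched in a few lines. microPIR additionally scores candidates with RNAhybrid and annotates them, none of which is reproduced here; the promoter string is made up, with let-7a used only as a familiar example miRNA.

```python
# Illustrative perfect seed-complement search on one promoter strand.
COMP = str.maketrans("ACGU", "UGCA")

def seed_sites(mirna, promoter_dna):
    """Find perfect matches to the reverse complement of the miRNA seed
    (positions 2-8 of the mature miRNA) in a promoter sequence."""
    seed = mirna[1:8]                              # 7-mer seed
    target = seed.translate(COMP)[::-1]            # reverse complement (RNA)
    target_dna = target.replace("U", "T")          # ...expressed as DNA
    return [i for i in range(len(promoter_dna) - 6)
            if promoter_dna[i:i + 7] == target_dna]

mirna = "UGAGGUAGUAGGUUGUAUAGUU"                   # let-7a, as an example
promoter = "CCTACCTCATTGAGGTAGGATACCTCA"           # invented sequence
print(seed_sites(mirna, promoter))                 # [1]
```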

  8. Characteristic Classes and Integrable Systems

    CERN Document Server

    Levin, A; Smirnov, A; Zotov, A

    2010-01-01

    The classical Calogero-Moser (CM) system related to a simple Lie group $G$ can be described as the Hitchin system coming from a topologically trivial Higgs $G$-bundle. We consider topologically non-trivial Higgs bundles and construct corresponding integrable systems. We call them the modified Calogero-Moser systems (MCM systems). Their phase space has the same dimension as the phase space of the standard CM systems with spin, but with fewer particles and a greater number of spin variables. The topology of the underlying holomorphic bundles is defined by their characteristic classes. Such bundles occur if $G$ has a non-trivial center, i.e. the classical simply-connected groups, $E_6$ and $E_7$. Starting with these bundles we construct new integrable systems, their Lax operators and quadratic Hamiltonians, and define the phase spaces and the Poisson structure using dynamical r-matrices. To describe the systems we construct a special basis in the Lie algebras that generalizes the basis of t'Hooft matrices for sl(N)...

  9. First Integrals and Integral Invariants of Relativistic Birkhoffian Systems

    Institute of Scientific and Technical Information of China (English)

    LUO Shao-Kai

    2003-01-01

    For a relativistic Birkhoffian system, the first integrals and the construction of integral invariants are studied. Firstly, the cyclic integrals and the generalized energy integral of the system are found by using the perfect differential method. Secondly, the equations of nonsimultaneous variation of the system are established by using the relation between the simultaneous variation and the nonsimultaneous variation. Thirdly, the relation between the first integral and the integral invariant of the system is studied, and it is proved that, using a first integral, we can construct an integral invariant of the system. Finally, the relation between the relativistic Birkhoffian dynamics and the relativistic Hamiltonian dynamics is discussed, and the first integrals and the integral invariants of the relativistic Hamiltonian system are obtained. Two examples are given to illustrate the application of the results.

  10. Building scars for integrable systems

    CERN Document Server

    Baldo, Marcello

    1995-01-01

    It is shown, by means of a simple specific example, that for integrable systems it is possible to build up approximate eigenfunctions, called {\it asymptotic eigenfunctions}, which are concentrated as closely as one wants around a classical trajectory and have a lifetime as long as one wants. These states are directly related to the presence of shell structures in the quantal spectrum of the system. It is argued that the result can be extended to classically chaotic systems, at least in the asymptotic regime.

  11. Integrated Database And Knowledge Base For Genomic Prospective Cohort Study In Tohoku Medical Megabank Toward Personalized Prevention And Medicine.

    Science.gov (United States)

    Ogishima, Soichi; Takai, Takako; Shimokawa, Kazuro; Nagaie, Satoshi; Tanaka, Hiroshi; Nakaya, Jun

    2015-01-01

    The Tohoku Medical Megabank project is a national project for revitalization of the disaster area in the Tohoku region hit by the Great East Japan Earthquake, and has conducted a large-scale prospective genome-cohort study. Along with the prospective genome-cohort study, we have developed an integrated database and knowledge base which will be a key database for realizing personalized prevention and medicine.

  12. GeNNet: an integrated platform for unifying scientific workflows and graph databases for transcriptome data analysis.

    Science.gov (United States)

    Costa, Raquel L; Gadelha, Luiz; Ribeiro-Alves, Marcelo; Porto, Fábio

    2017-01-01

    There are many steps in analyzing transcriptome data, from the acquisition of raw data to the selection of a subset of representative genes that explain a scientific hypothesis. The data produced can be represented as networks of interactions among genes and these may additionally be integrated with other biological databases, such as Protein-Protein Interactions, transcription factors and gene annotation. However, the results of these analyses remain fragmented, imposing difficulties, either for posterior inspection of results, or for meta-analysis by the incorporation of new related data. Integrating databases and tools into scientific workflows, orchestrating their execution, and managing the resulting data and its respective metadata are challenging tasks. Additionally, a great amount of effort is equally required to run in-silico experiments to structure and compose the information as needed for analysis. Different programs may need to be applied and different files are produced during the experiment cycle. In this context, the availability of a platform supporting experiment execution is paramount. We present GeNNet, an integrated transcriptome analysis platform that unifies scientific workflows with graph databases for selecting relevant genes according to the evaluated biological systems. It includes GeNNet-Wf, a scientific workflow that pre-loads biological data, pre-processes raw microarray data and conducts a series of analyses including normalization, differential expression inference, clusterization and gene set enrichment analysis. A user-friendly web interface, GeNNet-Web, allows for setting parameters, executing, and visualizing the results of GeNNet-Wf executions. To demonstrate the features of GeNNet, we performed case studies with data retrieved from GEO, particularly using a single-factor experiment in different analysis scenarios. As a result, we obtained differentially expressed genes for which biological functions were analyzed. The results

  13. GeNNet: an integrated platform for unifying scientific workflows and graph databases for transcriptome data analysis

    Science.gov (United States)

    Gadelha, Luiz; Ribeiro-Alves, Marcelo; Porto, Fábio

    2017-01-01

    There are many steps in analyzing transcriptome data, from the acquisition of raw data to the selection of a subset of representative genes that explain a scientific hypothesis. The data produced can be represented as networks of interactions among genes and these may additionally be integrated with other biological databases, such as Protein-Protein Interactions, transcription factors and gene annotation. However, the results of these analyses remain fragmented, imposing difficulties, either for posterior inspection of results, or for meta-analysis by the incorporation of new related data. Integrating databases and tools into scientific workflows, orchestrating their execution, and managing the resulting data and its respective metadata are challenging tasks. Additionally, a great amount of effort is equally required to run in-silico experiments to structure and compose the information as needed for analysis. Different programs may need to be applied and different files are produced during the experiment cycle. In this context, the availability of a platform supporting experiment execution is paramount. We present GeNNet, an integrated transcriptome analysis platform that unifies scientific workflows with graph databases for selecting relevant genes according to the evaluated biological systems. It includes GeNNet-Wf, a scientific workflow that pre-loads biological data, pre-processes raw microarray data and conducts a series of analyses including normalization, differential expression inference, clusterization and gene set enrichment analysis. A user-friendly web interface, GeNNet-Web, allows for setting parameters, executing, and visualizing the results of GeNNet-Wf executions. To demonstrate the features of GeNNet, we performed case studies with data retrieved from GEO, particularly using a single-factor experiment in different analysis scenarios. As a result, we obtained differentially expressed genes for which biological functions were analyzed. The results

  14. Development of materials database system for cae system of heat treatment based on data mining technology

    Institute of Scientific and Technical Information of China (English)

    GU Qiang; ZHONG Rui; JU Dong-ying

    2006-01-01

    Computer simulation for materials processing needs a huge database containing a great deal of various physical properties of materials. In order to employ the large amount of data on materials heat treatment accumulated over past years, it is important to develop an intelligent database system. Based on data mining technology for data analysis, an intelligent database web tool system for computer simulation of the heat treatment process, named IndBASEweb-HT, was built. The architecture and algorithms of this system as well as its application are introduced.

  15. Insight: An ontology-based integrated database and analysis platform for epilepsy self-management research.

    Science.gov (United States)

    Sahoo, Satya S; Ramesh, Priya; Welter, Elisabeth; Bukach, Ashley; Valdez, Joshua; Tatsuoka, Curtis; Bamps, Yvan; Stoll, Shelley; Jobst, Barbara C; Sajatovic, Martha

    2016-10-01

    We present Insight as an integrated database and analysis platform for epilepsy self-management research as part of the national Managing Epilepsy Well Network. Insight is the only available informatics platform for accessing and analyzing integrated data from multiple epilepsy self-management research studies with several new data management features and user-friendly functionalities. The features of Insight include, (1) use of Common Data Elements defined by members of the research community and an epilepsy domain ontology for data integration and querying, (2) visualization tools to support real time exploration of data distribution across research studies, and (3) an interactive visual query interface for provenance-enabled research cohort identification. The Insight platform contains data from five completed epilepsy self-management research studies covering various categories of data, including depression, quality of life, seizure frequency, and socioeconomic information. The data represents over 400 participants with 7552 data points. The Insight data exploration and cohort identification query interface has been developed using Ruby on Rails Web technology and open source Web Ontology Language Application Programming Interface to support ontology-based reasoning. We have developed an efficient ontology management module that automatically updates the ontology mappings each time a new version of the Epilepsy and Seizure Ontology is released. The Insight platform features a Role-based Access Control module to authenticate and effectively manage user access to different research studies. User access to Insight is managed by the Managing Epilepsy Well Network database steering committee consisting of representatives of all current collaborating centers of the Managing Epilepsy Well Network. New research studies are being continuously added to the Insight database and the size as well as the unique coverage of the dataset allows investigators to conduct

  16. Generic Natural Systems Evaluation - Thermodynamic Database Development and Data Management

    Energy Technology Data Exchange (ETDEWEB)

    Wolery, T W; Sutton, M

    2011-09-19

    , meaning that they use a large body of thermodynamic data, generally from a supporting database file, to sort out the various important reactions from a wide spectrum of possibilities, given specified inputs. Usually codes of this kind are used to construct models of initial aqueous solutions that represent initial conditions for some process, although sometimes these calculations also represent a desired end point. Such a calculation might be used to determine the major chemical species of a dissolved component, the solubility of a mineral or mineral-like solid, or to quantify deviation from equilibrium in the form of saturation indices. Reactive transport codes such as TOUGHREACT and NUFT generally require the user to determine which chemical species and reactions are important, and to provide the requisite set of information including thermodynamic data in an input file. Usually this information is abstracted from the output of a geochemical modeling code and its supporting thermodynamic data file. The Yucca Mountain Project (YMP) developed two qualified thermodynamic databases to model geochemical processes, including ones involving repository components such as spent fuel. The first of the two (BSC, 2007a) was for systems containing dilute aqueous solutions only, the other (BSC, 2007b) for systems involving concentrated aqueous solutions and incorporating a model for such based on Pitzer's (1991) equations. A 25 C-only database with similarities to the latter was also developed for the Waste Isolation Pilot Plant (WIPP, cf. Xiong, 2005). The NAGRA/PSI database (Hummel et al., 2002) was developed to support repository studies in Europe. The YMP databases are often used in non-repository studies, including studies of geothermal systems (e.g., Wolery and Carroll, 2010) and CO2 sequestration (e.g., Aines et al., 2011).
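    One of the quantities mentioned above, the saturation index reported by such geochemical modeling codes, is simple to compute once activities and an equilibrium constant are in hand: SI = log10(IAP/K). The sketch below is purely illustrative; the activities are invented, and the log K for calcite is the commonly tabulated approximate 25 °C value, not a number from any qualified database.

```python
# Illustrative saturation index: SI = log10(IAP / K).
# SI > 0 means oversaturated, SI < 0 undersaturated, SI = 0 at equilibrium.
import math

def saturation_index(ion_activity_product, k_eq):
    return math.log10(ion_activity_product / k_eq)

# Calcite, CaCO3 <-> Ca++ + CO3--: IAP = a(Ca++) * a(CO3--).
iap = (10 ** -3.2) * (10 ** -5.1)   # hypothetical ion activities
k_calcite = 10 ** -8.48             # approximate log K at 25 C
print(f"SI(calcite) = {saturation_index(iap, k_calcite):+.2f}")
```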

  17. PhID: an open-access integrated pharmacology interactions database for drugs, targets, diseases, genes, side-effects and pathways.

    Science.gov (United States)

    Deng, Zhe; Tu, Weizhong; Deng, Zixin; Hu, Qian-Nan

    2017-09-14

    The current network pharmacology study encountered a bottleneck, with a lot of public data scattered in different databases. There is a lack of an open-access, consolidated platform that integrates this information for systemic research. To address this issue, we have developed PhID, an integrated pharmacology database which integrates >400,000 pharmacology elements (drug, target, disease, gene, side-effect, and pathway) and >200,000 element interactions from branches of public databases. The PhID has three major applications: (1) assists scientists searching through the overwhelming amount of pharmacology element interaction data by names, public IDs, molecule structures, or molecular sub-structures; (2) helps visualizing pharmacology elements and their interactions with a web-based network graph; (3) provides prediction of drug-target interactions through two modules: PreDPI-ki and FIM, by which users can predict drug-target interactions of the PhID entities or drug-target pairs of interest. To get a systems-level understanding of drug action and disease complexity, PhID as a network pharmacology tool was established from the perspectives of the data layer, the visualization layer and the prediction model layer to present information untapped by current databases. Database URL: http://phid.ditad.org/.

  18. Integrable systems, geometry, and topology

    CERN Document Server

    Terng, Chuu-Lian

    2006-01-01

    The articles in this volume are based on lectures from a program on integrable systems and differential geometry held at Taiwan's National Center for Theoretical Sciences. As is well-known, for many soliton equations, the solutions have interpretations as differential geometric objects, and thereby techniques of soliton equations have been successfully applied to the study of geometric problems. The article by Burstall gives a beautiful exposition on isothermic surfaces and their relations to integrable systems, and the two articles by Guest give an introduction to quantum cohomology, carry out explicit computations of the quantum cohomology of flag manifolds and Hirzebruch surfaces, and give a survey of Givental's quantum differential equations. The article by Heintze, Liu, and Olmos is on the theory of isoparametric submanifolds in an arbitrary Riemannian manifold, which is related to the n-wave equation when the ambient manifold is Euclidean. Mukai-Hidano and Ohnita present a survey on the moduli space of ...

  19. Evaluation of Database Modeling Methods for Geographic Information Systems

    Directory of Open Access Journals (Sweden)

    Thanasis Hadzilacos

    1998-11-01

    Full Text Available We present a systematic evaluation of different modeling techniques for the design of Geographic Information Systems as we experienced them through theoretical research and real world applications. A set of exemplary problems for spatial systems on which the suitability of models can be tested is discussed. We analyse the use of a specific database design methodology including the phases of conceptual, logical and physical modeling. By employing, at each phase, representative models of classical and object-oriented approaches we assess their efficiency in spatial data handling. At the conceptual phase, we show how the Entity-Relationship, EFO and OMT models deal with the geographic needs; at the logical phase we argue why the relational model is good to serve as a basis to accommodate these requirements, but not good enough as a stand-alone solution.

  20. The Methods of Knowledge Database Integration Based on the Rough Set Classification and Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    郭平; 程代杰

    2003-01-01

    As the basis of an intelligent system, it is very important to guarantee the consistency and non-redundancy of knowledge in the knowledge database. Given the variety of knowledge sources, it is necessary to deal with knowledge exhibiting redundancy, inclusion and even contradiction during the integration of knowledge databases. This paper researches an integration method based on multiple knowledge databases. Firstly, it finds the inconsistent knowledge sets between the knowledge databases by rough set classification and presents a method for eliminating the inconsistency using test data. Then, it regards the consistent knowledge sets as the initial population of a genetic calculation and constructs a genetic fitness function based on the accuracy, practicability and spreadability of the knowledge representation to carry out the genetic calculation. Lastly, classifying the results of the genetic calculation reduces the knowledge redundancy of the knowledge database. This paper also presents a framework for knowledge database integration based on rough set classification and genetic algorithms.
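    A toy sketch of the genetic step under the assumptions above: consistent rules form the gene pool, individuals are bitmask subsets of rules, and a made-up fitness trades accuracy against redundancy (the paper's actual fitness combines accuracy, practicability and spreadability). All rules, scores and weights below are invented.

```python
# Toy genetic search over subsets of a consistent rule pool.
import random

RULES = ["r1", "r2", "r3", "r4", "r5"]                       # consistent rules
ACC = {"r1": .9, "r2": .6, "r3": .85, "r4": .4, "r5": .7}    # accuracy on test data

def fitness(mask):
    chosen = [r for r, bit in zip(RULES, mask) if bit]
    if not chosen:
        return 0.0
    # reward average accuracy, penalize redundancy (too many rules)
    return sum(ACC[r] for r in chosen) / len(chosen) - 0.02 * len(chosen)

def evolve(pop_size=20, generations=50):
    pop = [[random.randint(0, 1) for _ in RULES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(RULES))            # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:                        # mutation
                i = random.randrange(len(RULES))
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([r for r, bit in zip(RULES, best) if bit])
```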

  1. Advanced Integrated Power Systems (AIPS)

    Science.gov (United States)

    2012-10-08

    microgrid environment. Fault current distribution during islanded and grid-connected operation was the primary issue investigated. Their paper...describes an earthing or grounding system design for integration into a microgrid. Their design would rely on overcurrent sensing technologies that are...were compared to actual data to verify the validity of the model. The model can now be used with microgrid models to simulate control and response

  2. Integration of published information into a resistance-associated mutation database for Mycobacterium tuberculosis.

    Science.gov (United States)

    Salamon, Hugh; Yamaguchi, Ken D; Cirillo, Daniela M; Miotto, Paolo; Schito, Marco; Posey, James; Starks, Angela M; Niemann, Stefan; Alland, David; Hanna, Debra; Aviles, Enrique; Perkins, Mark D; Dolinger, David L

    2015-04-01

    Tuberculosis remains a major global public health challenge. Although incidence is decreasing, the proportion of drug-resistant cases is increasing. Technical and operational complexities prevent Mycobacterium tuberculosis drug susceptibility phenotyping in the vast majority of new and retreatment cases. The advent of molecular technologies provides an opportunity to obtain results rapidly as compared to phenotypic culture. However, correlations between genetic mutations and resistance to multiple drugs have not been systematically evaluated. Molecular testing of M. tuberculosis sampled from a typical patient continues to provide a partial picture of drug resistance. A database of phenotypic and genotypic testing results, especially where prospectively collected, could document statistically significant associations and may reveal new, predictive molecular patterns. We examine the feasibility of integrating existing molecular and phenotypic drug susceptibility data to identify associations observed across multiple studies and demonstrate potential for well-integrated M. tuberculosis mutation data to reveal actionable findings.

  3. PathSys: integrating molecular interaction graphs for systems biology

    Directory of Open Access Journals (Sweden)

    Raval Alpan

    2006-02-01

    Full Text Available Abstract Background The goal of information integration in systems biology is to combine information from a number of databases and data sets, which are obtained from both high and low throughput experiments, under one data management scheme such that the cumulative information provides greater biological insight than is possible with individual information sources considered separately. Results Here we present PathSys, a graph-based system for creating a combined database of networks of interaction for generating integrated view of biological mechanisms. We used PathSys to integrate over 14 curated and publicly contributed data sources for the budding yeast (S. cerevisiae) and Gene Ontology. A number of exploratory questions were formulated as a combination of relational and graph-based queries to the integrated database. Thus, PathSys is a general-purpose, scalable, graph-data warehouse of biological information, complete with a graph manipulation and a query language, a storage mechanism and a generic data-importing mechanism through schema-mapping. Conclusion Results from several test studies demonstrate the effectiveness of the approach in retrieving biologically interesting relations between genes and proteins, the networks connecting them, and of the utility of PathSys as a scalable graph-based warehouse for interaction-network integration and a hypothesis generator system. The PathSys client software, named BiologicalNetworks, developed for navigation and analyses of molecular networks, is available as a Java Web Start application at http://brak.sdsc.edu/pub/BiologicalNetworks.

  4. RiceDB: A Web-Based Integrated Database for Annotating Rice Microarray

    Institute of Scientific and Technical Information of China (English)

    HE Fei; SHI Qing-yun; CHEN Ming; WU Ping

    2007-01-01

    RiceDB, a web-based integrated database to annotate rice microarrays in various biological contexts, was developed. It is composed of eight modules. The RiceMap module archives the process of mapping Affymetrix probe sets to different databases about rice, and annotates the genes represented by a microarray set by retrieving annotation information via the identifier or accession number of every database; the RiceGO module indicates the association between a microarray set and gene ontology (GO) categories; the RiceKO module is used to annotate a microarray set based on the KEGG biochemical pathways; the RiceDO module indicates the information on domains associated with a microarray set; the RiceUP module is used to obtain promoter sequences for all genes represented by a microarray set; the RiceMR module lists potential microRNAs which regulate the genes represented by a microarray set; RiceCD and RiceGF are used to annotate the genes represented by a microarray set in the context of chromosome distribution and rice paralogous family distribution. The results of automatic annotation are mostly consistent with manual annotation. Biological interpretation of the microarray data is quickened with the help of RiceDB.

  5. AtlasT4SS: A curated database for type IV secretion systems

    Directory of Open Access Journals (Sweden)

    Souza Rangel C

    2012-08-01

    Full Text Available Abstract Background The type IV secretion system (T4SS) can be classified as a large family of macromolecule transporter systems, divided into three recognized sub-families according to their well-known functions. The major sub-family is the conjugation system, which allows transfer of genetic material, such as a nucleoprotein, via cell contact among bacteria. Also, the conjugation system can transfer genetic material from bacteria to eukaryotic cells; such is the case with the T-DNA transfer of Agrobacterium tumefaciens to host plant cells. The system of effector protein transport constitutes the second sub-family, and the third one corresponds to the DNA uptake/release system. Genome analyses have revealed numerous T4SS in Bacteria and Archaea. The purpose of this work was to organize, classify, and integrate the T4SS data into a single database, called AtlasT4SS - the first public database devoted exclusively to this prokaryotic secretion system. Description The AtlasT4SS is a manually curated database that describes a large number of proteins related to the type IV secretion system reported so far in Gram-negative and Gram-positive bacteria, as well as in Archaea. The database was created using the RDBMS MySQL and the Catalyst Framework based on the Perl programming language and using the Model-View-Controller (MVC) design pattern for the Web. The current version holds a comprehensive collection of 1,617 T4SS proteins from 58 Bacteria (49 Gram-negative and 9 Gram-positive), one Archaea and 11 plasmids. By applying the bi-directional best hit (BBH) relationship in pairwise genome comparison, it was possible to obtain a core set of 134 clusters of orthologous genes encoding T4SS proteins. Conclusions In our database we present one way of classifying orthologous groups of T4SSs in a hierarchical classification scheme with three levels. The first level comprises four classes that are based on the organization of genetic determinants, shared homologies, and
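    The bi-directional best hit criterion used to build those 134 clusters is easy to state in code: a pair of proteins is kept only when each is the other's best hit in the pairwise genome comparison. The sketch below assumes precomputed best-hit tables with made-up protein names; it is illustrative, not AtlasT4SS code.

```python
# Sketch of the bi-directional best hit (BBH) test between two genomes.

def bbh_pairs(best_a_to_b, best_b_to_a):
    """Return reciprocal best-hit pairs: a -> b and b -> a must agree."""
    return [(a, b) for a, b in best_a_to_b.items()
            if best_b_to_a.get(b) == a]

# Best hit of each protein, genome A -> genome B and back (fabricated).
best_ab = {"virB4_A": "trbE_B", "virD4_A": "traG_B", "virB9_A": "trbG_B"}
best_ba = {"trbE_B": "virB4_A", "traG_B": "virD4_A", "trbG_B": "virB11_A"}

print(bbh_pairs(best_ab, best_ba))
# [('virB4_A', 'trbE_B'), ('virD4_A', 'traG_B')] -- virB9/trbG fails the test
```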

  6. Databases: Computerized Resource Retrieval Systems. Inservice Series No. 5.

    Science.gov (United States)

    Wilson, Mary Alice

    This document defines and describes electronic databases and provides guidance for organizing a useful database and for selecting hardware and software. Alternatives such as using larger machines are discussed, as are the computer skills necessary to use an electronic database and the use of the computer in the classroom. Files, records, and…

  7. Developing Visualization Support System for Teaching/Learning Database Normalization

    Science.gov (United States)

    Folorunso, Olusegun; Akinwale, AdioTaofeek

    2010-01-01

    Purpose: In tertiary institutions, some students find it hard to learn database design theory, in particular, database normalization. The purpose of this paper is to develop a visualization tool to give students an interactive hands-on experience in the database normalization process. Design/methodology/approach: The model-view-controller architecture…

  8. A Method of Rapid Generation of an Expert System Based on SQL Database

    Institute of Scientific and Technical Information of China (English)

    Shunxiang Wu; Wenting Huang; Xiaosheng Wang; Jiande Gu; Maoqing Li; Shifeng Liu

    2004-01-01

    This paper applies the relevant principles and methods of SQL databases and expert systems, researching methods and techniques for combining them, and designs a template for a production expert system that is based on an SQL database and driven by the database, so as to simplify the structure of the knowledge base carried by the database, generate the system more conveniently, and operate it more effectively.
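    A minimal sketch of such a database-driven production system: the rules live in a SQL table and a small interpreter forward-chains over them. The `rules(premise, conclusion)` layout and the rule contents are assumptions, and sqlite3 is used only to keep the example self-contained.

```python
# Production rules stored in SQL, with a tiny forward-chaining interpreter.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE rules (premise TEXT, conclusion TEXT)")
db.executemany("INSERT INTO rules VALUES (?, ?)", [
    ("engine_hot", "check_coolant"),
    ("check_coolant", "coolant_low"),
    ("battery_dead", "replace_battery"),
])

def forward_chain(facts):
    """Fire rules from the database until no new conclusion is produced."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in db.execute("SELECT * FROM rules"):
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"engine_hot"}))
# {'engine_hot', 'check_coolant', 'coolant_low'}
```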

  9. Fossil-Fuel CO2 Emissions Database and Exploration System

    Science.gov (United States)

    Krassovski, M.; Boden, T.; Andres, R. J.; Blasing, T. J.

    2012-12-01

    tabular, national, mass-emissions data and distribute them spatially on a one degree latitude by one degree longitude grid. The within-country spatial distribution is achieved through a fixed population distribution as reported in Andres et al. (1996). This presentation introduces the newly built database and web interface, reflects the present state and functionality of the Fossil-Fuel CO2 Emissions Database and Exploration System, and outlines future plans for expansion.
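    The population-proportional gridding step reduces to a simple weighted allocation. The sketch below distributes a hypothetical national total onto a tiny grid; all numbers are invented, and the real system works on 1° x 1° global fields.

```python
# Sketch: spread a national emissions total over grid cells by population share.
import numpy as np

def distribute(national_total, population_grid):
    """Allocate a national mass-emissions total to cells in proportion to a
    fixed population field; cells outside the country hold zero population."""
    weights = population_grid / population_grid.sum()
    return national_total * weights

population = np.array([[0.0, 2.0, 8.0],
                       [1.0, 5.0, 4.0]])      # people, arbitrary units
emissions = distribute(1000.0, population)    # e.g. kt C for one country
print(emissions)                              # per-cell totals
print(emissions.sum())                        # sums back to 1000.0
```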

  10. Integration of glacier databases within the Global Terrestrial Network for Glaciers (GTN-G)

    Science.gov (United States)

    Zemp, M.; Raup, B. H.; Armstrong, R.; Ballagh, L.; Gärtner-Roer, I.; Haeberli, W.; Hoelzle, M.; Kääb, A.; Kargel, J.; Paul, F.

    2009-04-01

    Changes in glaciers and ice caps provide some of the clearest evidence of climate change and have impacts on global sea level fluctuations, regional hydrological cycles and local natural hazard situations. Internationally coordinated collection and distribution of standardized information about glaciers and ice caps was initiated in 1894 and is today coordinated within the Global Terrestrial Network for Glaciers (GTN-G). A recently established GTN-G Steering Committee coordinates, supports and advises the operational bodies responsible for international glacier monitoring, which are the World Glacier Monitoring Service (WGMS), the US National Snow and Ice Data Center (NSIDC) and the Global Land Ice Measurements from Space (GLIMS) initiative. In this presentation, we provide an overview of (i) the integration of the various operational databases, (ii) the development of a one-stop web-interface to these databases, and (iii) the available datasets. Through joint efforts, consistency and interoperability of the different glacier databases are being established. Thereby, the lack of a complete worldwide, detailed glacier inventory as well as different historical developments and methodological contexts of the datasets are major challenges for linking individual glaciers throughout the databases. A map-based web-interface, implemented based on OpenLayer 2.0 and Web Map/Feature Services, has been developed to spatially link the available data and to provide data users a fast overview of all available data. With this new online service, GTN-G provides fast access to information on glacier inventory data from 100,000 glaciers mainly based on aerial photographs and from 80,000 glaciers mainly based on satellite images, length change series from 1,800 glaciers, mass balance series from 230 glaciers, special events (e.g., hazards, surges, calving instabilities) from 130 glaciers, as well as 10,000 photographs from some 470 glaciers.

  11. Serum uric acid level as a risk factor for acute kidney injury in hospitalized patients: a retrospective database analysis using the integrated medical information system at Kochi Medical School hospital.

    Science.gov (United States)

    Otomo, Kazunori; Horino, Taro; Miki, Takeo; Kataoka, Hiromi; Hatakeyama, Yutaka; Matsumoto, Tatsuki; Hamada-Ode, Kazu; Shimamura, Yoshiko; Ogata, Koji; Inoue, Kosuke; Taniguchi, Yoshinori; Terada, Yoshio; Okuhara, Yoshiyasu

    2016-04-01

    Recent studies have shown that both low and high levels of serum uric acid (SUA) before cardiovascular surgery are independent risk factors for postoperative acute kidney injury (AKI). However, these studies were limited by their small sample sizes. Here, we investigated the association between SUA levels and AKI by performing a retrospective database analysis of almost 30 years of data from 81,770 hospitalized patients. Hospitalized patients aged ≥18 years were retrospectively enrolled. AKI was diagnosed according to the Kidney Disease: Improving Global Outcomes 2012 Clinical Practice Guideline (KDIGO) criteria. Multivariate logistic regression analyses were performed to investigate the independent association between SUA levels and the incidence of AKI. SUA levels were treated as categorical variables because the relationship between SUA and the incidence of AKI has been suggested to be J-shaped or U-shaped. In addition to stratified SUA levels, we considered kidney function and related comorbidities, medications, and procedures performed prior to AKI onset as possible confounding risk factors. The final study cohort included 59,219 adult patients. Adjusted odds ratios of AKI incidence were higher in both the high- and low-SUA strata. Odds ratios tended to become larger in the higher range of SUA levels in women than in men. Additionally, this study showed that AKI risk was elevated in patients with SUA levels ≤7 mg/dL. An SUA level >7 mg/dL is considered the point of initiation of uric acid crystallization. SUA level could be an independent risk factor for AKI development in hospitalized patients. Additionally, our results might suggest that intervention to lower SUA levels is necessary, even in cases of moderate elevation that does not warrant hyperuricemia treatment. Results also showed that SUA levels that require attention are lower for women than for men.
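    For illustration, here is how this kind of analysis (adjusted odds ratios for categorical SUA strata against a mid-range reference, with confounders) might look in Python with statsmodels on simulated data. The cut-points, covariates and effect sizes below are invented and do not reproduce the study.

```python
# Sketch: logistic regression with SUA treated as a categorical variable.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "sua": rng.normal(5.5, 1.8, n).clip(1, 12),   # serum uric acid, mg/dL
    "age": rng.normal(65, 12, n),
    "male": rng.integers(0, 2, n),
})
# Simulate a J/U-shaped relationship: both tails raise the log-odds of AKI.
logit = -3 + 0.35 * np.abs(df["sua"] - 5.5) + 0.02 * (df["age"] - 65)
df["aki"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
df["stratum"] = pd.cut(df["sua"], bins=[0, 4, 7, 13],
                       labels=["low", "mid", "high"])

# Adjusted odds ratios for low/high strata relative to the mid (reference) stratum.
model = smf.logit("aki ~ C(stratum, Treatment('mid')) + age + male",
                  data=df).fit(disp=False)
print(np.exp(model.params))
```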

  12. Animal model integration to AutDB, a genetic database for autism

    Directory of Open Access Journals (Sweden)

    Kollu Ravi

    2011-01-01

    Full Text Available Abstract Background In the post-genomic era, multi-faceted research on complex disorders such as autism has generated diverse types of molecular information related to its pathogenesis. The rapid accumulation of putative candidate genes/loci for Autism Spectrum Disorders (ASD) and ASD-related animal models poses a major challenge for systematic analysis of their content. We previously created the Autism Database (AutDB) to provide a publicly available web portal for ongoing collection, manual annotation, and visualization of genes linked to ASD. Here, we describe the design, development, and integration of a new module within AutDB for ongoing collection and comprehensive cataloguing of ASD-related animal models. Description As with the original AutDB, all data is extracted from published, peer-reviewed scientific literature. Animal models are annotated with a new standardized vocabulary of phenotypic terms developed by our researchers which is designed to reflect the diverse clinical manifestations of ASD. The new Animal Model module is seamlessly integrated to AutDB for dissemination of diverse information related to ASD. Animal model entries within the new module are linked to corresponding candidate genes in the original "Human Gene" module of the resource, thereby allowing for cross-modal navigation between gene models and human gene studies. Although the current release of the Animal Model module is restricted to mouse models, it was designed with an expandable framework which can easily incorporate additional species and non-genetic etiological models of autism in the future. Conclusions Importantly, this modular ASD database provides a platform from which data mining, bioinformatics, and/or computational biology strategies may be adopted to develop predictive disease models that may offer further insights into the molecular underpinnings of this disorder. It also serves as a general model for disease-driven databases curating phenotypic

  13. Androgen-responsive gene database: integrated knowledge on androgen-responsive genes.

    Science.gov (United States)

    Jiang, Mei; Ma, Yunsheng; Chen, Congcong; Fu, Xuping; Yang, Shu; Li, Xia; Yu, Guohua; Mao, Yumin; Xie, Yi; Li, Yao

    2009-11-01

    Androgen signaling plays an important role in many biological processes. Androgen Responsive Gene Database (ARGDB) is devoted to providing integrated knowledge on androgen-controlled genes. Gene records were collected on the basis of PubMed literature collections. More than 6000 abstracts and 950 original publications were manually screened, leading to 1785 human genes, 993 mouse genes, and 583 rat genes finally included in the database. All the collected genes were experimentally proved to be regulated by androgen at the expression level or to contain androgen-responsive regions. For each gene important details of the androgen regulation experiments were collected from references, such as expression change, androgen-responsive sequence, response time, tissue/cell type, experimental method, ligand identity, and androgen amount, which will facilitate further evaluation by researchers. Furthermore, the database was integrated with multiple annotation resources, including National Center for Biotechnology Information, Gene Ontology, and Kyoto Encyclopedia of Genes and Genomes pathway, to reveal the biological characteristics and significance of androgen-regulated genes. The ARGDB web site is mainly composed of the Browse, Search, Element Scan, and Submission modules. It is user friendly and freely accessible at http://argdb.fudan.edu.cn. Preliminary analysis of the collected data was performed. Many disease pathways, such as prostate carcinogenesis, were found to be enriched in androgen-regulated genes. The discovered androgen-response motifs were similar to those in previous reports. The analysis results are displayed in the web site. In conclusion, ARGDB provides a unified gateway to storage, retrieval, and update of information on androgen-regulated genes.

  14. Agents in an Integrated System Architecture

    DEFF Research Database (Denmark)

    Hartvig, Susanne C; Andersen, Tom

    1997-01-01

    This paper presents research findings from the development of an expert system and its integration into an integrated environment. Expert systems have proven hard to integrate because of their interactive nature. A prototype environment was developed using new integration technologies, and research findings concerning the use of OLE technology to integrate stand-alone applications are discussed. The prototype shows clear advantages of using OLE technology when developing integrated environments.

  15. An integrable Hamiltonian hierarchy and associated integrable couplings system

    Institute of Scientific and Technical Information of China (English)

    Chen Xiao-Hong; Xia Tie-Cheng; Zhu Lian-Cheng

    2007-01-01

    This paper establishes a new isospectral problem. By making use of the Tu scheme, a new integrable system is obtained, and integrable couplings of the obtained system are given. Finally, the Hamiltonian form of a binary symmetric constrained flow of the obtained system is presented.

  16. Performance Improvement with Web Based Database on Library Information System of SMK Yadika 5

    Directory of Open Access Journals (Sweden)

    Pualam Dipa Nusantara

    2015-12-01

    Full Text Available The difficulty of managing the data of the book collection in the library is a problem often faced by librarians that affects the quality of service. Arranging and recording the book collection in separate Word and Excel files, and handling borrowing and returning transactions alongside them, meant there were no integrated records, and staff frequently had difficulty tracking books still in a borrowed state. A library system can manage the book collection and reduce these problems often experienced by library staff when serving students borrowing books. This system will also record late fees and lost-book charges incurred by students (borrowers). The conclusion of this study is that library performance can be improved with a library system using a web database.
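    As a sketch of what "integrated records" could mean here, the snippet below puts books and loans in one small relational schema and answers the "still borrowed" question with a join. The table and column names are assumptions, and sqlite3 is used only so the example runs anywhere.

```python
# Sketch: books and loans in one database instead of separate Word/Excel files.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT, copies INTEGER);
CREATE TABLE loan (
    id INTEGER PRIMARY KEY,
    book_id INTEGER REFERENCES book(id),
    student TEXT,
    due DATE,
    returned DATE            -- NULL while the book is still borrowed
);
""")
db.execute("INSERT INTO book VALUES (1, 'Database Systems', 3)")
db.execute("INSERT INTO loan (book_id, student, due) VALUES (1, 'Rani', '2015-12-20')")

# Books still in a borrowed state -- the question the staff found hard to answer.
rows = db.execute("""
    SELECT b.title, l.student, l.due
    FROM loan l JOIN book b ON b.id = l.book_id
    WHERE l.returned IS NULL
""").fetchall()
print(rows)
```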

  17. GabiPD – The GABI Primary Database integrates plant proteomic data with gene-centric information

    Directory of Open Access Journals (Sweden)

    Björn eUsadel

    2012-07-01

    Full Text Available GabiPD is an integrative plant omics database that has been established as part of the German initiative for Genome Analysis of the Plant Biological System (GABI). Data from different omics disciplines are integrated and interactively visualized. Proteomics is represented by data and tools aiding studies on the identification of posttranslational modifications and the function of proteins. Annotated 2DE-gel images are offered to inspect protein sets expressed in different tissues of Arabidopsis thaliana and Brassica napus. From a given protein spot, a link will direct the user to the related GreenCard Gene entry where detailed gene-centric information will support the functional annotation. Besides MapMan and GO classification, information on conserved protein domains and on orthologs is integrated in this GreenCard service. Moreover, all other GabiPD data related to the gene, including transcriptomic data, as well as gene-specific links to external resources are provided. Researchers interested in plant protein phosphorylation will find information on potential MAP kinase substrates identified in different protein microarray studies integrated in GabiPD's Phosphoproteomics page. These data can be easily compared to experimentally identified or predicted phosphorylation sites in PhosPhAt via the related Gene GreenCard. This will allow the selection of interesting candidates for further experimental validation of their phosphorylation.

  18. Visualizing Concurrency Control Algorithms for Real-Time Database Systems

    Directory of Open Access Journals (Sweden)

    Olusegun Folorunso

    2008-11-01

    Full Text Available This paper describes an approach to visualizing concurrency control (CC) algorithms for real-time database systems (RTDBs). The approach is based on the principles of software visualization, which have been applied in related fields. The Model-View-Controller (MVC) architecture is used to alleviate the black-box syndrome associated with studying the behaviour of concurrency control algorithms for RTDBs. We propose an "exploratory" visualization tool that assists the RTDB designer in understanding the actual behaviour of the chosen concurrency control algorithm and in evaluating its performance. We demonstrate the feasibility of our approach using an optimistic concurrency control model as our case study. The developed tool substantiates the earlier simulation-based performance studies by exposing spikes at some points, visible when visualized dynamically but not in the usual static graphs. This tool ultimately helps address the problem of contradictory assumptions about CC in RTDBs.
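
    For readers unfamiliar with the case study's subject, the following is a minimal sketch of backward-validation optimistic concurrency control in its generic textbook (Kung-Robinson style) form, not the specific algorithm visualized by the tool: a committing transaction is checked against the write sets of transactions that committed during its lifetime.

        class Transaction:
            def __init__(self, tid, start_tn):
                self.tid = tid
                self.start_tn = start_tn          # commit counter at txn start
                self.read_set, self.write_set = set(), set()

        committed = []   # list of (tn, write_set) for committed transactions
        next_tn = 0

        def validate(txn):
            """Commit txn unless a txn that committed during its lifetime
            wrote an item that txn read (backward validation)."""
            global next_tn
            for tn, wset in committed:
                if tn > txn.start_tn and wset & txn.read_set:
                    return False                  # conflict: restart txn
            next_tn += 1
            committed.append((next_tn, frozenset(txn.write_set)))
            return True                           # validated: install writes

        t = Transaction(tid=1, start_tn=next_tn)
        t.read_set.add("x"); t.write_set.add("y")
        print(validate(t))                        # True: no concurrent commits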

  19. Ground-target detection system for digital video database

    Science.gov (United States)

    Liang, Yiqing; Huang, Jeffrey R.; Wolf, Wayne H.; Liu, Bede

    1998-07-01

    As more and more visual information becomes available on video, indexing and retrieval of digital video data are becoming important. A digital video database embedded with visual information processing, using image analysis and image understanding techniques such as automated target detection, classification, and identification, can provide query results of higher quality. In this paper we address a robust digital video database system within which a target detection module is implemented and applied to the keyframe images extracted by our digital library system. The tasks and application scenarios under consideration involve indexing video with information about the detection and verification of artificial objects that exist in video scenes. Based on the scenario that the video sequences are acquired by an onboard camera mounted on a Predator unmanned aircraft, we demonstrate how an incoming video stream is structured into different levels (video program level, scene level, shot level, and object level) based on the analysis of video contents using global imagery information. We then argue that the keyframe representation is most appropriate for video processing and holds the property needed for input to our detection module. As a result, video processing becomes feasible in terms of decreased computational resources spent and increased confidence in the (detection) decisions reached. The architecture we propose can respond to queries of whether artificial structures and suspected combat vehicles are detected. The architecture for ground detection takes advantage of the image understanding paradigm and involves different methods to locate and identify artificial objects rather than natural background such as trees, grass, and clouds. Edge detection, morphological transformation, and line and parallel-line detection using the Hough transform, applied to keyframe images at video shot level, are introduced in our detection module. This function can
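
    The named steps (edge detection, morphological transformation, Hough-based line detection) can be sketched on a single keyframe with standard image-processing primitives. The snippet below is an illustrative OpenCV pipeline with assumed parameter values and a placeholder file name, not the authors' module:

        import cv2
        import numpy as np

        # "keyframe.png" is a placeholder for a keyframe extracted at shot level.
        frame = cv2.imread("keyframe.png", cv2.IMREAD_GRAYSCALE)
        edges = cv2.Canny(frame, 50, 150)                         # edge detection
        kernel = np.ones((3, 3), np.uint8)
        edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)  # close small gaps
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                                minLineLength=40, maxLineGap=5)
        # Man-made structures tend to yield long straight and parallel segments,
        # unlike natural background such as trees, grass, and clouds.
        print(0 if lines is None else len(lines), "line segments detected")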

  20. Marshall Space Flight Center Ground Systems Development and Integration

    Science.gov (United States)

    Wade, Gina

    2016-01-01

    Ground Systems Development and Integration performs a variety of tasks in support of the Mission Operations Laboratory (MOL) and other Center and Agency projects. These tasks include systems engineering processes such as system requirements development, system architecture design, integration, verification and validation, software development, and sustaining engineering of the mission operations systems that have evolved the Huntsville Operations Support Center (HOSC) into a leader in remote operations for current and future NASA space projects. The group is also responsible for developing and managing telemetry and command configuration and calibration databases. Personnel are responsible for maintaining and enhancing their disciplinary skills in the areas of project management, software engineering, software development, software process improvement, telecommunications, networking, and systems management. Domain expertise in the ground systems area is also maintained and includes detailed proficiency in the areas of real-time telemetry systems, command systems, voice, video, data networks, and mission planning systems.

  1. Experimenting with recursive queries in database and logic programming systems

    CERN Document Server

    Terracina, Giorgio; Lio, Vincenzino; Panetta, Claudio

    2007-01-01

    This paper considers the problem of reasoning on massive amounts of (possibly distributed) data. Existing proposals show some limitations: (i) the quantity of data that can be handled at one time is limited, because reasoning is generally carried out in main memory; (ii) interaction with external (and independent) DBMSs is not trivial and, in several cases, not allowed at all; (iii) the efficiency of present implementations is still not sufficient for their use in complex reasoning tasks involving massive amounts of data. This paper provides a contribution in this setting; it presents a new system, called DLV$^{DB}$, which aims to solve these problems. Moreover, the paper reports the results of a thorough experimental analysis carried out to compare our system with several state-of-the-art systems (both logic and database) on some classical deductive problems; the other tested systems are: LDL++, XSB, Smodels and three top-level commercial D...
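
    As a concrete instance of the recursive queries that such deductive systems are benchmarked on, the snippet below evaluates transitive closure (graph reachability) inside the database engine rather than in main memory, using a recursive SQL CTE. SQLite is used only for self-containment; this is a schematic analogue, not one of the systems tested:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE edge (src INTEGER, dst INTEGER)")
        conn.executemany("INSERT INTO edge VALUES (?, ?)",
                         [(1, 2), (2, 3), (3, 4), (2, 5)])

        # Datalog: reach(X,Y) :- edge(X,Y).  reach(X,Y) :- reach(X,Z), edge(Z,Y).
        rows = conn.execute("""
            WITH RECURSIVE reach(src, dst) AS (
                SELECT src, dst FROM edge
                UNION
                SELECT r.src, e.dst FROM reach r JOIN edge e ON r.dst = e.src
            )
            SELECT dst FROM reach WHERE src = 1 ORDER BY dst""").fetchall()
        print([d for (d,) in rows])   # [2, 3, 4, 5]: all nodes reachable from 1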

  2. 75 FR 1363 - Integrated System Power Rates

    Science.gov (United States)

    2010-01-11

    ... Southwestern Power Administration Integrated System Power Rates AGENCY: Southwestern Power Administration, DOE... System pursuant to the following Integrated System Rate Schedules: Rate Schedule P-09, Wholesale Rates...) Administrator has determined based on the 2009 Integrated System Current Power Repayment Study, that...

  3. An Interoperable Cartographic Database

    Directory of Open Access Journals (Sweden)

    Slobodanka Ključanin

    2007-05-01

    Full Text Available The concept of producing a prototype of an interoperable cartographic database is explored in this paper, including the possibilities of integrating different geospatial data into the database management system and visualizing them on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relational database, spatial analysis, and definition and visualization of the database content in the form of a map on the Internet.

  4. Integrated risk information system (IRIS)

    Energy Technology Data Exchange (ETDEWEB)

    Tuxen, L. [Environmental Protection Agency, Washington, DC (United States)

    1990-12-31

    The Integrated Risk Information System (IRIS) is an electronic information system developed by the US Environmental Protection Agency (EPA) containing information related to health risk assessment. IRIS is the Agency's primary vehicle for communication of chronic health hazard information that represents Agency consensus following comprehensive review by intra-Agency work groups. The original purpose for developing IRIS was to provide guidance to EPA personnel in making risk management decisions. This role has expanded and evolved with wider access and use of the system. IRIS contains chemical-specific information in summary format for approximately 500 chemicals. IRIS is available to the general public on the National Library of Medicine's Toxicology Data Network (TOXNET) and on diskettes through the National Technical Information Service (NTIS).

  5. Database Design Methodology and Database Management System for Computer-Aided Structural Design Optimization.

    Science.gov (United States)

    1984-12-01

    Several researchers, including Lillehagen and Dokkar (1982), Grabowski, Eigener and Ranch (1978), and Eberlein and Wedekind (1982), have worked on database...

  6. Integrability of some generalized Lotka - Volterra systems

    Energy Technology Data Exchange (ETDEWEB)

    Bountis, T.C.; Bier, M.; Hijmans, J.

    1983-08-08

    Several integrable systems of nonlinear ordinary differential equations of the Lotka-Volterra type are identified by the Painleve property and completely integrated. One such integrable case of N first-order ODEs is found, with N - 2 free parameters and N arbitrary. The concept of integrability of a general dynamical system, not necessarily derived from a Hamiltonian, is also discussed.
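
    To make "completely integrated" concrete: even the classic two-species Lotka-Volterra system, a standard textbook case rather than necessarily one treated in the paper, carries a conserved quantity that numerical integration can verify:

        import numpy as np
        from scipy.integrate import solve_ivp

        # Nondimensional Lotka-Volterra: x' = x(1 - y), y' = -y(1 - x).
        def lv(t, z):
            x, y = z
            return [x * (1 - y), -y * (1 - x)]

        # Conserved quantity along trajectories: V = x - ln x + y - ln y.
        V = lambda x, y: x - np.log(x) + y - np.log(y)

        sol = solve_ivp(lv, (0.0, 20.0), [2.0, 1.0], rtol=1e-10, atol=1e-10)
        v = V(sol.y[0], sol.y[1])
        print(f"V drift over the run: {v.max() - v.min():.2e}")  # integration error only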

  7. DIRECT INTEGRATION METHODS WITH INTEGRAL MODEL FOR DYNAMIC SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    吕和祥; 于洪洁; 裘春航

    2001-01-01

    A new approach, a direct integration method with an integral model (DIM-IM), to solve dynamic governing equations is presented. The governing equations are integrated into integral equations. An algorithm that is explicit, predict-correct, self-starting and of fourth-order accuracy to integrate the integral equations is given. Theoretical analysis and numerical examples show that the DIM-IM described in this paper, suitable for strongly nonlinear and non-conservative systems, has higher accuracy than the central difference, Houbolt, Newmark and Wilson-Theta methods.

  8. Slimplectic Integrators: Variational Integrators for General Nonconservative Systems

    CERN Document Server

    Tsang, David; Stein, Leo C; Turner, Alec

    2015-01-01

    Symplectic integrators are widely used for long-term integration of conservative astrophysical problems due to their ability to preserve the constants of motion; however, they cannot in general be applied in the presence of nonconservative interactions. In this Letter, we develop the "slimplectic" integrator, a new type of numerical integrator that shares many of the benefits of traditional symplectic integrators yet is applicable to general nonconservative systems. We utilize a fixed time-step variational integrator formalism applied to the principle of stationary nonconservative action developed in Galley, 2013; Galley, Tsang & Stein, 2014. As a result, the generalized momenta and energy (Noether current) evolutions are well-tracked. We discuss several example systems, including damped harmonic oscillators, Poynting-Robertson drag, and gravitational radiation reaction, by utilizing our new publicly available code to demonstrate the slimplectic integrator algorithm. Slimplectic integrators are well-suite...
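
    As background to why symplectic-style integrators are prized here, the sketch below runs an ordinary symplectic leapfrog on a conservative harmonic oscillator and contrasts it with forward Euler. This is only the conservative baseline that slimplectic integrators generalize, not the slimplectic algorithm itself:

        # Harmonic oscillator H = (p^2 + q^2) / 2; the exact energy is conserved.
        def leapfrog(q, p, dt, steps):
            for _ in range(steps):
                p -= 0.5 * dt * q      # half kick   (dp/dt = -q)
                q += dt * p            # full drift  (dq/dt =  p)
                p -= 0.5 * dt * q      # half kick
            return q, p

        def euler(q, p, dt, steps):
            for _ in range(steps):
                q, p = q + dt * p, p - dt * q
            return q, p

        E = lambda q, p: 0.5 * (q * q + p * p)
        for name, step in [("leapfrog", leapfrog), ("euler", euler)]:
            q, p = step(1.0, 0.0, 0.01, 100_000)
            print(name, f"relative energy error: {abs(E(q, p) - 0.5) / 0.5:.2e}")
        # leapfrog's energy error stays bounded; Euler's grows steadily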

  9. DORIS system and integrity survey

    Science.gov (United States)

    Jayles, C.; Chauveau, J. P.; Didelot, F.; Auriol, A.; Tourain, C.

    2016-12-01

    DORIS, like other space geodesy techniques (SLR, VLBI, GPS), has progressed regularly to meet the ever-increasing needs of the scientific community in oceanography, geodesy and geophysics. Over the past 10 years, a particular emphasis has been placed on integrity monitoring of the system, which has contributed to the enhancement of the overall availability and quality of DORIS data products. A high level of monitoring is now provided by centralized control of the whole system, including the global network of beacons and the onboard instruments, which perform a constant end-to-end survey. At the first signs of any unusual behavior, a dedicated team is activated with well-established tools to investigate, anticipate and contain the impact of any potential failures. The procedure has increased the availability of DORIS beacons to 90%. The core aim of this article is to demonstrate that DORIS has implemented high-level integrity control of its data. Embedded in the DORIS receiver, DIODE (DORIS Immediate Orbit Determination) is real-time on-board orbit determination software. Its accuracy has also improved dramatically when compared to Precise Orbit Ephemeris (P.O.E.), down to 2.7 cm RMS on Jason-2, 3.0 cm on Saral and 3.3 cm on CryoSat-2. Specific quality indices were derived from the DIODE-based Kalman filters and are used to monitor network and system performance. This paper covers the definition of these indices and how the system's reliability and its responsiveness to incidents or anomalies are improved. From these indices, we have provided detailed diagnostic information about the DORIS system, which is available in real time on board each DORIS satellite. Using these capabilities, we have developed real-time functions that give an immediate diagnosis of the status of key components of the DORIS system. The near-real-time navigation system was improved and can distinguish and handle both satellite events and beacon anomalies. The next missions

  10. miRFANs: an integrated database for Arabidopsis thaliana microRNA function annotations

    Directory of Open Access Journals (Sweden)

    Liu Hui

    2012-05-01

    Full Text Available Abstract Background Plant microRNAs (miRNAs) have been revealed to play important roles in developmental control, hormone secretion, cell differentiation and proliferation, and response to environmental stresses. However, our knowledge about the regulatory mechanisms and functions of miRNAs remains very limited. The main difficulties lie in two aspects. On one hand, the number of experimentally validated miRNA targets is very limited and the predicted targets often include many false positives, which constrains us in revealing the functions of miRNAs. On the other hand, the regulation of miRNAs is known to be spatio-temporally specific, which increases the difficulty of understanding their regulatory mechanisms. Description In this paper we present miRFANs, an online database for Arabidopsis thaliana miRNA function annotations. We integrated various types of datasets, including miRNA-target interactions, transcription factors (TFs) and their targets, expression profiles, genomic annotations and pathways, into a comprehensive database, and developed various statistical and mining tools, together with a user-friendly web interface. For each miRNA target predicted by psRNATarget, TargetAlign and UEA target-finder, or recorded in TarBase and miRTarBase, the effect of its up-regulated or down-regulated miRNA on the expression level of the target gene is evaluated by carrying out differential expression analysis of both miRNA and target expression profiles acquired under the same (or similar) experimental condition and in the same tissue. Moreover, each miRNA target is associated with gene ontology and pathway terms, together with the target site information and regulating miRNAs predicted by different computational methods. These associated terms may provide valuable insight into the functions of each miRNA. Conclusion First, a comprehensive collection of miRNA targets for Arabidopsis thaliana provides valuable information about the functions of

  11. Integrated system for seismic evaluations

    Energy Technology Data Exchange (ETDEWEB)

    Xu, J.; Philippacopoulos, A.J.; Miller, C.A.; Costantino, C.J.; Graves, H.

    1989-01-01

    This paper describes the various features of the Seismic Module of the CARES system (Computer Analysis for Rapid Evaluation of Structures). This system was developed by Brookhaven National Laboratory (BNL) for the US Nuclear Regulatory Commission to perform rapid evaluations of structural behavior and capability of nuclear power plant facilities. CARES is structured in a modular format; each module performs a specific type of analysis, i.e., static or dynamic, linear or nonlinear, etc. This paper describes the features of the Seismic Module in particular. The development of the Seismic Module of the CARES system is based on an approach which incorporates all major aspects of seismic analysis currently employed by the industry into an integrated system that allows interactive computation of structural response to seismic motions. The code operates on a PC system and has multi-graphics capabilities. It has been designed with user-friendly features and allows interactive manipulation of the various analysis phases during the seismic design process. The capabilities of the Seismic Module include (a) generation of artificial time histories compatible with given design ground response spectra, (b) development of Power Spectral Density (PSD) functions associated with the seismic input, (c) deconvolution analysis using vertically propagating shear waves through a given soil profile, and (d) development of in-structure response spectra or corresponding PSDs. It should be pointed out that these types of analyses can also be performed individually by using available computer codes such as FLUSH, SAP, etc. The uniqueness of CARES, however, lies in its ability to perform all required phases of the seismic analysis in an integrated manner. 5 refs., 6 figs.

  12. Robust integral stabilization of regular linear systems

    Institute of Scientific and Technical Information of China (English)

    XU Chengzheng; FENG Dexing

    2004-01-01

    We consider regular systems with control and observation. We prove a necessary and sufficient condition for an exponentially stable regular system to admit an integral stabilizing controller. We also propose some robust integral controllers when they exist.

  13. Quantization of noncommutative completely integrable Hamiltonian systems

    Energy Technology Data Exchange (ETDEWEB)

    Giachetta, G. [Department of Mathematics and Informatics, University of Camerino, 62032 Camerino (Italy); Mangiarotti, L. [Department of Mathematics and Informatics, University of Camerino, 62032 Camerino (Italy); Sardanashvily, G. [Department of Theoretical Physics, Moscow State University, 117234 Moscow (Russian Federation)]. E-mail: gennadi.sardanashvily@unicam.it

    2007-02-26

    Integrals of motion of a Hamiltonian system need not commute. The classical Mishchenko-Fomenko theorem enables one to quantize a noncommutative completely integrable Hamiltonian system around its invariant submanifold as the Abelian one.

  14. The Energy Science and Technology Database on a local library system: A case study at the Los Alamos National Research Library

    Energy Technology Data Exchange (ETDEWEB)

    Holtkamp, I.S.

    1994-10-01

    This paper presents an overview of efforts at Los Alamos National Laboratory to acquire and mount the Energy Science and Technology Database (EDB) as a citation database on the Research Library's Geac Advance system. The rationale for undertaking this project and expected benefits are explained. Significant issues explored are loading non-USMARC records into a MARC-based library system, the use of EDB records to replace or supplement in-house cataloging of technical reports, the impact of different cataloging standards and database size on searching and retrieval, and how integrating an external database into the library's online catalog may affect staffing and workflow.

  15. MSblender: A probabilistic approach for integrating peptide identifications from multiple database search engines.

    Science.gov (United States)

    Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I; Marcotte, Edward M

    2011-07-01

    Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers from limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high-scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for every possible PSM and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for most proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improved sensitivity in differential expression analyses.
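
    The false discovery rate control described here can be illustrated independently of MSblender's scoring model. Given posterior probabilities that each PSM is correct (from any engine or combination), the expected FDR of a score-ranked list is the running mean of the posterior error probabilities of the PSMs accepted so far. A generic sketch, not MSblender's code:

        def fdr_curve(psm_probs):
            """Expected FDR at each cutoff for PSMs ranked by probability."""
            ranked = sorted(psm_probs, reverse=True)
            fdrs, false_hits = [], 0.0
            for k, p in enumerate(ranked, start=1):
                false_hits += 1.0 - p          # posterior error probability
                fdrs.append(false_hits / k)    # expected false fraction so far
            return ranked, fdrs

        probs = [0.99, 0.97, 0.95, 0.80, 0.60, 0.30]
        _, fdrs = fdr_curve(probs)
        print([round(f, 3) for f in fdrs])  # FDR rises as weaker PSMs are accepted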

  16. A Multi-Index Integrated Change detection method for updating the National Land Cover Database

    Science.gov (United States)

    Jin, Suming; Yang, Limin; Xian, George Z.; Danielson, Patrick; Homer, Collin G.

    2010-01-01

    Land cover change is typically captured by comparing two or more dates of imagery and associating spectral change with true thematic change. A new change detection method, Multi-Index Integrated Change (MIIC), has been developed to capture a full range of land cover disturbance patterns for updating the National Land Cover Database (NLCD). Specific indices typically specialize in identifying only certain types of disturbances; for example, the Normalized Burn Ratio (NBR) has been widely used for monitoring fire disturbance. Recognizing the potential complementary nature of multiple indices, we integrated four indices into one model to more accurately detect true change between two NLCD time periods. The four indices are NBR, Normalized Difference Vegetation Index (NDVI), Change Vector (CV), and a newly developed index called the Relative Change Vector (RCV). The model is designed to provide both change location and change direction (e.g. biomass increase or biomass decrease). The integrated change model has been tested on five image pairs from different regions exhibiting a variety of disturbance types. Compared with a simple change vector method, MIIC can better capture the desired change without introducing additional commission errors. The model is particularly accurate at detecting forest disturbances, such as forest harvest, forest fire, and forest regeneration. Agreement between the initial change map areas derived from MIIC and the retained final land cover type change areas will be showcased from the pilot test sites.
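
    Two of the four indices have standard band-ratio definitions: NDVI from the red and near-infrared bands, NBR from the near-infrared and shortwave-infrared bands. The sketch below computes them and a simple two-date difference; the CV and RCV formulations are specific to the paper and are not reproduced here:

        import numpy as np

        def ndvi(nir, red):
            """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
            return (nir - red) / (nir + red + 1e-12)

        def nbr(nir, swir):
            """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
            return (nir - swir) / (nir + swir + 1e-12)

        # Differencing an index between two dates highlights disturbance; a strong
        # NBR decrease, for example, is characteristic of fire.
        nir_t1, swir_t1 = np.array([0.5]), np.array([0.2])
        nir_t2, swir_t2 = np.array([0.2]), np.array([0.4])
        d_nbr = nbr(nir_t1, swir_t1) - nbr(nir_t2, swir_t2)
        print(d_nbr)   # positive dNBR indicates biomass loss such as burning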

  17. TheSNPpit—A High Performance Database System for Managing Large Scale SNP Data

    Science.gov (United States)

    Groeneveld, Eildert; Lichtenberg, Helmut

    2016-01-01

    The fast development of high-throughput genotyping has opened up new possibilities in genetics while at the same time producing considerable data handling issues. TheSNPpit is a database system for managing large amounts of multi-panel SNP genotype data from any genotyping platform. With an increasing rate of genotyping in areas like animal and plant breeding as well as human genetics, hundreds of thousands of individuals already need to be managed. While the common database design with one row per SNP can manage hundreds of samples, this approach becomes progressively slower as the size of the data sets increases, until it finally fails completely once tens or even hundreds of thousands of individuals need to be managed. TheSNPpit implements three ideas to accommodate such large-scale experiments: highly compressed vector storage in a relational database, set-based data manipulation, and a very fast export written in C, with Perl as the base for the framework and PostgreSQL as the database backend. Its novel subset system allows the creation of named subsets based on the filtering of SNPs (by major allele frequency, no-calls, and chromosomes) and manually applied sample and SNP lists at negligible storage costs, thus avoiding the issue of proliferating file copies. The named subsets are exported for downstream analysis. PLINK ped and map files are processed as inputs and outputs. TheSNPpit allows management of different panel sizes in the same population of individuals when higher-density panels replace previous lower-density versions, as occurs in animal and plant breeding programs. A completely generalized procedure allows storage of phenotypes. TheSNPpit occupies only 2 bits for storing a single SNP, implying a capacity of 4 million SNPs per 1 MB of disk storage. To investigate performance scaling, a database with more than 18.5 million samples has been created with 3.4 trillion SNPs from 12 panels ranging from 1000 through 20 million SNPs, resulting in a
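
    The 2-bits-per-SNP figure implies four codes per genotype call (for example the two homozygotes, the heterozygote, and no-call), which matches the quoted 4 million SNPs per MB. The packing itself can be sketched as below; this is an illustrative encoding, not TheSNPpit's internal format:

        # Pack genotype calls into 2 bits each: 0/1/2 = allele counts, 3 = no-call.
        def pack(genotypes):
            buf = bytearray((len(genotypes) + 3) // 4)
            for i, g in enumerate(genotypes):
                buf[i // 4] |= (g & 0b11) << (2 * (i % 4))
            return bytes(buf)

        def unpack(buf, n):
            return [(buf[i // 4] >> (2 * (i % 4))) & 0b11 for i in range(n)]

        calls = [0, 1, 2, 3, 2, 2, 0, 1, 1]
        packed = pack(calls)
        assert unpack(packed, len(calls)) == calls
        print(len(calls), "SNPs in", len(packed), "bytes")   # 4 SNPs per byte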

  19. Geometric transitions and integrable systems

    Energy Technology Data Exchange (ETDEWEB)

    Diaconescu, D.-E. [New High Energy Theory Center, Rutgers University, 126 Frelinghuysen Road, Piscataway, NJ 08854 (United States)]. E-mail: duiliu@physics.rutgers.edu; Dijkgraaf, R. [Institute for Theoretical Physics and KdV Institute for Mathematics, University of Amsterdam, Valckenierstraat 65, 1018 XE Amsterdam (Netherlands); Donagi, R. [Department of Mathematics, University of Pennsylvania, David Rittenhouse Laboratory, 209 South 33rd Street, Philadelphia, PA 19104-6395 (United States); Hofman, C. [The Weizmann Institute for Science, Department of Particle Physics, Herzl Street 2, 76100 Rehovot (Israel); Pantev, T. [Department of Mathematics, University of Pennsylvania, David Rittenhouse Laboratory, 209 South 33rd Street, Philadelphia, PA 19104-6395 (United States)

    2006-09-25

    We consider B-model large N duality for a new class of noncompact Calabi-Yau spaces modeled on the neighborhood of a ruled surface in a Calabi-Yau threefold. The closed string side of the transition is governed at genus zero by an A{sub 1} Hitchin integrable system on a genus g Riemann surface {sigma}. The open string side is described by a holomorphic Chern-Simons theory which reduces to a generalized matrix model in which the eigenvalues lie on the compact Riemann surface {sigma}. We show that the large N planar limit of the generalized matrix model is governed by the same A{sub 1} Hitchin system therefore proving genus zero large N duality for this class of transitions.

  20. Geometric transitions and integrable systems

    CERN Document Server

    Diaconescu, D E; Dijkgraaf, R; Hofman, C M; Pantev, T; Diaconescu, Duiliu-Emanuel; Donagi, Ron; Dijkgraaf, Robbert; Hofman, Christiaan; Pantev, Tony

    2005-01-01

    We consider {\\bf B}-model large $N$ duality for a new class of noncompact Calabi-Yau spaces modeled on the neighborhood of a ruled surface in a Calabi-Yau threefold. The closed string side of the transition is governed at genus zero by an $A_1$ Hitchin integrable system on a genus $g$ Riemann surface $\\Sigma$. The open string side is described by a holomorphic Chern-Simons theory which reduces to a generalized matrix model in which the eigenvalues lie on the compact Riemann surface $\\Sigma$. We show that the large $N$ planar limit of the generalized matrix model is governed by the same $A_1$ Hitchin system therefore proving genus zero large $N$ duality for this class of transitions.

  1. Geometric transitions and integrable systems

    Science.gov (United States)

    Diaconescu, D.-E.; Dijkgraaf, R.; Donagi, R.; Hofman, C.; Pantev, T.

    2006-09-01

    We consider B-model large N duality for a new class of noncompact Calabi-Yau spaces modeled on the neighborhood of a ruled surface in a Calabi-Yau threefold. The closed string side of the transition is governed at genus zero by an A_1 Hitchin integrable system on a genus g Riemann surface Σ. The open string side is described by a holomorphic Chern-Simons theory which reduces to a generalized matrix model in which the eigenvalues lie on the compact Riemann surface Σ. We show that the large N planar limit of the generalized matrix model is governed by the same A_1 Hitchin system, therefore proving genus zero large N duality for this class of transitions.

  2. SPAN: A Network Providing Integrated, End-to-End, Sensor-to-Database Solutions for Environmental Sciences

    Science.gov (United States)

    Benzel, T.; Cho, Y. H.; Deschon, A.; Gullapalli, S.; Silva, F.

    2009-12-01

    In recent years, advances in sensor network technology have shown great promise to revolutionize environmental data collection. Still, widespread adoption of these systems by domain experts has been lacking, and they have remained the purview of the engineers who design them. While there are many data logging options for basic data collection in the field currently, scientists are often required to visit the deployment sites to retrieve their data and manually import it into spreadsheets. Some advanced commercial software systems do allow scientists to collect data remotely, but most of these systems only allow point-to-point access and require proprietary hardware. Furthermore, these commercial solutions preclude the use of sensors from other manufacturers or integration with internet-based database repositories and compute engines. Therefore, scientists often must download and manually reformat their data before uploading it to the repositories if they wish to share their data. We present an open-source, low-cost, extensible, turnkey solution called Sensor Processing and Acquisition Network (SPAN) which provides a robust and flexible sensor network service. At the deployment site, SPAN leverages low-power generic embedded processors to integrate a variety of commercially available sensor hardware into the network of environmental observation systems. By bringing intelligence close to the sensed phenomena, we can remotely control configuration and re-use, establish rules to trigger sensor activity, manage power requirements, and control the two-way flow of sensed data as well as control information to the sensors. Key features of our design include (1) adoption of a hardware-agnostic architecture: our solutions are compatible with several programmable platforms, sensor systems, communication devices and protocols; (2) information standardization: our system supports several popular communication protocols and data formats; and (3) extensible data support: our

  3. The Multi-level Recovery of Main-memory Real-time Database Systems with ECBH

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    Storing the whole database in main memory is a common way to process real-time transactions in real-time database systems. The recovery mechanism of Main-memory Real-time Database Systems (MMRTDBS) should reflect the characteristics of main-memory and real-time databases, because their structures are quite different from those of other conventional database systems. In this paper, therefore, we propose a multi-level recovery mechanism for main-memory real-time database systems with Extendible Chained Bucket Hashing (ECBH). Owing to the presence of real-time data in real-time systems, the recovery mechanism must take it into account as well. According to our performance tests, this mechanism can improve transaction concurrency and reduce the transaction deadline miss rate.

  4. Integrated roof wind energy system

    Directory of Open Access Journals (Sweden)

    Moonen S.P.G.

    2012-10-01

    Full Text Available Wind is an attractive renewable source of energy. Recent innovations in research and design have narrowed to a few alternatives with limited impact on residential construction. Cost-effective solutions have been found at larger scale, but storage and delivery of energy to the actual location where it is used remain a critical issue. The Integrated Roof Wind Energy System is designed to overcome the current issues of urban and larger-scale renewable energy systems. The system is built up from an axial array of skew-shaped funnels that make use of the Venturi effect to accelerate the wind flow. This inventive use of shape and geometry leads to a converging air-capturing inlet that creates high wind mass flow and velocity toward a vertical-axis wind turbine in the top of the roof, for generation of a relatively high amount of energy. The methods used in this overview of studies include an array of tools from analytical modelling, PIV wind tunnel testing, and CFD simulation studies. The results define the main design parameters for an efficient system and show the potential for the generation of high amounts of renewable energy with a novel and effective system suited to the built environment.

  5. Integrated roof wind energy system

    Science.gov (United States)

    Suma, A. B.; Ferraro, R. M.; Dano, B.; Moonen, S. P. G.

    2012-10-01

    Wind is an attractive renewable source of energy. Recent innovations in research and design have narrowed to a few alternatives with limited impact on residential construction. Cost-effective solutions have been found at larger scale, but storage and delivery of energy to the actual location where it is used remain a critical issue. The Integrated Roof Wind Energy System is designed to overcome the current issues of urban and larger-scale renewable energy systems. The system is built up from an axial array of skew-shaped funnels that make use of the Venturi effect to accelerate the wind flow. This inventive use of shape and geometry leads to a converging air-capturing inlet that creates high wind mass flow and velocity toward a vertical-axis wind turbine in the top of the roof, for generation of a relatively high amount of energy. The methods used in this overview of studies include an array of tools from analytical modelling, PIV wind tunnel testing, and CFD simulation studies. The results define the main design parameters for an efficient system and show the potential for the generation of high amounts of renewable energy with a novel and effective system suited to the built environment.
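
    The energy stake of accelerating the flow can be made concrete with the standard wind-power relation P = (1/2) rho A v^3 Cp: because extractable power grows with the cube of wind speed, even a modest Venturi speed-up multiplies the yield. The speed-up factor k and the other numbers below are hypothetical, not values from these studies:

        RHO = 1.225   # air density, kg/m^3
        CP = 0.3      # assumed turbine power coefficient (Betz limit ~0.593)

        def wind_power(v, area, cp=CP):
            """Power extracted from wind of speed v through swept area `area`."""
            return 0.5 * RHO * area * v**3 * cp

        v, area, k = 5.0, 2.0, 1.4       # ambient m/s, rotor m^2, Venturi speed-up
        gain = wind_power(k * v, area) / wind_power(v, area)
        print(f"speed-up x{k} gives power gain x{gain:.2f}")   # k^3 = 2.74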

  6. 76 FR 48159 - Integrated System Power Rates

    Science.gov (United States)

    2011-08-08

    ... Southwestern Power Administration Integrated System Power Rates AGENCY: Southwestern Power Administration, DOE... facilities. The Administrator has developed proposed Integrated System rates, which are supported by a rate... 24 projects are repaid via revenues received under the Integrated System rates, as are those...

  7. Integration of Remotely Sensed Data Into Geospatial Reference Information Databases. UN-GGIM National Approach

    Science.gov (United States)

    Arozarena, A.; Villa, G.; Valcárcel, N.; Pérez, B.

    2016-06-01

    Remote sensing satellites, together with aerial and terrestrial platforms (mobile and fixed), nowadays produce huge amounts of data coming from a wide variety of sensors. These datasets serve as the main data sources for the extraction of Geospatial Reference Information (GRI), constituting the "skeleton" of any Spatial Data Infrastructure (SDI). Since very different situations can be found around the world in terms of geographic information production and management, the generation of global GRI datasets seems extremely challenging. Remotely sensed data, due to its wide availability, is able to provide fundamental sources for any production or management system present in different countries. After several automatic and semiautomatic processes including ancillary data, the extracted geospatial information is ready to become part of the GRI databases. In order to optimize these data flows for the production of high-quality geospatial information and to promote its use to address global challenges, several initiatives at national, continental and global levels have been put in place, such as the European INSPIRE initiative and Copernicus Programme, and global initiatives such as the Group on Earth Observation/Global Earth Observation System of Systems (GEO/GEOSS) and United Nations Global Geospatial Information Management (UN-GGIM). These workflows are established mainly by public organizations, with adequate institutional arrangements at national, regional or global levels. Other initiatives, such as Volunteered Geographic Information (VGI), may on the other hand contribute to keeping the GRI databases updated. Remotely sensed data hence becomes one of the main pillars underpinning the establishment of a global SDI, as those datasets will be used by public agencies or institutions as well as by volunteers to extract the required spatial information that in turn will feed the GRI databases. This paper intends to provide an example of how institutional

  8. COMBINED DATABASE OF THESES - CONSTITUENT OF THE INTEGRATED INFORMATION RESOURCE

    Directory of Open Access Journals (Sweden)

    Svitlana H. Kovalenko

    2013-12-01

    Full Text Available The article focuses on the problem of creating a combined bibliographic database of theses on education, pedagogy and psychology at the V. Sukhomlynskyi State Scientific and Pedagogical Library of Ukraine. The relevance of the article is determined by the significance of the information contained in dissertations for scientists, post-graduate students, teachers and others. The aim of the article is to highlight the procedure of forming an integrated branch information resource (IBIR) in electronic form and making it manifold and accessible to the reader. The prospects for work on creating the IBIR are to involve all pedagogical higher educational establishments of Ukraine, institutes of post-graduate studies, and all libraries of the scientific institutions belonging to the National Academy of Pedagogical Science of Ukraine in the mentioned project.

  9. Checkpointing and Recovery in Distributed and Database Systems

    Science.gov (United States)

    Wu, Jiang

    2011-01-01

    A transaction-consistent global checkpoint of a database records a state of the database which reflects the effect of only completed transactions and not the results of any partially executed transactions. This thesis establishes the necessary and sufficient conditions for a checkpoint of a data item (or the checkpoints of a set of data items) to…

  10. Intelligent systems technology infrastructure for integrated systems

    Science.gov (United States)

    Lum, Henry

    1991-01-01

    A system infrastructure must be properly designed and integrated from the conceptual development phase to accommodate evolutionary intelligent technologies. Several technology development activities were identified that may have application to rendezvous and capture systems. Optical correlators in conjunction with fuzzy logic control might be used for the identification, tracking, and capture of either cooperative or non-cooperative targets without the intensive computational requirements associated with vision processing. A hybrid digital/analog system was developed and tested with a robotic arm. An aircraft refueling application demonstration is planned within two years. Initially this demonstration will be ground based with a follow-on air based demonstration. System dependability measurement and modeling techniques are being developed for fault management applications. This involves usage of incremental solution/evaluation techniques and modularized systems to facilitate reuse and to take advantage of natural partitions in system models. Though not yet commercially available and currently subject to accuracy limitations, technology is being developed to perform optical matrix operations to enhance computational speed. Optical terrain recognition using camera image sequencing processed with optical correlators is being developed to determine position and velocity in support of lander guidance. The system is planned for testing in conjunction with Dryden Flight Research Facility. Advanced architecture technology is defining open architecture design constraints, test bed concepts (processors, multiple hardware/software and multi-dimensional user support, knowledge/tool sharing infrastructure), and software engineering interface issues.

  11. Development of human protein reference database as an initial platform for approaching systems biology in humans

    DEFF Research Database (Denmark)

    Peri, Suraj; Navarro, J Daniel; Amanchy, Ramars

    2003-01-01

    Human Protein Reference Database (HPRD) is an object database that integrates a wealth of information relevant to the function of human proteins in health and disease. Data pertaining to thousands of protein-protein interactions, posttranslational modifications, enzyme/substrate relationships, di...

  12. The Erasmus insurance case and a related questionnaire for distributed database management systems

    NARCIS (Netherlands)

    S.C. van der Made-Potuijt

    1990-01-01

    This is the third report concerning transaction management in the database environment. In the first report the role of the transaction manager in protecting the integrity of a database has been studied [van der Made-Potuijt 1989]. In the second report a model has been given for a transa

  13. Design of Student Information Management Database Application System for Office and Departmental Target Responsibility System

    Science.gov (United States)

    Zhou, Hui

    Carrying out an office and departmental target responsibility system is an inevitable outcome of higher education reform, and statistical processing of student information is an important part of student performance review within it. On the basis of an analysis of student evaluation, a student information management database application system is designed in this paper using relational database management system software. To implement the student information management function, the functional requirements, overall structure, data sheets and fields, data sheet associations and software code are designed in detail.
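
    A minimal sketch of the kind of relational design described, using SQLite for self-containment; the tables, fields and associations below are illustrative guesses, not the paper's actual schema:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE student (
            id   INTEGER PRIMARY KEY,
            name TEXT NOT NULL,
            dept TEXT NOT NULL
        );
        CREATE TABLE evaluation (
            student_id INTEGER REFERENCES student(id),  -- data sheet association
            item       TEXT NOT NULL,                   -- evaluation item
            score      REAL NOT NULL
        );
        """)
        conn.executemany("INSERT INTO student VALUES (?, ?, ?)",
                         [(1, "Li", "Physics"), (2, "Wang", "Physics")])
        conn.executemany("INSERT INTO evaluation VALUES (?, ?, ?)",
                         [(1, "coursework", 88), (1, "lab", 92),
                          (2, "coursework", 75)])

        # Departmental statistics for the target responsibility review:
        for dept, avg in conn.execute(
                """SELECT s.dept, AVG(e.score) FROM student s
                   JOIN evaluation e ON e.student_id = s.id GROUP BY s.dept"""):
            print(dept, round(avg, 1))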

  14. Integrated Taxonomic Information System (ITIS)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — ITIS is an easily accessible database with reliable information on species names and their hierarchical classification that is publicly accessible with unlimited...

  15. Systems Integration Challenges for a National Space Launch System

    Science.gov (United States)

    May, Todd A.

    2011-01-01

    System Integration was refined through the complexity and early failures experienced in rocket flight. System Integration encompasses many different viewpoints of the system development and must ensure consistency in development and operations activities. Human Space Flight tends toward large, complex systems. Understanding the system's operational and use context is the guiding principle for System Integration: (1) sizeable costs can be driven into systems by not fully understanding context; (2) adhering to the system context throughout the system's life cycle is essential to maintaining efficient System Integration. System Integration exists within the System Architecture. Beautiful systems are simple in use and operation; block upgrades facilitate manageable steps in functionality evolution. Effective System Integration requires a stable system concept. Communication is essential to system simplicity

  16. ParaLite: A Parallel Database System for Data-Intensive Workflows

    National Research Council Canada - National Science Library

    CHEN, Ting; TAURA, Kenjiro

    2014-01-01

    ...) and collective queries. UDX facilitates the description of workflows by enabling seamless integration of external executables into SQL statements without any effort to write programs conforming to strict specifications of database...

  17. A Conceptual Model and Database to Integrate Data and Project Management

    Science.gov (United States)

    Guarinello, M. L.; Edsall, R.; Helbling, J.; Evaldt, E.; Glenn, N. F.; Delparte, D.; Sheneman, L.; Schumaker, R.

    2015-12-01

    Data management is critically foundational to doing effective science in our data-intensive research era and, done well, can enhance collaboration, increase the value of research data, and support requirements by funding agencies to make scientific data and other research products available through publicly accessible online repositories. However, there are few examples (but see the Long-term Ecological Research Network Data Portal) of these data being provided in such a manner that allows exploration within the context of the research process: What specific research questions do these data seek to answer? What data were used to answer these questions? What data would have been helpful to answer these questions but were not available? We propose an agile conceptual model and database design, as well as example results, that integrate data management with project management, not only to maximize the value of research data products but to enhance collaboration during the project and the process of project management itself. In our project, which we call 'Data Map,' we used agile principles by adopting a user-focused approach and by designing our database to be simple, responsive, and expandable. We initially designed Data Map for the Idaho EPSCoR project "Managing Idaho's Landscapes for Ecosystem Services (MILES)" (see https://www.idahoecosystems.org//) and will present example results for this work. We consulted with our primary users (project managers, data managers, and researchers) to design the Data Map. Results will be useful to project managers and to funding agencies reviewing progress because they will readily provide answers to the questions "For which research projects/questions are data available and/or being generated by MILES researchers?" and "Which research projects/questions are associated with each of the 3 primary questions from the MILES proposal?" To be responsive to the needs of the project, we chose to streamline our design for the prototype

  18. Integrating timescales with time-transfer functions: a practical approach for an INTIMATE database

    Science.gov (United States)

    Bronk Ramsey, Christopher; Albert, Paul; Blockley, Simon; Hardiman, Mark; Lane, Christine; Macleod, Alison; Matthews, Ian P.; Muscheler, Raimund; Palmer, Adrian; Staff, Richard A.

    2014-12-01

    The purpose of the INTIMATE project is to integrate palaeo-climate information from terrestrial, ice and marine records so that the timing of environmental response to climate forcing can be compared in both space and time. One of the key difficulties in doing this is the range of different methods of dating that can be used across different disciplines. For this reason, one of the main outputs of INTIMATE has been to use an event-stratigraphic approach which enables researchers to co-register synchronous events (such as the deposition of tephra from major volcanic eruptions) in different archives (Blockley et al., 2012). However, this only partly solves the problem, because it gives information only at particular short intervals where such information is present. Between these points the ability to compare different records is necessarily less precise chronologically. What is needed therefore is a way to quantify the uncertainties in the correlations between different records, even if they are dated by different methods, and make maximum use of the information available that links different records. This paper outlines the design of a database that is intended to provide integration of timescales and associated environmental proxy information. The database allows for the fact that all timescales have their own limitations, which should be quantified in terms of the uncertainties quoted. It also makes use of the fact that each timescale has strengths in terms of describing the data directly associated with it. For this reason the approach taken allows users to look at data on any timescale that can in some way be related to the data of interest, rather than specifying a specific timescale or timescales which should always be used. The information going into the database is primarily: proxy information (principally from sediments and ice cores) against depth, age depth models against reference chronologies (typically IntCal or ice core), and time-transfer functions
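
    At its simplest, a time-transfer function of this kind can be approximated as an interpolated mapping between tie-points shared by two timescales. The sketch below transfers an age from one record's timescale onto a reference chronology; the tie-points are made up, and the uncertainty propagation that the INTIMATE design calls for is omitted:

        import numpy as np

        # Tie-points (e.g. shared tephra layers) with ages known on both scales.
        # Columns: age on the local record's timescale, age on the reference one.
        tie_points = np.array([
            [11_500, 11_650],
            [12_900, 13_050],
            [14_600, 14_700],
        ])   # years BP (hypothetical values)

        def transfer(age_local):
            """Map an age on the local timescale onto the reference chronology."""
            return np.interp(age_local, tie_points[:, 0], tie_points[:, 1])

        print(transfer(12_000))   # local 12,000 BP expressed on the reference scale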

  19. DSSTox EPA Integrated Risk Information System Structure-Index Locator File: SDF File and Documentation

    Science.gov (United States)

    EPA's Integrated Risk Information System (IRIS) database was developed and is maintained by EPA's Office of Research and Development, National Center for Environmental Assessment. IRIS is a database of human health effects that may result from exposure to various substances fou...

  20. INTEGRATION OF ENVIRONMENTAL MANAGEMENT SYSTEM

    Directory of Open Access Journals (Sweden)

    Tomescu Ada Mirela

    2012-07-01

    Full Text Available The relevance of management as a significant factor of business activity can be established through various management systems. These help to obtain, organise, administer, evaluate and control particulars: information, quality, environmental protection, health and safety, and various resources (time, human, finance, inventory, etc.). The complexity of present-day development forces us to think 'integrated'. The principles of sustainable development require that environmental management policies and practices are not merely sound in themselves but also integrate with all other environmental objectives, and with social and economic development objectives. Once those objectives are set, strategies are developed to pursue the aim of sustainable development. Environmental management should embrace the recent changes in the area of environmental protection, suit the recent regulations of the field, legal as well as economic, and employ management systems that meet the requirements of the contemporary model of economic development. These changes entail abandoning the conventional approach to environmental protection and replacing it with sustainable development (SD). The keys and aims of Cleaner Production (CP) are presented as implemented in various companies as a non-formalised environmental management system (EMS). This concept is suggested here as a proper model for practice where potentially environmentally harmful technologies are used, e.g. Rosia Montana. By showing the features and the power of CP, this paper aims to raise the awareness of policy-makers and the top management of diverse Romanian companies. Many companies in European countries are developing

  1. GIDL: a rule based expert system for GenBank Intelligent Data Loading into the Molecular Biodiversity Database.

    Science.gov (United States)

    Pannarale, Paolo; Catalano, Domenico; De Caro, Giorgio; Grillo, Giorgio; Leo, Pietro; Pappadà, Graziano; Rubino, Francesco; Scioscia, Gaetano; Licciulli, Flavio

    2012-03-28

    In the scientific biodiversity community, the need to build a bridge between molecular and traditional biodiversity studies is increasingly perceived. We believe that information technology can have a preeminent role in integrating the information generated by these studies with the large amount of molecular data found in public bioinformatics databases. This work is primarily aimed at building a bioinformatic infrastructure for the integration of public and private biodiversity data through the development of GIDL, an Intelligent Data Loader coupled with the Molecular Biodiversity Database. The system presented here organizes in an ontological way, and locally stores, the sequence and annotation data contained in the GenBank primary database. The GIDL architecture consists of a relational database and intelligent data loader software. The relational database schema is designed to manage biodiversity information (Molecular Biodiversity Database) and is organized in four areas: MolecularData, Experiment, Collection and Taxonomy. The MolecularData area is inspired by an established standard in Generic Model Organism Databases, the Chado relational schema. The peculiarity of Chado, and also its strength, is the adoption of an ontological schema which makes use of the Sequence Ontology. The Intelligent Data Loader (IDL) component of GIDL is an Extract, Transform and Load software able to parse data, discover hidden information in the GenBank entries and populate the Molecular Biodiversity Database. The IDL is composed of three main modules: the Parser, able to parse GenBank flat files; the Reasoner, which automatically builds CLIPS facts mapping the biological knowledge expressed by the Sequence Ontology; and the DBFiller, which translates the CLIPS facts into ordered SQL statements used to populate the database. In GIDL, Semantic Web technologies have been adopted due to their advantages in data representation, integration and processing
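
    The parse-then-load flow can be sketched with standard tools. Below, Biopython's GenBank parser stands in for GIDL's Parser stage and ordered SQL inserts for its DBFiller stage; the CLIPS-based Reasoner is omitted, and the single table is illustrative rather than the Molecular Biodiversity Database schema:

        import sqlite3
        from Bio import SeqIO   # Biopython's GenBank flat-file parser

        conn = sqlite3.connect("biodiversity.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS feature (
            accession TEXT, type TEXT, start INTEGER, end INTEGER, product TEXT)""")

        # Parser stage: read GenBank entries; DBFiller stage: ordered inserts.
        for record in SeqIO.parse("entries.gb", "genbank"):
            for feat in record.features:
                conn.execute("INSERT INTO feature VALUES (?, ?, ?, ?, ?)",
                             (record.id, feat.type,
                              int(feat.location.start), int(feat.location.end),
                              feat.qualifiers.get("product", [None])[0]))
        conn.commit()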

  2. Energy Systems Integration: Demonstrating Distributed Resource Communications

    Energy Technology Data Exchange (ETDEWEB)

    2017-01-01

    Overview fact sheet about the Electric Power Research Institute (EPRI) and Schneider Electric Integrated Network Testbed for Energy Grid Research and Technology Experimentation (INTEGRATE) project at the Energy Systems Integration Facility. INTEGRATE is part of the U.S. Department of Energy's Grid Modernization Initiative.

  3. A potential integration method for Birkhoffian system

    Institute of Scientific and Technical Information of China (English)

    Hu Chu-Le; Xie Jia-Fang

    2008-01-01

    This paper is intended to apply the potential integration method to the differential equations of the Birkhoffian system. The method is as follows: for a given Birkhoffian system, its differential equations are first rewritten as 2n first-order differential equations. Secondly, the corresponding partial differential equations are obtained by the potential integration method and the solution is expressed as a complete integral. Finally, the integral of the system is obtained.
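
    For reference, the differential equations in question are Birkhoff's equations, quoted here in their standard textbook form as background, not as a result of the paper:

        \left( \frac{\partial R_\nu}{\partial a^\mu} - \frac{\partial R_\mu}{\partial a^\nu} \right) \dot{a}^\nu
            - \frac{\partial B}{\partial a^\mu} - \frac{\partial R_\mu}{\partial t} = 0,
        \qquad \mu, \nu = 1, \dots, 2n,

    where B(t, a) is the Birkhoffian and the R_\mu(t, a) are Birkhoff's functions on the 2n-dimensional phase space; these are the 2n first-order equations referred to above.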

  4. Integrated Standardization and Systems Engineering Management

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Integrated standardization is one of the fundamental forms of modern standardization. It is the combination of system science and the content of standardization. The development of system science has provided the theoretical foundation and precondition for integrated standardization. The relevant research on integrated standardization and systems engineering illustrates that integrated standardization is an advanced method which emerged with the development of modern science and technology. Integrated st...

  5. Towards the integration of mouse databases - definition and implementation of solutions to two use-cases in mouse functional genomics.

    Science.gov (United States)

    Gruenberger, Michael; Alberts, Rudi; Smedley, Damian; Swertz, Morris; Schofield, Paul; Schughart, Klaus

    2010-01-22

    The integration of information present in many disparate biological databases represents a major challenge in biomedical research. To define the problems and needs, and to explore strategies for database integration in mouse functional genomics, we consulted the biologist user community and implemented solutions to two user-defined use-cases. We organised workshops, meetings and used a questionnaire to identify the needs of biologist database users in mouse functional genomics. As a result, two use-cases were developed that can be used to drive future designs or extensions of mouse databases. Here, we present the use-cases and describe some initial computational solutions for them. The application for the gene-centric use-case, "MUSIG-Gen" starts from a list of gene names and collects a wide range of data types from several distributed databases in a "shopping cart"-like manner. The iterative user-driven approach is a response to strongly articulated requests from users, especially those without computational biology backgrounds. The application for the phenotype-centric use-case, "MUSIG-Phen", is based on a similar concept and starting from phenotype descriptions retrieves information for associated genes. The use-cases created, and their prototype software implementations should help to better define biologists' needs for database integration and may serve as a starting point for future bioinformatics solutions aimed at end-user biologists.
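
    The "shopping cart" aggregation pattern used by MUSIG-Gen can be sketched as follows. This is a hypothetical illustration: the source names and fetch functions are assumptions, not the actual MUSIG-Gen code, and real sources would be queried over the network.

```python
# Hypothetical sketch of a "shopping cart" aggregator in the style of
# MUSIG-Gen: start from gene names and collect several data types from
# distributed databases (simulated here) into one cart per gene.

# Stand-ins for remote database queries; real sources (e.g. Ensembl,
# MGI, expression databases) would be fetched over the network.
SOURCES = {
    "sequence":   lambda gene: f"<sequence record for {gene}>",
    "phenotype":  lambda gene: f"<phenotype annotations for {gene}>",
    "expression": lambda gene: f"<expression profile for {gene}>",
}

def fill_cart(gene_names, wanted=("sequence", "phenotype", "expression")):
    """Return {gene: {data_type: payload}} for the requested data types."""
    cart = {}
    for gene in gene_names:
        cart[gene] = {}
        for data_type in wanted:
            fetch = SOURCES.get(data_type)
            if fetch is not None:
                cart[gene][data_type] = fetch(gene)
    return cart

if __name__ == "__main__":
    for gene, items in fill_cart(["Tlr4", "Foxp2"]).items():
        print(gene, "->", sorted(items))
```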

  6. Towards the integration of mouse databases - definition and implementation of solutions to two use-cases in mouse functional genomics

    Directory of Open Access Journals (Sweden)

    Schofield Paul

    2010-01-01

    Full Text Available Abstract Background The integration of information present in many disparate biological databases represents a major challenge in biomedical research. To define the problems and needs, and to explore strategies for database integration in mouse functional genomics, we consulted the biologist user community and implemented solutions to two user-defined use-cases. Results We organised workshops, meetings and used a questionnaire to identify the needs of biologist database users in mouse functional genomics. As a result, two use-cases were developed that can be used to drive future designs or extensions of mouse databases. Here, we present the use-cases and describe some initial computational solutions for them. The application for the gene-centric use-case, "MUSIG-Gen" starts from a list of gene names and collects a wide range of data types from several distributed databases in a "shopping cart"-like manner. The iterative user-driven approach is a response to strongly articulated requests from users, especially those without computational biology backgrounds. The application for the phenotype-centric use-case, "MUSIG-Phen", is based on a similar concept and starting from phenotype descriptions retrieves information for associated genes. Conclusion The use-cases created, and their prototype software implementations should help to better define biologists' needs for database integration and may serve as a starting point for future bioinformatics solutions aimed at end-user biologists.

  7. Integrative systems biology for data-driven knowledge discovery.

    Science.gov (United States)

    Greene, Casey S; Troyanskaya, Olga G

    2010-09-01

    Integrative systems biology is an approach that brings together diverse high-throughput experiments and databases to gain new insights into biological processes or systems at molecular through physiological levels. These approaches rely on diverse high-throughput experimental techniques that generate heterogeneous data by assaying varying aspects of complex biological processes. Computational approaches are necessary to provide an integrative view of these experimental results and enable data-driven knowledge discovery. Hypotheses generated from these approaches can direct definitive molecular experiments in a cost-effective manner. By using integrative systems biology approaches, we can leverage existing biological knowledge and large-scale data to improve our understanding of as yet unknown components of a system of interest and how its malfunction leads to disease.

  8. An integrated data warehouse system: development, implementation, and early outcomes.

    Science.gov (United States)

    Myers, D L; Burke, K C; Burke, J D; Culp, K S

    2000-03-01

    This paper describes a generic vision of global information flow and the development of an integrated data warehouse system, using clinical data on all patient encounters and administrative data on all operating transactions as part of an integrated health care system. This new integrated data warehouse system has been successfully used for multiple purposes, including patient care, health services research, resource utilization and feasibility studies. During 1999, core analyses included the electronic abstraction, aggregation, and analysis of data on over 400,000 patients. This approach to building a centralized data system comprised of multiple repositories efficiently meets a variety of individual and aggregate information needs, while reducing the need to create duplicate databases.

  9. Integrated solar energy system optimization

    Science.gov (United States)

    Young, S. K.

    1982-11-01

    The computer program SYSOPT, intended as a tool for optimizing the subsystem sizing, performance, and economics of integrated wind and solar energy systems, is presented. The modular structure of the methodology additionally allows simulations when the solar subsystems are combined with conventional technologies, e.g., a utility grid. Hourly energy/mass flow balances are computed for interconnection points, yielding optimized sizing and time-dependent operation of the various subsystems. The program requires meteorological data, such as insolation, diurnal and seasonal variations, and wind speed at the hub height of a wind turbine, all of which can be taken from simulations such as the TRNSYS program. Examples are provided for the optimization of a solar-powered (wind turbine and parabolic trough-Rankine generator) desalination plant, and a design analysis for a solar-powered greenhouse.
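
    The hourly energy-balance bookkeeping performed at subsystem interconnection points can be illustrated with a toy calculation. The component names and the simple dispatch rule below are assumptions made for illustration, not SYSOPT's actual algorithm.

```python
# Toy hourly energy balance at an interconnection point: renewable
# supply serves the load first, a utility grid covers any deficit,
# and surplus generation is counted separately. Illustrative only.

def hourly_balance(wind_kw, solar_kw, load_kw):
    """Return (grid_import, surplus) time series for hourly data."""
    grid_import, surplus = [], []
    for w, s, l in zip(wind_kw, solar_kw, load_kw):
        net = w + s - l                 # positive -> excess generation
        grid_import.append(max(-net, 0.0))
        surplus.append(max(net, 0.0))
    return grid_import, surplus

# Three example hours: windy night, sunny noon, calm evening.
wind  = [120.0, 40.0, 10.0]
solar = [0.0, 180.0, 5.0]
load  = [90.0, 150.0, 110.0]

imports, spill = hourly_balance(wind, solar, load)
print("grid import per hour:", imports)   # [0.0, 0.0, 95.0]
print("surplus per hour:   ", spill)      # [30.0, 70.0, 0.0]
```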

  10. Integrated delivery systems. Evolving oligopolies.

    Science.gov (United States)

    Malone, T A

    1998-01-01

    The proliferation of Integrated Delivery Systems (IDSs) in regional health care markets has resulted in the movement of these markets from a monopolistic competitive model of behavior to an oligopoly. An oligopoly is synonymous with competition among the few, as a small number of firms supply a dominant share of an industry's total output. The basic characteristics of a market with competition among the few are: (1) A mutual interdependence among the actions and behaviors of competing firms; (2) competition tends to rely on the differentiation of products; (3) significant barriers to entering the market exist; (4) the demand curve for services may be kinked; and (5) firms can benefit from economies of scale. An understanding of these characteristics is essential to the survival of IDSs as regional managed care markets mature.

  11. CancerHSP: anticancer herbs database of systems pharmacology

    Science.gov (United States)

    Tao, Weiyang; Li, Bohui; Gao, Shuo; Bai, Yaofei; Shar, Piar Ali; Zhang, Wenjuan; Guo, Zihu; Sun, Ke; Fu, Yingxue; Huang, Chao; Zheng, Chunli; Mu, Jiexin; Pei, Tianli; Wang, Yuan; Li, Yan; Wang, Yonghua

    2015-06-01

    The numerous natural products and their bioactivity potentially afford an extraordinary resource for new drug discovery and have been employed in cancer treatment. However, the underlying pharmacological mechanisms of most natural anticancer compounds remain elusive, which has become one of the major obstacles in developing novel effective anticancer agents. Here, to address these unmet needs, we developed an anticancer herbs database of systems pharmacology (CancerHSP), which records information related to anticancer herbs through manual curation. Currently, CancerHSP contains 2439 anticancer herbal medicines with 3575 anticancer ingredients. For each ingredient, the molecular structure and nine key ADME parameters are provided. Moreover, we also provide the anticancer activities of these compounds based on 492 different cancer cell lines. Further, the protein targets of the compounds are predicted by state-of-the-art methods or collected from the literature. CancerHSP will help reveal the molecular mechanisms of natural anticancer products and accelerate anticancer drug development, and in particular will facilitate future investigations on drug repositioning and drug discovery. CancerHSP is freely available on the web at http://lsp.nwsuaf.edu.cn/CancerHSP.php.
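
    As a simple illustration of how per-ingredient ADME parameters in such a database might be used downstream, here is a hypothetical screening step. The parameter names, thresholds, and records below are invented for the example and are not CancerHSP data.

```python
# Hypothetical drug-likeness filter over ingredient records carrying
# ADME parameters, loosely in the spirit of screening a systems-
# pharmacology database. Thresholds and records are invented.

ingredients = [
    {"name": "compound_A", "oral_bioavailability": 42.0, "drug_likeness": 0.31},
    {"name": "compound_B", "oral_bioavailability": 12.0, "drug_likeness": 0.05},
    {"name": "compound_C", "oral_bioavailability": 55.0, "drug_likeness": 0.22},
]

def screen(records, min_ob=30.0, min_dl=0.18):
    """Keep ingredients passing oral bioavailability (OB) and
    drug-likeness (DL) cutoffs."""
    return [r["name"] for r in records
            if r["oral_bioavailability"] >= min_ob
            and r["drug_likeness"] >= min_dl]

print(screen(ingredients))  # ['compound_A', 'compound_C']
```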

  12. Seismic Monitoring System Calibration Using Ground Truth Database

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Winston; Wagner, Robert

    2002-12-22

    Calibration of a seismic monitoring system remains a major issue due to the lack of ground truth information and uncertainties in the regional geological parameters. Rapid and accurate identification of seismic events is currently not feasible due to the absence of a fundamental framework allowing immediate access to ground truth information for many parts of the world. Precise location and high-confidence identification of regional seismic events are the primary objectives of monitoring research in seismology. In the Department of Energy Knowledge Base (KB), ground truth information addresses these objectives and will play a critical role in event relocation and identification using advanced seismic analysis tools. Maintaining the KB with systematic compilation and analysis of comprehensive sets of geophysical data from various parts of the world is vital. The goal of this project is to compile a comprehensive database for China using digital seismic waveform data that have not previously been available. These data may be analyzed along with ground truth information as it becomes available. To date, arrival times for all regional phases have been determined for all events above Mb 4.5 that occurred in China in 2000 and 2001. Travel-time models are constructed for comparison with existing models. Seismic attenuation models may be constructed to provide a better understanding of regional wave propagation in China with spatial resolution that has not previously been obtained.
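
    A travel-time model of the kind mentioned above can, in its simplest form, be fit as a straight line in distance-time coordinates. The sketch below is a toy least-squares fit with invented data, not the project's actual modeling procedure.

```python
# Toy travel-time model fit: estimate intercept and apparent velocity
# for a regional phase from (epicentral distance, travel time) pairs
# by ordinary least squares. Data values are invented.

def fit_travel_time(dist_km, t_sec):
    """Fit t = a + d / v; return (a, v)."""
    n = len(dist_km)
    mx = sum(dist_km) / n
    my = sum(t_sec) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(dist_km, t_sec))
    sxx = sum((x - mx) ** 2 for x in dist_km)
    slope = sxy / sxx            # seconds per km
    a = my - slope * mx          # intercept (s)
    return a, 1.0 / slope        # apparent velocity (km/s)

# Synthetic Pn-like arrivals: roughly 8 km/s with a 7 s intercept.
d = [200.0, 400.0, 600.0, 800.0]
t = [32.1, 57.0, 81.9, 107.2]
intercept, velocity = fit_travel_time(d, t)
print(f"intercept ~ {intercept:.1f} s, apparent velocity ~ {velocity:.1f} km/s")
```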

  13. CancerHSP: anticancer herbs database of systems pharmacology

    Science.gov (United States)

    Tao, Weiyang; Li, Bohui; Gao, Shuo; Bai, Yaofei; Shar, Piar Ali; Zhang, Wenjuan; Guo, Zihu; Sun, Ke; Fu, Yingxue; Huang, Chao; Zheng, Chunli; Mu, Jiexin; Pei, Tianli; Wang, Yuan; Li, Yan; Wang, Yonghua

    2015-01-01

    The numerous natural products and their bioactivity potentially afford an extraordinary resource for new drug discovery and have been employed in cancer treatment. However, the underlying pharmacological mechanisms of most natural anticancer compounds remain elusive, which has become one of the major obstacles in developing novel effective anticancer agents. Here, to address these unmet needs, we developed an anticancer herbs database of systems pharmacology (CancerHSP), which records information related to anticancer herbs through manual curation. Currently, CancerHSP contains 2439 anticancer herbal medicines with 3575 anticancer ingredients. For each ingredient, the molecular structure and nine key ADME parameters are provided. Moreover, we also provide the anticancer activities of these compounds based on 492 different cancer cell lines. Further, the protein targets of the compounds are predicted by state-of-the-art methods or collected from the literature. CancerHSP will help reveal the molecular mechanisms of natural anticancer products and accelerate anticancer drug development, and in particular will facilitate future investigations on drug repositioning and drug discovery. CancerHSP is freely available on the web at http://lsp.nwsuaf.edu.cn/CancerHSP.php. PMID:26074488

  14. SENSORIMOTOR INTEGRATION BY CORTICOSPINAL SYSTEM

    Directory of Open Access Journals (Sweden)

    Yunuen eMoreno-López

    2016-03-01

    Full Text Available The corticospinal (CS) tract is a complex system which targets several areas of the spinal cord. In particular, the CS descending projection plays a major role in motor command, which results from direct and indirect control of spinal cord pre-motor interneurons as well as motoneurons. In addition, this system is also involved in a selective and complex modulation of sensory feedback. Although recent evidence confirms that CS projections drive distinct segmental neural circuits that are part of the sensory and pre-motor pathways, little is known about the spinal networks engaged by the corticospinal tract, the organization of CS projections, the intracortical microcircuitry, and the synaptic interactions in the sensorimotor cortex that may encode different cortical outputs to the spinal cord. Here, the importance of integrated approaches to the study of the sensorimotor function of the CS system is stressed, in order to understand the functional compartmentalization and hierarchical organization of layer 5 output neurons, which are key elements for motor control and hence for behavior.

  15. Advanced integrated solvent extraction systems

    Energy Technology Data Exchange (ETDEWEB)

    Horwitz, E.P.; Dietz, M.L.; Leonard, R.A. [Argonne National Lab., IL (United States)

    1997-10-01

    Advanced integrated solvent extraction systems are a series of novel solvent extraction (SX) processes that will remove and recover all of the major radioisotopes from acidic-dissolved sludge or other acidic high-level wastes. The major focus of this effort during the last 2 years has been the development of a combined cesium-strontium extraction/recovery process, the Combined CSEX-SREX Process. The Combined CSEX-SREX Process relies on a mixture of a strontium-selective macrocyclic polyether and a novel cesium-selective extractant based on dibenzo 18-crown-6. The process offers several potential advantages over possible alternatives in a chemical processing scheme for high-level waste treatment. First, if the process is applied as the first step in chemical pretreatment, the radiation level for all subsequent processing steps (e.g., transuranic extraction/recovery, or TRUEX) will be significantly reduced. Thus, less costly shielding would be required. The second advantage of the Combined CSEX-SREX Process is that the recovered Cs-Sr fraction is non-transuranic, and therefore will decay to low-level waste after only a few hundred years. Finally, combining individual processes into a single process will reduce the amount of equipment required to pretreat the waste and therefore reduce the size and cost of the waste processing facility. In an ongoing collaboration with Lockheed Martin Idaho Technology Company (LMITCO), the authors have successfully tested various segments of the Advanced Integrated Solvent Extraction Systems. Eichrom Industries, Inc. (Darien, IL) synthesizes and markets the Sr extractant and can supply the Cs extractant on a limited basis. Plans are under way to perform a test of the Combined CSEX-SREX Process with real waste at LMITCO in the near future.

  16. Geometry and dynamics of integrable systems

    CERN Document Server

    Matveev, Vladimir

    2016-01-01

    Based on lectures given at an advanced course on integrable systems at the Centre de Recerca Matemàtica in Barcelona, these lecture notes address three major aspects of integrable systems: obstructions to integrability from differential Galois theory; the description of singularities of integrable systems on the basis of their relation to bi-Hamiltonian systems; and the generalization of integrable systems to non-Hamiltonian settings. All three sections were written by top experts in their respective fields. Rooted in actual problem-solving challenges in mechanics, the topic of integrable systems is currently at the crossroads of several disciplines in pure and applied mathematics, and also has important interactions with physics. The study of integrable systems also actively employs methods from differential geometry. Moreover, it is extremely important in symplectic geometry and Hamiltonian dynamics, and has strong correlations with mathematical physics, Lie theory and algebraic geometry (including mir...

  17. The relational database system of KM3NeT

    Science.gov (United States)

    Albert, Arnauld; Bozza, Cristiano

    2016-04-01

    The KM3NeT Collaboration is building a new generation of neutrino telescopes in the Mediterranean Sea. For these telescopes, a relational database has been designed and implemented for several purposes, such as the centralised management of accounts, the storage of all documentation about components, the status of the detector, and information about slow control and calibration data. It also contains information useful during the construction and the data acquisition phases. Highlights of the database schema, storage and management are discussed along with design choices that have an impact on performance. In most cases, the database is not accessed directly by applications, but via a custom-designed Web application server.
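
    The access pattern described, in which applications talk to an application-server layer rather than to the database directly, can be sketched as below. The query names, SQL templates and validation rule are invented for illustration and are not the actual KM3NeT interface.

```python
# Sketch of the mediation idea: applications never issue SQL directly;
# they call named, whitelisted queries exposed by an application-server
# layer. Query names and the SQL templates are invented.

ALLOWED_QUERIES = {
    # query name -> (SQL template, expected parameter names)
    "detector_status": ("SELECT status FROM detector WHERE id = ?", ("detector_id",)),
    "calibration":     ("SELECT * FROM calib WHERE run = ?", ("run_number",)),
}

def handle_request(query_name, params):
    """Application-server entry point: reject anything not whitelisted,
    then bind the caller's parameters to the stored SQL template."""
    if query_name not in ALLOWED_QUERIES:
        raise PermissionError(f"query '{query_name}' is not exposed")
    sql, expected = ALLOWED_QUERIES[query_name]
    if tuple(sorted(params)) != tuple(sorted(expected)):
        raise ValueError(f"expected parameters {expected}")
    return sql, tuple(params[k] for k in expected)

print(handle_request("detector_status", {"detector_id": 42}))
```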

  18. Traditional Medicine Collection Tracking System (TM-CTS): a database for ethnobotanically driven drug-discovery programs.

    Science.gov (United States)

    Harris, Eric S J; Erickson, Sean D; Tolopko, Andrew N; Cao, Shugeng; Craycroft, Jane A; Scholten, Robert; Fu, Yanling; Wang, Wenquan; Liu, Yong; Zhao, Zhongzhen; Clardy, Jon; Shamu, Caroline E; Eisenberg, David M

    2011-05-17

    Ethnobotanically driven drug-discovery programs include data related to many aspects of the preparation of botanical medicines, from initial plant collection to chemical extraction and fractionation. The Traditional Medicine Collection Tracking System (TM-CTS) was created to organize and store data of this type for an international collaborative project involving the systematic evaluation of commonly used Traditional Chinese Medicinal plants. The system was developed using domain-driven design techniques, and is implemented using Java, Hibernate, PostgreSQL, Business Intelligence and Reporting Tools (BIRT), and Apache Tomcat. The TM-CTS relational database schema contains over 70 data types, comprising over 500 data fields. The system incorporates a number of unique features that are useful in the context of ethnobotanical projects such as support for information about botanical collection, method of processing, quality tests for plants with existing pharmacopoeia standards, chemical extraction and fractionation, and historical uses of the plants. The database also accommodates data provided in multiple languages and integration with a database system built to support high throughput screening based drug discovery efforts. It is accessed via a web-based application that provides extensive, multi-format reporting capabilities. This new database system was designed to support a project evaluating the bioactivity of Chinese medicinal plants. The software used to create the database is open source, freely available, and could potentially be applied to other ethnobotanically driven natural product collection and drug-discovery programs. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
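
    A flavor of the domain-driven design approach mentioned above, in miniature: entities are modeled directly on domain concepts such as a plant collection event and a processing step. The Python dataclass sketch below is a loose analogy only (the real TM-CTS is implemented in Java with Hibernate), and every field name in it is an assumption.

```python
# Miniature domain-driven sketch: entities named after domain concepts
# (collection event, processing step), not after storage details, with
# multilingual names as a first-class field. Loose Python analogy to a
# Java/Hibernate domain model; all field names are invented.
from dataclasses import dataclass, field

@dataclass
class CollectionEvent:
    species: str
    local_names: dict = field(default_factory=dict)  # language -> name
    location: str = ""
    collector: str = ""

@dataclass
class ProcessingStep:
    method: str          # e.g. "drying", "ethanol extraction"
    notes: str = ""

@dataclass
class Sample:
    event: CollectionEvent
    steps: list = field(default_factory=list)

event = CollectionEvent("Scutellaria baicalensis",
                        local_names={"zh": "黄芩", "en": "Baikal skullcap"})
sample = Sample(event, [ProcessingStep("drying"),
                        ProcessingStep("ethanol extraction")])
print(sample.event.species, "->", [s.method for s in sample.steps])
```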

  19. Traditional Medicine Collection Tracking System (TM-CTS): A Database for Ethnobotanically-Driven Drug-Discovery Programs

    Science.gov (United States)

    Harris, Eric S. J.; Erickson, Sean D.; Tolopko, Andrew N.; Cao, Shugeng; Craycroft, Jane A.; Scholten, Robert; Fu, Yanling; Wang, Wenquan; Liu, Yong; Zhao, Zhongzhen; Clardy, Jon; Shamu, Caroline E.; Eisenberg, David M.

    2011-01-01

    Aim of the study. Ethnobotanically-driven drug-discovery programs include data related to many aspects of the preparation of botanical medicines, from initial plant collection to chemical extraction and fractionation. The Traditional Medicine-Collection Tracking System (TM-CTS) was created to organize and store data of this type for an international collaborative project involving the systematic evaluation of commonly used Traditional Chinese Medicinal plants. Materials and Methods. The system was developed using domain-driven design techniques, and is implemented using Java, Hibernate, PostgreSQL, Business Intelligence and Reporting Tools (BIRT), and Apache Tomcat. Results. The TM-CTS relational database schema contains over 70 data types, comprising over 500 data fields. The system incorporates a number of unique features that are useful in the context of ethnobotanical projects such as support for information about botanical collection, method of processing, quality tests for plants with existing pharmacopoeia standards, chemical extraction and fractionation, and historical uses of the plants. The database also accommodates data provided in multiple languages and integration with a database system built to support high throughput screening based drug discovery efforts. It is accessed via a web-based application that provides extensive, multi-format reporting capabilities. Conclusions. This new database system was designed to support a project evaluating the bioactivity of Chinese medicinal plants. The software used to create the database is open source, freely available, and could potentially be applied to other ethnobotanically-driven natural product collection and drug-discovery programs. PMID:21420479

  20. Integrated Compliance Information System (ICIS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The purpose of ICIS is to meet evolving Enforcement and Compliance business needs for EPA and State users by integrating information into a single integrated data...