WorldWideScience

Sample records for solutions link database

  1. Authority Control and Linked Bibliographic Databases.

    Science.gov (United States)

    Clack, Doris H.

    1988-01-01

    Explores issues related to bibliographic database authority control, including the nature of standards, quality control, library cooperation, centralized and decentralized databases and authority control systems, and economic considerations. The implications of authority control for linking large scale databases are discussed. (18 references)…

  2. Using Biblio-Link...For Those Other Databases.

    Science.gov (United States)

    Joy, Albert

    1989-01-01

    Sidebar describes the use of the Biblio-Link software packages to download citations from online databases and convert them into a form that can be automatically uploaded into a Pro-Cite database. An example of this procedure using DIALOG2 is given. (CLB)

  3. Disbiome database: linking the microbiome to disease.

    Science.gov (United States)

    Janssens, Yorick; Nielandt, Joachim; Bronselaer, Antoon; Debunne, Nathan; Verbeke, Frederick; Wynendaele, Evelien; Van Immerseel, Filip; Vandewynckel, Yves-Paul; De Tré, Guy; De Spiegeleer, Bart

    2018-06-04

Recent research has provided fascinating indications and evidence that a host's health is linked to its microbial inhabitants. Owing to the development of high-throughput sequencing technologies, more and more data covering microbial composition changes in different disease types are emerging. However, this information is dispersed over a wide variety of medical and biomedical disciplines. Disbiome is a database which collects and presents published microbiota-disease information in a standardized way. The diseases are classified using the MedDRA classification system and the micro-organisms are linked to their NCBI and SILVA taxonomies. Finally, each study included in the Disbiome database is assessed for its reporting quality using a standardized questionnaire. Disbiome is the first database to give a clear, concise and up-to-date overview of microbial composition differences in diseases, together with the relevant information on the published studies. The strength of this database lies in the combination of references to other databases, which enables both specific and diverse search strategies within Disbiome, and human annotation, which ensures a simple and structured presentation of the available data.

  4. Links in a distributed database: Theory and implementation

    International Nuclear Information System (INIS)

    Karonis, N.T.; Kraimer, M.R.

    1991-12-01

This document addresses the problem of extending database links across Input/Output Controller (IOC) boundaries. It lays a foundation by reviewing the current system and proposing an implementation specification designed to guide all work in this area. The document also describes an implementation that is less ambitious than our formally stated proposal, one that does not extend the reach of all database links across IOC boundaries. Specifically, it introduces an implementation of input and output links and comments on that overall implementation. We include a set of manual pages describing each of the new functions the implementation provides.

  5. High-throughput ab-initio dilute solute diffusion database.

    Science.gov (United States)

    Wu, Henry; Mayeshiba, Tam; Morgan, Dane

    2016-07-19

    We demonstrate automated generation of diffusion databases from high-throughput density functional theory (DFT) calculations. A total of more than 230 dilute solute diffusion systems in Mg, Al, Cu, Ni, Pd, and Pt host lattices have been determined using multi-frequency diffusion models. We apply a correction method for solute diffusion in alloys using experimental and simulated values of host self-diffusivity. We find good agreement with experimental solute diffusion data, obtaining a weighted activation barrier RMS error of 0.176 eV when excluding magnetic solutes in non-magnetic alloys. The compiled database is the largest collection of consistently calculated ab-initio solute diffusion data in the world.
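The quoted weighted activation-barrier RMS error is a standard weighted root-mean-square statistic; a minimal sketch of the computation (the barrier values and weights below are illustrative, not taken from the database):

```python
import math

def weighted_rmse(predicted, observed, weights):
    """Weighted root-mean-square error between predicted and observed
    activation barriers (in eV); weights need not be normalized."""
    num = sum(w * (p - o) ** 2 for p, o, w in zip(predicted, observed, weights))
    return math.sqrt(num / sum(weights))

# Illustrative barriers (eV): two solutes, each off by 0.1 eV
print(weighted_rmse([1.20, 0.85], [1.10, 0.95], [1.0, 1.0]))  # ≈ 0.1
```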

  6. Linking international trademark databases to inform IP research and policy

    Energy Technology Data Exchange (ETDEWEB)

    Petrie, P.

    2016-07-01

Researchers and policy makers are concerned with many international issues regarding trademarks, such as trademark squatting, cluttering, and dilution. Trademark application data can provide an evidence base to inform government policy regarding these issues, and can also produce quantitative insights into economic trends and brand dynamics. Currently, national trademark databases can provide insight into economic and brand dynamics at the national level, but gaining such insight at an international level is more difficult due to a lack of internationally linked trademark data. We are in the process of building a harmonised international trademark database (the “Patstat of trademarks”), in which equivalent trademarks have been identified across national offices. We have developed a pilot database that incorporates 6.4 million U.S., 1.3 million Australian, and 0.5 million New Zealand trademark applications, spanning over 100 years. The database will be extended to incorporate trademark data from other participating intellectual property (IP) offices as they join the project. Confirmed partners include the United Kingdom, WIPO, and OHIM. We will continue to expand the scope of the project, and intend to include many more IP offices from around the world. In addition to building the pilot database, we have developed a linking algorithm that identifies equivalent trademarks (TMs) across the three jurisdictions. The algorithm can currently be applied to all applications that contain TM text, i.e. around 96% of all applications. In its current state, the algorithm successfully identifies ~97% of equivalent TMs that are known to be linked a priori because they share an international registration number through the Madrid protocol. When complete, the internationally linked trademark database will be a valuable resource for researchers and policy-makers in fields such as econometrics, intellectual property rights, and brand policy. (Author)
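The Madrid-protocol validation set mentioned above can be built mechanically: applications sharing an international registration number (IRN) are known to be equivalent a priori. A sketch under assumed field names (`id` and `irn` are hypothetical, not the project's actual schema):

```python
def irn_links(apps):
    """Group trademark applications that share a Madrid international
    registration number (IRN); these a-priori equivalent sets can serve
    as ground truth when validating a text-based linking algorithm."""
    groups = {}
    for app in apps:
        if app.get("irn") is not None:
            groups.setdefault(app["irn"], []).append(app["id"])
    # Keep only IRNs that actually link two or more applications
    return {irn: ids for irn, ids in groups.items() if len(ids) > 1}

apps = [
    {"id": "US-1", "irn": 900123},
    {"id": "AU-7", "irn": 900123},
    {"id": "NZ-3", "irn": None},  # national-only filing, no IRN
]
print(irn_links(apps))  # {900123: ['US-1', 'AU-7']}
```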

7. A database for exact solutions in general relativity

    International Nuclear Information System (INIS)

    Horvath, I.; Horvath, Zs.; Lukacs, B.

    1993-07-01

The field equations of General Relativity are coupled second-order partial differential equations. Therefore no general method is known to generate solutions for prescribed initial and boundary conditions. In addition, the meaning of the particular coordinates cannot be known until the metric is found. Therefore the result must permit arbitrary coordinate transformations, i.e. most kinds of approximation methods are improper. So exact solutions are necessary, and each one is an individual product. For storage, retrieval and comparison, database handling techniques are needed. A database of 1359 articles (each cross-referenced at least once), published in 156 of the more important journals, is presented. It can be handled by dBase III Plus on IBM PCs. (author) 5 refs.; 5 tabs

  8. Biomine: predicting links between biological entities using network models of heterogeneous databases

    Directory of Open Access Journals (Sweden)

    Eronen Lauri

    2012-06-01

Full Text Available Abstract. Background: Biological databases contain large amounts of data concerning the functions and associations of genes and proteins. Integration of data from several such databases into a single repository can aid the discovery of previously unknown connections spanning multiple types of relationships and databases. Results: Biomine is a system that integrates cross-references from several biological databases into a graph model with multiple types of edges, such as protein interactions, gene-disease associations and gene ontology annotations. Edges are weighted based on their type, reliability, and informativeness. We present Biomine and evaluate its performance in link prediction, where the goal is to predict pairs of nodes that will be connected in the future, based on current data. In particular, we formulate protein interaction prediction and disease gene prioritization tasks as instances of link prediction. The predictions are based on a proximity measure computed on the integrated graph. We consider and experiment with several such measures, and perform a parameter optimization procedure where different edge types are weighted to optimize link prediction accuracy. We also propose a novel method for disease-gene prioritization, defined as finding a subset of candidate genes that cluster together in the graph. We experimentally evaluate Biomine by predicting future annotations in the source databases and prioritizing lists of putative disease genes. Conclusions: The experimental results show that Biomine has strong potential for predicting links when a set of selected candidate links is available. The predictions obtained using the entire Biomine dataset are shown to clearly outperform ones obtained using any single source of data alone, when different types of links are suitably weighted. In the gene prioritization task, an established reference set of disease-associated genes is useful, but the results show that under favorable…
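One simple instance of a proximity measure on such a weighted graph is the best-path probability, treating edge weights as independent reliabilities in (0, 1]; this is a sketch of the general idea only, not Biomine's actual measure or code:

```python
import heapq

def best_path_probability(graph, src, dst):
    """Max-product path proximity: the highest product of edge weights
    over any path from src to dst (Dijkstra-style search)."""
    best = {src: 1.0}
    heap = [(-1.0, src)]
    while heap:
        negp, node = heapq.heappop(heap)
        p = -negp
        if node == dst:
            return p
        if p < best.get(node, 0.0):
            continue  # stale heap entry
        for nbr, w in graph.get(node, []):
            q = p * w
            if q > best.get(nbr, 0.0):
                best[nbr] = q
                heapq.heappush(heap, (-q, nbr))
    return 0.0  # dst unreachable from src

# Toy heterogeneous graph: gene -> protein -> disease edges with weights
g = {
    "geneA": [("protein1", 0.9)],
    "protein1": [("diseaseX", 0.8)],
    "geneB": [("diseaseX", 0.5)],
}
print(best_path_probability(g, "geneA", "diseaseX"))  # ≈ 0.72
```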

  9. Benchmarking Linked Open Data Management Systems

    NARCIS (Netherlands)

    R. Angles Rojas (Renzo); M.-D. Pham (Minh-Duc); P.A. Boncz (Peter)

    2014-01-01

With inherent support for storing and analysing highly interconnected data, graph and RDF databases appear as natural solutions for developing Linked Open Data applications. However, current benchmarks for these database technologies do not fully attain the desirable characteristics…

  10. Reshaping Smart Businesses with Cloud Database Solutions

    Directory of Open Access Journals (Sweden)

    Bogdan NEDELCU

    2015-03-01

Full Text Available The aim of this article is to show the importance of Big Data and its growing influence on companies. It also examines how much companies are willing to invest in big data and how much they are currently gaining from it. In this big data era, there is fierce competition between companies and the technologies they use when building their strategies. There are almost no boundaries when it comes to the possibilities and facilities some databases can offer. However, the most challenging part lies in the development of efficient solutions: where and when to take the right decision, which cloud service is the most appropriate in a given scenario, and which database is suitable for the business, taking the data types into consideration. These are just a few of the aspects dealt with in the following chapters, together with examples of the most relevant cloud services (e.g. NoSQL databases) used by business leaders nowadays.

  11. Numeric databases on the kinetics of transient species in solution

    International Nuclear Information System (INIS)

    Helman, W.P.; Hug, G.L.; Carmichael, Ian; Ross, A.B.

    1988-01-01

A description is given of data compilations on the kinetics of transient species in solution. In particular, information is available for the reactions of radicals in aqueous solution and for excited states, such as singlet molecular oxygen and the excited states of metal complexes in solution. Methods for compiling and using the information in computer-readable form are also described. Emphasis is placed on making the database available for online searching. (author)

  12. Databases in Cloud - Solutions for Developing Renewable Energy Informatics Systems

    Directory of Open Access Journals (Sweden)

    Adela BARA

    2017-08-01

Full Text Available The paper presents the data model of a decision support prototype developed for generation monitoring, forecasting and advanced analysis in the renewable energy field. The solutions considered for developing this system include databases in cloud, XML integration, spatial data representation and multidimensional modeling. This material shows the advantages of Cloud databases and spatial data representation and their implementation in Oracle Database 12c. It also contains a data integration part and a multidimensional analysis. The presentation of output data is made using dashboards.

  13. The CTBTO Link to the database of the International Seismological Centre (ISC)

    Science.gov (United States)

    Bondar, I.; Storchak, D. A.; Dando, B.; Harris, J.; Di Giacomo, D.

    2011-12-01

    The CTBTO Link to the database of the International Seismological Centre (ISC) is a project to provide access to seismological data sets maintained by the ISC using specially designed interactive tools. The Link is open to National Data Centres and to the CTBTO. By means of graphical interfaces and database queries tailored to the needs of the monitoring community, the users are given access to a multitude of products. These include the ISC and ISS bulletins, covering the seismicity of the Earth since 1904; nuclear and chemical explosions; the EHB bulletin; the IASPEI Reference Event list (ground truth database); and the IDC Reviewed Event Bulletin. The searches are divided into three main categories: The Area Based Search (a spatio-temporal search based on the ISC Bulletin), the REB search (a spatio-temporal search based on specific events in the REB) and the IMS Station Based Search (a search for historical patterns in the reports of seismic stations close to a particular IMS seismic station). The outputs are HTML based web-pages with a simplified version of the ISC Bulletin showing the most relevant parameters with access to ISC, GT, EHB and REB Bulletins in IMS1.0 format for single or multiple events. The CTBTO Link offers a tool to view REB events in context within the historical seismicity, look at observations reported by non-IMS networks, and investigate station histories and residual patterns for stations registered in the International Seismographic Station Registry.
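The Area Based Search is, at its core, a spatio-temporal query over a bulletin table; a minimal sketch against a hypothetical schema (the real ISC database layout and query tooling differ):

```python
import sqlite3

# Hypothetical bulletin table: event id, epicentre, origin time (ISO 8601)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE bulletin (event_id INTEGER, lat REAL, lon REAL, otime TEXT)")
db.executemany("INSERT INTO bulletin VALUES (?, ?, ?, ?)", [
    (1, 36.1, 71.2, "2004-02-05T10:00:00"),    # Hindu Kush region (illustrative)
    (2, -20.0, -175.0, "1999-06-01T00:00:00"),  # Tonga region (illustrative)
])

# Spatio-temporal window: ISO timestamps compare correctly as strings
rows = db.execute(
    "SELECT event_id FROM bulletin "
    "WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ? "
    "AND otime BETWEEN ? AND ?",
    (30, 40, 65, 80, "2000-01-01", "2010-01-01"),
).fetchall()
print([r[0] for r in rows])  # [1]
```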

  14. Development of research activity support system. 3. Automatic link generation/maintenance on self-evolving database; Kenkyu katsudo shien system no kaihatsu. 3. Jiko zoshokugata database deno bunsho link no jido sakusei/shufuku

    Energy Technology Data Exchange (ETDEWEB)

    Shimada, T.; Futakata, A. [Central Research Institute of Electric Power Industry, Tokyo (Japan)

    1997-04-01

For a coordinated task to be accomplished in an organization, the documents, charts, and data produced by multiple workers need to be shared among them. This information-sharing setup functions more effectively when the meanings and purposes of documents are arranged in good order relative to the other documents, and when the documents are managed as a group, organically linked with each other and properly updated as the task approaches completion. In the self-evolving databases proposed so far, five types of document links representing the relations between documents are automatically generated, and the documents are managed in a unified way so that the documents yielded by coordinated work are arranged in proper order. A procedure for automatically generating document links is established on the basis of information received from the document retrieval system and a Lotus Notes application. In a self-evolving database, the document on either side of a link is apt to be lost when users move or delete documents. An automatic procedure is developed in this report which enables such document links to correctly restore themselves without loss of semantic relations. 12 refs., 11 figs., 3 tabs.

  15. Zebrafish Database: Customizable, Free, and Open-Source Solution for Facility Management.

    Science.gov (United States)

    Yakulov, Toma Antonov; Walz, Gerd

    2015-12-01

    Zebrafish Database is a web-based customizable database solution, which can be easily adapted to serve both single laboratories and facilities housing thousands of zebrafish lines. The database allows the users to keep track of details regarding the various genomic features, zebrafish lines, zebrafish batches, and their respective locations. Advanced search and reporting options are available. Unique features are the ability to upload files and images that are associated with the respective records and an integrated calendar component that supports multiple calendars and categories. Built on the basis of the Joomla content management system, the Zebrafish Database is easily extendable without the need for advanced programming skills.

  16. Solution processed organic light-emitting diodes using the plasma cross-linking technology

    Energy Technology Data Exchange (ETDEWEB)

    He, Kongduo [Department of Light Sources and Illuminating Engineering, Fudan University, Shanghai 200433 (China); Liu, Yang [Department of Light Sources and Illuminating Engineering, Fudan University, Shanghai 200433 (China); Engineering Research Center of Advanced Lighting Technology, Ministry of Education, Shanghai 200433 (China); Gong, Junyi; Zeng, Pan; Kong, Xun; Yang, Xilu; Yang, Cheng; Yu, Yan [Department of Light Sources and Illuminating Engineering, Fudan University, Shanghai 200433 (China); Liang, Rongqing [Department of Light Sources and Illuminating Engineering, Fudan University, Shanghai 200433 (China); Engineering Research Center of Advanced Lighting Technology, Ministry of Education, Shanghai 200433 (China); Ou, Qiongrong, E-mail: qrou@fudan.edu.cn [Department of Light Sources and Illuminating Engineering, Fudan University, Shanghai 200433 (China); Engineering Research Center of Advanced Lighting Technology, Ministry of Education, Shanghai 200433 (China)

    2016-09-30

Highlights: • Mixed acetylene and Ar plasma treatment makes the organic film surface cross-linked. • Plasma treatment for 30 s does not affect the performance of OLEDs. • The cross-linked surface can resist rinsing and corrosion by organic solvents. • The surface morphology is nearly unchanged after plasma treatment. • The plasma cross-linking method can realize solution-processed multilayer OLEDs. - Abstract: Solution processed multilayer organic light-emitting diodes (OLEDs) present challenges, especially regarding dissolution of the first layer during deposition of a second layer. In this work, we first demonstrated a plasma cross-linking technology to produce a solution processed OLED. The surfaces of organic films can be cross-linked after mixed acetylene and Ar plasma treatment for several tens of seconds, and then resist corrosion by organic solvents. The film thickness and surface morphology of emissive layers (EMLs) that are plasma-treated and subsequently spin-rinsed with chlorobenzene are nearly unchanged. The solution processed triple-layer OLED is successfully fabricated, and its current efficiency is 50% higher than that of the double-layer OLED. Fluorescent characteristics of the EMLs are also examined to investigate the factors influencing the efficiency of the triple-layer OLED. Plasma cross-linking technology may open up a new pathway towards fabrication of all-solution processed multilayer OLEDs and other soft electronic devices.

  17. Optimising case detection within UK electronic health records : use of multiple linked databases for detecting liver injury

    NARCIS (Netherlands)

    Wing, Kevin; Bhaskaran, Krishnan; Smeeth, Liam; van Staa, Tjeerd P|info:eu-repo/dai/nl/304827762; Klungel, Olaf H|info:eu-repo/dai/nl/181447649; Reynolds, Robert F; Douglas, Ian

    2016-01-01

    OBJECTIVES: We aimed to create a 'multidatabase' algorithm for identification of cholestatic liver injury using multiple linked UK databases, before (1) assessing the improvement in case ascertainment compared to using a single database and (2) developing a new single-database case-definition

  18. A Flexible 5G Wide Area Solution for TDD with Asymmetric Link Operation

    DEFF Research Database (Denmark)

    Pedersen, Klaus I.; Berardinelli, Gilberto; Frederiksen, Frank

    2017-01-01

optimization on a per-link basis is proposed. The solution encompasses the possibility to schedule users with different transmission time intervals to best match their service requirements and radio conditions. Due to the large downlink/uplink transmission power imbalance for each link, asymmetric link operation is proposed, where users operate with different minimum transmission times for the two link directions. This is achieved by using a highly flexible asynchronous hybrid automatic repeat request (HARQ) scheme, as well as a novel solution with in-resource control channel signaling for the scheduling…

  19. Transparent Data Encryption -- Solution for Security of Database Contents

    OpenAIRE

    Deshmukh, Dr. Anwar Pasha; Qureshi, Dr. Riyazuddin

    2013-01-01

The present study deals with Transparent Data Encryption, a technology used to solve the problem of data security. Transparent Data Encryption means encrypting databases on hard disk and on any backup media. The present-day global business environment presents numerous security threats and compliance challenges. To protect against data theft and fraud, we require security solutions that are transparent by design. Transparent Data Encryption provides transparent, standards-based secur…

  20. Linked data management

    CERN Document Server

    Hose, Katja; Schenkel, Ralf

    2014-01-01

    Linked Data Management presents techniques for querying and managing Linked Data that is available on today’s Web. The book shows how the abundance of Linked Data can serve as fertile ground for research and commercial applications. The text focuses on aspects of managing large-scale collections of Linked Data. It offers a detailed introduction to Linked Data and related standards, including the main principles distinguishing Linked Data from standard database technology. Chapters also describe how to generate links between datasets and explain the overall architecture of data integration systems based on Linked Data. A large part of the text is devoted to query processing in different setups. After presenting methods to publish relational data as Linked Data and efficient centralized processing, the book explores lookup-based, distributed, and parallel solutions. It then addresses advanced topics, such as reasoning, and discusses work related to read-write Linked Data for system interoperation. Desp...

  1. Update History of This Database - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Update history: the English archive site of PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods is opened. 2012/08/08: PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods is opened.

  2. MyMolDB: a micromolecular database solution with open source and free components.

    Science.gov (United States)

    Xia, Bing; Tai, Zheng-Fu; Gu, Yu-Cheng; Li, Bang-Jing; Ding, Li-Sheng; Zhou, Yan

    2011-10-01

Managing chemical structures is one of the important daily tasks in small laboratories. Few solutions are available on the internet, and most of them are closed-source applications. The open-source applications typically have limited capability and only basic cheminformatics functionality. In this article, we describe an open-source solution for managing chemicals in research groups, based on open-source and free components. It has a user-friendly interface with functions for chemical handling and intensive searching. MyMolDB is a micromolecular database solution that supports exact, substructure, similarity, and combined searching. The solution is implemented mainly in the scripting language Python, with a web-based interface for compound management and searching. Almost all the searches are in essence done with pure SQL on the database, exploiting the high performance of the database engine. Thus, impressive searching speed has been achieved on large data sets, because no external CPU-consuming languages are involved in the key part of the search. MyMolDB is open-source software and can be modified and/or redistributed under the GNU General Public License version 3 published by the Free Software Foundation (Free Software Foundation Inc. The GNU General Public License, Version 3, 2007. Available at: http://www.gnu.org/licenses/gpl.html). The software itself can be found at http://code.google.com/p/mymoldb/. Copyright © 2011 Wiley Periodicals, Inc.
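A pure-SQL substructure or similarity search of the kind described typically prescreens on bit fingerprints stored as integers; here is a sketch with a toy 4-bit fingerprint and a hypothetical `mol` table (not MyMolDB's actual schema):

```python
import sqlite3

def tanimoto(a, b):
    """Tanimoto similarity of two integer bit-fingerprints."""
    both = bin(a & b).count("1")
    either = bin(a | b).count("1")
    return both / either if either else 0.0

# Hypothetical two-column schema; MyMolDB's real tables are richer.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE mol (id TEXT, fp INTEGER)")
db.executemany("INSERT INTO mol VALUES (?, ?)", [("m1", 0b1011), ("m2", 0b0100)])

query_fp = 0b1010
# Substructure prescreen in pure SQL: every query bit must be set (fp & q = q)
ids = [r[0] for r in db.execute(
    "SELECT id FROM mol WHERE (fp & ?) = ?", (query_fp, query_fp))]
print(ids)                                    # ['m1']
print(round(tanimoto(0b1011, query_fp), 3))   # 0.667
```

Candidates that pass the bit prescreen would then be verified by a full structure match; the prescreen is what lets the database engine do the heavy lifting.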

  3. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Database name: Trypanosomes Database. Creator: National Institute of Genetics, Research Organization of Information and Systems, Yata 1111, Mishima, Shizuoka 411-8540, Japan. Taxonomy name: Trypanosoma (Taxonomy ID: 5690); Taxonomy name: Homo sapiens (Taxonomy ID: 9606). External links: PDB (Protein Data Bank), KEGG PATHWAY Database, DrugPort. Entry list: available. Query search: available.

  4. Linking the Taiwan Fish Database to the Global Database

    Directory of Open Access Journals (Sweden)

    Kwang-Tsao Shao

    2007-03-01

Full Text Available Under the support of the National Digital Archive Program (NDAP), basic species information about most Taiwanese fishes, including their morphology, ecology, distribution, specimens with photos, and literature, has been compiled into the "Fish Database of Taiwan" (http://fishdb.sinica.edu.tw). We expect that the databank of all Taiwanese fish species (RSD), with 2,800+ species, and the digital "Fish Fauna of Taiwan" will be completed in 2007. Underwater ecological photos and video images for all 2,800+ fishes are quite difficult to obtain but will be collected continuously in the future. In the last year of NDAP, we successfully integrated all fish specimen data deposited at 7 different institutes in Taiwan, as well as their collection maps, on Google Map and Google Earth. Further, the database also provides the pronunciation of Latin scientific names and the transliteration of Chinese common names, by referring to the Romanization system, for all Taiwanese fishes (2,902 species in 292 families so far). The Taiwanese fish species checklist, with Chinese common/vernacular names and specimen data, has been updated periodically and provided to the global FishBase as well as to the Global Biodiversity Information Facility (GBIF) through the national portal of the Taiwan Biodiversity Information Facility (TaiBIF). Thus, Taiwanese fish data can be queried and browsed on the WWW. As contributions to the "Barcode of Life" and "All Fishes" international projects, alcohol-preserved specimens of more than 1,800 species and cryobanked tissues of 800 species have been accumulated at RCBAS in the past two years. Through this close collaboration between local and global databases, "The Fish Database of Taiwan" now attracts more than 250,000 visitors and achieves 5 million hits per month. We believe that this local database is becoming an important resource for education, research, conservation, and sustainable use of fish in Taiwan.

  5. An online interactive geometric database including exact solutions of Einstein's field equations

    International Nuclear Information System (INIS)

    Ishak, Mustapha; Lake, Kayll

    2002-01-01

    We describe a new interactive database (GRDB) of geometric objects in the general area of differential geometry. Database objects include, but are not restricted to, exact solutions of Einstein's field equations. GRDB is designed for researchers (and teachers) in applied mathematics, physics and related fields. The flexible search environment allows the database to be useful over a wide spectrum of interests, for example, from practical considerations of neutron star models in astrophysics to abstract space-time classification schemes. The database is built using a modular and object-oriented design and uses several Java technologies (e.g. Applets, Servlets, JDBC). These are platform-independent and well adapted for applications developed for the World Wide Web. GRDB is accompanied by a virtual calculator (GRTensorJ), a graphical user interface to the computer algebra system GRTensorII, used to perform online coordinate, tetrad or basis calculations. The highly interactive nature of GRDB allows systematic internal self-checking and minimization of the required internal records. This new database is now available online at http://grdb.org

  6. 100G shortwave wavelength division multiplexing solutions for multimode fiber data links

    DEFF Research Database (Denmark)

    Cimoli, Bruno; Estaran Tolosa, Jose Manuel; Rodes Lopez, Guillermo Arturo

    2016-01-01

    We investigate an alternative 100G solution for optical short-range data center links. The presented solution adopts wavelength division multiplexing technology to transmit four channels of 25G over a multimode fiber. A comparative performance analysis of the wavelength-grid selection for the wav...

  7. Gold nanorod linking to control plasmonic properties in solution and polymer nanocomposites.

    Science.gov (United States)

    Ferrier, Robert C; Lee, Hyun-Su; Hore, Michael J A; Caporizzo, Matthew; Eckmann, David M; Composto, Russell J

    2014-02-25

A novel, solution-based method is presented to prepare bifunctional gold nanorods (B-NRs), assemble B-NRs end-to-end in various solvents, and disperse linked B-NRs in a polymer matrix. The B-NRs have poly(ethylene glycol) grafted along their long axes and cysteine adsorbed at their ends. By controlling the cysteine coverage, bifunctional ligands or polymer can be end-grafted to the AuNRs. Here, two dithiol ligands (C6DT and C9DT) are used to link the B-NRs in organic solvents. With increasing incubation time, the nanorod chain length increases linearly as the longitudinal surface plasmon resonance shifts toward longer absorption wavelengths (i.e., a red shift). Analogous to step-growth polymerization, the polydispersity in chain length also increases. Upon adding poly(ethylene glycol) or poly(methyl methacrylate) to a chloroform solution of linked B-NRs, the nanorod chains are shown to retain their end-to-end linking upon spin-casting into PEO or PMMA films. Using quartz crystal microbalance with dissipation (QCM-D), the mechanism of nanorod linking is investigated on planar gold surfaces. At submonolayer coverage of cysteine, C6DT molecules can insert between cysteines and reach an areal density of 3.4 molecules per nm². To mimic the linking of Au NRs, this planar surface is exposed to cysteine-coated Au nanoparticles, which graft at 7 NPs per μm². This solution-based method to prepare, assemble, and disperse Au nanorods is applicable to other nanorod systems (e.g., CdSe) and presents a new strategy for assembling anisotropic particles in organic solvents and polymer coatings.
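The step-growth analogy has a quantitative consequence: ideal Flory statistics predict that both mean chain length and dispersity rise with the extent of end-to-end coupling p. A sketch, assuming ideal independent coupling events (not a fit to the paper's data):

```python
def flory_chain_stats(p):
    """Ideal step-growth (Flory) statistics for end-to-end linked chains,
    assuming each coupling event is independent with extent of reaction p.
    Returns (number-average length, weight-average length, dispersity)."""
    xn = 1.0 / (1.0 - p)        # number-average chain length
    xw = (1.0 + p) / (1.0 - p)  # weight-average chain length
    return xn, xw, xw / xn      # dispersity = 1 + p

# At 50% of rod ends coupled: chains average 2 rods, dispersity 1.5
xn, xw, pdi = flory_chain_stats(0.5)
print(xn, xw, pdi)  # 2.0 3.0 1.5
```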

  8. DASTCOM5: A Portable and Current Database of Asteroid and Comet Orbit Solutions

    Science.gov (United States)

    Giorgini, Jon D.; Chamberlin, Alan B.

    2014-11-01

    A portable direct-access database containing all NASA/JPL asteroid and comet orbit solutions, with the software to access it, is available for download (ftp://ssd.jpl.nasa.gov/pub/xfr/dastcom5.zip; unzip -ao dastcom5.zip). DASTCOM5 contains the latest heliocentric IAU76/J2000 ecliptic osculating orbital elements for all known asteroids and comets as determined by a least-squares best-fit to ground-based optical, spacecraft, and radar astrometric measurements. Other physical, dynamical, and covariance parameters are included when known. A total of 142 parameters per object are supported within DASTCOM5. This information is suitable for initializing high-precision numerical integrations, assessing orbit geometry, computing trajectory uncertainties, visual magnitude, and summarizing physical characteristics of the body. The DASTCOM5 distribution is updated as often as hourly to include newly discovered objects or orbit solution updates. It includes an ASCII index of objects that supports look-ups based on name, current or past designation, SPK ID, MPC packed-designations, or record number. DASTCOM5 is the database used by the NASA/JPL Horizons ephemeris system. It is a subset exported from a larger MySQL-based relational Small-Body Database ("SBDB") maintained at JPL. The DASTCOM5 distribution is intended for programmers comfortable with UNIX/LINUX/MacOSX command-line usage who need to develop stand-alone applications. The goal of the implementation is to provide small, fast, portable, and flexibly programmatic access to JPL comet and asteroid orbit solutions. The supplied software library, examples, and application programs have been verified under gfortran, Lahey, Intel, and Sun 32/64-bit Linux/UNIX FORTRAN compilers. A command-line tool ("dxlook") is provided to enable database access from shell or script environments.
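An alias index of the kind described (name, designation, SPK ID, record number) reduces to a simple lookup table; the whitespace-separated layout below is a hypothetical stand-in for illustration, not the distribution's real index format:

```python
def build_lookup(index_lines):
    """Map every alias (name, designation, SPK ID, ...) to its database
    record number, so one dictionary lookup resolves any identifier."""
    lookup = {}
    for line in index_lines:
        fields = line.split()
        record = int(fields[0])       # first field: record number
        for alias in fields[1:]:      # remaining fields: aliases
            lookup[alias.lower()] = record
    return lookup

index = [
    "1 2000001 Ceres",          # record, SPK ID, name (illustrative rows)
    "433 2000433 Eros 1898DQ",
]
lut = build_lookup(index)
print(lut["eros"], lut["2000433"])  # 433 433
```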

  9. Phynx: an open source software solution supporting data management and web-based patient-level data review for drug safety studies in the general practice research database and other health care databases.

    Science.gov (United States)

    Egbring, Marco; Kullak-Ublick, Gerd A; Russmann, Stefan

    2010-01-01

    To develop a software solution that supports management and clinical review of patient data from electronic medical records databases or claims databases for pharmacoepidemiological drug safety studies. We used open source software to build a data management system and an internet application with a Flex client on a Java application server with a MySQL database backend. The application is hosted on Amazon Elastic Compute Cloud. This solution, named Phynx, supports data management, Web-based display of electronic patient information, and interactive review of patient-level information in the individual clinical context. This system was applied to a dataset from the UK General Practice Research Database (GPRD). Our solution can be set up and customized with limited programming resources, and there is almost no extra cost for software. Access times are short, the displayed information is structured in chronological order and visually attractive, and selected information such as drug exposure can be blinded. External experts can review patient profiles and save evaluations and comments via a common Web browser. Phynx provides a flexible and economical solution for patient-level review of electronic medical information from databases considering the individual clinical context. It can therefore make an important contribution to an efficient validation of outcome assessment in drug safety database studies.
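
    The blinding of selected information mentioned above can be sketched as simple field-level masking; the record fields here are hypothetical examples, not Phynx's actual data model.

```python
# Minimal sketch of field-level blinding for external review: selected
# fields (e.g. drug exposure) are masked before a profile is displayed.
# Field names are invented for illustration.

def blind(record, blinded_fields, mask="***"):
    """Return a copy of a patient record with selected fields masked."""
    return {k: (mask if k in blinded_fields else v) for k, v in record.items()}

profile = {"age": 63, "diagnosis": "hepatotoxicity",
           "drug_exposure": "drug A, 40 mg/day"}
print(blind(profile, {"drug_exposure"}))
```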

  10. Database Description - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Yeast Interacting Proteins Database Database Description General information of database Database... name Yeast Interacting Proteins Database Alternative name - DOI 10.18908/lsdba.nbdc00742-000 Creator C...-ken 277-8561 Tel: +81-4-7136-3989 FAX: +81-4-7136-3979 E-mail : Database classif...s cerevisiae Taxonomy ID: 4932 Database description Information on interactions and related information obta...l Acad Sci U S A. 2001 Apr 10;98(8):4569-74. Epub 2001 Mar 13. External Links: Original website information Database

  11. Study of the Conformational State of Non-Cross-Linked and Cross-Linked Poly(alkylmethyldiallylammonium chlorides) in Aqueous Solution by Fluorescence Probing

    NARCIS (Netherlands)

    Wang, Guang-Jia; Engberts, Jan B.F.N.

    The aggregation behaviour of novel non-cross-linked and cross-linked poly(alkylmethyldiallylammonium chlorides) in aqueous solutions has been investigated by fluorescence spectroscopy using pyrene as a probe. These copolymers were found to exhibit similar aggregate properties as the corresponding

  12. Database Description - Open TG-GATEs Pathological Image Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Open TG-GATEs Pathological Image Database Database Description General information of database Database... name Open TG-GATEs Pathological Image Database Alternative name - DOI 10.18908/lsdba.nbdc00954-0...iomedical Innovation 7-6-8, Saito-asagi, Ibaraki-city, Osaka 567-0085, Japan TEL:81-72-641-9826 Email: Database... classification Toxicogenomics Database Organism Taxonomy Name: Rattus norvegi... Article title: Author name(s): Journal: External Links: Original website information Database

  13. A Spatio-Temporal Building Exposure Database and Information Life-Cycle Management Solution

    Directory of Open Access Journals (Sweden)

    Marc Wieland

    2017-04-01

    Full Text Available With an ever-increasing volume and complexity of data collected from a variety of sources, the efficient management of geospatial information becomes a key topic in disaster risk management. For example, the representation of assets exposed to natural disasters is subject to change throughout the different phases of risk management, ranging from pre-disaster mitigation to the response after an event and the long-term recovery of affected assets. Spatio-temporal changes need to be integrated into a sound conceptual and technological framework able to deal with data coming from different sources, at varying scales, and changing in space and time. In particular, information life-cycle management, the integration of heterogeneous information, and the distributed versioning and release of geospatial information are important topics that need to become essential parts of modern exposure modelling solutions. The main purpose of this study is to provide a conceptual and technological framework to tackle the requirements implied by disaster risk management for describing exposed assets in space and time. An information life-cycle management solution is proposed, based on a relational spatio-temporal database model coupled with Git and GeoGig repositories for distributed versioning. Two application scenarios focusing on the modelling of residential building stocks are presented to show the capabilities of the implemented solution. A prototype database model is shared on GitHub along with the necessary scenario data.
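
    The temporal core of such a spatio-temporal exposure model can be illustrated with a minimal validity-interval sketch; the schema, identifiers, and dates below are invented for illustration and are not the study's prototype model.

```python
import sqlite3

# Hypothetical sketch of the temporal part of a spatio-temporal exposure
# model: each building version carries a validity interval, and a query
# reconstructs the building stock as it existed at a given time.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE building (
    id TEXT, occupancy TEXT, valid_from TEXT, valid_to TEXT)""")
rows = [
    ("b1", "residential", "2010-01-01", None),          # still standing
    ("b2", "residential", "2010-01-01", "2015-06-01"),  # demolished
    ("b2", "commercial",  "2015-06-01", None),          # rebuilt/reclassified
]
conn.executemany("INSERT INTO building VALUES (?,?,?,?)", rows)

def stock_at(conn, t):
    """Building versions valid at time t (ISO dates compare as strings)."""
    cur = conn.execute(
        "SELECT id, occupancy FROM building "
        "WHERE valid_from <= ? AND (valid_to IS NULL OR valid_to > ?)",
        (t, t))
    return sorted(cur.fetchall())

print(stock_at(conn, "2012-01-01"))
print(stock_at(conn, "2016-01-01"))
```

    The same pattern extends to pre/post-event states: a damage assessment after an event simply closes one validity interval and opens another.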

  14. Database Description - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name ASTRA Alternative n...tics Journal Search: Contact address Database classification Nucleotide Sequence Databases - Gene structure,...3702 Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description The database represents classified p...(10):1211-6. External Links: Original website information Database maintenance site National Institute of Ad... for user registration Not available About This Database Database Description Dow

  15. Database Description - RGP physicalmap | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available classification Plant databases - Rice Database classification Sequence Physical map Organism Taxonomy Name: ...inobe Journal: Nature Genetics (1994) 8: 365-372. External Links: Article title: Physical Mapping of Rice Ch...rnal: DNA Research (1997) 4(2): 133-140. External Links: Article title: Physical Mapping of Rice Chromosomes... T Sasaki Journal: Genome Research (1996) 6(10): 935-942. External Links: Article title: Physical mapping of

  16. Database Description - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name PLACE Alternative name A Database...Kannondai, Tsukuba, Ibaraki 305-8602, Japan National Institute of Agrobiological Sciences E-mail : Databas...e classification Plant databases Organism Taxonomy Name: Tracheophyta Taxonomy ID: 58023 Database...99, Vol.27, No.1 :297-300 External Links: Original website information Database maintenance site National In...- Need for user registration Not available About This Database Database Descripti

  17. Microsoft Access Small Business Solutions State-of-the-Art Database Models for Sales, Marketing, Customer Management, and More Key Business Activities

    CERN Document Server

    Hennig, Teresa; Linson, Larry; Purvis, Leigh; Spaulding, Brent

    2010-01-01

    Database models developed by a team of leading Microsoft Access MVPs that provide ready-to-use solutions for sales, marketing, customer management and other key business activities for most small businesses. As the most popular relational database in the world, Microsoft Access is widely used by small business owners. This book responds to the growing need for resources that help business managers and end users design and build effective Access database solutions for specific business functions. Coverage includes: Elements of a Microsoft Access Database; Relational Data Model; Dealing with C

  18. Capability Database of Injection Molding Process— Requirements Study for Wider Suitability and Higher Accuracy

    DEFF Research Database (Denmark)

    Boorla, Srinivasa Murthy; Eifler, Tobias; Jepsen, Jens Dines O.

    2017-01-01

    for an improved applicability of corresponding database solutions in an industrial context. A survey of database users at all phases of product value chain in the plastic industry revealed that 59% of the participating companies use their own, internally created databases, although reported to be not fully...... adequate in most cases. Essential influences are the suitability of the provided data, defined by the content such as material, tolerance types, etc. covered, as well as its accuracy, largely influenced by the updating frequency. Forming a consortium with stakeholders, linking database update to technology...

  19. Power-Aware Routing and Network Design with Bundled Links: Solutions and Analysis

    Directory of Open Access Journals (Sweden)

    Rosario G. Garroppo

    2013-01-01

    Full Text Available The paper analyzes in depth a novel network-wide power management problem, called Power-Aware Routing and Network Design with Bundled Links (PARND-BL), which is able both to take into account the relationship between the power consumption and the traffic throughput of the nodes and to power off the chassis and even the individual Physical Interface Cards (PICs) composing each link. The solutions of the PARND-BL model have been analyzed by taking into account different aspects associated with the actual applicability in real network scenarios: (i) the time for obtaining the solution, (ii) the deployed network topology and the resulting topology provided by the solution, (iii) the power behavior of the network elements, (iv) the traffic load, (v) the QoS requirement, and (vi) the number of paths to route each traffic demand. Among the most interesting and novel results, our analysis shows that the strategy of minimizing the number of powered-on network elements through traffic consolidation does not always produce power savings, and that, in some cases, the solution of this kind of problem can split a single traffic demand across a high number of paths.
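
    The counter-intuitive finding that consolidation does not always save power can be illustrated with a toy calculation; the power figures below are invented for illustration, not measurements from the paper.

```python
# Toy power model: a fixed cost per powered-on chassis plus a cost per
# powered-on Physical Interface Card (PIC). The numbers are invented
# to illustrate why consolidation can backfire.

def network_power(chassis_on, pics_on, p_chassis=100.0, p_pic=30.0):
    """Total network power for a given count of active elements."""
    return chassis_on * p_chassis + pics_on * p_pic

# Spread routing: 3 chassis on, demands use 4 PICs in total.
spread = network_power(chassis_on=3, pics_on=4)
# Consolidated routing: one chassis powered off, but the longer detour
# paths need 8 PICs on the remaining nodes.
consolidated = network_power(chassis_on=2, pics_on=8)
print(spread, consolidated)  # here consolidation costs more, not less
```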

  20. Database Description - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name JSNP Alternative nam...n Science and Technology Agency Creator Affiliation: Contact address E-mail : Database...sapiens Taxonomy ID: 9606 Database description A database of about 197,000 polymorphisms in Japanese populat...1):605-610 External Links: Original website information Database maintenance site Institute of Medical Scien...er registration Not available About This Database Database Description Download License Update History of This Database

  1. Database Description - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ase Description General information of database Database name RED Alternative name Rice Expression Database...enome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice Database classifi...cation Microarray, Gene Expression Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database descripti... Article title: Rice Expression Database: the gateway to rice functional genomics...nt Science (2002) Dec 7 (12):563-564 External Links: Original website information Database maintenance site

  2. Database Description - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name PSCDB Alternative n...rial Science and Technology (AIST) Takayuki Amemiya E-mail: Database classification Structure Databases - Protein structure Database...554-D558. External Links: Original website information Database maintenance site Graduate School of Informat...available URL of Web services - Need for user registration Not available About This Database Database Descri...ption Download License Update History of This Database Site Policy | Contact Us Database Description - PSCDB | LSDB Archive ...

  3. Database Description - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name SAHG Alternative nam...h: Contact address Chie Motono Tel : +81-3-3599-8067 E-mail : Database classification Structure Databases - ...e databases - Protein properties Organism Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database description... Links: Original website information Database maintenance site The Molecular Profiling Research Center for D...stration Not available About This Database Database Description Download License Update History of This Database Site Policy | Contact Us Database Description - SAHG | LSDB Archive ...

  4. Vitamin C and Poly(ethylene glycol) Protect Concentrated Poly(vinyl alcohol) Solutions against Radiation Cross-linking

    International Nuclear Information System (INIS)

    Oral, E.

    2006-01-01

    There is a need for an injectable material to augment damaged cartilage. We propose to make such self-associating poly(vinyl alcohol) (PVA) hydrogels. Physical associations can be formed in PVA using a gellant such as polyethylene glycol (PEG). The injectability of PVA solutions is compromised when sterilized due to chemical cross-linking. We hypothesized that an anti-cross-linking agent could prevent cross-linking of irradiated PVA solutions. PVA (17.5 wt/v %, MW = 115,000 g/mol) was prepared in water at 90 °C. PEG (MW = 400 g/mol) was added at a ratio of PEG unit to PVA unit of 17, 86, 290, and 639 mol/mol. PVA solutions (17.5 wt/v %, MW = 16,000, 61,000, 81,000 and 115,000 g/mol) were also prepared. Vitamin C was added at a molar ratio of vitamin C to PVA unit of 0.75-10.4. Solutions were poured into syringes and γ-irradiated. The viscosity of the injectable solutions was determined using the bubble tube method. The gel content of cross-linked samples was measured by boiling the gels in water for 6 hours, drying at 90 °C, and calculating the ratio of dry weight to 'as is' weight.

  5. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ase Description General information of database Database name RMG Alternative name ...raki 305-8602, Japan National Institute of Agrobiological Sciences E-mail : Database... classification Nucleotide Sequence Databases Organism Taxonomy Name: Oryza sativa Japonica Group Taxonomy ID: 39947 Database...rnal: Mol Genet Genomics (2002) 268: 434–445 External Links: Original website information Database...available URL of Web services - Need for user registration Not available About This Database Database Descri

  6. Database citation in supplementary data linked to Europe PubMed Central full text biomedical articles.

    Science.gov (United States)

    Kafkas, Şenay; Kim, Jee-Hyub; Pi, Xingjun; McEntyre, Johanna R

    2015-01-01

    In this study, we present an analysis of data citation practices in full text research articles and their corresponding supplementary data files, made available in the Open Access set of articles from Europe PubMed Central. Our aim is to investigate whether supplementary data files should be considered as a source of information for integrating the literature with biomolecular databases. Using text-mining methods to identify and extract a variety of core biological database accession numbers, we found that the supplemental data files contain many more database citations than the body of the article, and that those citations often take the form of a relatively small number of articles citing large collections of accession numbers in text-based files. Moreover, citation of value-added databases derived from submission databases (such as Pfam, UniProt or Ensembl) is common, demonstrating the reuse of these resources as datasets in themselves. All the database accession numbers extracted from the supplementary data are publicly accessible from http://dx.doi.org/10.5281/zenodo.11771. Our study suggests that supplementary data should be considered when linking articles with data, in curation pipelines, and in information retrieval tasks in order to make full use of the entire research article. These observations highlight the need to improve the management of supplemental data in general, in order to make this information more discoverable and useful.
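
    The extraction of database accession numbers from text can be sketched with a few regular expressions; the patterns below are simplified illustrations, not the production text-mining rules used in the study.

```python
import re

# Simplified accession-number patterns for a few of the databases named
# in the study (Pfam, UniProt) plus GEO as a common example. Real
# pipelines use stricter, context-aware rules; these are illustrative.
PATTERNS = {
    "Pfam": r"\bPF\d{5}\b",
    "GEO series": r"\bGSE\d+\b",
    "UniProt": r"\b[OPQ][0-9][A-Z0-9]{3}[0-9]\b",
}

def extract_accessions(text):
    """Map database name -> sorted unique accession hits found in text."""
    hits = {}
    for db, pattern in PATTERNS.items():
        found = re.findall(pattern, text)
        if found:
            hits[db] = sorted(set(found))
    return hits

sample = ("Domains PF00069 and PF07714 (Pfam); expression data in "
          "GSE2034; protein P04637.")
print(extract_accessions(sample))
```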

  7. Solutions for medical databases optimal exploitation.

    Science.gov (United States)

    Branescu, I; Purcarea, V L; Dobrescu, R

    2014-03-15

    The paper discusses methods for applying OLAP techniques to multidimensional databases that leverage the existing performance-enhancing technique known as practical pre-aggregation, making this technique relevant to a much wider range of medical applications, as logistical support for data warehousing techniques. The transformations have low computational complexity in practice and may be implemented using standard relational database technology. The paper also describes how to integrate the transformed hierarchies in current OLAP systems, transparently to the user, and proposes a flexible, "multimodel" federated system for extending OLAP querying to external object databases.
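
    The idea behind practical pre-aggregation can be sketched as storing sums at a fine level of a dimension hierarchy and answering coarser queries from those aggregates instead of the raw records; the day→month hierarchy and the medical dimension below are invented examples, not the paper's schema.

```python
from collections import defaultdict

# Facts pre-summed at the (day, department) level; a monthly query is
# answered by rolling these aggregates up the time hierarchy rather
# than rescanning raw visit records. Values are illustrative.
daily_visits = {
    ("2014-03-01", "cardiology"): 12,
    ("2014-03-02", "cardiology"): 9,
    ("2014-03-01", "oncology"): 5,
    ("2014-04-01", "cardiology"): 7,
}

def rollup_to_month(daily):
    """Aggregate (day, dept) counts up to (month, dept)."""
    monthly = defaultdict(int)
    for (day, dept), n in daily.items():
        monthly[(day[:7], dept)] += n   # "YYYY-MM-DD" -> "YYYY-MM"
    return dict(monthly)

print(rollup_to_month(daily_visits))
```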

  8. Database Description - DGBY | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name DGBY Alternative name Database...EL: +81-29-838-8066 E-mail: Database classification Microarray Data and other Gene Expression Databases Orga...nism Taxonomy Name: Saccharomyces cerevisiae Taxonomy ID: 4932 Database descripti...-called phenomics). We uploaded these data on this website which is designated DGBY(Database for Gene expres...ma J, Ando A, Takagi H. Journal: Yeast. 2008 Mar;25(3):179-90. External Links: Original website information Database

  9. Database Description - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name KOME Alternative nam... Sciences Plant Genome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice ...Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description Information about approximately ...Hayashizaki Y, Kikuchi S. Journal: PLoS One. 2007 Nov 28; 2(11):e1235. External Links: Original website information Database...OS) Rice mutant panel database (Tos17) A Database of Plant Cis-acting Regulatory

  10. Database Description - SSBD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name SSBD Alternative nam...ss 2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe 650-0047, Japan, RIKEN Quantitative Biology Center Shuichi Onami E-mail: Database... classification Other Molecular Biology Databases Database classification Dynamic databa...elegans Taxonomy ID: 6239 Taxonomy Name: Escherichia coli Taxonomy ID: 562 Database description Systems Scie...i Onami Journal: Bioinformatics/April, 2015/Volume 31, Issue 7 External Links: Original website information Database

  11. Different nonideality relationships, different databases and their effects on modeling precipitation from concentrated solutions using numerical speciation codes

    Energy Technology Data Exchange (ETDEWEB)

    Brown, L.F.; Ebinger, M.H.

    1996-08-01

    Four simple precipitation problems are solved to examine the use of numerical equilibrium codes. The study emphasizes concentrated solutions, assumes both ideal and nonideal solutions, and employs different databases and different activity-coefficient relationships. The study uses the EQ3/6 numerical speciation codes. The results show satisfactory material balances and agreement between solubility products calculated from free-energy relationships and those calculated from concentrations and activity coefficients. Precipitates show slightly higher solubilities when the solutions are regarded as nonideal than when considered ideal, agreeing with theory. When a substance may precipitate from a solution dilute in the precipitating substance, a code may or may not predict precipitation, depending on the database or activity-coefficient relationship used. In a problem involving a two-component precipitation, there are only small differences in the precipitate mass and composition between the ideal and nonideal solution calculations. Analysis of this result indicates that this may be a frequent occurrence. An analytical approach is derived for judging whether this phenomenon will occur in any real or postulated precipitation situation. The discussion looks at applications of this approach. In the solutes remaining after the precipitations, there seems to be little consistency in the calculated concentrations and activity coefficients. They do not appear to depend in any coherent manner on the database or activity-coefficient relationship used. These results reinforce warnings in the literature about perfunctory or mechanical use of numerical speciation codes.
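
    The comparison at the heart of this result, between an ion activity product computed from concentrations and activity coefficients and a solubility product, can be sketched as follows. The molalities, activity coefficients and Ksp are made-up illustrative values; the sketch only reproduces the qualitative point that ideal and nonideal treatments can disagree on whether precipitation is predicted.

```python
# Ion activity product IAP = product over species of (gamma_i * m_i)^nu_i.
# Precipitation is predicted when IAP exceeds the solubility product Ksp.

def ion_activity_product(species):
    """species: iterable of (molality, activity_coeff, stoich_coeff)."""
    iap = 1.0
    for m, gamma, nu in species:
        iap *= (gamma * m) ** nu
    return iap

# Illustrative, made-up values for a 1:1 salt MX near saturation.
iap_ideal = ion_activity_product([(0.02, 1.0, 1), (0.02, 1.0, 1)])
iap_nonideal = ion_activity_product([(0.02, 0.8, 1), (0.02, 0.8, 1)])
ksp = 3.0e-4

# The ideal treatment predicts precipitation; the nonideal one does not,
# mirroring the database/activity-coefficient sensitivity described above.
print(iap_ideal > ksp, iap_nonideal > ksp)
```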

  12. Database Search Engines: Paradigms, Challenges and Solutions.

    Science.gov (United States)

    Verheggen, Kenneth; Martens, Lennart; Berven, Frode S; Barsnes, Harald; Vaudel, Marc

    2016-01-01

    The first step in identifying proteins from mass spectrometry based shotgun proteomics data is to infer peptides from tandem mass spectra, a task generally achieved using database search engines. In this chapter, the basic principles of database search engines are introduced with a focus on open source software, and the use of database search engines is demonstrated using the freely available SearchGUI interface. This chapter also discusses how to tackle general issues related to sequence database searching and shows how to minimize their impact.
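
    The core mass-filtering step of a database search engine can be sketched as a tryptic digest plus a precursor-mass match; real engines (such as those wrapped by SearchGUI) also score fragment ion spectra, which is omitted here. The residue masses are standard monoisotopic values.

```python
# Toy database search: digest proteins into tryptic peptides, then
# match an observed precursor mass against peptide masses within a
# tolerance. Fragment-spectrum scoring is deliberately omitted.
RESIDUE_MASS = {  # monoisotopic residue masses, Da
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "L": 113.08406, "D": 115.02694,
    "K": 128.09496, "E": 129.04259, "R": 156.10111,
}
WATER = 18.010565  # H2O added for the intact peptide

def peptide_mass(seq):
    return sum(RESIDUE_MASS[aa] for aa in seq) + WATER

def tryptic_peptides(protein):
    """Cleave after K or R, except when followed by P."""
    peps, start = [], 0
    for i, aa in enumerate(protein):
        if aa in "KR" and (i + 1 == len(protein) or protein[i + 1] != "P"):
            peps.append(protein[start:i + 1])
            start = i + 1
    if start < len(protein):
        peps.append(protein[start:])
    return peps

def match_precursor(mass, proteins, tol=0.02):
    """Database peptides whose mass lies within tol of the query mass."""
    return sorted({p for prot in proteins for p in tryptic_peptides(prot)
                   if abs(peptide_mass(p) - mass) <= tol})

db = ["AKEGR", "GASPVK"]
print(match_precursor(peptide_mass("EGR"), db))
```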

  13. Database Description - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name RPSD Alternative nam...e Rice Protein Structure Database DOI 10.18908/lsdba.nbdc00749-000 Creator Creator Name: Toshimasa Yamazaki ... Ibaraki 305-8602, Japan National Institute of Agrobiological Sciences Toshimasa Yamazaki E-mail : Databas...e classification Structure Databases - Protein structure Organism Taxonomy Name: Or...or name(s): Journal: External Links: Original website information Database maintenance site National Institu

  14. Database Description - KAIKOcDNA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us KAIKOcDNA Database Description General information of database Database name KAIKOcDNA Alter...National Institute of Agrobiological Sciences Akiya Jouraku E-mail : Database cla...ssification Nucleotide Sequence Databases Organism Taxonomy Name: Bombyx mori Taxonomy ID: 7091 Database des...rnal: G3 (Bethesda) / 2013, Sep / vol.9 External Links: Original website information Database maintenance si...available URL of Web services - Need for user registration Not available About This Database Database

  15. Database Description - AcEST | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name AcEST Alternative n...hi, Tokyo-to 192-0397 Tel: +81-42-677-1111(ext.3654) E-mail: Database classificat...eneris Taxonomy ID: 13818 Database description This is a database of EST sequences of Adiantum capillus-vene...(3): 223-227. External Links: Original website information Database maintenance site Plant Environmental Res...base Database Description Download License Update History of This Database Site Policy | Contact Us Database Description - AcEST | LSDB Archive ...

  16. Improving Care And Research Electronic Data Trust Antwerp (iCAREdata): a research database of linked data on out-of-hours primary care.

    Science.gov (United States)

    Colliers, Annelies; Bartholomeeusen, Stefaan; Remmen, Roy; Coenen, Samuel; Michiels, Barbara; Bastiaens, Hilde; Van Royen, Paul; Verhoeven, Veronique; Holmgren, Philip; De Ruyck, Bernard; Philips, Hilde

    2016-05-04

    Primary out-of-hours care is developing throughout Europe. High-quality databases with linked data from primary health services can help to improve research and future health services. In 2014, a central clinical research database infrastructure was established (iCAREdata: Improving Care And Research Electronic Data Trust Antwerp, www.icaredata.eu) for primary and interdisciplinary health care at the University of Antwerp, linking data from General Practice Cooperatives, Emergency Departments and Pharmacies during out-of-hours care. Medical data are pseudonymised using the services of a Trusted Third Party, which encodes private information about patients and physicians before data is sent to iCAREdata. iCAREdata provides many new research opportunities in the fields of clinical epidemiology, health care management and quality of care. A key aspect will be to ensure the quality of data registration by all health care providers. This article describes the establishment of a research database and the possibilities of linking data from different primary out-of-hours care providers, with the potential to help to improve research and the quality of health care services.
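
    Deterministic pseudonymisation by a Trusted Third Party can be sketched with a keyed hash: the same patient yields the same pseudonym across provider records, so the records remain linkable, while the key stays with the third party. This is a generic construction for illustration, not iCAREdata's actual scheme.

```python
import hashlib
import hmac

# Keyed-hash pseudonymisation sketch: records from a GP cooperative,
# an emergency department and a pharmacy get the same pseudonym for
# the same patient, without exposing the identifier itself.
def pseudonymise(identifier, secret_key):
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()[:16]

KEY = b"held-only-by-the-trusted-third-party"  # illustrative key
gp_record = pseudonymise("patient-123", KEY)
pharmacy_record = pseudonymise("patient-123", KEY)
print(gp_record == pharmacy_record)  # the two records can be linked
```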

  17. Integr8: enhanced inter-operability of European molecular biology databases.

    Science.gov (United States)

    Kersey, P J; Morris, L; Hermjakob, H; Apweiler, R

    2003-01-01

    The increasing production of molecular biology data in the post-genomic era, and the proliferation of databases that store it, require the development of an integrative layer in database services to facilitate the synthesis of related information. The solution of this problem is made more difficult by the absence of universal identifiers for biological entities, and the breadth and variety of available data. Integr8 was modelled using UML (Unified Modelling Language). Integr8 is being implemented as an n-tier system using a modern object-oriented programming language (Java). An object-relational mapping tool, OJB, is being used to specify the interface between the upper layers and an underlying relational database. The European Bioinformatics Institute is launching the Integr8 project. Integr8 will be an automatically populated database in which we will maintain stable identifiers for biological entities, describe their relationships with each other (in accordance with the central dogma of biology), and store equivalences between identified entities in the source databases. Only core data will be stored in Integr8, with web links to the source databases providing further information. Integr8 will provide the integrative layer of the next generation of bioinformatics services from the EBI. Web-based interfaces will be developed to offer gene-centric views of the integrated data, presenting (where known) the links between genome, proteome and phenotype.
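
    The stable-identifier and equivalence layer described above can be sketched as a small mapping store; the identifiers and database names below are examples only, not Integr8's actual content or design.

```python
# Sketch of an integrative layer: a stable identifier per biological
# entity, with recorded equivalences to source-database identifiers.
class EquivalenceStore:
    def __init__(self):
        self._by_source = {}  # (source_db, source_id) -> stable id
        self._members = {}    # stable id -> set of (source_db, source_id)

    def register(self, stable_id, source_db, source_id):
        self._by_source[(source_db, source_id)] = stable_id
        self._members.setdefault(stable_id, set()).add((source_db, source_id))

    def resolve(self, source_db, source_id):
        """Stable identifier for a source-database entry, if known."""
        return self._by_source.get((source_db, source_id))

    def equivalents(self, source_db, source_id):
        """All source-database entries equivalent to the given one."""
        stable = self.resolve(source_db, source_id)
        return sorted(self._members.get(stable, set()))

store = EquivalenceStore()
store.register("STABLE:0001", "UniProt", "P04637")
store.register("STABLE:0001", "Ensembl", "ENSG00000141510")
print(store.equivalents("UniProt", "P04637"))
```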

  18. Database Description - FANTOM5 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us FANTOM5 Database Description General information of database Database name FANTOM5 Alternati...me: Rattus norvegicus Taxonomy ID: 10116 Taxonomy Name: Macaca mulatta Taxonomy ID: 9544 Database descriptio...l Links: Original website information Database maintenance site RIKEN Center for Life Science Technologies, ...ilable Web services Not available URL of Web services - Need for user registration Not available About This Database Database... Description Download License Update History of This Database Site Policy | Contact Us Database Description - FANTOM5 | LSDB Archive ...

  19. Database Description - DMPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name DMPD Alternative nam...e Dynamic Macrophage Pathway CSML Database DOI 10.18908/lsdba.nbdc00558-000 Creator Creator Name: Masao Naga...ty of Tokyo 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639 Tel: +81-3-5449-5615 FAX: +83-3-5449-5442 E-mail: Database...606 Taxonomy Name: Mammalia Taxonomy ID: 40674 Database description DMPD collects...e(s) Article title: Author name(s): Journal: External Links: Original website information Database maintenan

  20. SAADA: Astronomical Databases Made Easier

    Science.gov (United States)

    Michel, L.; Nguyen, H. N.; Motch, C.

    2005-12-01

    Many astronomers wish to share datasets with their community but lack the manpower to develop databases with the functionalities required for high-level scientific applications. The SAADA project aims at automating the creation and deployment of such databases. A generic but scientifically relevant data model has been designed which allows one to build databases by providing only a limited number of product mapping rules. Databases created by SAADA rely on a relational database supporting JDBC, covered by a Java layer that includes a large amount of generated code. Such databases can simultaneously host spectra, images, source lists and plots. Data are grouped in user-defined collections whose content can be seen as one unique set per data type even if their formats differ. Datasets can be correlated with one another using qualified links. These links help, for example, to handle the nature of a cross-identification (e.g., a distance or a likelihood) or to describe their scientific content (e.g., by associating a spectrum with a catalog entry). The SAADA query engine is based on a language well suited to the data model which can handle constraints on linked data, in addition to classical astronomical queries. These constraints can be applied to the linked objects (number, class and attributes) and/or to the link qualifier values. Databases created by SAADA are accessed through a rich Web interface or a Java API. We are currently developing an inter-operability module implementing VO protocols.
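
    The idea of qualified links, where a link between two data products carries qualifier values that queries can constrain, can be sketched as follows. The field names and the arcsecond distances are illustrative, not SAADA's actual data model or query language.

```python
# Hypothetical sketch of qualified links: each link between a catalogue
# source and a spectrum carries a qualifier (here, a cross-match
# distance), and a query constrains both the linked objects and the
# qualifier values.
links = [
    {"source": "cat_001", "spectrum": "sp_010", "dist_arcsec": 0.8},
    {"source": "cat_002", "spectrum": "sp_011", "dist_arcsec": 4.2},
    {"source": "cat_003", "spectrum": "sp_012", "dist_arcsec": 1.5},
]

def sources_with_close_spectrum(links, max_dist):
    """Catalogue entries linked to a spectrum within max_dist arcsec."""
    return sorted({l["source"] for l in links if l["dist_arcsec"] <= max_dist})

print(sources_with_close_spectrum(links, 2.0))
```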

  1. Database Description - eSOL | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name eSOL Alternative nam...eator Affiliation: The Research and Development of Biological Databases Project, National Institute of Genet...nology 4259 Nagatsuta-cho, Midori-ku, Yokohama, Kanagawa 226-8501 Japan Email: Tel.: +81-45-924-5785 Database... classification Protein sequence databases - Protein properties Organism Taxonomy Name: Escherichia coli Taxonomy ID: 562 Database...i U S A. 2009 Mar 17;106(11):4201-6. External Links: Original website information Database maintenance site

  2. Download - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods ...t_db_link_en.zip (36.3 KB) - 6 Genome analysis methods pgdbj_dna_marker_linkage_map_genome_analysis_methods_... of This Database Site Policy | Contact Us Download - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods | LSDB Archive ...

  3. Linking genotypes database with locus-specific database and genotype-phenotype correlation in phenylketonuria.

    Science.gov (United States)

    Wettstein, Sarah; Underhaug, Jarl; Perez, Belen; Marsden, Brian D; Yue, Wyatt W; Martinez, Aurora; Blau, Nenad

    2015-03-01

    The wide range of metabolic phenotypes in phenylketonuria is due to a large number of variants causing variable impairment in phenylalanine hydroxylase function. A total of 834 phenylalanine hydroxylase gene variants from the locus-specific database PAHvdb and genotypes of 4181 phenylketonuria patients from the BIOPKU database were characterized using the FoldX, SIFT Blink, PolyPhen-2 and SNPs3D algorithms. The obtained data were correlated with residual enzyme activity, patients' phenotype and tetrahydrobiopterin responsiveness. A descriptive analysis of both databases was compiled, and an interactive viewer for structure visualization of missense variants was implemented in the PAHvdb database. We found a quantitative relationship between phenylalanine hydroxylase protein stability and enzyme activity (r(s) = 0.479), between protein stability and allelic phenotype (r(s) = -0.458), as well as between enzyme activity and allelic phenotype (r(s) = 0.799). Enzyme stability algorithms (FoldX and SNPs3D), allelic phenotype and enzyme activity were the most powerful predictors of patients' phenotype and tetrahydrobiopterin response. Phenotype prediction was most accurate in deleterious genotypes (≈ 100%), followed by homozygous (92.9%), hemizygous (94.8%), and compound heterozygous genotypes (77.9%), while tetrahydrobiopterin response was correctly predicted in 71.0% of all cases. To our knowledge this is the largest study using algorithms for the prediction of patients' phenotype and tetrahydrobiopterin responsiveness in phenylketonuria patients, using data from locus-specific and genotype databases.

  4. Marker list - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods ...Database Site Policy | Contact Us Marker list - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods | LSDB Archive ...

  5. Linking Multiple Databases: Term Project Using "Sentences" DBMS.

    Science.gov (United States)

    King, Ronald S.; Rainwater, Stephen B.

    This paper describes a methodology for use in teaching an introductory Database Management System (DBMS) course. Students master basic database concepts through the use of a multiple component project implemented in both relational and associative data models. The associative data model is a new approach for designing multi-user, Web-enabled…

  6. Database Description - tRNADB-CE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available switchLanguage; BLAST Search Image Search Home About Archive Update History Data List Contact us tRNAD...B-CE Database Description General information of database Database name tRNADB-CE Alter...CC BY-SA Detail Background and funding Name: MEXT Integrated Database Project Reference(s) Article title: tRNAD... 2009 Jan;37(Database issue):D163-8. External Links: Article title: tRNADB-CE 2011: tRNA gene database curat...n Download License Update History of This Database Site Policy | Contact Us Database Description - tRNADB-CE | LSDB Archive ...

  7. Semantic-JSON: a lightweight web service interface for Semantic Web contents integrating multiple life science databases.

    Science.gov (United States)

    Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro

    2011-07-01

    As global cloud frameworks for bioinformatics research databases grow huge and heterogeneous, solutions face diametric challenges of cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN had published 192 mammalian, plant and protein life science databases containing 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data covered by this database integration framework is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools such as SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface for accessing each fragment of linked and raw life science data securely under the control of programming languages popular among bioinformaticians, such as Perl and Ruby. Researchers have successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents such as ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org.

  8. Kliniske databaser i social epidemiologisk forskning

    DEFF Research Database (Denmark)

    Osler, Merete

    2009-01-01

    Danish researchers can link several databases covering everything from medical records to socioeconomic data on jobs and salaries by means of an individual person id-number. This allows a number of clinical databases to be used in studies concerning the impact of social factors on healthcare-related outcomes.

  9. A Database Integrity Pattern Language

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2004-08-01

    Full Text Available Patterns and pattern languages are ways to capture experience and make it reusable for others, and to describe best practices and good designs. Patterns are solutions to recurrent problems. This paper addresses database integrity problems from a pattern perspective. Even though the number of vendors of database management systems is quite high, the number of available solutions to integrity problems is limited; they all learned from past experience, applying the same solutions over and over again. The solutions applied in database management systems (DBMS) to avoid integrity threats can be formalized as a pattern language. Constraints, transactions, locks, etc., are recurrent solutions to integrity threats and should therefore be treated accordingly, as patterns.

  10. Database Description - Society Catalog | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ion of the academic societies in Japan (organization name, website URL, contact a...sing a category tree or a society website's thumbnail. This database is useful especially when the users are... External Links: Original website information Database maintenance site National Bioscience Database Center *The original web...site was terminated. URL of the original website - Operation start date 2008/06 Last update

  11. Database Description - D-HaploDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available 1-8. External Links: Article title: D-HaploDB: a database of definitive haplotypes determined by genotypin...(5):e1000468. External Links: Article title: A definitive haplotype map as determined by genotyping

  12. From Field to Laboratory: A New Database Approach for Linking Microbial Field Ecology with Laboratory Studies

    Science.gov (United States)

    Bebout, Leslie; Keller, R.; Miller, S.; Jahnke, L.; DeVincenzi, D. (Technical Monitor)

    2002-01-01

    The Ames Exobiology Culture Collection Database (AECC-DB) has been developed as a collaboration between microbial ecologists and information technology specialists. It allows extensive web-based archiving of information regarding field samples to document microbial co-habitation of specific ecosystem micro-environments. Documentation and archiving continue as pure cultures are isolated, metabolic properties determined, and DNA extracted and sequenced. In this way metabolic properties and molecular sequences are clearly linked back to specific isolates and to the location of those microbes in the ecosystem of origin. This database system represents a significant advance over traditional bookkeeping, wherein there is generally little or no information regarding the environments from which microorganisms were isolated beyond a broad ecosystem designation (e.g., hot spring). Within each such ecosystem, however, there is a myriad of microenvironments with very different properties, and determining exactly which microenvironment a given microbe comes from is critical for designing appropriate isolation media and interpreting physiological properties. We are currently using the database to aid in the isolation of a large number of cyanobacterial species and will present results from PIs and students demonstrating the utility of this new approach.

  13. The master two-dimensional gel database of human AMA cell proteins: towards linking protein and genome sequence and mapping information (update 1991)

    DEFF Research Database (Denmark)

    Celis, J E; Leffers, H; Rasmussen, H H

    1991-01-01

    The master two-dimensional gel database of human AMA cells currently lists 3801 cellular and secreted proteins, of which 371 cellular polypeptides (306 IEF; 65 NEPHGE) were added to the master images during the last 10 months. These include: (i) very basic and acidic proteins that do not focus ... "autoantigens" and "cDNAs". For convenience we have included an alphabetical list of all known proteins recorded in this database. In the long run, the main goal of this database is to link protein and DNA sequencing and mapping information (Human Genome Program) and to provide an integrated picture ...

  14. Efficient linking of birth certificate and newborn screening databases for laboratory investigation of congenital cytomegalovirus infection and preterm birth: Florida, 2008.

    Science.gov (United States)

    DePasquale, John M; Freeman, Karen; Amin, Minal M; Park, Sohyun; Rivers, Samantha; Hopkins, Richard; Cannon, Michael J; Dy, Bonifacio; Dollard, Sheila C

    2012-02-01

    The objectives of this study are (1) to design an accurate method for linking newborn screening (NBS) and state birth certificate databases to create a de-identified study database; (2) to assess maternal cytomegalovirus (CMV) seroprevalence by measuring CMV IgG in newborn dried blood spots; and (3) to assess congenital CMV infection among newborns and its possible association with preterm birth. NBS and birth databases were linked and patient records were de-identified. A stratified random sample of records based on gestational age was selected and used to retrieve blood spots from the state NBS laboratory. Serum containing maternal antibodies was eluted from the blood spots and tested for the presence of CMV IgG. DNA was extracted from the blood spots and tested for the presence of CMV DNA. Analyses were performed with bivariable and multivariable logistic regression models. Linkage rates and specimen collection exceeded 98% of the total possible, yielding a final database with 3,101 newborn blood spots. CMV seroprevalence was 91% among Black mothers, 83% among Hispanic mothers, and 59% among White mothers, and decreased with increasing amounts of education. The prevalence of CMV infection in newborns was 0.45% and did not vary significantly by gestational age. These successful methods for database linkage, newborn blood spot collection, and de-identification of records can serve as a model for future congenital exposure surveillance projects. Maternal CMV seroprevalence was strongly associated with race/ethnicity and educational level. Congenital CMV infection rates were lower than those reported by other studies and lacked statistical power to examine associations with preterm birth.

  15. Robust optical wireless links over turbulent media using diversity solutions

    Science.gov (United States)

    Moradi, Hassan

    Free-space optic (FSO) technology, i.e., optical wireless communication (OWC), is widely recognized as superior to radio frequency (RF) in many aspects. Visible and invisible optical wireless links solve first/last-mile connectivity problems and provide secure, jam-free communication. FSO is license-free and delivers high-speed data rates on the order of gigabits. Its advantages have fostered significant research efforts aimed at utilizing optical wireless communication, e.g. visible light communication (VLC), for high-speed, secure, indoor communication under the IEEE 802.15.7 standard. However, conventional optical wireless links demand precise optical alignment and suffer from atmospheric turbulence. When compared with RF, they offer a low degree of reliability and lack robustness. Pointing errors cause optical transceiver misalignment, adversely affecting system reliability. Furthermore, atmospheric turbulence causes irradiance fluctuations and beam broadening of the transmitted light. Innovative solutions to overcome these limitations on the exploitation of high-speed optical wireless links are greatly needed. Spatial diversity is known to improve RF wireless communication systems. Similar diversity approaches can be adapted to FSO systems to improve their reliability and robustness; however, careful diversity design is needed since FSO apertures typically remain unbalanced as a result of FSO system sensitivity to misalignment. Conventional diversity combining schemes require persistent aperture monitoring and repetitive switching, thus increasing FSO implementation complexity. Furthermore, current RF diversity combining schemes may not be optimized to address the issue of unbalanced FSO receiving apertures. This dissertation investigates two efficient diversity combining schemes for multi-receiving FSO systems: switched diversity combining and generalized selection combining. Both can be exploited to reduce complexity and improve combining efficiency. Unlike maximum ...

  16. Fluorescence spectroscopic study of the aggregation behavior of non-cross-linked and cross-linked poly(alkylmethyldiallylammonium bromides) having decyl, octyl, and hexyl side chains in aqueous solution

    NARCIS (Netherlands)

    Wang, G.J; Engberts, J.B.F.N.

    1996-01-01

    The conformational states of a series of non-cross-linked and cross-linked poly(alkylmethyldiallylammonium bromides) bearing decyl, octyl, and hexyl side chains ((CL)-CopolC1-10, (CL)-CopolC1-8, and (CL)-CopolC1-6, respectively) in aqueous solution were investigated by fluorescence spectroscopy.

  17. Migration of 137Cs, 90Sr, 239,240Pu and 241Am in the chain soil-soil solution-plant. The soil-soil solution link

    International Nuclear Information System (INIS)

    Sokolik, G.A.; Ovsyannikova, S.V.; Kil'chitskaya, S.L.; Ehjsmont, E.A.; Zhukovich, N.V.; Kimlenko, I.M.; Duksina, V.V.; Rubinchik, S.Ya.

    1999-01-01

    The mobility of 137Cs, 90Sr, 239,240Pu and 241Am in the soil-soil solution link is analysed for different soil types on the basis of radionuclide distribution coefficients between the solid and liquid soil phases. The distribution coefficients make it possible to differentiate soils according to the rate of radionuclide migration from the solid phase into the soil solution. The reasons for the different radionuclide mobilities are considered.
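The distribution coefficient underlying this comparison is simply the ratio of activity concentrations in the two soil phases. A one-line sketch follows; the numeric values are illustrative, not the study's measurements.

```python
# Distribution coefficient K_d = (activity per kg of solid phase) /
# (activity per litre of soil solution). Values below are illustrative only.
def kd(solid_bq_per_kg: float, solution_bq_per_l: float) -> float:
    return solid_bq_per_kg / solution_bq_per_l

# A higher K_d means the radionuclide stays bound to the solid phase and
# migrates more slowly into the soil solution (and hence toward plants).
print(kd(5000.0, 2.5))  # 2000.0 L/kg
```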

  18. Incorporating the Last Four Digits of Social Security Numbers Substantially Improves Linking Patient Data from De-identified Hospital Claims Databases.

    Science.gov (United States)

    Naessens, James M; Visscher, Sue L; Peterson, Stephanie M; Swanson, Kristi M; Johnson, Matthew G; Rahman, Parvez A; Schindler, Joe; Sonneborn, Mark; Fry, Donald E; Pine, Michael

    2015-08-01

    Assess algorithms for linking patients across de-identified databases without compromising confidentiality. Hospital discharges from 11 Mayo Clinic hospitals during January 2008-September 2012 (assessment and validation data). Minnesota death certificates and hospital discharges from 2009 to 2012 for entire state (application data). Cross-sectional assessment of sensitivity and positive predictive value (PPV) for four linking algorithms tested by identifying readmissions and posthospital mortality on the assessment data with application to statewide data. De-identified claims included patient gender, birthdate, and zip code. Assessment records were matched with institutional sources containing unique identifiers and the last four digits of Social Security number (SSNL4). Gender, birthdate, and five-digit zip code identified readmissions with a sensitivity of 98.0 percent and a PPV of 97.7 percent and identified postdischarge mortality with 84.4 percent sensitivity and 98.9 percent PPV. Inclusion of SSNL4 produced nearly perfect identification of readmissions and deaths. When applied statewide, regions bordering states with unavailable hospital discharge data had lower rates. Addition of SSNL4 to administrative data, accompanied by appropriate data use and data release policies, can enable trusted repositories to link data with nearly perfect accuracy without compromising patient confidentiality. States maintaining centralized de-identified databases should add SSNL4 to data specifications. © Health Research and Educational Trust.
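The effect reported above, where adding SSNL4 to gender, birthdate and zip code resolves otherwise ambiguous matches, can be sketched with a toy deterministic key match. The field names and records are invented for illustration; this is not the study's actual linkage algorithm.

```python
# Toy deterministic linkage: match a death record to hospital claims on a
# quasi-identifier key, with and without the last four SSN digits (SSNL4).
def link_key(rec, use_ssnl4=False):
    key = (rec["sex"], rec["birthdate"], rec["zip5"])
    return key + ((rec["ssnl4"],) if use_ssnl4 else ())

claims = [
    {"id": "A", "sex": "F", "birthdate": "1950-01-02", "zip5": "55901", "ssnl4": "1234"},
    {"id": "B", "sex": "F", "birthdate": "1950-01-02", "zip5": "55901", "ssnl4": "9876"},
]
deaths = [{"sex": "F", "birthdate": "1950-01-02", "zip5": "55901", "ssnl4": "9876"}]

# Without SSNL4, both claims share (sex, birthdate, zip5): the match is ambiguous.
index = {}
for rec in claims:
    index.setdefault(link_key(rec), []).append(rec["id"])
ambiguous = [index.get(link_key(d), []) for d in deaths]
print(ambiguous)  # [['A', 'B']]

# Adding SSNL4 to the key resolves the collision to a unique patient.
index4 = {}
for rec in claims:
    index4.setdefault(link_key(rec, True), []).append(rec["id"])
exact = [index4.get(link_key(d, True), []) for d in deaths]
print(exact)  # [['B']]
```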

  19. Link importance incorporated failure probability measuring solution for multicast light-trees in elastic optical networks

    Science.gov (United States)

    Li, Xin; Zhang, Lu; Tang, Ying; Huang, Shanguo

    2018-03-01

    The light-tree-based optical multicasting (LT-OM) scheme provides a spectrum- and energy-efficient method to accommodate emerging multicast services. Some studies focus on the survivability technologies for LTs against a fixed number of link failures, such as single-link failure. However, a few studies involve failure probability constraints when building LTs. It is worth noting that each link of an LT plays different important roles under failure scenarios. When calculating the failure probability of an LT, the importance of its every link should be considered. We design a link importance incorporated failure probability measuring solution (LIFPMS) for multicast LTs under independent failure model and shared risk link group failure model. Based on the LIFPMS, we put forward the minimum failure probability (MFP) problem for the LT-OM scheme. Heuristic approaches are developed to address the MFP problem in elastic optical networks. Numerical results show that the LIFPMS provides an accurate metric for calculating the failure probability of multicast LTs and enhances the reliability of the LT-OM scheme while accommodating multicast services.

  20. Telecare and Social Link Solution for Ambient Assisted Living Using a Robot Companion with Visiophony

    Science.gov (United States)

    Varène, Thibaut; Hillereau, Paul; Simonnet, Thierry

    An increasing number of people are in need of help at home (elderly, isolated and/or disabled persons; people with mild cognitive impairment). Several solutions can be considered for maintaining a social link while providing telecare to these people. Many proposals suggest the use of a robot acting as a companion. In this paper we will look at an environment-constrained solution, its drawbacks (such as latency) and its advantages (flexibility, integration…). A key design choice is to control the robot using a unified Voice over Internet Protocol (VoIP) solution, while addressing bandwidth limitations, providing good communication quality and reducing transmission latency.

  1. A Graphical Solution for Espaces Verts

    CERN Document Server

    Skelton, K

    1999-01-01

    'Espaces Verts' is responsible for the landscaping of the green areas and the cleaning of the roads, pavements and car parks on the CERN site. This work is carried out by a contracting company. Previously, the work was controlled using a database of all the areas included in the contract together with paper plans of the site. Given the size of the site, the ideal solution was considered to be a visual system integrating the maps and the database. To achieve this, the Surveying Department's graphical information system was used, linked to the Espaces Verts database, thus enabling the presentation of graphical thematic queries. This provides a useful management tool which facilitates the task of ensuring that the contracting company carries out the work according to the agreed planning, and gives precise measurements of the site and thus of the contract. This paper will present how this has been achieved.

  2. ZIKV - CDB: A Collaborative Database to Guide Research Linking SncRNAs and ZIKA Virus Disease Symptoms.

    Directory of Open Access Journals (Sweden)

    Victor Satler Pylro

    2016-06-01

    Full Text Available In early 2015, a ZIKA virus (ZIKV) infection outbreak was recognized in northeast Brazil, where concerns over its possible links with infant microcephaly have been raised. Providing a causal link between ZIKV infection and birth defects is still a challenge. MicroRNAs (miRNAs) are small noncoding RNAs (sncRNAs) that regulate post-transcriptional gene expression by translational repression, and play important roles in viral pathogenesis and brain development. The potential for flavivirus-mediated miRNA signalling dysfunction in brain-tissue development provides a compelling hypothesis to test the perceived link between ZIKV and microcephaly. Here, we applied in silico analyses to provide novel insights into how Congenital ZIKA Syndrome symptoms may be related to an imbalance in miRNA function. Moreover, following World Health Organization (WHO) recommendations, we have assembled a database to help target investigations of the possible relationship between ZIKV symptoms and miRNA-mediated human gene expression. We have computationally predicted both miRNAs encoded by ZIKV able to target genes in the human genome and cellular (human) miRNAs capable of interacting with ZIKV genomes. Our results represent a step forward in ZIKV studies, providing new insights to support research in this field and to identify potential targets for therapy.

  3. Toward an interactive article: integrating journals and biological databases

    Directory of Open Access Journals (Sweden)

    Marygold Steven J

    2011-05-01

    Full Text Available Abstract Background: Journal articles and databases are two major modes of communication in the biological sciences, and thus integrating these critical resources is of urgent importance to increase the pace of discovery. Projects focused on bridging the gap between journals and databases have been on the rise over the last five years and have resulted in the development of automated tools that can recognize entities within a document and link those entities to a relevant database. Unfortunately, automated tools cannot resolve ambiguities that arise from one term being used to signify entities that are quite distinct from one another. Instead, resolving these ambiguities requires some manual oversight. Finding the right balance between the speed and portability of automation and the accuracy and flexibility of manual effort is a crucial goal in making text markup a successful venture. Results: We have established a journal article mark-up pipeline that links GENETICS journal articles and the model organism database (MOD) WormBase. This pipeline uses a lexicon built with entities from the database as a first step. The entity markup pipeline results in links from over nine classes of objects including genes, proteins, alleles, phenotypes and anatomical terms. New entities and ambiguities are discovered and resolved by a database curator through a manual quality control (QC) step, along with help from authors via a web form that is provided to them by the journal. New entities discovered through this pipeline are immediately sent to an appropriate curator at the database. Ambiguous entities that do not automatically resolve to one link are resolved by hand, ensuring an accurate link. This pipeline has been extended to other databases, namely the Saccharomyces Genome Database (SGD) and FlyBase, and has been implemented in marking up a paper with links to multiple databases. Conclusions: Our semi-automated pipeline hyperlinks articles published in GENETICS to ...
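The lexicon-first step with ambiguity flagged for manual QC can be sketched as follows. The terms and database identifiers are invented placeholders, not WormBase data.

```python
import re

# Toy lexicon mapping entity names to database IDs; a term mapping to more
# than one ID is ambiguous and is routed to a curator instead of auto-linked.
lexicon = {
    "abc-1": ["DB:0001"],               # unambiguous: link automatically
    "dpy":   ["DB:0002", "DB:0003"],    # ambiguous: flag for manual QC
}

def mark_up(text):
    links, ambiguous = [], []
    for term, ids in lexicon.items():
        # Whole-word match so "dpy" does not fire inside longer tokens.
        if re.search(r"\b" + re.escape(term) + r"\b", text):
            (links if len(ids) == 1 else ambiguous).append((term, ids))
    return links, ambiguous

links, ambiguous = mark_up("Mutations in abc-1 and dpy were scored.")
print(links)      # [('abc-1', ['DB:0001'])]
print(ambiguous)  # [('dpy', ['DB:0002', 'DB:0003'])]
```

A real pipeline would build the lexicon from the database's entity tables and emit hyperlinks for the unambiguous hits while queuing the ambiguous ones for curation.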

  4. Adsorption of Amido Black 10B from aqueous solutions onto Zr (IV) surface-immobilized cross-linked chitosan/bentonite composite

    International Nuclear Information System (INIS)

    Zhang, Lujie; Hu, Pan; Wang, Jing; Huang, Ruihua

    2016-01-01

    Highlights: • Zr-CCB was prepared and characterized. • The adsorption of AB10B followed the Langmuir isotherm model. • The pseudo-second-order model described the kinetic behavior. Abstract: Zr(IV) surface-immobilized cross-linked chitosan/bentonite composite was synthesized by immersing cross-linked chitosan/bentonite composite in zirconium oxychloride solution, and characterized by X-ray diffraction, Fourier transform infrared spectroscopy and scanning electron microscopy. The adsorption of an anionic dye, Amido Black 10B, from aqueous solution by the Zr(IV)-loaded cross-linked chitosan/bentonite composite was investigated as a function of the loading amount of Zr(IV), adsorbent dosage, pH value of the initial dye solution, and ionic strength. The removal of Amido Black 10B increased with an increase in the loading amount of Zr(IV) and adsorbent dosage, but decreased with an increase in pH or ionic strength. The adsorption of AB10B onto the composite was favored at lower pH values and higher temperatures. The Langmuir isotherm model fitted the equilibrium adsorption data well, and the maximum monolayer adsorption capacity was 418.4 mg/g at natural pH and 298 K. The pseudo-second-order kinetic model described the adsorption process well. The possible mechanisms controlling Amido Black 10B adsorption include hydrogen bonding and electrostatic interactions.
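The two models invoked here have standard closed forms: the Langmuir isotherm q_e = q_max K_L C_e / (1 + K_L C_e) and the pseudo-second-order kinetics q_t = k2 q_e^2 t / (1 + k2 q_e t). A small numeric sketch using the reported q_max of 418.4 mg/g follows; the K_L and k2 values are assumed for illustration, not fitted from this study's data.

```python
# Langmuir isotherm and pseudo-second-order kinetics, using the reported
# q_max; K_L below is an assumed value, not a fitted constant.
Q_MAX = 418.4   # mg/g, maximum monolayer capacity reported in the abstract
K_L = 0.05      # L/mg, assumed Langmuir affinity constant

def langmuir(c_e: float) -> float:
    """Equilibrium uptake q_e (mg/g) at equilibrium concentration c_e (mg/L)."""
    return Q_MAX * K_L * c_e / (1.0 + K_L * c_e)

def pseudo_second_order(t: float, q_e: float, k2: float) -> float:
    """Uptake q_t (mg/g) at time t under pseudo-second-order kinetics."""
    return (k2 * q_e ** 2 * t) / (1.0 + k2 * q_e * t)

print(round(langmuir(100.0), 1))  # 348.7 mg/g: approaches Q_MAX as c_e grows
```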

  5. Adsorption of Amido Black 10B from aqueous solutions onto Zr (IV) surface-immobilized cross-linked chitosan/bentonite composite

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lujie; Hu, Pan; Wang, Jing; Huang, Ruihua, E-mail: hrh20022002@163.com

    2016-04-30

    Highlights: • Zr-CCB was prepared and characterized. • The adsorption of AB10B followed the Langmuir isotherm model. • The pseudo-second-order model described the kinetic behavior. Abstract: Zr(IV) surface-immobilized cross-linked chitosan/bentonite composite was synthesized by immersing cross-linked chitosan/bentonite composite in zirconium oxychloride solution, and characterized by X-ray diffraction, Fourier transform infrared spectroscopy and scanning electron microscopy. The adsorption of an anionic dye, Amido Black 10B, from aqueous solution by the Zr(IV)-loaded cross-linked chitosan/bentonite composite was investigated as a function of the loading amount of Zr(IV), adsorbent dosage, pH value of the initial dye solution, and ionic strength. The removal of Amido Black 10B increased with an increase in the loading amount of Zr(IV) and adsorbent dosage, but decreased with an increase in pH or ionic strength. The adsorption of AB10B onto the composite was favored at lower pH values and higher temperatures. The Langmuir isotherm model fitted the equilibrium adsorption data well, and the maximum monolayer adsorption capacity was 418.4 mg/g at natural pH and 298 K. The pseudo-second-order kinetic model described the adsorption process well. The possible mechanisms controlling Amido Black 10B adsorption include hydrogen bonding and electrostatic interactions.

  6. Intraoperative corneal thickness measurements during corneal collagen cross-linking with isotonic riboflavin solution without dextran in corneal ectasia.

    Science.gov (United States)

    Cınar, Yasin; Cingü, Abdullah Kürşat; Sahin, Alparslan; Türkcü, Fatih Mehmet; Yüksel, Harun; Caca, Ihsan

    2014-03-01

    Abstract Objective: To monitor changes in corneal thickness during the corneal collagen cross-linking procedure using isotonic riboflavin solution without dextran in ectatic corneal diseases. Eleven eyes of eleven patients with progressive keratoconus (n = 10) and iatrogenic corneal ectasia (n = 1) were included in this study. Corneal thickness measurements were obtained before epithelial removal, after epithelial removal, following the instillation of isotonic riboflavin solution without dextran for 30 min, and after 10 min of ultraviolet A irradiation. The mean thinnest pachymetric measurements were 391.82 ± 30.34 µm (320-434 µm) after de-epithelialization of the cornea, 435 ± 21.17 µm (402-472 µm) following 30 min of instillation of isotonic riboflavin solution without dextran, and 431.73 ± 20.64 µm (387-461 µm) following 10 min of ultraviolet A irradiation of the cornea. Performing the corneal cross-linking procedure with isotonic riboflavin solution without dextran might induce not corneal thinning but slight swelling throughout the procedure.

  7. Q-bank phytoplasma database

    DEFF Research Database (Denmark)

    Contaldo, Nicoletta; Bertaccini, Assunta; Nicolaisen, Mogens

    2014-01-01

    The setting up of the Q-Bank phytoplasma database, freely available online for quarantine phytoplasma and also for general phytoplasma identification, is described. The tool was developed in the frame of the EU-FP7 project Qbol and is linked with a new project, Q-collect, in order to make widely available the identification ...

  8. Successful linking of the Society of Thoracic Surgeons database to social security data to examine survival after cardiac operations.

    Science.gov (United States)

    Jacobs, Jeffrey Phillip; Edwards, Fred H; Shahian, David M; Prager, Richard L; Wright, Cameron D; Puskas, John D; Morales, David L S; Gammie, James S; Sanchez, Juan A; Haan, Constance K; Badhwar, Vinay; George, Kristopher M; O'Brien, Sean M; Dokholyan, Rachel S; Sheng, Shubin; Peterson, Eric D; Shewan, Cynthia M; Feehan, Kelly M; Han, Jane M; Jacobs, Marshall Lewis; Williams, William G; Mayer, John E; Chitwood, W Randolph; Murray, Gordon F; Grover, Frederick L

    2011-07-01

    Long-term evaluation of cardiothoracic surgical outcomes is a major goal of The Society of Thoracic Surgeons (STS). Linking the STS Database to the Social Security Death Master File (SSDMF) allows for the verification of "life status." This study demonstrates the feasibility of linking the STS Database to the SSDMF and examines longitudinal survival after cardiac operations. For all operations in the STS Adult Cardiac Surgery Database performed in 2008 in patients with an available Social Security Number, the SSDMF was searched for a matching Social Security Number. Survival probabilities at 30 days and 1 year were estimated for nine common operations. A Social Security Number was available for 101,188 patients undergoing isolated coronary artery bypass grafting, 12,336 patients undergoing isolated aortic valve replacement, and 6,085 patients undergoing isolated mitral valve operations. One-year survival for isolated coronary artery bypass grafting was 88.9% (6,529 of 7,344) with all vein grafts, 95.2% (84,696 of 88,966) with a single mammary artery graft, 97.4% (4,422 of 4,540) with bilateral mammary artery grafts, and 95.6% (7,543 of 7,890) with all arterial grafts. One-year survival was 92.4% (11,398 of 12,336) for isolated aortic valve replacement (95.6% [2,109 of 2,206] with mechanical prosthesis and 91.7% [9,289 of 10,130] with biologic prosthesis), 86.5% (2,312 of 2,674) for isolated mitral valve replacement (91.7% [923 of 1,006] with mechanical prosthesis and 83.3% [1,389 of 1,668] with biologic prosthesis), and 96.0% (3,275 of 3,411) for isolated mitral valve repair. Successful linkage to the SSDMF has substantially increased the power of the STS Database. These longitudinal survival data from this large multi-institutional study provide reassurance about the durability and long-term benefits of cardiac operations and constitute a contemporary benchmark for survival after cardiac operations. Copyright © 2011 The Society of Thoracic Surgeons. 
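The one-year survival figures in the abstract are simple proportions of survivors to patients in each cohort. A quick sanity check that the quoted percentages match the raw counts, using only the numbers given above:

```python
# Verify the reported 1-year survival percentages against the raw counts
# quoted in the abstract (survivors / total, rounded to one decimal place).
cohorts = {
    "CABG, all vein grafts": (6529, 7344, 88.9),
    "CABG, single mammary artery": (84696, 88966, 95.2),
    "CABG, bilateral mammary arteries": (4422, 4540, 97.4),
    "CABG, all arterial grafts": (7543, 7890, 95.6),
    "Isolated AVR": (11398, 12336, 92.4),
    "Isolated MV replacement": (2312, 2674, 86.5),
    "Isolated MV repair": (3275, 3411, 96.0),
}

for name, (survivors, total, reported) in cohorts.items():
    pct = round(100 * survivors / total, 1)
    print(f"{name}: {pct}% (reported {reported}%)")
    assert pct == reported  # every quoted percentage matches its counts
```

All seven cohorts check out, so the percentages and denominators in the abstract are internally consistent.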

  9. Household Products Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — This database links over 4,000 consumer brands to health effects from Material Safety Data Sheets (MSDS) provided by the manufacturers and allows scientists and...

  10. Database Description - ClEST | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available Affiliation: National Institute of Advanced Industrial Science and Technology Contact address: Symbiotic Evolut...2012 Aug; 47(3):233-243. External Links: Original website information Database maintenance site: Symbiotic Ev

  11. ECMDB: The E. coli Metabolome Database

    OpenAIRE

    Guo, An Chi; Jewison, Timothy; Wilson, Michael; Liu, Yifeng; Knox, Craig; Djoumbou, Yannick; Lo, Patrick; Mandal, Rupasri; Krishnamurthy, Ram; Wishart, David S.

    2012-01-01

    The Escherichia coli Metabolome Database (ECMDB, http://www.ecmdb.ca) is a comprehensively annotated metabolomic database containing detailed information about the metabolome of E. coli (K-12). Modelled closely on the Human and Yeast Metabolome Databases, the ECMDB contains >2600 metabolites with links to ~1500 different genes and proteins, including enzymes and transporters. The information in the ECMDB has been collected from dozens of textbooks, journal articles and electronic databases. E...

  12. Coordinating Mobile Databases: A System Demonstration

    OpenAIRE

    Zaihrayeu, Ilya; Giunchiglia, Fausto

    2004-01-01

    In this paper we present the Peer Database Management System (PDBMS). This system runs on top of a standard database management system and allows it to connect its database with other (peer) databases on the network. A distinguishing feature of our solution is that PDBMS allows conventional database technology to operate effectively in mobile settings. We think of database mobility as a database network, where databases appear and disappear spontaneously and their network access point...

  13. Database Description - fRNAdb | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available Affiliation: National Institute of Advanced Industrial Science and Technology (AIST) Journal Search: Creato...D89-92 External Links: Original website information Database maintenance site: National Institute of Advanced Industrial Science and Technology

  14. E-MSD: the European Bioinformatics Institute Macromolecular Structure Database.

    Science.gov (United States)

    Boutselakis, H; Dimitropoulos, D; Fillon, J; Golovin, A; Henrick, K; Hussain, A; Ionides, J; John, M; Keller, P A; Krissinel, E; McNeil, P; Naim, A; Newman, R; Oldfield, T; Pineda, J; Rachedi, A; Copeland, J; Sitnov, A; Sobhany, S; Suarez-Uruena, A; Swaminathan, J; Tagari, M; Tate, J; Tromm, S; Velankar, S; Vranken, W

    2003-01-01

    The E-MSD macromolecular structure relational database (http://www.ebi.ac.uk/msd) is designed to be a single access point for protein and nucleic acid structures and related information. The database is derived from Protein Data Bank (PDB) entries. Relational database technologies are used in a comprehensive cleaning procedure to ensure data uniformity across the whole archive. The search database contains an extensive set of derived properties, goodness-of-fit indicators, and links to other EBI databases including InterPro, GO, and SWISS-PROT, together with links to SCOP, CATH, PFAM and PROSITE. A generic search interface is available, coupled with a fast secondary structure domain search tool.

  15. Towards linked open gene mutations data

    Science.gov (United States)

    2012-01-01

    Background With the advent of high-throughput technologies, a great wealth of variation data is being produced. Such information may constitute the basis for correlation analyses between genotypes and phenotypes and, in the future, for personalized medicine. Several databases on gene variation exist, but this kind of information is still scarce in the Semantic Web framework. In this paper, we discuss issues related to the integration of mutation data in the Linked Open Data infrastructure, part of the Semantic Web framework. We present the development of a mapping from the IARC TP53 Mutation database to RDF and the implementation of servers publishing this data. Methods A version of the IARC TP53 Mutation database implemented in a relational database was used as the first test set. Automatic mappings to RDF were first created by using D2RQ and later manually refined by introducing concepts and properties from domain vocabularies and ontologies, as well as links to Linked Open Data implementations of various systems of biomedical interest. Since D2RQ query performance is lower than that achievable with an RDF archive, generated data was also loaded into a dedicated system based on tools from the Jena software suite. Results We have implemented a D2RQ Server for TP53 mutation data, providing data on a subset of the IARC database, including gene variations, somatic mutations, and bibliographic references. The server allows users to browse the RDF graph by using links both between classes and to external systems. An alternative interface offers improved performance for SPARQL queries. The resulting data can be explored by using any Semantic Web browser or application. Conclusions This has been the first case of a mutation database exposed as Linked Data. A revised version of our prototype, including further concepts and IARC TP53 Mutation database data sets, is under development. The publication of variation information as Linked Data opens new perspectives

  16. Towards linked open gene mutations data.

    Science.gov (United States)

    Zappa, Achille; Splendiani, Andrea; Romano, Paolo

    2012-03-28

    With the advent of high-throughput technologies, a great wealth of variation data is being produced. Such information may constitute the basis for correlation analyses between genotypes and phenotypes and, in the future, for personalized medicine. Several databases on gene variation exist, but this kind of information is still scarce in the Semantic Web framework. In this paper, we discuss issues related to the integration of mutation data in the Linked Open Data infrastructure, part of the Semantic Web framework. We present the development of a mapping from the IARC TP53 Mutation database to RDF and the implementation of servers publishing this data. A version of the IARC TP53 Mutation database implemented in a relational database was used as the first test set. Automatic mappings to RDF were first created by using D2RQ and later manually refined by introducing concepts and properties from domain vocabularies and ontologies, as well as links to Linked Open Data implementations of various systems of biomedical interest. Since D2RQ query performance is lower than that achievable with an RDF archive, generated data was also loaded into a dedicated system based on tools from the Jena software suite. We have implemented a D2RQ Server for TP53 mutation data, providing data on a subset of the IARC database, including gene variations, somatic mutations, and bibliographic references. The server allows users to browse the RDF graph by using links both between classes and to external systems. An alternative interface offers improved performance for SPARQL queries. The resulting data can be explored by using any Semantic Web browser or application. This has been the first case of a mutation database exposed as Linked Data. A revised version of our prototype, including further concepts and IARC TP53 Mutation database data sets, is under development. The publication of variation information as Linked Data opens new perspectives: the exploitation of SPARQL searches on
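The D2RQ approach described above maps rows of a relational table to RDF triples. A toy stand-alone sketch of that idea, turning hypothetical mutation rows into N-Triples; the table layout, URIs and property names are invented for illustration and are not those of the IARC TP53 database:

```python
# D2RQ-style mapping sketch: each relational row becomes a set of RDF
# triples serialized as N-Triples. All names/URIs here are hypothetical.
BASE = "http://example.org/tp53"

rows = [  # (mutation_id, gene, codon, effect) -- invented sample rows
    ("M001", "TP53", 175, "missense"),
    ("M002", "TP53", 273, "missense"),
]

def row_to_triples(mutation_id, gene, codon, effect):
    subj = f"<{BASE}/mutation/{mutation_id}>"
    return [
        f'{subj} <{BASE}/vocab#gene> "{gene}" .',
        f'{subj} <{BASE}/vocab#codon> "{codon}" .',
        f'{subj} <{BASE}/vocab#effect> "{effect}" .',
    ]

triples = [t for row in rows for t in row_to_triples(*row)]
print("\n".join(triples))
```

A real deployment would instead declare this mapping in D2RQ's mapping language and let the D2RQ server answer SPARQL queries against the live relational database, or bulk-load the generated triples into a Jena-based store as the paper describes.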

  17. A global database of sap flow measurements (SAPFLUXNET) to link plant and ecosystem physiology

    Science.gov (United States)

    Poyatos, Rafael; Granda, Víctor; Flo, Víctor; Molowny-Horas, Roberto; Mencuccini, Maurizio; Oren, Ram; Katul, Gabriel; Mahecha, Miguel; Steppe, Kathy; Martínez-Vilalta, Jordi

    2017-04-01

    Regional and global networks of ecosystem CO2 and water flux monitoring have dramatically increased our understanding of ecosystem functioning in the last 20 years. More recently, analyses of ecosystem-level fluxes have successfully incorporated data streams at coarser (remote sensing) and finer (plant traits) organisational scales. However, there are few data sources that capture the diel to seasonal dynamics of whole-plant physiology and that can provide a link between organism- and ecosystem-level function. Sap flow measured in plant stems reveals the temporal patterns in plant water transport, as mediated by stomatal regulation and hydraulic architecture. The widespread use of thermometric methods of sap flow measurement since the 1990s has resulted in numerous data sets for hundreds of species and sites worldwide, but these data have remained fragmentary and generally unavailable for syntheses of regional to global scope. We are compiling the first global database of sub-daily sap flow measurements in individual plants (SAPFLUXNET), aimed at unravelling the environmental and biotic drivers of plant transpiration regulation globally. I will present the SAPFLUXNET data infrastructure and workflow, which is built upon flexible, open-source computing tools within the R environment (dedicated R packages and classes, interactive documents and apps with Rmarkdown and Shiny). Data collection started in mid-2016, we have already incorporated > 50 datasets representing > 40 species and > 350 individual plants, globally distributed, and the number of contributed data sets is increasing rapidly. I will provide a general overview of the distribution of available data sets according to climate, measurement method, species, functional groups and plant size attributes. In parallel to the sap flow data compilation, we have also collated published results from calibrations of sap flow methods, to provide a first quantification on the variability associated with different sap

  18. National Database of Geriatrics

    DEFF Research Database (Denmark)

    Kannegaard, Pia Nimann; Vinding, Kirsten L; Hare-Bruun, Helle

    2016-01-01

    AIM OF DATABASE: The aim of the National Database of Geriatrics is to monitor the quality of interdisciplinary diagnostics and treatment of patients admitted to a geriatric hospital unit. STUDY POPULATION: The database population consists of patients who were admitted to a geriatric hospital unit... Geriatric patients cannot be defined by specific diagnoses. A geriatric patient is typically a frail multimorbid elderly patient with decreasing functional ability and social challenges. The database includes 14-15,000 admissions per year, and the database completeness has been stable at 90% during the past..., percentage of discharges with a rehabilitation plan, and the proportion of cases where an interdisciplinary conference has taken place. Data are recorded by doctors, nurses, and therapists in a database and linked to the Danish National Patient Register. DESCRIPTIVE DATA: Descriptive patient-related data include

  19. Design and Development of a Linked Open Data-Based Health Information Representation and Visualization System: Potentials and Preliminary Evaluation

    Science.gov (United States)

    Kauppinen, Tomi; Keßler, Carsten; Fritz, Fleur

    2014-01-01

    Background Healthcare organizations around the world are challenged by pressures to reduce cost, improve coordination and outcome, and provide more with less. This requires effective planning and evidence-based practice by generating important information from available data. Thus, flexible and user-friendly ways to represent, query, and visualize health data become increasingly important. International organizations such as the World Health Organization (WHO) regularly publish vital data on priority health topics that can be utilized for public health policy and health service development. However, the data in most portals is displayed in either Excel or PDF formats, which makes information discovery and reuse difficult. Linked Open Data (LOD)—a set of Semantic Web best practices for publishing and linking heterogeneous data—can be applied to the representation and management of public level health data to alleviate such challenges. However, the technologies behind building LOD systems and their effectiveness for health data are yet to be assessed. Objective The objective of this study is to evaluate whether Linked Data technologies are potential options for health information representation, visualization, and retrieval systems development and to identify the available tools and methodologies to build Linked Data-based health information systems. Methods We used the Resource Description Framework (RDF) for data representation, Fuseki triple store for data storage, and Sgvizler for information visualization. Additionally, we integrated a SPARQL query interface for interacting with the data. We primarily use the WHO health observatory dataset to test the system. All the data were represented using RDF and interlinked with other related datasets on the Web of Data using Silk—a link discovery framework for Web of Data. A preliminary usability assessment was conducted following the System Usability Scale (SUS) method. Results We developed an LOD

  20. Design and development of a linked open data-based health information representation and visualization system: potentials and preliminary evaluation.

    Science.gov (United States)

    Tilahun, Binyam; Kauppinen, Tomi; Keßler, Carsten; Fritz, Fleur

    2014-10-25

    Healthcare organizations around the world are challenged by pressures to reduce cost, improve coordination and outcome, and provide more with less. This requires effective planning and evidence-based practice by generating important information from available data. Thus, flexible and user-friendly ways to represent, query, and visualize health data become increasingly important. International organizations such as the World Health Organization (WHO) regularly publish vital data on priority health topics that can be utilized for public health policy and health service development. However, the data in most portals is displayed in either Excel or PDF formats, which makes information discovery and reuse difficult. Linked Open Data (LOD)-a set of Semantic Web best practices for publishing and linking heterogeneous data-can be applied to the representation and management of public level health data to alleviate such challenges. However, the technologies behind building LOD systems and their effectiveness for health data are yet to be assessed. The objective of this study is to evaluate whether Linked Data technologies are potential options for health information representation, visualization, and retrieval systems development and to identify the available tools and methodologies to build Linked Data-based health information systems. We used the Resource Description Framework (RDF) for data representation, Fuseki triple store for data storage, and Sgvizler for information visualization. Additionally, we integrated a SPARQL query interface for interacting with the data. We primarily use the WHO health observatory dataset to test the system. All the data were represented using RDF and interlinked with other related datasets on the Web of Data using Silk-a link discovery framework for Web of Data. A preliminary usability assessment was conducted following the System Usability Scale (SUS) method. We developed an LOD-based health information representation, querying
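The System Usability Scale (SUS) mentioned in the abstract has a fixed scoring rule: ten items rated 1-5, odd-numbered items contribute (rating - 1), even-numbered items contribute (5 - rating), and the sum is multiplied by 2.5 to give a 0-100 score. A minimal implementation:

```python
# Standard SUS scoring: 10 items, 5-point Likert scale (1-5).
def sus_score(ratings):
    assert len(ratings) == 10 and all(1 <= r <= 5 for r in ratings)
    # enumerate() is 0-based, so even index i corresponds to odd-numbered items
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(ratings))
    return total * 2.5

# All-neutral answers (3 everywhere) give the mid-point score.
print(sus_score([3] * 10))  # -> 50.0
```

Scores above roughly 68 are conventionally read as above-average usability, which is how a preliminary SUS assessment like the one above is typically interpreted.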

  1. The NCBI BioSystems database.

    Science.gov (United States)

    Geer, Lewis Y; Marchler-Bauer, Aron; Geer, Renata C; Han, Lianyi; He, Jane; He, Siqian; Liu, Chunlei; Shi, Wenyao; Bryant, Stephen H

    2010-01-01

    The NCBI BioSystems database, found at http://www.ncbi.nlm.nih.gov/biosystems/, centralizes and cross-links existing biological systems databases, increasing their utility and target audience by integrating their pathways and systems into NCBI resources. This integration allows users of NCBI's Entrez databases to quickly categorize proteins, genes and small molecules by metabolic pathway, disease state or other BioSystem type, without requiring time-consuming inference of biological relationships from the literature or multiple experimental datasets.

  2. Development of knowledge base system linked to material database

    International Nuclear Information System (INIS)

    Kaji, Yoshiyuki; Tsuji, Hirokazu; Mashiko, Shinichi; Miyakawa, Shunichi; Fujita, Mitsutane; Kinugawa, Junichi; Iwata, Shuichi

    2002-01-01

    The distributed material database system named 'Data-Free-Way' has been developed by four organizations (the National Institute for Materials Science, the Japan Atomic Energy Research Institute, the Japan Nuclear Cycle Development Institute, and the Japan Science and Technology Corporation) under a cooperative agreement in order to share fresh and stimulating information as well as accumulated information for the development of advanced nuclear materials, for the design of structural components, etc. In order to create additional values of the system, knowledge base system, in which knowledge extracted from the material database is expressed, is planned to be developed for more effective utilization of Data-Free-Way. XML (eXtensible Markup Language) has been adopted as the description method of the retrieved results and the meaning of them. One knowledge note described with XML is stored as one knowledge which composes the knowledge base. Since this knowledge note is described with XML, the user can easily convert the display form of the table and the graph into the data format which the user usually uses. This paper describes the current status of Data-Free-Way, the description method of knowledge extracted from the material database with XML and the distributed material knowledge base system. (author)
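A "knowledge note" of the kind the paper stores in XML might look like the following sketch; the element names and content are invented for illustration and are not the actual Data-Free-Way schema:

```python
# Sketch of an XML knowledge note extracted from a material database.
# Element names and values are hypothetical, not the Data-Free-Way schema.
import xml.etree.ElementTree as ET

note = ET.Element("knowledge_note")
ET.SubElement(note, "source_database").text = "Data-Free-Way"
ET.SubElement(note, "material").text = "SUS316 stainless steel"
rule = ET.SubElement(note, "extracted_knowledge")
rule.text = "Tensile strength decreases with increasing test temperature."

xml_text = ET.tostring(note, encoding="unicode")
print(xml_text)

# Because the note is plain XML, it can be re-parsed and converted into
# other display formats (tables, graphs), as the paper describes.
parsed = ET.fromstring(xml_text)
assert parsed.find("material").text == "SUS316 stainless steel"
```

Storing each note as one XML document is what allows users to re-render the retrieved tables and graphs in whatever format they normally use.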

  3. Reach Address Database (RAD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Reach Address Database (RAD) stores the reach address of each Water Program feature that has been linked to the underlying surface water features (streams,...

  4. Specialist Bibliographic Databases.

    Science.gov (United States)

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing oneself with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. Database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find the source selection criteria particularly useful and may apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls.

  5. Specialist Bibliographic Databases

    Science.gov (United States)

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing oneself with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. Database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find the source selection criteria particularly useful and may apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  6. A simple method for serving Web hypermaps with dynamic database drill-down

    Directory of Open Access Journals (Sweden)

    Carson Ewart R

    2002-08-01

    Full Text Available Abstract Background HealthCyberMap http://healthcybermap.semanticweb.org aims at mapping parts of health information cyberspace in novel ways to deliver a semantically superior user experience. This is achieved through "intelligent" categorisation and interactive hypermedia visualisation of health resources using metadata, clinical codes and GIS. HealthCyberMap is an ArcView 3.1 project. WebView, the Internet extension to ArcView, publishes HealthCyberMap ArcView Views as Web client-side imagemaps. The basic WebView set-up does not support any GIS database connection, and published Web maps become disconnected from the original project. A dedicated Internet map server would be the best way to serve HealthCyberMap database-driven interactive Web maps, but is an expensive and complex solution to acquire, run and maintain. This paper describes HealthCyberMap's simple, low-cost method for "patching" WebView to serve hypermaps with dynamic database drill-down functionality on the Web. Results The proposed solution is currently used for publishing HealthCyberMap GIS-generated navigational information maps on the Web while maintaining their links with the underlying resource metadata base. Conclusion The authors believe their map serving approach as adopted in HealthCyberMap has been very successful, especially in cases when only map attribute data change without a corresponding effect on map appearance. It should be also possible to use the same solution to publish other interactive GIS-driven maps on the Web, e.g., maps of real world health problems.
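The drill-down idea above amounts to a client-side imagemap whose clickable areas carry a feature identifier to a server-side script, so the published map stays connected to the metadata base. A minimal sketch; the script path, parameter name, coordinates and IDs are hypothetical, not HealthCyberMap's actual setup:

```python
# Generate <area> tags for a client-side imagemap; each area links to a
# server-side script with the feature ID as a query parameter, giving
# dynamic database drill-down. All names here are illustrative only.
from urllib.parse import urlencode

def area_tag(shape, coords, feature_id):
    href = "/cgi-bin/drilldown.py?" + urlencode({"feature": feature_id})
    return f'<area shape="{shape}" coords="{coords}" href="{href}">'

features = [("rect", "10,10,60,40", "resource-42"),
            ("circle", "120,80,15", "resource-43")]

imagemap = "\n".join(area_tag(*f) for f in features)
print(imagemap)
```

Because the map image itself is static, only the attribute lookups go through the script, which is why the approach works well when attribute data change without affecting map appearance.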

  7. Using a database to manage resolution of comments on standards

    International Nuclear Information System (INIS)

    Holloran, R.W.; Kelley, R.P.

    1995-01-01

    Features of production systems that would enhance development and implementation of procedures and other standards were first suggested in 1988; that work described how a database could provide the features sought for managing the content of structured documents such as standards and procedures. This paper describes enhancements of the database that manage the more complex links associated with resolution of comments. Displaying the linked information on a computer display aids comment resolvers. A hardcopy report generated by the database permits others to independently evaluate the resolution of comments in context with the original text of the standard, the comment, and the revised text of the standard. Because the links are maintained by the database, consistency between the agreed-upon resolutions and the text of the standard can be maintained throughout the subsequent reviews of the standard. Each of the links is bidirectional; i.e., the relationships between any two documents can be viewed from the perspective of either document.
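The bidirectional links described above can be sketched as a pair of indexes kept in sync, so the same comment-to-section relationship is traversable from either end. The schema and identifiers below are illustrative, not the database in the paper:

```python
# Bidirectional comment <-> standard-section links: two indexes updated
# together, so every link can be viewed from either document's perspective.
from collections import defaultdict

comment_to_sections = defaultdict(set)
section_to_comments = defaultdict(set)

def link(comment_id, section_id):
    comment_to_sections[comment_id].add(section_id)
    section_to_comments[section_id].add(comment_id)

link("C-17", "4.2.1")
link("C-17", "4.2.2")
link("C-18", "4.2.1")

# From the comment's perspective: which sections does C-17 affect?
print(sorted(comment_to_sections["C-17"]))   # -> ['4.2.1', '4.2.2']
# From the standard's perspective: which comments touch section 4.2.1?
print(sorted(section_to_comments["4.2.1"]))  # -> ['C-17', 'C-18']
```

Keeping both directions in one place is what lets the database hold resolutions and the standard's text consistent through later reviews.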

  8. Linked Patient-Reported Outcomes Data From Patients With Multiple Sclerosis Recruited on an Open Internet Platform to Health Care Claims Databases Identifies a Representative Population for Real-Life Data Analysis in Multiple Sclerosis.

    Science.gov (United States)

    Risson, Valery; Ghodge, Bhaskar; Bonzani, Ian C; Korn, Jonathan R; Medin, Jennie; Saraykar, Tanmay; Sengupta, Souvik; Saini, Deepanshu; Olson, Melvin

    2016-09-22

    An enormous amount of information relevant to public health is being generated directly by online communities. To explore the feasibility of creating a dataset that links patient-reported outcomes data, from a Web-based survey of US patients with multiple sclerosis (MS) recruited on open Internet platforms, to health care utilization information from health care claims databases. The dataset was generated by linkage analysis to a broader MS population in the United States using both pharmacy and medical claims data sources. US Facebook users with an interest in MS were alerted to a patient-reported survey by targeted advertisements. Eligibility criteria were diagnosis of MS by a specialist (primary progressive, relapsing-remitting, or secondary progressive), ≥12-month history of disease, age 18-65 years, and commercial health insurance. Participants completed a questionnaire including data on demographic and disease characteristics, current and earlier therapies, relapses, disability, health-related quality of life, and employment status and productivity. A unique anonymous profile was generated for each survey respondent. Each anonymous profile was linked to a number of medical and pharmacy claims datasets in the United States. Linkage rates were assessed and survey respondents' representativeness was evaluated based on differences in the distribution of characteristics between the linked survey population and the general MS population in the claims databases. The advertisement was placed on 1,063,973 Facebook users' pages generating 68,674 clicks, 3719 survey attempts, and 651 successfully completed surveys, of which 440 could be linked to any of the claims databases for 2014 or 2015 (67.6% linkage rate). Overall, no significant differences were found between patients who were linked and not linked for educational status, ethnicity, current or prior disease-modifying therapy (DMT) treatment, or presence of a relapse in the last 12 months. The frequencies of the
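The 67.6% linkage rate quoted above is simply the number of linked surveys divided by the number of completed surveys, as a quick check confirms:

```python
# Recruitment funnel from the abstract: clicks -> attempts -> completions,
# and the linkage rate = linked / completed.
clicks, attempts, completed, linked = 68674, 3719, 651, 440

rate = round(100 * linked / completed, 1)
print(f"linkage rate: {rate}%")  # -> linkage rate: 67.6%
assert rate == 67.6
```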

  9. A Sustainable Spacecraft Component Database Solution, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Numerous spacecraft component databases have been developed to support NASA, DoD, and contractor design centers and design tools. Despite the clear utility of...

  10. An Integrated Molecular Database on Indian Insects.

    Science.gov (United States)

    Pratheepa, Maria; Venkatesan, Thiruvengadam; Gracy, Gandhi; Jalali, Sushil Kumar; Rangheswaran, Rajagopal; Antony, Jomin Cruz; Rai, Anil

    2018-01-01

    MOlecular Database on Indian Insects (MODII) is an online database linking several databases like Insect Pest Info, Insect Barcode Information System (IBIn), Insect Whole Genome sequence, Other Genomic Resources of National Bureau of Agricultural Insect Resources (NBAIR), Whole Genome sequencing of Honey bee viruses, Insecticide resistance gene database and Genomic tools. This database was developed with a holistic approach for collecting information about phenomic and genomic information of agriculturally important insects. This insect resource database is available online for free at http://cib.res.in/.

  11. BAPA Database: Linking landslide occurrence with rainfall in Asturias (Spain)

    Science.gov (United States)

    Valenzuela, Pablo; José Domínguez-Cuesta, María; Jiménez-Sánchez, Montserrat

    2015-04-01

    Asturias is a region in northern Spain with a temperate and humid climate. In this region, slope instability processes are very common and often cause economic losses and, occasionally, loss of life. To mitigate the geological risk involved, it is of great interest to predict the spatial and temporal occurrence of landslides. Some previous investigations have shown the importance of rainfall as a triggering factor. Despite the high incidence of these phenomena in Asturias, there are no databases of recent and present-day landslides. The BAPA Project (Base de Datos de Argayos del Principado de Asturias - Principality of Asturias Landslide Database) aims to create an inventory of slope instabilities which have occurred between 1980 and 2015. The final goal is to study in detail the relationship between rainfall and slope instabilities in Asturias, establishing the precipitation thresholds and soil moisture conditions necessary to trigger instability. This work presents the database progress, showing its structure divided into various fields that essentially contain information related to spatial, temporal, geomorphological and damage data.

  12. Integration of Oracle and Hadoop: Hybrid Databases Affordable at Scale

    Science.gov (United States)

    Canali, L.; Baranowski, Z.; Kothuri, P.

    2017-10-01

    This work reports on the activities aimed at integrating Oracle and Hadoop technologies for the use cases of CERN database services and in particular on the development of solutions for offloading data and queries from Oracle databases into Hadoop-based systems. The goal of this investigation is to increase the scalability and optimize the cost/performance footprint for some of our largest Oracle databases. These concepts have been applied, among others, to build offline copies of CERN accelerator controls and logging databases. The tested solution makes it possible to run reports on the controls data offloaded into Hadoop without affecting the critical production database, providing both performance benefits and cost reduction for the underlying infrastructure. Other use cases discussed include building hybrid database solutions with Oracle and Hadoop, offering the combined advantages of a mature relational database system with a scalable analytics engine.

  13. Molecule database framework: a framework for creating database applications with chemical structure search capability.

    Science.gov (United States)

    Kiener, Joos

    2013-12-11

    Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions have the risk of vendor lock-in and may require an expensive license of a proprietary relational database management system. To speed up and simplify the development for applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes:
    • Support for multi-component compounds (mixtures)
    • Import and export of SD-files
    • Optional security (authorization)
    For chemical structure searching Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method level security. Molecule Database Framework supports multi-component chemical compounds (mixtures). Furthermore, the design of entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and import and export of SD-files. By using a simple web application it was shown that Molecule Database Framework
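The framework's central idea, hiding structure storage and search behind method calls, can be illustrated with a toy repository. The class and method names below are invented, and the search is a naive SMILES substring match standing in for the real chemistry-aware search that the framework delegates to the Bingo cartridge (the framework itself is Java; Python is used here only for the sketch):

```python
# Toy repository abstracting "store" and "substructure search" into method
# calls. The naive substring match is a stand-in for a real cartridge-backed
# chemical search; all names here are hypothetical.
class MoleculeRepository:
    def __init__(self):
        self._molecules = {}  # id -> SMILES string

    def register(self, mol_id, smiles):
        self._molecules[mol_id] = smiles

    def substructure_search(self, fragment):
        # A real implementation would issue a chemistry-aware query;
        # here we just match the fragment as a substring of the SMILES.
        return [mid for mid, smi in self._molecules.items() if fragment in smi]

repo = MoleculeRepository()
repo.register("mol-1", "c1ccccc1O")  # phenol
repo.register("mol-2", "CCO")        # ethanol
print(repo.substructure_search("c1ccccc1"))  # -> ['mol-1']
```

The point of the abstraction is exactly what the abstract claims: application code calls `substructure_search` without knowing whether a database cartridge, a toolkit, or a toy matcher answers the query.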

  14. FLUORESCENCE PROBING OF THE FORMATION OF HYDROPHOBIC MICRODOMAINS BY CROSS-LINKED POLY(ALKYLMETHYLDIALLYLAMMONIUM BROMIDES) IN AQUEOUS-SOLUTION

    NARCIS (Netherlands)

    WANG, GJ; ENGBERTS, J B F N

    Pyrene has been used as a fluorescence probe to investigate the conformational behavior of cross-linked poly(alkylmethyldiallylammonium bromides) in aqueous solutions. Binding of pyrene to hydrophobic microdomains, formed by the polysoaps, is reflected by a change in the ratio I-1/I-3 of the

  15. Compiling Holocene RSL databases from near- to far-field regions: proxies, difficulties and possible solutions

    Science.gov (United States)

    Vacchi, M.; Horton, B.; Mann, T.; Engelhart, S. E.; Rovere, A.; Nikitina, D.; Bender, M.; Roy, K.; Peltier, W. R.

    2017-12-01

    Reconstructions of relative sea level (RSL) have implications for the investigation of crustal movements, the calibration of earth rheology models and the reconstruction of ice sheets. In recent years, efforts have been made to create RSL databases following a standardized methodology. These regional databases provide a framework for developing our understanding of the primary mechanisms of RSL change since the Last Glacial Maximum and a long-term baseline against which to gauge changes in sea level during the 20th century and forecasts for the 21st. We report here the results of recently compiled databases in very different climatic and geographic contexts: the northeastern Canadian coast, the Mediterranean Sea and southeastern Asia. Our re-evaluation of sea-level indicators from geological and archaeological investigations has yielded more than 3000 RSL data-points, mainly from salt and freshwater wetlands or adjacent estuarine sediment, isolation basins, beach ridges, fixed biological indicators, beachrocks and coastal archaeological structures. We outline some of the inherent difficulties in, and potential solutions for, analysing sea-level data in such different depositional environments. In particular, we discuss problems related to the definition of a standardized indicative meaning and to the re-evaluation of old radiocarbon samples. We further address complex tectonic influences and a framework for comparing such a large variety of RSL data-points. Finally, we discuss the implications of our results for the patterns of glacio-isostatic adjustment in these regions.

  16. FunctSNP: an R package to link SNPs to functional knowledge and dbAutoMaker: a suite of Perl scripts to build SNP databases

    Directory of Open Access Journals (Sweden)

    Watson-Haigh Nathan S

    2010-06-01

    Full Text Available Abstract Background Whole genome association studies using highly dense single nucleotide polymorphisms (SNPs) are a set of methods to identify DNA markers associated with variation in a particular complex trait of interest. One of the main outcomes from these studies is a subset of statistically significant SNPs. Finding the potential biological functions of such SNPs can be an important step towards further use in human and agricultural populations (e.g., for identifying genes related to susceptibility to complex diseases or genes playing key roles in development or performance). The current challenge is that the information holding the clues to SNP functions is distributed across many different databases. Efficient bioinformatics tools are therefore needed to seamlessly integrate up-to-date functional information on SNPs. Many web services have arisen to meet the challenge but most work only within the framework of human medical research. Although we acknowledge the importance of human research, we identify a need for SNP annotation tools for other organisms. Description We introduce an R package called FunctSNP, which is the user interface to custom-built species-specific databases. The local relational databases contain SNP data together with functional annotations extracted from online resources. FunctSNP provides a unified bioinformatics resource to link SNPs with functional knowledge (e.g., genes, pathways, ontologies). We also introduce dbAutoMaker, a suite of Perl scripts, which can be scheduled to run periodically to automatically create/update the customised SNP databases. We illustrate the use of FunctSNP with a livestock example, but the approach and software tools presented here can also be applied to human and other organisms. Conclusions Finding the potential functional significance of SNPs is important when further using the outcomes from whole genome association studies. FunctSNP is unique in that it is the only R
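    The core operation behind a FunctSNP-style local annotation database, storing SNPs and functional annotations side by side and linking them with a join, can be sketched in a few lines. This is a hedged illustration in Python with SQLite (FunctSNP itself is an R package backed by species-specific databases); the table and column names are invented for the example.

```python
import sqlite3

# Build a tiny local annotation database: one table of SNPs, one of
# gene annotations (with genomic intervals and a pathway label).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE snp  (snp_id TEXT PRIMARY KEY, chrom TEXT, pos INTEGER);
CREATE TABLE gene (gene_id TEXT PRIMARY KEY, chrom TEXT,
                   start INTEGER, end_ INTEGER, pathway TEXT);
""")
conn.executemany("INSERT INTO snp VALUES (?,?,?)",
                 [("rs1", "1", 150), ("rs2", "2", 900)])
conn.executemany("INSERT INTO gene VALUES (?,?,?,?,?)",
                 [("G1", "1", 100, 200, "lipid metabolism"),
                  ("G2", "2", 10, 50, "immune response")])

# Link each SNP to any gene whose interval contains it.
rows = conn.execute("""
    SELECT s.snp_id, g.gene_id, g.pathway
    FROM snp s JOIN gene g
      ON s.chrom = g.chrom AND s.pos BETWEEN g.start AND g.end_
""").fetchall()
print(rows)  # -> [('rs1', 'G1', 'lipid metabolism')]
```

    Keeping the annotations local (and refreshing them periodically, as dbAutoMaker does) avoids depending on many remote services at analysis time.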

  17. Destruction and cross-linking of dextran during γ-radiolysis of its aqueous solutions. Effect of hydrogen ions

    International Nuclear Information System (INIS)

    Kovalev, G.V.; Sinitsin, A.P.; Bugaenko, L.T.

    2000-01-01

    The conditions under which either cross-linking or destruction predominates during γ-radiolysis, in the dose range 0-0.32 MGy, of acidic aqueous solutions of dextran macromolecules (P W = 930) were determined by viscosimetry and gel chromatography. It is shown that initial acidification of dextran solutions increases the role of cross-linking of macromolecules in the formation of the molecular-mass distribution of the polymer, whereas further acidification promotes destruction. It is established that the former is caused by the transformation of hydrated electrons into hydrogen atoms, and the latter by the catalytic effect of protons on the destruction of primary macroradicals, accompanied by breakage of glucoside bonds. It is also shown that, as the dextran concentration increases, transfer of the radical centre can occur via a macroradical-macromolecule reaction; as a result, a macroradical capable of monomolecular decomposition transforms into one incapable of such decomposition

  18. Geroprotectors.org: a new, structured and curated database of current therapeutic interventions in aging and age-related disease

    Science.gov (United States)

    Moskalev, Alexey; Chernyagina, Elizaveta; de Magalhães, João Pedro; Barardo, Diogo; Thoppil, Harikrishnan; Shaposhnikov, Mikhail; Budovsky, Arie; Fraifeld, Vadim E.; Garazha, Andrew; Tsvetkov, Vasily; Bronovitsky, Evgeny; Bogomolov, Vladislav; Scerbacov, Alexei; Kuryan, Oleg; Gurinovich, Roman; Jellen, Leslie C.; Kennedy, Brian; Mamoshina, Polina; Dobrovolskaya, Evgeniya; Aliper, Alex; Kaminsky, Dmitry; Zhavoronkov, Alex

    2015-01-01

    As the level of interest in aging research increases, there is a growing number of geroprotectors, or therapeutic interventions that aim to extend the healthy lifespan and repair or reduce aging-related damage in model organisms and, eventually, in humans. There is a clear need for a manually-curated database of geroprotectors to compile and index their effects on aging and age-related diseases and link these effects to relevant studies and multiple biochemical and drug databases. Here, we introduce the first such resource, Geroprotectors (http://geroprotectors.org). Geroprotectors is a public, rapidly explorable database that catalogs over 250 experiments involving over 200 known or candidate geroprotectors that extend lifespan in model organisms. Each compound has a comprehensive profile complete with biochemistry, mechanisms, and lifespan effects in various model organisms, along with information ranging from chemical structure, side effects, and toxicity to FDA drug status. These are presented in a visually intuitive, efficient framework fit for casual browsing or in-depth research alike. Data are linked to the source studies or databases, providing quick and convenient access to original data. The Geroprotectors database facilitates cross-study, cross-organism, and cross-discipline analysis and saves countless hours of inefficient literature and web searching. Geroprotectors is a one-stop, knowledge-sharing, time-saving resource for researchers seeking healthy aging solutions. PMID:26342919

  19. Database Vs Data Warehouse

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Data warehouse technology includes a set of concepts and methods that offer users useful information for decision making. The necessity to build a data warehouse arises from the need to improve the quality of information in the organization. The data, coming from different sources and having a variety of forms - both structured and unstructured - are filtered according to business rules and integrated into a single large data collection. Using informatics solutions, managers have understood that the data stored in operational systems - including databases - are an informational gold mine that must be exploited. Data warehouses have been developed to answer the increasing demands for complex analysis, which could not be properly met with operational databases. The present paper emphasizes some of the criteria that information application developers can use in order to choose between a database solution and a data warehouse one.

  20. Database Description - PGDBj - Ortholog DB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available e relevant data in the databases. By submitting queries to the PGDBj Ortholog DB with keywords or amino acid sequences, users... taxa including both model plants and crop plants. Following the links obtained, users can retrieve the actu

  1. Developments in diffraction databases

    International Nuclear Information System (INIS)

    Jenkins, R.

    1999-01-01

    Full text: There are a number of databases available to the diffraction community. Two of the more important of these are the Powder Diffraction File (PDF) maintained by the International Centre for Diffraction Data (ICDD), and the Inorganic Crystal Structure Database (ICSD) maintained by Fachinformationszentrum (FIZ, Karlsruhe). In application, the PDF has been an indispensable tool in phase identification and the identification of unknowns. The ICSD database has extensive and explicit reference to the structures of compounds: atomic coordinates, space group and even thermal vibration parameters. A similar database, but for organic compounds, is maintained by the Cambridge Crystallographic Data Centre. These databases are often used as independent sources of information. However, little thought has been given to how to exploit the combined properties of structural database tools. A recently completed agreement between ICDD and FIZ, plus ICDD and Cambridge, provides a first step in the complementary use of the PDF and the ICSD databases. The focus of this paper (as indicated below) is to examine ways of exploiting the combined properties of both databases. In 1996, there were approximately 76,000 entries in the PDF and approximately 43,000 entries in the ICSD database. The ICSD database has now been used to calculate entries in the PDF. Deriving d-spacing and peak intensity data requires the synthesis of full diffraction patterns, i.e., we use the structural data in the ICSD database and then add instrumental resolution information. The combined data from PDF and ICSD can be effectively used in many ways. For example, we can calculate PDF data for an ideally random crystal distribution and also in the absence of preferred orientation. Again, we can use systematic studies of intermediate members in solid solution series to help produce reliable quantitative phase analyses. In some cases, we can study how solid solution properties vary with composition and

  2. Successful linking of the Society of Thoracic Surgeons Database to Social Security data to examine the accuracy of Society of Thoracic Surgeons mortality data.

    Science.gov (United States)

    Jacobs, Jeffrey P; O'Brien, Sean M; Shahian, David M; Edwards, Fred H; Badhwar, Vinay; Dokholyan, Rachel S; Sanchez, Juan A; Morales, David L; Prager, Richard L; Wright, Cameron D; Puskas, John D; Gammie, James S; Haan, Constance K; George, Kristopher M; Sheng, Shubin; Peterson, Eric D; Shewan, Cynthia M; Han, Jane M; Bongiorno, Phillip A; Yohe, Courtney; Williams, William G; Mayer, John E; Grover, Frederick L

    2013-04-01

    The Society of Thoracic Surgeons Adult Cardiac Surgery Database has been linked to the Social Security Death Master File to verify "life status" and evaluate long-term surgical outcomes. The objective of this study is to explore practical applications of the linkage of the Society of Thoracic Surgeons Adult Cardiac Surgery Database to the Social Security Death Master File, including the use of the Social Security Death Master File to examine the accuracy of the Society of Thoracic Surgeons 30-day mortality data. On January 1, 2008, the Society of Thoracic Surgeons Adult Cardiac Surgery Database began collecting Social Security numbers in its new version 2.61. This study includes all Society of Thoracic Surgeons Adult Cardiac Surgery Database records for operations with nonmissing Social Security numbers between January 1, 2008, and December 31, 2010, inclusive. To match records between the Society of Thoracic Surgeons Adult Cardiac Surgery Database and the Social Security Death Master File, we used a combined probabilistic and deterministic matching rule with reported high sensitivity and nearly perfect specificity. Between January 1, 2008, and December 31, 2010, the Society of Thoracic Surgeons Adult Cardiac Surgery Database collected data for 870,406 operations. Social Security numbers were available for 541,953 operations and unavailable for 328,453 operations. According to the Society of Thoracic Surgeons Adult Cardiac Surgery Database, the 30-day mortality rate was 17,757/541,953 = 3.3%. Linkage to the Social Security Death Master File identified 16,565 cases of suspected 30-day deaths (3.1%). Of these, 14,983 were recorded as 30-day deaths in the Society of Thoracic Surgeons database (relative sensitivity = 90.4%). Relative sensitivity was 98.8% (12,863/13,014) for suspected 30-day deaths occurring before discharge and 59.7% (2120/3551) for suspected 30-day deaths occurring after discharge. Linkage to the Social Security Death Master File confirms the accuracy of
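    The "combined probabilistic and deterministic matching rule" mentioned in the abstract can be illustrated schematically. The sketch below is hypothetical, not the actual STS/SSDMF rule: a deterministic pass on an exact identifier (SSN), falling back to a score over weaker fields when the identifier is missing; the fields, weights and threshold are invented for illustration.

```python
from datetime import date

def records_match(a, b):
    """Toy combined matching rule (illustrative weights/threshold)."""
    if a["ssn"] and a["ssn"] == b["ssn"]:  # deterministic pass
        return True
    # Probabilistic-style fallback: require agreement on enough
    # weaker fields to exceed a score threshold.
    score = 0
    score += 2 if a["last_name"].lower() == b["last_name"].lower() else 0
    score += 1 if a["dob"] == b["dob"] else 0
    score += 1 if a["sex"] == b["sex"] else 0
    return score >= 3

sts = {"ssn": "123-45-6789", "last_name": "Smith",
       "dob": date(1950, 3, 1), "sex": "M"}
ssdmf = {"ssn": "", "last_name": "SMITH",
         "dob": date(1950, 3, 1), "sex": "M"}
print(records_match(sts, ssdmf))  # -> True (name+dob+sex score = 4)
```

    Real rules of this kind are tuned so that the deterministic pass gives near-perfect specificity while the fallback recovers matches with missing or mistyped identifiers.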

  3. The Techniques for Arbitrary Secure Querying of an Encrypted Cloud Database Using Fully Homomorphic Encryption

    Directory of Open Access Journals (Sweden)

    Filipp B. Burtyka

    2017-06-01

    Full Text Available The processing of queries to an encrypted database without decrypting it has recently been actively investigated by both cryptographers and database researchers. Such processing is enabled by various types of so-called processable encryption (PE), as well as by special architectures of the database management systems (DBMS) that use these types of encryption. The best-known types of PE are order-preserving encryption, homomorphic encryption, functional encryption, searchable encryption, and property-preserving encryption. Various DBMSs have been built on these types of encryption, the most famous being CryptDB, Monomi, Arx and the DBMS by researchers from Novosibirsk; they combine, for example, order-preserving encryption, homomorphic encryption and traditional block encryption. However, this approach can cause privacy problems. The best approach from the security viewpoint is to build a cryptographic database using only homomorphic encryption. Obstacles to this are the insufficient efficiency of existing homomorphic encryption schemes and the incompletely solved set of issues related to ensuring the confidentiality of decision making in an untrusted environment. In this paper, we propose techniques for solving these problems, in particular for executing arbitrary secure queries against an encrypted relational database using fully homomorphic encryption. We also propose a query-condition model that splits a query into atomic predicates and a linking condition. One of the proposed techniques ensures the security of the linking condition of queries; the others protect the atomic predicates. The parameters of the proposed techniques make it possible to implement them using already existing homomorphic encryption schemes. The proposed techniques can serve as a basis for building secure cryptographic cloud databases.
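    The query-condition model the abstract describes, atomic predicates plus a linking condition, can be sketched in plain Python. This is only a structural illustration: here the predicates are evaluated in the clear, whereas in the paper's setting each atomic predicate would be evaluated homomorphically on ciphertexts and the linking condition protected separately; all names below are invented.

```python
# A WHERE clause such as "age > 30 AND dept = 'HR'" is split into
# atomic predicates and a linking condition that combines their
# boolean results.
atomic_predicates = [
    lambda row: row["age"] > 30,      # predicate P1
    lambda row: row["dept"] == "HR",  # predicate P2
]
linking_condition = lambda p1, p2: p1 and p2  # P1 AND P2

def evaluate(row):
    results = [p(row) for p in atomic_predicates]
    return linking_condition(*results)

table = [{"age": 45, "dept": "HR"},
         {"age": 29, "dept": "HR"},
         {"age": 50, "dept": "IT"}]
print([r for r in table if evaluate(r)])
# -> [{'age': 45, 'dept': 'HR'}]
```

    Separating the two parts is what lets different techniques secure the predicates and the linking structure independently.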

  4. Database Description - TP Atlas | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available tform for Drug Discovery, Informatics, and Structural Life Science Research Organization of Information and ...3(3):145-54. External Links: Original website information Database maintenance site National Institute of Genetics, Research Organiza...tion of Information and Systems (ROIS) URL of the original website http://www.tanpa

  5. An XML-Based Networking Method for Connecting Distributed Anthropometric Databases

    Directory of Open Access Journals (Sweden)

    H Cheng

    2007-03-01

    Full Text Available Anthropometric data are used by numerous types of organizations for health evaluation, ergonomics, apparel sizing, fitness training, and many other applications. Data have been collected and stored in electronic databases since at least the 1940s. These databases are owned by many organizations around the world. In addition, the anthropometric studies stored in these databases often employ different standards, terminology, procedures, or measurement sets. To promote the use and sharing of these databases, the World Engineering Anthropometry Resources (WEAR) group was formed and tasked with the integration and publishing of member resources. It is easy to see that organizing worldwide anthropometric data into a single database architecture could be a daunting and expensive undertaking. The challenges of WEAR integration lie mainly in the areas of distributed and disparate data, different standards and formats, independent memberships, and limited development resources. Fortunately, XML schema and web services provide an alternative method for networking databases, referred to as the Loosely Coupled WEAR Integration. A standard XML schema can be defined and used as a type of Rosetta stone to translate the anthropometric data into a universal format, and a web services system can be set up to link the databases to one another. In this way, the originators of the data can keep their data locally along with their own data management system and user interface, but their data can be searched and accessed as part of the larger data network, and even combined with the data of others. This paper will identify requirements for WEAR integration, review XML as the universal format, review different integration approaches, and propose a hybrid web services/data mart solution.
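    The "Rosetta stone" idea, each member site mapping its local field names and units onto one shared XML format, can be sketched as follows. The element names and the local record layout are invented for illustration; they are not taken from the actual WEAR schema.

```python
import xml.etree.ElementTree as ET

# One site's local record format (hypothetical): stature in cm.
local_record = {"stature_cm": 172.0, "subj": "S-001"}

def to_common_xml(rec):
    """Translate a local record into the shared (invented) XML format,
    normalising units along the way (cm -> mm)."""
    root = ET.Element("AnthropometricRecord")
    ET.SubElement(root, "SubjectID").text = rec["subj"]
    m = ET.SubElement(root, "Measurement", name="stature", unit="mm")
    m.text = str(rec["stature_cm"] * 10)
    return ET.tostring(root, encoding="unicode")

xml_out = to_common_xml(local_record)
print(xml_out)
```

    Because every site emits the same schema, a web-service layer can query all member databases and merge the results without any site giving up local control of its data.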

  6. Preparation and Antioxidant Activity of Ethyl-Linked Anthocyanin-Flavanol Pigments from Model Wine Solutions.

    Science.gov (United States)

    Li, Lingxi; Zhang, Minna; Zhang, Shuting; Cui, Yan; Sun, Baoshan

    2018-05-03

    Anthocyanin-flavanol pigments, formed during red wine fermentation and storage by condensation reactions between anthocyanins and flavanols (monomers, oligomers, and polymers), are one of the major groups of polyphenols in aged red wine. However, knowledge of their biological activities is lacking. This is probably due to the structural diversity and complexity of these molecules, which makes the large-scale separation and isolation of the individual compounds very difficult, thus restricting their further study. In this study, anthocyanins (i.e., malvidin-3-glucoside, cyanidin-3-glucoside, and peonidin-3-glucoside) and (−)-epicatechin were first isolated at a preparative scale by high-speed counter-current chromatography. The condensation reaction between each of the isolated anthocyanins and (−)-epicatechin, mediated by acetaldehyde, was conducted in model wine solutions to obtain ethyl-linked anthocyanin-flavanol pigments. The effects of pH, molar ratio, and temperature on the reaction rate were investigated, and the reaction conditions of pH 1.7, molar ratio 1:6:10 (anthocyanin/(−)-epicatechin/acetaldehyde), and reaction temperature of 35 °C were identified as optimal for conversion of anthocyanins to ethyl-linked anthocyanin-flavanol pigments. Six ethyl-linked anthocyanin-flavanol pigments were isolated in larger quantities and collected under optimal reaction conditions, and their chemical structures were identified by HPLC-QTOF-MS and ECD analyses. Furthermore, DPPH, ABTS, and FRAP assays indicate that ethyl-linked anthocyanin-flavanol pigments show stronger antioxidant activities than their precursor anthocyanins.

  7. Preparation and Antioxidant Activity of Ethyl-Linked Anthocyanin-Flavanol Pigments from Model Wine Solutions

    Directory of Open Access Journals (Sweden)

    Lingxi Li

    2018-05-01

    Full Text Available Anthocyanin-flavanol pigments, formed during red wine fermentation and storage by condensation reactions between anthocyanins and flavanols (monomers, oligomers, and polymers), are one of the major groups of polyphenols in aged red wine. However, knowledge of their biological activities is lacking. This is probably due to the structural diversity and complexity of these molecules, which makes the large-scale separation and isolation of the individual compounds very difficult, thus restricting their further study. In this study, anthocyanins (i.e., malvidin-3-glucoside, cyanidin-3-glucoside, and peonidin-3-glucoside) and (−)-epicatechin were first isolated at a preparative scale by high-speed counter-current chromatography. The condensation reaction between each of the isolated anthocyanins and (−)-epicatechin, mediated by acetaldehyde, was conducted in model wine solutions to obtain ethyl-linked anthocyanin-flavanol pigments. The effects of pH, molar ratio, and temperature on the reaction rate were investigated, and the reaction conditions of pH 1.7, molar ratio 1:6:10 (anthocyanin/(−)-epicatechin/acetaldehyde), and reaction temperature of 35 °C were identified as optimal for conversion of anthocyanins to ethyl-linked anthocyanin-flavanol pigments. Six ethyl-linked anthocyanin-flavanol pigments were isolated in larger quantities and collected under optimal reaction conditions, and their chemical structures were identified by HPLC-QTOF-MS and ECD analyses. Furthermore, DPPH, ABTS, and FRAP assays indicate that ethyl-linked anthocyanin-flavanol pigments show stronger antioxidant activities than their precursor anthocyanins.

  8. The LHCb configuration database

    CERN Document Server

    Abadie, L; Van Herwijnen, Eric; Jacobsson, R; Jost, B; Neufeld, N

    2005-01-01

    The aim of the LHCb configuration database is to store information about all the controllable devices of the detector. The experiment's control system (which uses PVSS) will configure, start up and monitor the detector from the information in the configuration database. The database will contain devices with their properties, connectivity and hierarchy. The ability to store and rapidly retrieve huge amounts of data, and the navigability between devices, are important requirements. We have collected use cases to ensure the completeness of the design. Using the entity-relationship modelling technique we describe the use cases as classes with attributes and links. We designed the schema for the tables using relational diagrams. This methodology has been applied to the TFC (switches) and DAQ system. Other parts of the detector will follow later. The database has been implemented using Oracle to benefit from central CERN database support. The project also foresees the creation of tools to populate, maintain, and co...

  9. Database mirroring in fault-tolerant continuous technological process control

    Directory of Open Access Journals (Sweden)

    R. Danel

    2015-10-01

    Full Text Available This paper describes implementations of mirroring technology in selected database systems - Microsoft SQL Server, MySQL and Caché. By simulating critical failures, the systems' behavior and their resilience against failure were tested. The aim was to determine whether database mirroring is suitable for ensuring a fault-tolerant solution at affordable cost in continuous metallurgical processes. Present-day database systems are characterized by high robustness and are resistant to sudden system failure. Database mirroring technologies are reliable, and even low-budget projects can be provided with a decent fault-tolerant solution, although the database technologies available for low-budget projects are not suitable for use in real-time systems.

  10. Promoting Wired Links in Wireless Mesh Networks: An Efficient Engineering Solution

    Science.gov (United States)

    Barekatain, Behrang; Raahemifar, Kaamran; Ariza Quintana, Alfonso; Triviño Cabrera, Alicia

    2015-01-01

    Wireless Mesh Networks (WMNs) cannot completely guarantee good performance of traffic sources such as video streaming. To improve the network performance, this study proposes an efficient engineering solution named Wireless-to-Ethernet-Mesh-Portal-Passageway (WEMPP) that allows effective use of wired communication in WMNs. WEMPP permits transmitting data through wired and stable paths even when the destination is in the same network as the source (Intra-traffic). Tested with four popular routing protocols (Optimized Link State Routing or OLSR as a proactive protocol, Dynamic MANET On-demand or DYMO as a reactive protocol, DYMO with spanning tree ability and HWMP), WEMPP considerably decreases the end-to-end delay, jitter, contentions and interferences on nodes, even when the network size or density varies. WEMPP is also cost-effective and increases the network throughput. Moreover, in contrast to solutions proposed by previous studies, WEMPP is easily implemented by modifying the firmware of the actual Ethernet hardware without altering the routing protocols and/or the functionality of the IP/MAC/Upper layers. In fact, there is no need for modifying the functionalities of other mesh components in order to work with WEMPPs. The results of this study show that WEMPP significantly increases the performance of all routing protocols, thus leading to better video quality on nodes. PMID:25793516

  11. Practical database programming with Java

    CERN Document Server

    Bai, Ying

    2011-01-01

    "This important resource offers a detailed description about the practical considerations and applications in database programming using Java NetBeans 6.8 with authentic examples and detailed explanations. This book provides readers with a clear picture as to how to handle the database programming issues in the Java NetBeans environment. The book is ideal for classroom and professional training material. It includes a wealth of supplemental material that is available for download including Powerpoint slides, solution manuals, and sample databases"--

  12. Migration from relational to NoSQL database

    Science.gov (United States)

    Ghotiya, Sunita; Mandal, Juhi; Kandasamy, Saravanakumar

    2017-11-01

    Data generated by various real-time applications, social networking sites and sensor devices is of very large volume and unstructured, which makes it difficult for relational database management systems to handle. Data is a precious component of any application and needs to be analysed after being arranged in some structure. Relational databases are only able to deal with structured data, so there is a need for NoSQL database management systems, which can also deal with semi-structured data. Relational databases provide the easiest way to manage data, but as the use of NoSQL increases it is becoming necessary to migrate data from relational to NoSQL databases. Various frameworks have been proposed previously that provide mechanisms for migrating data stored in SQL warehouses, as well as middle-layer solutions that allow data which is not structured to be stored in NoSQL databases. This paper provides a literature review of some of the recent approaches proposed by various researchers to migrate data from relational to NoSQL databases. Some researchers have proposed mechanisms for the co-existence of NoSQL and relational databases. The paper summarises mechanisms that can be used for mapping data stored in relational databases to NoSQL databases, together with various techniques for data transformation and middle-layer solutions.
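    One of the most common relational-to-NoSQL mappings the surveyed approaches perform is denormalising a parent/child join into embedded documents, the shape that document stores such as MongoDB expect. A minimal sketch, with invented table and field names, using SQLite as the relational source:

```python
import sqlite3
import json

# Relational source: an author table and a post table linked by a
# foreign key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE post   (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
INSERT INTO author VALUES (1, 'Ada');
INSERT INTO post   VALUES (10, 1, 'Hello'), (11, 1, 'World');
""")

# Migration: embed each author's posts inside one document.
documents = []
for aid, name in conn.execute("SELECT id, name FROM author"):
    posts = [{"id": pid, "title": t} for pid, t in
             conn.execute("SELECT id, title FROM post WHERE author_id=?",
                          (aid,))]
    documents.append({"_id": aid, "name": name, "posts": posts})

print(json.dumps(documents))
```

    The trade-off is typical of such migrations: reads that previously required a join become single-document lookups, at the cost of duplicating or restructuring data on write.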

  13. The Danish Fetal Medicine Database

    DEFF Research Database (Denmark)

    Ekelund, Charlotte K; Petersen, Olav B; Jørgensen, Finn S

    2015-01-01

    OBJECTIVE: To describe the establishment and organization of the Danish Fetal Medicine Database and to report national results of first-trimester combined screening for trisomy 21 in the 5-year period 2008-2012. DESIGN: National register study using prospectively collected first-trimester screening data from the Danish Fetal Medicine Database. POPULATION: Pregnant women in Denmark undergoing first-trimester screening for trisomy 21. METHODS: Data on maternal characteristics, biochemical and ultrasonic markers are continuously sent electronically from local fetal medicine databases (Astraia GmbH software) to a central national database. Data are linked to outcome data from the National Birth Register, the National Patient Register and the National Cytogenetic Register via the mother's unique personal registration number. First-trimester screening data from 2008 to 2012 were retrieved. MAIN OUTCOME...

  14. A new relational database structure and online interface for the HITRAN database

    Science.gov (United States)

    Hill, Christian; Gordon, Iouli E.; Rothman, Laurence S.; Tennyson, Jonathan

    2013-11-01

    A new format for the HITRAN database is proposed. By storing the line-transition data in a number of linked tables described by a relational database schema, it is possible to overcome the limitations of the existing format, which have become increasingly apparent over the last few years as new and more varied data are being used by radiative-transfer models. Although the database in the new format can be searched using the well-established Structured Query Language (SQL), a web service, HITRANonline, has been deployed to allow users to make most common queries of the database using a graphical user interface in a web page. The advantages of the relational form of the database to ensuring data integrity and consistency are explored, and the compatibility of the online interface with the emerging standards of the Virtual Atomic and Molecular Data Centre (VAMDC) project is discussed. In particular, the ability to access HITRAN data using a standard query language from other websites, command line tools and from within computer programs is described.
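    The relational layout described above, line transitions in one table linked by foreign key to a table of molecules, can be sketched with a toy schema. The column names below are simplified for illustration and are not the actual HITRAN schema; SQLite stands in for the production database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE molecule   (id INTEGER PRIMARY KEY, formula TEXT);
CREATE TABLE transition (id INTEGER PRIMARY KEY, molecule_id INTEGER,
                         nu REAL, intensity REAL,
                         FOREIGN KEY (molecule_id) REFERENCES molecule(id));
INSERT INTO molecule VALUES (1, 'H2O'), (2, 'CO2');
INSERT INTO transition VALUES
  (1, 1, 1554.35, 2.1e-20), (2, 1, 3657.05, 8.4e-19),
  (3, 2,  667.38, 7.9e-19);
""")

# The kind of query a relational format makes natural: all lines of
# one species within a wavenumber range.
rows = conn.execute("""
    SELECT m.formula, t.nu FROM transition t
    JOIN molecule m ON m.id = t.molecule_id
    WHERE m.formula = 'H2O' AND t.nu BETWEEN 1000 AND 2000
""").fetchall()
print(rows)  # -> [('H2O', 1554.35)]
```

    Splitting the data across linked tables is what lets the same store serve a web GUI, command-line tools and programmatic access through one query language.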

  15. A new relational database structure and online interface for the HITRAN database

    International Nuclear Information System (INIS)

    Hill, Christian; Gordon, Iouli E.; Rothman, Laurence S.; Tennyson, Jonathan

    2013-01-01

    A new format for the HITRAN database is proposed. By storing the line-transition data in a number of linked tables described by a relational database schema, it is possible to overcome the limitations of the existing format, which have become increasingly apparent over the last few years as new and more varied data are being used by radiative-transfer models. Although the database in the new format can be searched using the well-established Structured Query Language (SQL), a web service, HITRANonline, has been deployed to allow users to make most common queries of the database using a graphical user interface in a web page. The advantages of the relational form of the database to ensuring data integrity and consistency are explored, and the compatibility of the online interface with the emerging standards of the Virtual Atomic and Molecular Data Centre (VAMDC) project is discussed. In particular, the ability to access HITRAN data using a standard query language from other websites, command line tools and from within computer programs is described. -- Highlights: • A new, interactive version of the HITRAN database is presented. • The data is stored in a structured fashion in a relational database. • The new HITRANonline interface offers increased functionality and easier error correction

  16. Database Description - BodyParts3D | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Alternative name 3D structure database for anatomical concepts DOI 10.18908/lsdba.nbdc00837-000 Creator Creator Name: Kousaku Okubo...da K, Tamura T, Kawamoto S, Takagi T, Okubo K. Journal: Nucleic Acids Res. 2008 Oct 3. External Links: Origi

  17. An Adaptive Database Intrusion Detection System

    Science.gov (United States)

    Barrios, Rita M.

    2011-01-01

    Intrusion detection is difficult to accomplish when attempting to employ current methodologies when considering the database and the authorized entity. It is a common understanding that current methodologies focus on the network architecture rather than the database, which is not an adequate solution when considering the insider threat. Recent…

  18. METRICS FOR DYNAMIC SCALING OF DATABASE IN CLOUDS

    Directory of Open Access Journals (Sweden)

    Alexander V. Boichenko

    2013-01-01

    Full Text Available This article analyzes the main methods of scaling databases (replication, sharding) and their support in popular relational databases and in NoSQL solutions with different data models: document-oriented, key-value, column-oriented and graph. The article assesses the capabilities of modern cloud-based solutions and gives a model for organizing dynamic scaling in a cloud infrastructure. It analyzes different types of metrics, identifies the basic metrics that characterize the functioning parameters of the database technology, and sets out the goals of the integral metrics necessary for implementing adaptive algorithms for dynamically scaling databases in a cloud infrastructure. This article was prepared with the support of RFBR grant № 13-07-00749.
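An integral metric of the kind described above can be sketched as a weighted combination of normalized indicators compared against thresholds. The metric names, weights, and thresholds below are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch of an integral scaling metric: individual indicators,
# each normalized to [0, 1], are combined by weights and compared against
# thresholds to drive a scale-out/scale-in decision.

def integral_metric(metrics, weights):
    """Weighted average of normalized metrics."""
    total = sum(weights.values())
    return sum(weights[k] * metrics[k] for k in weights) / total

def scaling_decision(score, scale_out_at=0.8, scale_in_at=0.3):
    if score >= scale_out_at:
        return "scale-out"   # add a replica or shard
    if score <= scale_in_at:
        return "scale-in"    # release a node
    return "hold"

weights = {"cpu": 0.5, "latency": 0.3, "queue_depth": 0.2}
busy = {"cpu": 0.95, "latency": 0.85, "queue_depth": 0.90}
idle = {"cpu": 0.10, "latency": 0.15, "queue_depth": 0.05}

print(scaling_decision(integral_metric(busy, weights)))  # -> scale-out
print(scaling_decision(integral_metric(idle, weights)))  # -> scale-in
```

Hysteresis between the two thresholds (the "hold" band) is one common way to keep an adaptive scaler from oscillating.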

  19. A Solution on Identification and Rearing Files in Smallholder Pig Farming

    Science.gov (United States)

    Xiong, Benhai; Fu, Runting; Lin, Zhaohui; Luo, Qingyao; Yang, Liang

    In order to meet government supervision of pork production safety as well as the consumer's right to know what they buy, this study adopts animal identification, mobile PDA readers, GPRS and other information technologies; puts forward a data collection method to set up rearing files for pigs in smallholder pig farming; designs the related metadata structures and a mobile database; develops an embedded mobile PDA system to collect individual pig information and upload it to a remote central database; and finally realizes mobile links to a specific website. The embedded PDA can identify, via the mobile reader, both the special pig bar-code ear tag appointed by the Ministry of Agriculture and a general Data Matrix bar-code ear tag designed in this study, and can record all kinds of inputs, including bacterins, feed additives, animal drugs and even some forbidden medicines, and submit them to the central database through GPRS. At the same time, the remote central database can be maintained by mobile PDA over GPRS, finally achieving pork tracking from its origin to consumption and tracing in the reverse direction. This study suggests a feasible technical solution for setting up networked electronic pig rearing files in farmer-based smallholder pig farming, and the solution is proved practical through its application in the construction of Tianjin's pork quality traceability system. Although some individual techniques, such as the current GPRS transmission speed, have adverse effects on system performance, these will be resolved with the development of communication technology. Full implementation of the solution around China will provide technical support for supervising the quality and safety of pork production and meet consumer demand.

  20. Physical Samples Linked Data in Action

    Science.gov (United States)

    Ji, P.; Arko, R. A.; Lehnert, K.; Bristol, S.

    2017-12-01

    Most data and metadata related to physical samples currently reside in isolated relational databases driven by diverse data models. The challenge of sharing, interchanging and integrating data from these different relational databases motivated us to publish Linked Open Data for collections of physical samples, using Semantic Web technologies including the Resource Description Framework (RDF), the SPARQL query language, and the Web Ontology Language (OWL). In the last few years, we have released four knowledge graphs centered on physical samples: the System for Earth Sample Registration (SESAR), the USGS National Geochemical Database (NGDC), the Ocean Biogeographic Information System (OBIS), and the EarthChem Database. Currently the four knowledge graphs contain over 12 million facts (triples) about objects of interest to the geoscience domain. Choosing appropriate domain ontologies for representing the context of the data is at the core of the whole work. The GeoLink ontology, developed by the EarthCube GeoLink project, was used as the top level to represent common concepts like person, organization, cruise, etc. The physical sample ontology developed by the Interdisciplinary Earth Data Alliance (IEDA) and the Darwin Core vocabulary were used as a second level to describe details about geological samples and biological diversity. We also focused on finding and building the best tool chains to support the whole life cycle of publishing the linked data we have, including information retrieval, linked data browsing and data visualization. Currently, Morph, Virtuoso Server, LodView, LodLive, and YASGUI are employed for converting, storing, representing, and querying data in a knowledge base (RDF triplestore). Persistent digital identifiers are another main point we concentrated on. 
Open Researcher & Contributor IDs (ORCIDs), International Geo Sample Numbers (IGSNs), Global Research Identifier Database (GRID) and other persistent identifiers were used to link different resources from various graphs with
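The role of persistent identifiers in joining triples across graphs can be shown with a tiny plain-Python illustration (no RDF library). The identifiers, predicate names, and values below are made up for the example:

```python
# Two toy "graphs" of (subject, predicate, object) triples, standing in for
# records published by separate repositories. The shared IGSN subject IRI is
# what lets them be merged; all identifiers here are invented.
sesar = [
    ("igsn:ABC123", "collectedBy", "orcid:0000-0001-2345-6789"),
    ("igsn:ABC123", "sampleType", "basalt"),
]
ngdb = [
    ("igsn:ABC123", "SiO2_wt_pct", 49.7),
]

def join_on_subject(*graphs):
    """Merge triples from several graphs, keyed by their shared subject IRI."""
    merged = {}
    for graph in graphs:
        for s, p, o in graph:
            merged.setdefault(s, {})[p] = o
    return merged

sample = join_on_subject(sesar, ngdb)["igsn:ABC123"]
print(sample["sampleType"], sample["SiO2_wt_pct"])  # -> basalt 49.7
```

In an actual triplestore the same join would be expressed as a SPARQL query over the shared subject IRI; the principle of linking on a persistent identifier is identical.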

  1. SPAN: A Network Providing Integrated, End-to-End, Sensor-to-Database Solutions for Environmental Sciences

    Science.gov (United States)

    Benzel, T.; Cho, Y. H.; Deschon, A.; Gullapalli, S.; Silva, F.

    2009-12-01

    In recent years, advances in sensor network technology have shown great promise to revolutionize environmental data collection. Still, widespread adoption of these systems by domain experts has been lacking, and they have remained the purview of the engineers who design them. While there are currently many data-logging options for basic data collection in the field, scientists are often required to visit the deployment sites to retrieve their data and manually import it into spreadsheets. Some advanced commercial software systems do allow scientists to collect data remotely, but most of these systems only allow point-to-point access and require proprietary hardware. Furthermore, these commercial solutions preclude the use of sensors from other manufacturers or integration with internet-based database repositories and compute engines. Therefore, scientists often must download and manually reformat their data before uploading it to the repositories if they wish to share it. We present an open-source, low-cost, extensible, turnkey solution called the Sensor Processing and Acquisition Network (SPAN), which provides a robust and flexible sensor network service. At the deployment site, SPAN leverages low-power generic embedded processors to integrate a variety of commercially available sensor hardware into the network of environmental observation systems. By bringing intelligence close to the sensed phenomena, we can remotely control configuration and re-use, establish rules to trigger sensor activity, manage power requirements, and control the two-way flow of sensed data as well as control information to the sensors. Key features of our design include: (1) adoption of a hardware-agnostic architecture: our solutions are compatible with several programmable platforms, sensor systems, communication devices and protocols; (2) information standardization: our system supports several popular communication protocols and data formats; and (3) extensible data support: our
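The rule-based triggering of sensor activity mentioned above can be sketched as a small rule engine running on the embedded node. The sensor names, thresholds, and actions are invented for illustration and are not taken from the SPAN design:

```python
# Hedged sketch of rule-based sensor triggering: each rule maps a sensed
# reading to an action (e.g. raise the sampling rate, enter low-power mode).
# Sensor names, thresholds and actions below are illustrative assumptions.

def make_rule(sensor, predicate, action):
    def rule(readings):
        value = readings.get(sensor)
        return action if value is not None and predicate(value) else None
    return rule

rules = [
    make_rule("soil_moisture", lambda v: v < 0.10, "increase-sampling-rate"),
    make_rule("battery_v",     lambda v: v < 3.3,  "enter-low-power-mode"),
]

def evaluate(readings):
    """Return the actions fired by the current set of readings."""
    return [a for a in (rule(readings) for rule in rules) if a]

print(evaluate({"soil_moisture": 0.05, "battery_v": 3.7}))
# -> ['increase-sampling-rate']
```

Keeping the rules as data rather than hard-coded logic is what allows them to be reconfigured remotely, as the abstract describes.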

  2. Schema Versioning for Multitemporal Relational Databases.

    Science.gov (United States)

    De Castro, Cristina; Grandi, Fabio; Scalas, Maria Rita

    1997-01-01

    Investigates new design options for extended schema versioning support for multitemporal relational databases. Discusses the improved functionalities they may provide. Outlines options and basic motivations for the new design solutions, as well as techniques for the management of proposed schema versioning solutions, includes algorithms and…

  3. [Dynamics of riboflavin level in aqueous humour of anterior chamber of experimental animals under standard stroma saturation by ultraviolet corneal cross-linking solutions].

    Science.gov (United States)

    Bikbov, M M; Shevchuk, N E; Khalimov, A R; Bikbova, G M

    To evaluate the dynamics of riboflavin changes in the aqueous humour of the anterior chamber (AHAC) of rabbits' eyes during standard ultraviolet (UV) cross-linking, taking into account the area of corneal debridement. Forty-two rabbits were studied sequentially. The following riboflavin solutions were used for corneal saturation: IR - 0.1% isosmotic riboflavin; D - Dextralink (0.1% riboflavin with 20% dextran); R - 0.1% riboflavin with 1.0% hydroxypropylmethylcellulose (HPMC). Each solution was evaluated in 3 groups that differed in the diameter of corneal debridement: group 1 - Epi-Off 3 mm (IR-3, D-3, R-3), group 2 - Epi-Off 6 mm (IR-6, D-6, R-6), and group 3 - Epi-Off 9 mm (IR-9, D-9, R-9). Aqueous humour sampling (252 samples in total) was performed at 10-minute intervals over a 60-minute period. Riboflavin levels were measured by enzyme-linked immunoassay (ID-Vit microbiological test system; Immundiagnostik, Germany). Stable growth of the riboflavin level in the AHAC (with maximum values reached at 30-40 min) was observed for solutions D and R, regardless of the variant of corneal debridement. Moreover, throughout the whole follow-up period and regardless of the area of corneal debridement, solution D provided a relatively lower concentration of riboflavin in the AHAC compared with the two other solutions. At 30 minutes, when the cornea was considered ready for UV irradiation, the riboflavin level in the AHAC ranged from 385±26.1 μg/l (D-9) to 665±28 μg/l (R-9). In groups IR-9, IR-6, R-6, IR-3, and R-3, riboflavin levels were found to be in the same range starting at 20 minutes. However, even a sufficient concentration of riboflavin in the cornea or AHAC cannot guarantee safe and effective UV cross-linking, since the removed epithelium limits the area of the stroma that can be saturated with riboflavin, while the area of UV exposure is 8-10 mm. Safe and efficient standard UV cross-linking may be performed only under sufficient saturation of the

  4. Name Authority Challenges for Indexing and Abstracting Databases

    OpenAIRE

    Denise Beaubien Bennett; Priscilla Williams

    2006-01-01

    Objective - This analysis explores alternative methods for managing author name changes in Indexing and Abstracting (I&A) databases. A searcher may retrieve incomplete or inaccurate results when the database provides no, or faulty, assistance in linking author name variations. Methods - The article includes an analysis of current name authority practices in I&A databases and of selected research into name disambiguation models applied to authorship of articles. Results - Several potential...

  5. Development of a personalized training system using the Lung Image Database Consortium and Image Database resource Initiative Database.

    Science.gov (United States)

    Lin, Hongli; Wang, Weisheng; Luo, Jiawei; Yang, Xuedong

    2014-12-01

    The aim of this study was to develop a personalized training system using the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) database, because collecting, annotating, and marking a large number of appropriate computed tomography (CT) scans, and providing the capability of dynamically selecting suitable training cases based on the performance levels of trainees and the characteristics of cases, are critical for developing an efficient training system. A novel approach is proposed to develop a personalized radiology training system for the interpretation of lung nodules in CT scans using the LIDC/IDRI database, which provides a Content-Boosted Collaborative Filtering (CBCF) algorithm for predicting the difficulty level of each case for each trainee when selecting suitable cases to meet individual needs, and a diagnostic simulation tool to enable trainees to analyze and diagnose lung nodules with the help of an image processing tool and a nodule retrieval tool. Preliminary evaluation of the system shows that developing a personalized training system for the interpretation of lung nodules is needed and useful for enhancing the professional skills of trainees. The approach of developing personalized training systems using the LIDC/IDRI database is a feasible solution to the challenges of constructing specific training programs in terms of cost and training efficiency. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
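The content-boosted collaborative-filtering idea can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: missing trainee-case difficulty scores are first filled in from case content, then the boosted scores of other trainees are averaged to predict the difficulty of an unseen case. All trainee names, cases, and scores are invented:

```python
# Illustrative CBCF sketch. Observed difficulty scores (1 = easy .. 5 = hard);
# None = the trainee has not yet attempted the case.
scores = {
    "trainee_a": {"case1": 2, "case2": None, "case3": 4},
    "trainee_b": {"case1": 3, "case2": 5,    "case3": 4},
}
# Content-based estimate per case (e.g. derived from nodule size/texture).
content_estimate = {"case1": 2, "case2": 4, "case3": 4}

def boosted(profile):
    """Content boosting: replace missing entries with the content estimate."""
    return {c: (v if v is not None else content_estimate[c])
            for c, v in profile.items()}

def predict(trainee, case):
    """Collaborative step: average other trainees' boosted scores for the case."""
    others = [boosted(p)[case] for t, p in scores.items() if t != trainee]
    return sum(others) / len(others)

print(predict("trainee_a", "case2"))  # -> 5.0
```

The boosting step is what lets the method make predictions even when the trainee-case score matrix is sparse, which is the usual situation early in a training program.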

  6. New e-learning method using databases

    Directory of Open Access Journals (Sweden)

    Andreea IONESCU

    2012-10-01

    Full Text Available The objective of this paper is to present a new e-learning method that uses databases. The solution could be implemented for any type of e-learning system in any domain. The article proposes a solution to improve the learning process for virtual classes.

  7. Danish Gynecological Cancer Database

    DEFF Research Database (Denmark)

    Sørensen, Sarah Mejer; Bjørn, Signe Frahm; Jochumsen, Kirsten Marie

    2016-01-01

    AIM OF DATABASE: The Danish Gynecological Cancer Database (DGCD) is a nationwide clinical cancer database and its aim is to monitor the treatment quality of Danish gynecological cancer patients, and to generate data for scientific purposes. DGCD also records detailed data on the diagnostic measures...... data forms as follows: clinical data, surgery, pathology, pre- and postoperative care, complications, follow-up visits, and final quality check. DGCD is linked with additional data from the Danish "Pathology Registry", the "National Patient Registry", and the "Cause of Death Registry" using the unique...... Danish personal identification number (CPR number). DESCRIPTIVE DATA: Data from DGCD and registers are available online in the Statistical Analysis Software portal. The DGCD forms cover almost all possible clinical variables used to describe gynecological cancer courses. The only limitation...

  8. An Integrated Enterprise Accelerator Database for the SLC Control System

    International Nuclear Information System (INIS)

    2002-01-01

    Since its inception in the early 1980s, the SLC Control System has been driven by a highly structured memory-resident real-time database. While efficient, its rigid structure and file-based sources make it difficult to maintain and extract relevant information. The goal of transforming the sources for this database into a relational form is to enable it to be part of a Control System Enterprise Database: an integrated central repository for SLC accelerator device and Control System data with links to other associated databases. We have taken the concepts developed for the NLC Enterprise Database and used them to create and load a relational model of the online SLC Control System database. This database contains data and structure to allow querying and reporting on beamline devices, their associations and parameters. In the future this will be extended to allow generation of EPICS and SLC database files, setup of applications, and links to other databases such as accelerator maintenance, archive data, financial and personnel records, cabling information, documentation, etc. The database is implemented using Oracle 8i. In the short term it will be updated daily in batch from the online SLC database. In the longer term, it will serve as the primary source for Control System static data, an R and D platform for the NLC, and contribute to SLC Control System operations

  9. HFM web-site linked to the technical & administrative databases

    CERN Document Server

    Szeberenyi, A

    2013-01-01

    The EuCARD project has been using various databases to store scientific and contractual information, as well as working documents. This report documents the methods used during the life of the project and the strategy chosen to archive the technical and administrative material after the project completion. Special care is given to provide easy and open access for the foreground produced, especially for the EuCARD-2 community at large, including its network partners worldwide.

  10. "LinkedIn" for Accounting and Business Students

    Science.gov (United States)

    Albrecht, W. David

    2011-01-01

    LinkedIn is a social media application that every accounting and business student should join and use. LinkedIn is a database of 90,000,000 business professionals that enables each to connect and interact with their business associates. Five reasons are offered for why accounting students should join LinkedIn followed by 11 hints for use.

  11. Database Description - GenLibi | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Alternative name Gene Linker to bibliography DOI 10.18908/lsdba.nbdc01093-000 Creator Creator Name: Japan Science and Technology...mouse and rat genes. License CC BY-SA Detail Background and funding Name: JST (Japan Science and Technology ... site Japan Science and Technology Agency URL of the original website http://gene.biosciencedbc.jp/ Operatio...me(s): Journal: External Links: Original website information Database maintenance

  12. Experiment Databases

    Science.gov (United States)

    Vanschoren, Joaquin; Blockeel, Hendrik

    Next to running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queriable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.
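Querying the combined results of many prior studies, as described above, amounts to aggregation over a shared schema of algorithm executions. The sketch below is a toy illustration with an invented schema and invented scores, not the actual experiment-database design:

```python
import sqlite3

# Toy experiment database: runs from many studies are stored with algorithm,
# dataset and score, then one query aggregates across all of them.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE run (algorithm TEXT, dataset TEXT, accuracy REAL)")
con.executemany("INSERT INTO run VALUES (?, ?, ?)", [
    ("svm", "iris", 0.96), ("svm", "wine", 0.90),
    ("tree", "iris", 0.94), ("tree", "wine", 0.88),
])

# Mean accuracy per algorithm across every stored experiment.
rows = con.execute("""
    SELECT algorithm, AVG(accuracy)
    FROM run GROUP BY algorithm ORDER BY 2 DESC""").fetchall()
print(rows)
```

With principled descriptions of each run stored this way, a question like "which algorithm performs best on average across datasets?" becomes a single query rather than a new round of experiments.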

  13. What Is the Validity Domain of Einstein’s Equations? Distributional Solutions over Singularities and Topological Links in Geometrodynamics

    Directory of Open Access Journals (Sweden)

    Elias Zafiris

    2016-08-01

    Full Text Available The existence of singularities alerts that one of the highest priorities of a centennial perspective on general relativity should be a careful re-thinking of the validity domain of Einstein’s field equations. We address the problem of constructing distinguishable extensions of the smooth spacetime manifold model, which can incorporate singularities, while retaining the form of the field equations. The sheaf-theoretic formulation of this problem is tantamount to extending the algebra sheaf of smooth functions to a distribution-like algebra sheaf in which the former may be embedded, satisfying the pertinent cohomological conditions required for the coordinatization of all of the tensorial physical quantities, such that the form of the field equations is preserved. We present in detail the construction of these distribution-like algebra sheaves in terms of residue classes of sequences of smooth functions modulo the information of singular loci encoded in suitable ideals. Finally, we consider the application of these distribution-like solution sheaves in geometrodynamics by modeling topologically-circular boundaries of singular loci in three-dimensional space in terms of topological links. It turns out that the Borromean link represents higher order wormhole solutions.

  14. System Thinking Tutorial and Reef Link Database Fact Sheets

    Science.gov (United States)

    The sustainable well-being of communities is inextricably linked to both the health of the earth’s ecosystems and the health of humans living in the community. Currently, many policy and management decisions are made without considering the goods and services humans derive from ...

  15. FINDbase: A worldwide database for genetic variation allele frequencies updated

    NARCIS (Netherlands)

    M. Georgitsi (Marianthi); E. Viennas (Emmanouil); D.I. Antoniou (Dimitris I.); V. Gkantouna (Vassiliki); S. van Baal (Sjozef); E.F. Petricoin (Emanuel F.); K. Poulas (Konstantinos); G. Tzimas (Giannis); G.P. Patrinos (George)

    2011-01-01

    Frequency of INherited Disorders database (FINDbase; http://www.findbase.org) records frequencies of causative genetic variations worldwide. Database records include the population and ethnic group or geographical region, the disorder name and the related gene, accompanied by links to

  16. kpath: integration of metabolic pathway linked data.

    Science.gov (United States)

    Navas-Delgado, Ismael; García-Godoy, María Jesús; López-Camacho, Esteban; Rybinski, Maciej; Reyes-Palomares, Armando; Medina, Miguel Ángel; Aldana-Montes, José F

    2015-01-01

    In the last few years, the Life Sciences domain has experienced a rapid growth in the amount of available biological databases. The heterogeneity of these databases makes data integration a challenging issue. Some integration challenges are locating resources, relationships, data formats, synonyms or ambiguity. The Linked Data approach partially solves the heterogeneity problems by introducing a uniform data representation model. Linked Data refers to a set of best practices for publishing and connecting structured data on the Web. This article introduces kpath, a database that integrates information related to metabolic pathways. kpath also provides a navigational interface that enables not only the browsing, but also the deep use of the integrated data to build metabolic networks based on existing disperse knowledge. This user interface has been used to showcase relationships that can be inferred from the information available in several public databases. © The Author(s) 2015. Published by Oxford University Press.

  17. O-GLYCBASE: a revised database of O-glycosylated proteins

    DEFF Research Database (Denmark)

    Hansen, Jan; Lund, Ole; Nielsen, Jens O.

    1996-01-01

    O-GLYCBASE is a comprehensive database of information on glycoproteins and their O-linked glycosylation sites. Entries are compiled and revised from the SWISS-PROT and PIR databases as well as directly from recently published reports. Nineteen percent of the entries extracted from the databases n...... of mucin type O-glycosylation sites in mammalian glycoproteins exclusively from the primary sequence is made available by E-mail or WWW. The O-GLYCBASE database is also available electronically through our WWW server or by anonymous FTP....

  18. The Androgen Receptor Gene Mutations Database.

    Science.gov (United States)

    Gottlieb, B; Lehvaslaiho, H; Beitel, L K; Lumbroso, R; Pinsky, L; Trifiro, M

    1998-01-01

    The current version of the androgen receptor (AR) gene mutations database is described. The total number of reported mutations has risen from 272 to 309 in the past year. We have expanded the database: (i) by giving each entry an accession number; (ii) by adding information on the length of polymorphic polyglutamine (polyGln) and polyglycine (polyGly) tracts in exon 1; (iii) by adding information on large gene deletions; (iv) by providing a direct link with a completely searchable database (courtesy EMBL-European Bioinformatics Institute). The addition of the exon 1 polymorphisms is discussed in light of their possible relevance as markers for predisposition to prostate or breast cancer. The database is also available on the internet (http://www.mcgill.ca/androgendb/), from EMBL-European Bioinformatics Institute (ftp.ebi.ac.uk/pub/databases/androgen), or as a Macintosh FilemakerPro or Word file (MC33@musica.mcgill.ca).
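Measuring the exon-1 polyglutamine tract length mentioned above reduces to finding the longest uninterrupted run of glutamine (Q) residues in a protein sequence. The sketch below illustrates this on an invented sequence fragment; it is not taken from the database:

```python
import re

def longest_tract(sequence, residue="Q"):
    """Length of the longest uninterrupted run of one residue (e.g. polyGln)."""
    runs = re.findall(f"{residue}+", sequence)
    return max((len(r) for r in runs), default=0)

# Invented fragment containing a 21-residue polyGln tract for illustration.
fragment = "MEVQLGLGRVYPRPPSKTYRGAFQNLFQSVREVIQNPGPRHPEA" + "Q" * 21 + "ETSPRQ"
print(longest_tract(fragment))  # -> 21
```

The same function measures polyGly tracts by passing `residue="G"`, which is why tract length is a convenient single number to store per database entry.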

  19. A middle layer solution to support ACID properties for NoSQL databases

    Directory of Open Access Journals (Sweden)

    Ayman E. Lotfy

    2016-01-01

    Full Text Available The main objective of this paper is to keep the strengths of RDBMSs, such as consistency and the ACID properties, while at the same time providing the benefits that inspired the NoSQL movement, through a middle layer. The proposed middle layer uses a four-phase commit protocol to ensure: the use of recent data; the use of the pessimistic technique to forbid others from dealing with data while it is in use; and the residence of data updates in many locations to avoid data loss. This mechanism is required especially in distributed NoSQL-based database applications, because allowing conflicting transactions to continue not only wastes constrained computing power and bandwidth, but also exacerbates conflicts. The middle layer keeps track of all running transactions and manages, together with the other layers, the execution of concurrent transactions. This solution helps increase both scalability and throughput. Finally, the experimental results show that the throughput of the system improves as the number of middle layers increases and as the proportion of updates to reads in a transaction increases. The data also remain consistent when executing many transactions that update the same data. The scalability and availability of the system are not affected while ensuring strict consistency.
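The pessimistic-locking idea at the heart of the middle layer can be sketched in a single-process toy. This is not the authors' four-phase commit protocol: it only shows how taking a lock on a key before any read-modify-write forbids conflicting transactions from interleaving:

```python
import threading

# Toy middle layer: a per-key pessimistic lock guards every update, so
# concurrent transactions on the same key are serialized rather than allowed
# to conflict. The underlying dict stands in for a NoSQL store.
class MiddleLayer:
    def __init__(self, store):
        self.store = store
        self.locks = {}              # key -> lock
        self.guard = threading.Lock()

    def _lock_for(self, key):
        with self.guard:
            return self.locks.setdefault(key, threading.Lock())

    def update(self, key, fn):
        """Apply fn to the current value while holding the key's lock."""
        with self._lock_for(key):
            self.store[key] = fn(self.store.get(key, 0))
            return self.store[key]

layer = MiddleLayer({"balance": 100})
threads = [threading.Thread(target=layer.update,
                            args=("balance", lambda v: v + 1))
           for _ in range(50)]
for t in threads: t.start()
for t in threads: t.join()
print(layer.store["balance"])  # -> 150
```

Without the lock, the 50 concurrent read-increment-write transactions could lose updates; with it, the final value is deterministic, which is the consistency guarantee the middle layer is designed to preserve.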

  20. Heuristic program to design Relational Databases

    Directory of Open Access Journals (Sweden)

    Manuel Pereira Rosa

    2009-09-01

    Full Text Available The great development of today's world means that the amount of information increases day after day; however, the time allowed to transmit this information in the classroom has not changed. Thus, rational work in this respect is more than necessary. Besides, if for the solution of a given type of problem we do not have a working algorithm, we first have to look for a correct solution; heuristic programs are therefore of paramount importance for succeeding in these aspects. Taking into consideration that database design is, essentially, a process of problem solving, this article aims at proposing a heuristic program for the design of relational databases.

  1. CBS Genome Atlas Database: a dynamic storage for bioinformatic results and sequence data

    DEFF Research Database (Denmark)

    Hallin, Peter Fischer; Ussery, David

    2004-01-01

    , these results count to more than 220 pieces of information. The backbone of this solution consists of a program package written in Perl, which enables administrators to synchronize and update the database content. The MySQL database has been connected to the CBS web-server via PHP4, to present a dynamic web...... and frequent addition of new models are factors that require a dynamic database layout. Using basic tools like the GNU Make system, csh, Perl and MySQL, we have created a flexible database environment for storing and maintaining such results for a collection of complete microbial genomes. Currently...... content for users outside the center. This solution is tightly fitted to existing server infrastructure and the solutions proposed here can perhaps serve as a template for other research groups to solve database issues....

  2. MARC and Relational Databases.

    Science.gov (United States)

    Llorens, Jose; Trenor, Asuncion

    1993-01-01

    Discusses the use of MARC format in relational databases and addresses problems of incompatibilities. A solution is presented that is in accordance with Open Systems Interconnection (OSI) standards and is based on experiences at the library of the Universidad Politecnica de Valencia (Spain). (four references) (EA)

  3. Development of a PSA information database system

    International Nuclear Information System (INIS)

    Kim, Seung Hwan

    2005-01-01

    The need to develop a PSA information database to support performing a PSA has been growing rapidly. For example, performing a PSA requires a lot of data to analyze, to evaluate the risk, to trace the process of results and to verify the results. A PSA information database is a system that stores all PSA-related information in a database and file system, with cross links to jump to the physical documents whenever they are needed. The Korea Atomic Energy Research Institute is developing a PSA information database system, AIMS (Advanced Information Management System for PSA). The objective is to integrate and computerize all the distributed information of a PSA into one system and to enhance the accessibility to PSA information for all PSA-related activities. This paper describes how we implemented such a database-centered application in terms of two areas: database design and data (document) service

  4. Design issues of an efficient distributed database scheduler for telecom

    NARCIS (Netherlands)

    Bodlaender, M.P.; Stok, van der P.D.V.

    1998-01-01

    We optimize the speed of real-time databases by optimizing the scheduler. The performance of a database is directly linked to the environment it operates in, and we use environment characteristics as guidelines for the optimization. A typical telecom environment is investigated, and characteristics

  5. Relational Databases and Biomedical Big Data.

    Science.gov (United States)

    de Silva, N H Nisansa D

    2017-01-01

    In various biomedical applications that collect, handle, and manipulate data, the amounts of data tend to build up and venture into the range identified as big data. In such occurrences, a design decision has to be taken as to what type of database should be used to handle the data. More often than not, the default and classical solution in the biomedical domain, according to past research, is relational databases. While this was the norm for a long while, there is an evident trend to move away from relational databases in favor of other types and paradigms of databases. However, it is still of paramount importance to understand the interrelation that exists between biomedical big data and relational databases. This chapter will review the pros and cons of using relational databases to store biomedical big data, as discussed and used in previous research.

  6. Data Exchange Between Databases Using API Technology

    Directory of Open Access Journals (Sweden)

    Ahmad Hanafi

    2017-03-01

    Full Text Available Electronic data interchange between institutions or companies must be supported by data storage media of appropriate capacity. MySQL is a database engine used to store data in the form of information, where the data can be utilized as needed. MySQL has the advantage of being easy to use and able to work on different platforms. System requirements for reliability and multitasking make the database not only a data storage medium but also a means of data exchange. The Dropbox API is a good solution that can be utilized as a technology enabling the database to exchange data. The combination of the Dropbox API and a database can serve as a very cheap solution for small companies to implement data exchange, because it only requires a relatively modest Internet connection.
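
    The exchange step described above amounts to serialising database rows into a file that a Dropbox API client then uploads. A minimal sketch, with sqlite3 standing in for MySQL and hypothetical table and field names:

```python
import json
import sqlite3

# Read rows from the database and serialise them to JSON (the exchange payload).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE records (id INTEGER, info TEXT)")
db.executemany("INSERT INTO records VALUES (?, ?)", [(1, "a"), (2, "b")])
rows = [{"id": i, "info": s} for i, s in db.execute("SELECT id, info FROM records")]
payload = json.dumps(rows)

# A Dropbox SDK client would then upload the payload, e.g.:
#   dbx.files_upload(payload.encode(), "/exchange/records.json")
print(payload)
```

The receiving side downloads the file and inserts the decoded rows into its own database, completing the exchange.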

  7. Research on a head-unit MirrorLink solution

    Institute of Scientific and Technical Information of China (English)

    张元文; 陈玮

    2013-01-01

    This paper presents the car head-unit MirrorLink solution and analyses its protocol architecture, introducing its two main components: the VNC architecture and the audio and video transmission architecture. For the VNC architecture, the core RFB protocol, which transfers the HMI screen and control signals in a MirrorLink system, is analysed along with its implementation; its coding requirements are examined and a more optimized coding method is proposed. For the audio and video transmission architecture, an improved UPnP audio and video transmission scheme applicable to the vehicle MirrorLink solution is proposed.

  8. Free Space Optics – Monitoring Setup for Experimental Link

    Directory of Open Access Journals (Sweden)

    Ján Tóth

    2015-12-01

    Full Text Available This paper deals with the advanced Free Space Optics (FSO) communication technology. Two FSO nodes are needed to make a connection; laser diodes are used as light sources, and simple OOK modulation is employed. The FSO system offers multiple advantages, but direct visibility is required to set up a communication link, which is perhaps the most significant weakness of this technology: there is no way to overcome weather phenomena such as fog, heavy rain, dust and the many other particles naturally present in the atmosphere. A key task is therefore to find a suitable solution that keeps an FSO link working with high reliability and availability. It turns out that it is necessary to have knowledge of the weather conditions under which the FSO link operates (liquid water content - LWC, geographical location, particle size distribution, average particle diameter, temperature, humidity, wind conditions, pressure and many other variable weather parameters). Having most of the mentioned parameters' values stored in a database (implicitly in charts) would be really beneficial. This paper presents some of the mentioned indicators, continuously gathered from several sensors located close to one of the FSO nodes.

  9. Database for waste glass composition and properties

    International Nuclear Information System (INIS)

    Peters, R.D.; Chapman, C.C.; Mendel, J.E.; Williams, C.G.

    1993-09-01

    A database of waste glass composition and properties, called PNL Waste Glass Database, has been developed. The source of data is published literature and files from projects funded by the US Department of Energy. The glass data have been organized into categories and corresponding data files have been prepared. These categories are glass chemical composition, thermal properties, leaching data, waste composition, glass radionuclide composition and crystallinity data. The data files are compatible with commercial database software. Glass compositions are linked to properties across the various files using a unique glass code. Programs have been written in database software language to permit searches and retrievals of data. The database provides easy access to the vast quantities of glass compositions and properties that have been studied. It will be a tool for researchers and others investigating vitrification and glass waste forms
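
    The cross-file linkage described above, where a unique glass code ties compositions to properties, is a standard relational join. A minimal sketch with hypothetical table and column names (the report does not specify its schema):

```python
import sqlite3

# Two data categories linked by a unique glass code, as in the PNL database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE composition (glass_code TEXT PRIMARY KEY, sio2_wt_pct REAL)")
db.execute("CREATE TABLE leaching (glass_code TEXT, boron_leach_rate REAL)")
db.execute("INSERT INTO composition VALUES ('G-001', 55.0)")
db.execute("INSERT INTO leaching VALUES ('G-001', 0.8)")

# Search/retrieval across categories joins the files on the shared code.
row = db.execute("""
    SELECT c.glass_code, c.sio2_wt_pct, l.boron_leach_rate
    FROM composition c JOIN leaching l USING (glass_code)""").fetchone()
print(row)
```

Because every category file carries the same key, new property categories can be added without restructuring the existing files.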

  10. Survey of Machine Learning Methods for Database Security

    Science.gov (United States)

    Kamra, Ashish; Bertino, Elisa

    Application of machine learning techniques to database security is an emerging area of research. In this chapter, we present a survey of various approaches that use machine learning/data mining techniques to enhance the traditional security mechanisms of databases. There are two key database security areas in which these techniques have found applications, namely, detection of SQL Injection attacks and anomaly detection for defending against insider threats. Apart from the research prototypes and tools, various third-party commercial products are also available that provide database activity monitoring solutions by profiling database users and applications. We present a survey of such products. We end the chapter with a primer on mechanisms for responding to database anomalies.

  11. OxDBase: a database of oxygenases involved in biodegradation

    Directory of Open Access Journals (Sweden)

    Raghava Gajendra PS

    2009-04-01

    Full Text Available Abstract Background Oxygenases belong to the oxidoreductive group of enzymes (E.C. Class 1), which oxidize substrates by transferring oxygen from molecular oxygen (O2) and utilize FAD/NADH/NADPH as the co-substrate. Oxygenases can further be grouped into two categories, i.e. monooxygenases and dioxygenases, on the basis of the number of oxygen atoms used for oxidation. They play a key role in the metabolism of organic compounds by increasing their reactivity or water solubility or bringing about cleavage of the aromatic ring. Findings We compiled a database of biodegradative oxygenases (OxDBase) which provides a compilation of oxygenase data as sourced from the primary literature, in the form of a web-accessible database. There are two separate search engines for searching the database, one each for the monooxygenase and dioxygenase databases. Each enzyme entry contains its common name and synonym, the reaction in which the enzyme is involved, family and subfamily, structure and gene link, and literature citation. The entries are also linked to several external databases including BRENDA, KEGG, ENZYME and UM-BBD, providing wide background information. At present the database contains information on over 235 oxygenases, including both dioxygenases and monooxygenases. This database is freely available online at http://www.imtech.res.in/raghava/oxdbase/. Conclusion OxDBase is the first database dedicated solely to oxygenases and provides comprehensive information about them. Due to the importance of oxygenases in the chemical synthesis of drug intermediates and the oxidation of xenobiotic compounds, OxDBase would be a very useful tool in the field of synthetic chemistry as well as in bioremediation.

  12. Heterogeneous distributed databases: A case study

    Science.gov (United States)

    Stewart, Tracy R.; Mukkamala, Ravi

    1991-01-01

    Alternatives are reviewed for accessing distributed heterogeneous databases and a recommended solution is proposed. The current study is limited to the Automated Information Systems Center at the Naval Sea Combat Systems Engineering Station at Norfolk, VA. This center maintains two databases located on Digital Equipment Corporation VAX computers running under the VMS operating system. The first database, ICMS, resides on a VAX 11/780 and has been implemented using VAX DBMS, a CODASYL-based system. The second database, CSA, resides on a VAX 6460 and has been implemented using the ORACLE relational database management system (RDBMS). Both databases are used for configuration management within the U.S. Navy, and each supports a different customer base. ICMS tracks U.S. Navy ships and major systems (anti-sub, sonar, etc.). Even though the major systems on ships and submarines have totally different functions, some of the equipment within the major systems is common to both ships and submarines.

  13. ESPSD, Nuclear Power Plant Siting Database

    International Nuclear Information System (INIS)

    Slezak, S.

    2001-01-01

    1 - Description of program or function: This database is a repository of comprehensive licensing and technical reviews of siting regulatory processes and acceptance criteria for advanced light water reactor (ALWR) nuclear power plants. The program is designed to be used by applicants for an early site permit or a combined construction permit/operating license (10 CFR Part 52, Subparts A and C) as input for the development of the application. The database is a complete, menu-driven, self-contained package that can search and sort the supplied data by topic, keyword, or other input. The software is designed for operation on IBM-compatible computers under DOS. 2 - Method of solution: The database is an R:BASE Runtime program with all the necessary database files included.

  14. CD-ROM for the PGAA-IAEA database

    International Nuclear Information System (INIS)

    Firestone, R.B.; Zerking, V.

    2007-01-01

    Both the database of prompt gamma rays from slow neutron capture for elemental analysis and the results of this CRP are available on the accompanying CD-ROM. The file index.html is the home page for the CD-ROM, and provides links to the following information: (a) The CRP - General information, papers and reports relevant to this CRP. (b) The PGAA-IAEA database viewer - An interactive program to display and search the PGAA database by isotope, energy or capture cross-section. (c) The Database of Prompt Gamma Rays from Slow Neutron Capture for Elemental Analysis - This report. (d) The PGAA database files - Adopted PGAA database and associated files in EXCEL, PDF and Text formats. The archival databases by Lone et al. and by Reedy and Frankle are also available. (e) The Evaluated Gamma-Ray Activation File (EGAF) - The adopted PGAA database in ENSDF format. Data can be viewed with the Isotope Explorer 2.2 ENSDF Viewer. (f) The PGAA database evaluation - ENSDF format versions of the adopted PGAA database, and the Budapest and ENSDF isotopic input files. Decay scheme balance and statistical analysis summaries are provided. (g) The Isotope Explorer 2.2 ENSDF viewer - Windows software for viewing the level scheme drawings and tables provided in ENSDF format. The complete ENSDF database is included, as of December 2002. The databases and viewers are discussed in greater detail in the following sections

  15. New solutions for data management on the horizon

    CERN Multimedia

    Adrian Giordani

    2012-01-01

    Almost all large-scale scientific experiments, including those at CERN, manage their data using relational databases, accessible with a programming language called SQL (Structured Query Language). But, as the amount of data continues to grow, there are also growing doubts that relational databases are the best solution.   New types of databases, called NoSQL, are promising a different way to access large amounts of data. The languages used in NoSQL are far less complicated, making the initial set-up much easier. In addition, data can be stored in a more flexible way, promising a faster way to access and manage data. The CERN Database Group in the IT Department is participating in small-scale tests of NoSQL solutions with three of the four large detectors (CMS, ATLAS and LHCb). Over the past few months, non-relational database vendors – including Google, Hadapt, and Oracle – have also been presenting their NoSQL solutions to the IT Department. “We have used the Orac...

  16. An approach for access differentiation design in medical distributed applications built on databases.

    Science.gov (United States)

    Shoukourian, S K; Vasilyan, A M; Avagyan, A A; Shukurian, A K

    1999-01-01

    A formalized "top to bottom" design approach was described in [1] for distributed applications built on databases, which were considered as a medium between virtual and real user environments for a specific medical application. Merging different components within a unified distributed application poses new essential problems for software. In particular, protection tools that are sufficient separately become deficient during integration, due to specific additional links and relationships not considered formerly. For example, it is impossible to protect a shared object in the virtual operating room using only DBMS protection tools if the object is stored as a record in DB tables. The solution should be sought within the more general application framework, but appropriate tools are absent or unavailable. The present paper suggests a detailed outline of a design and testing toolset for access differentiation systems (ADS) in distributed medical applications that use databases. The appropriate formal model, as well as tools for its mapping to a DBMS, are suggested. Remote users connected via global networks are considered too.

  17. Hyperdatabase: A schema for browsing multiple databases

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, M A [Dalhousie Univ., Halifax (Canada). Computer Science Div.; Watters, C R [Waterloo Univ., Waterloo (Canada). Computer Science Dept.

    1990-05-01

    In order to ensure effective information retrieval, a user may need to search multiple databases on multiple systems. Although front-end systems have been developed to assist the user in accessing different systems, they access one retrieval system at a time, and the search has to be repeated for each required database on each retrieval system. More importantly, the user interacts with the results as independent sessions. This paper models multiple bibliographic databases distributed over one or more retrieval systems as a hyperdatabase, i.e., a single virtual database. The hyperdatabase is viewed as a hypergraph in which each node represents a bibliographic item and the links among nodes represent relations among the items. In response to a query, bibliographic items are extracted from the hyperdatabase and linked together to form a transient hypergraph. This hypergraph is transient in the sense that it is "created" in response to a query and only "exists" for the duration of the query session. A hypertext interface permits the user to browse the transient hypergraph in a nonlinear manner. The technology to implement a system based on this model is available now, consisting of powerful workstations, distributed processing, high-speed communications, and CD-ROMs. As the technology advances and costs decrease, such systems should be generally available. (author). 13 refs, 5 figs.
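
    The transient-hypergraph idea can be sketched in a few lines: query hits drawn from several databases become nodes, and relations between hits (here, citations) become the links of a graph that exists only for the query session. All identifiers and records below are illustrative placeholders:

```python
# Toy store of bibliographic items from two databases (hypothetical data).
records = {
    "db1:r1": {"title": "Query methods", "cites": ["db2:r9"]},
    "db2:r9": {"title": "Hypertext browsing", "cites": []},
    "db2:r4": {"title": "Unrelated item", "cites": []},
}

def transient_graph(hits, store):
    """Build the node and link sets for just the items matching a query."""
    nodes = set(hits)
    # Keep only links whose both endpoints were retrieved by the query.
    edges = {(a, b) for a in hits for b in store[a]["cites"] if b in nodes}
    return nodes, edges

# The graph is rebuilt per query and discarded when the session ends.
nodes, edges = transient_graph(["db1:r1", "db2:r9"], records)
print(sorted(edges))
```

A hypertext front end would then let the user follow these edges non-linearly instead of scanning a flat result list.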

  18. Hyperdatabase: A schema for browsing multiple databases

    International Nuclear Information System (INIS)

    Shepherd, M.A.; Watters, C.R.

    1990-05-01

    In order to ensure effective information retrieval, a user may need to search multiple databases on multiple systems. Although front-end systems have been developed to assist the user in accessing different systems, they access one retrieval system at a time, and the search has to be repeated for each required database on each retrieval system. More importantly, the user interacts with the results as independent sessions. This paper models multiple bibliographic databases distributed over one or more retrieval systems as a hyperdatabase, i.e., a single virtual database. The hyperdatabase is viewed as a hypergraph in which each node represents a bibliographic item and the links among nodes represent relations among the items. In response to a query, bibliographic items are extracted from the hyperdatabase and linked together to form a transient hypergraph. This hypergraph is transient in the sense that it is "created" in response to a query and only "exists" for the duration of the query session. A hypertext interface permits the user to browse the transient hypergraph in a nonlinear manner. The technology to implement a system based on this model is available now, consisting of powerful workstations, distributed processing, high-speed communications, and CD-ROMs. As the technology advances and costs decrease, such systems should be generally available. (author). 13 refs, 5 figs

  19. KaBOB: ontology-based semantic integration of biomedical databases.

    Science.gov (United States)

    Livingston, Kevin M; Bada, Michael; Baumgartner, William A; Hunter, Lawrence E

    2015-04-23

    The ability to query many independent biological databases using a common ontology-based semantic model would facilitate deeper integration and more effective utilization of these diverse and rapidly growing resources. Despite ongoing work moving toward shared data formats and linked identifiers, significant problems persist in semantic data integration in order to establish shared identity and shared meaning across heterogeneous biomedical data sources. We present five processes for semantic data integration that, when applied collectively, solve seven key problems. These processes include making explicit the differences between biomedical concepts and database records, aggregating sets of identifiers denoting the same biomedical concepts across data sources, and using declaratively represented forward-chaining rules to take information that is variably represented in source databases and integrating it into a consistent biomedical representation. We demonstrate these processes and solutions by presenting KaBOB (the Knowledge Base Of Biomedicine), a knowledge base of semantically integrated data from 18 prominent biomedical databases using common representations grounded in Open Biomedical Ontologies. An instance of KaBOB with data about humans and seven major model organisms can be built using on the order of 500 million RDF triples. All source code for building KaBOB is available under an open-source license. KaBOB is an integrated knowledge base of biomedical data representationally based in prominent, actively maintained Open Biomedical Ontologies, thus enabling queries of the underlying data in terms of biomedical concepts (e.g., genes and gene products, interactions and processes) rather than features of source-specific data schemas or file formats. 
KaBOB resolves many of the issues that routinely plague biomedical researchers intending to work with data from multiple data sources and provides a platform for ongoing data integration and development and for

  20. Analysis of Patent Databases Using VxInsight

    Energy Technology Data Exchange (ETDEWEB)

    BOYACK,KEVIN W.; WYLIE,BRIAN N.; DAVIDSON,GEORGE S.; JOHNSON,DAVID K.

    2000-12-12

    We present the application of a new knowledge visualization tool, VxInsight, to the mapping and analysis of patent databases. Patent data are mined and placed in a database, relationships between the patents are identified, primarily using the citation and classification structures, then the patents are clustered using a proprietary force-directed placement algorithm. Related patents cluster together to produce a 3-D landscape view of the tens of thousands of patents. The user can navigate the landscape by zooming into or out of regions of interest. Querying the underlying database places a colored marker on each patent matching the query. Automatically generated labels, showing landscape content, update continually upon zooming. Optionally, citation links between patents may be shown on the landscape. The combination of these features enables powerful analyses of patent databases.

  1. An open source web interface for linking models to infrastructure system databases

    Science.gov (United States)

    Knox, S.; Mohamed, K.; Harou, J. J.; Rheinheimer, D. E.; Medellin-Azuara, J.; Meier, P.; Tilmant, A.; Rosenberg, D. E.

    2016-12-01

    Models of networked engineered resource systems such as water or energy systems are often built collaboratively with developers from different domains working at different locations. These models can be linked to large scale real world databases, and they are constantly being improved and extended. As the development and application of these models becomes more sophisticated, and the computing power required for simulations and/or optimisations increases, so has the need for online services and tools which enable the efficient development and deployment of these models. Hydra Platform is an open source, web-based data management system, which allows modellers of network-based models to remotely store network topology and associated data in a generalised manner, allowing it to serve multiple disciplines. Hydra Platform uses a web API using JSON to allow external programs (referred to as `Apps') to interact with its stored networks and perform actions such as importing data, running models, or exporting the networks to different formats. Hydra Platform supports multiple users accessing the same network and has a suite of functions for managing users and data. We present ongoing development in Hydra Platform, the Hydra Web User Interface, through which users can collaboratively manage network data and models in a web browser. The web interface allows multiple users to graphically access, edit and share their networks, run apps and view results. Through apps, which are located on the server, the web interface can give users access to external data sources and models without the need to install or configure any software. This also ensures model results can be reproduced by removing platform or version dependence. Managing data and deploying models via the web interface provides a way for multiple modellers to collaboratively manage data, deploy and monitor model runs and analyse results.
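
    The web API described above exchanges JSON between Apps and the server. The sketch below shows only the shape of such an exchange; the function name and payload fields are assumptions for illustration, not taken from the Hydra Platform documentation:

```python
import json

# An App serialises a network-creation request as JSON (hypothetical fields).
request = json.dumps({
    "add_network": {
        "network": {
            "name": "demo water system",
            "nodes": [{"name": "reservoir"}, {"name": "city"}],
            "links": [{"node_1": "reservoir", "node_2": "city"}],
        }
    }
})

# Server side: parse the payload and validate the topology before storing it.
payload = json.loads(request)
network = payload["add_network"]["network"]
assert len(network["nodes"]) == 2 and len(network["links"]) == 1
print(network["name"])
```

Because topology and data travel as generic JSON, the same request shape can serve water, energy, or other networked-resource models.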

  2. NCLinac web-site linked to the technical and administrative databases

    CERN Document Server

    Szeberenyi, A

    2014-01-01

    The EuCARD project has been using various databases to store scientific and contractual information, as well as working documents. This report documents the methods used during the life of the project and the strategy chosen to archive the technical and administrative material after the project completion. Special care is given to provide easy and open access for the foreground produced, especially for the EuCARD-2 community at large, including its network partners worldwide.

  3. SRF web-site linked to the technical and administrative databases

    CERN Document Server

    Szeberenyi, A

    2013-01-01

    The EuCARD project has been using various databases to store scientific and contractual information, as well as working documents. This report documents the methods used during the life of the project and the strategy chosen to archive the technical and administrative material after the project completion. Special care is given to provide easy and open access for the foreground produced, especially for the EuCARD-2 community at large, including its network partners worldwide.

  4. Comparison of Changes in Central Corneal Thickness During Corneal Collagen Cross-Linking, Using Isotonic Riboflavin Solutions With and Without Dextran, in the Treatment of Progressive Keratoconus.

    Science.gov (United States)

    Zaheer, Naima; Khan, Wajid Ali; Khan, Shama; Khan, M Abdul Moqeet

    2018-03-01

    To compare intraoperative changes in central corneal thickness (CCT) during corneal cross-linking, using 2 different isotonic riboflavin solutions, either with dextran or with hydroxypropyl methylcellulose, in the treatment of progressive keratoconus. In this retrospective study, we analyzed records of corneal thickness measurements taken during the various steps of cross-linking. Cross-linking was performed using either isotonic riboflavin with dextran (group A) or isotonic riboflavin with hydroxypropyl methylcellulose (without dextran) (group B). CCT measurements were recorded before and after epithelial removal, after saturation with the respective isotonic riboflavin solution, after use of hypotonic riboflavin in selected cases, and after ultraviolet A (UV-A) application. A mixed-design analysis of variance was conducted on CCT readings within each group and between both groups. Isotonic riboflavin with dextran causes a significant decrease in corneal thickness, whereas dextran-free isotonic riboflavin causes a significant increase in corneal thickness, thus facilitating the procedure.

  5. Securing the communication of medical information using local biometric authentication and commercial wireless links.

    Science.gov (United States)

    Ivanov, Vladimir I; Yu, Paul L; Baras, John S

    2010-09-01

    Medical information is extremely sensitive in nature - a compromise, such as eavesdropping or tampering by a malicious third party, may result in identity theft, incorrect diagnosis and treatment, and even death. Therefore, it is important to secure the transfer of medical information from the patient to the recording system. We consider a portable, wireless device transferring medical information to a remote server. We decompose this problem into two sub-problems and propose security solutions to each of them: (1) to secure the link between the patient and the portable device, and (2) to secure the link between the portable device and the network. Thus we push the limits of the network security to the edge by authenticating the user using their biometric information; authenticating the device to the network at the physical layer; and strengthening the security of the wireless link with a key exchange mechanism. The proposed authentication methods can be used for recording the readings of medical data in a central database and for accessing medical records in various settings.

  6. Simultaneous cross-linking and p-doping of a polymeric semiconductor film by immersion into a phosphomolybdic acid solution for use in organic solar cells.

    Science.gov (United States)

    Aizawa, Naoya; Fuentes-Hernandez, Canek; Kolesov, Vladimir A; Khan, Talha M; Kido, Junji; Kippelen, Bernard

    2016-03-07

    Poly[N-9'-heptadecanyl-2,7-carbazole-alt-5,5-(4',7'-di-2-thienyl-2',1',3'-benzothiadiazole)] (PCDTBT) is shown to be simultaneously cross-linked and p-doped when immersed into a phosphomolybdic acid solution, yielding conductive films with low solubility that can withstand the solution processing of subsequent photoactive layers. Such a modified PCDTBT film serves to improve hole collection and limit carrier recombination in organic solar cells.

  7. PH-Sensitive Nanogels Synthesised by Radiation-Induced Cross-Linking of Hydrogen-Bonded Interpolymer Complexes in Aqueous Solution

    International Nuclear Information System (INIS)

    Ulanski, P.; Kadłubowski, S.; Henke, A.; Olejnik, A.K.; Rokita, B.; Wach, R.; Rosiak, J.M.

    2010-01-01

    Nanogels, i.e., internally cross-linked hydrophilic polymeric particles of sub-micron sizes, have gained much interest over the last years due to their possible application as components of advanced types of medicines, such as drug carriers. It is expected that they can facilitate the distribution and delivery of different types of biologically active substances (including proteins, peptides and oligonucleotides) in a controlled way within the human body. Nanogels and their bigger analogues, microgels, are mainly synthesised through free-radical cross-linking polymerization of monomers. This synthetic route can be carried out in solution, but emulsion techniques (mini- or microemulsion) are more often preferred due to easier size control and exclusion of the macrogelation process. Additionally, surfactant-free emulsion polymerization (SFEP) is the method of choice for the preparation of temperature-sensitive particles, mainly based on poly(N-isopropylacrylamide). Nanogels were also successfully prepared by intramolecular cross-linking of single macromolecules. More recently, covalent stabilization was utilized to obtain self-assembled structures such as micelles of amphiphilic block copolymers, held together by relatively weak physical interactions. Due to the low stability of these polymolecular systems against dilution or temperature changes, different chemistry-based strategies to turn them into permanent nanoparticles have been proposed in the literature (e.g., independent stabilization of the core or the shell of the micelles).

  8. PH-Sensitive Nanogels Synthesised by Radiation-Induced Cross-Linking of Hydrogen-Bonded Interpolymer Complexes in Aqueous Solution

    Energy Technology Data Exchange (ETDEWEB)

    Ulanski, P.; Kadłubowski, S.; Henke, A.; Olejnik, A. K.; Rokita, B.; Wach, R.; Rosiak, J.M., E-mail: slawekka@mitr.p.lodz.pl [Technical University of Lodz, Wroblewskiego 15, 93-590 Lodz (Poland)

    2010-07-01

    Nanogels, i.e., internally cross-linked hydrophilic polymeric particles of sub-micron sizes, have gained much interest over the last years due to their possible application as components of advanced types of medicines, such as drug carriers. It is expected that they can facilitate the distribution and delivery of different types of biologically active substances (including proteins, peptides and oligonucleotides) in a controlled way within the human body. Nanogels and their bigger analogues, microgels, are mainly synthesised through free-radical cross-linking polymerization of monomers. This synthetic route can be carried out in solution, but emulsion techniques (mini- or microemulsion) are more often preferred due to easier size control and exclusion of the macrogelation process. Additionally, surfactant-free emulsion polymerization (SFEP) is the method of choice for the preparation of temperature-sensitive particles, mainly based on poly(N-isopropylacrylamide). Nanogels were also successfully prepared by intramolecular cross-linking of single macromolecules. More recently, covalent stabilization was utilized to obtain self-assembled structures such as micelles of amphiphilic block copolymers, held together by relatively weak physical interactions. Due to the low stability of these polymolecular systems against dilution or temperature changes, different chemistry-based strategies to turn them into permanent nanoparticles have been proposed in the literature (e.g., independent stabilization of the core or the shell of the micelles).

  9. Repetitive Bibliographical Information in Relational Databases.

    Science.gov (United States)

    Brooks, Terrence A.

    1988-01-01

    Proposes a solution to the problem of loading repetitive bibliographic information in a microcomputer-based relational database management system. The alternative design described is based on a representational redundancy design and normalization theory. (12 references) (Author/CLB)

  10. Full-Text Linking: Affiliated versus Nonaffiliated Access in a Free Database.

    Science.gov (United States)

    Grogg, Jill E.; Andreadis, Debra K.; Kirk, Rachel A.

    2002-01-01

    Presents a comparison of access to full-text articles from a free bibliographic database (PubSCIENCE) for affiliated and unaffiliated users. Found that affiliated users had access to more full-text articles than unaffiliated users had, and that both types of users could increase their level of access through additional searching and greater…

  11. Interactions of cross-linked and uncross-linked chitosan hydrogels ...

    African Journals Online (AJOL)

    The swelling equilibrium of chitosan and sodium tripolyphosphate (NaTPP) cross-linked chitosan hydrogels in aqueous solutions of surfactants differing in structure and hydrophobicity at 25 °C is reported. The anionic surfactant sodium dodecylsulfate (SDS), the cationic surfactant hexadecyltrimethylammonium bromide (HTAB) ...

  12. Improving Information on Maternal Medication Use by Linking Prescription Data to Congenital Anomaly Registers

    DEFF Research Database (Denmark)

    de Jonge, Linda; Garne, Ester; Gini, Rosa

    2015-01-01

    INTRODUCTION: Research on associations between medication use during pregnancy and congenital anomalies is important for assessing the safe use of a medicine in pregnancy. Congenital anomaly (CA) registries do not have optimal information on medicine exposure, in contrast to prescription...... databases. Linkage of prescription databases to the CA registries is a potentially effective method of obtaining accurate information on medicine use in pregnancies and the risk of congenital anomalies. METHODS: We linked data from primary care and prescription databases to five European Surveillance...... of Congenital Anomalies (EUROCAT) CA registries. The linkage was evaluated by looking at linkage rate, characteristics of linked and non-linked cases, first trimester exposure rates for six groups of medicines according to the prescription data and information on medication use registered in the CA databases...

  13. MammoGrid: a mammography database

    CERN Multimedia

    2002-01-01

    What would be the advantages if physicians around the world could gain access to a unique mammography database? The answer may come from MammoGrid, a three-year project under the Fifth Framework Programme of the EC. Led by CERN, MammoGrid involves the UK (the Universities of Oxford, Cambridge and the West of England, Bristol, plus the company Mirada Solutions of Oxford), and Italy (the Universities of Pisa and Sassari and the Hospitals in Udine and Torino). The aim of the project is, in light of emerging GRID technology, to develop a Europe-wide database of mammograms. The database will be used to investigate a set of important healthcare applications as well as the potential of the GRID to enable healthcare professionals throughout the EU to work together effectively. The contributions of the partners include building the GRID-database infrastructure, developing image processing and Computer Aided Detection techniques, and making the clinical evaluation. The first project meeting took place at CERN in Sept...

  14. In-memory databases and innovations in Business Intelligence

    Directory of Open Access Journals (Sweden)

    Ruxandra BABEANU

    2015-07-01

    Full Text Available The large amount of data that companies deal with day by day is a big challenge for traditional BI systems and databases. A significant part of this data is usually wasted because companies do not have the capacity to process it. In today's competitive environment, this lost data could reveal valuable information if it were analyzed and put in the right context. In these circumstances, in-memory databases seem to be the solution. This innovative technology, combined with specialized BI solutions, offers users high performance and satisfaction, and brings new data modeling and processing options.

  15. Materializing the web of linked data

    CERN Document Server

    Konstantinou, Nikolaos

    2015-01-01

    This book explains the Linked Data domain by adopting a bottom-up approach: it introduces the fundamental Semantic Web technologies and building blocks, which are then combined into methodologies and end-to-end examples for publishing datasets as Linked Data, and use cases that harness scholarly information and sensor data. It presents how Linked Data is used for web-scale data integration, information management and search. Special emphasis is given to the publication of Linked Data from relational databases as well as from real-time sensor data streams. The authors also trace the transformation from the document-based World Wide Web into a Web of Data. Materializing the Web of Linked Data is addressed to researchers and professionals studying software technologies, tools and approaches that drive the Linked Data ecosystem, and the Web in general.

  16. License - RPD | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available Rice Proteome Database © Setsuko Komatsu (National Institute of Crop Science, National Agriculture and Food Research Organization, 1-18 Kannondai, Tsukuba, Ibaraki 305-8634, Japan), licensed under CC Attribution-Share Alike 4.0 International. Contact: Setsuko Komatsu. About providing links to this database: you can freely provide links.

  17. Some Considerations about Modern Database Machines

    Directory of Open Access Journals (Sweden)

    Manole VELICANU

    2010-01-01

    Full Text Available Optimizing the two computing resources of any computing system, time and space, has always been one of the priority objectives of any database. A current and effective solution in this respect is the database machine. Optimizing computer applications by means of database machines has been a steady preoccupation of researchers since the late seventies. Several information technologies have revolutionized the present information framework. Among these, those which have brought a major contribution to the optimization of databases are: efficient handling of large volumes of data (Data Warehouse, Data Mining, OLAP – On-Line Analytical Processing), the improvement of DBMS (Database Management Systems) facilities through the integration of new technologies, and the dramatic increase in computing power together with its efficient use (computer networks, massively parallel computing, Grid Computing and so on). All these information technologies, and others, have favored the resumption of research on database machines and the achievement in the last few years of some very good practical results, as far as the optimization of computing resources is concerned.

  18. O-GLYCBASE version 4.0: a revised database of O-glycosylated proteins

    DEFF Research Database (Denmark)

    Gupta, Ramneek; Birch, Hanne; Rapacki, Krzysztof

    1999-01-01

    O-GLYCBASE is a database of glycoproteins with O-linked glycosylation sites. Entries with at least one experimentally verified O-glycosylation site have been compiled from protein sequence databases and the literature. Each entry contains information about the glycan involved, the species, sequence, ...

  19. THE EXTRAGALACTIC DISTANCE DATABASE

    International Nuclear Information System (INIS)

    Tully, R. Brent; Courtois, Helene M.; Jacobs, Bradley A.; Rizzi, Luca; Shaya, Edward J.; Makarov, Dmitry I.

    2009-01-01

    A database can be accessed on the Web at http://edd.ifa.hawaii.edu that was developed to promote access to information related to galaxy distances. The database has three functional components. First, tables from many literature sources have been gathered and enhanced with links through a distinct galaxy naming convention. Second, comparisons of results both at the levels of parameters and of techniques have begun and are continuing, leading to increasing homogeneity and consistency of distance measurements. Third, new material is presented arising from ongoing observational programs at the University of Hawaii 2.2 m telescope, radio telescopes at Green Bank, Arecibo, and Parkes and with the Hubble Space Telescope. This new observational material is made available in tandem with related material drawn from archives and passed through common analysis pipelines.

  20. Data management for the internet of things: design primitives and solution.

    Science.gov (United States)

    Abu-Elkheir, Mervat; Hayajneh, Mohammad; Ali, Najah Abu

    2013-11-14

    The Internet of Things (IoT) is a networking paradigm where interconnected, smart objects continuously generate data and transmit it over the Internet. Many of the IoT initiatives are geared towards manufacturing low-cost and energy-efficient hardware for these objects, as well as the communication technologies that provide object interconnectivity. However, the solutions to manage and utilize the massive volume of data produced by these objects are yet to mature. Traditional database management solutions fall short in satisfying the sophisticated application needs of an IoT network that has a truly global scale. Current solutions for IoT data management address partial aspects of the IoT environment, with special focus on sensor networks. In this paper, we survey the data management solutions that have been proposed for IoT or subsystems of the IoT. We highlight the distinctive design primitives that we believe should be addressed in an IoT data management solution, and discuss how they are approached by the proposed solutions. We finally propose a data management framework for IoT that takes into consideration the discussed design elements and acts as a seed for a comprehensive IoT data management solution. The framework we propose adopts a federated, data- and sources-centric approach to link the diverse Things, with their abundance of data, to the potential applications and services that are envisioned for IoT.

  1. Generic Entity Resolution in Relational Databases

    Science.gov (United States)

    Sidló, Csaba István

    Entity Resolution (ER) covers the problem of identifying distinct representations of real-world entities in heterogeneous databases. We consider the generic formulation of ER problems (GER) with exact outcome. In practice, input data usually resides in relational databases and can grow to huge volumes. Yet, typical solutions described in the literature employ standalone memory resident algorithms. In this paper we utilize facilities of standard, unmodified relational database management systems (RDBMS) to enhance the efficiency of GER algorithms. We study and revise the problem formulation, and propose practical and efficient algorithms optimized for RDBMS external memory processing. We outline a real-world scenario and demonstrate the advantage of algorithms by performing experiments on insurance customer data.
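
    The paper's idea of letting an unmodified RDBMS do the heavy lifting can be sketched as follows: a self-join inside the database generates candidate match pairs, and only the (much smaller) match graph is clustered in application memory via union-find. The schema, the matching rule and the use of SQLite are illustrative assumptions, not the authors' actual algorithms.

```python
import sqlite3

# Toy entity-resolution pass: candidate pairs come from a SQL self-join
# (blocking on zip code, case-insensitive name match); transitive closure
# of the match graph is computed in memory with union-find.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, zip TEXT)")
cur.executemany("INSERT INTO customer VALUES (?, ?, ?)", [
    (1, "Ann Smith", "1011"),
    (2, "ann smith", "1011"),   # same real-world entity, different casing
    (3, "Bob Jones", "2022"),
])

cur.execute("""
    SELECT a.id, b.id
    FROM customer a JOIN customer b
      ON a.id < b.id
     AND a.zip = b.zip
     AND lower(a.name) = lower(b.name)
""")
pairs = cur.fetchall()

# Union-find over the match pairs to form entity clusters.
parent = {row[0]: row[0] for row in cur.execute("SELECT id FROM customer")}
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x
for a, b in pairs:
    parent[find(a)] = find(b)

clusters = {}
for cid in parent:
    clusters.setdefault(find(cid), []).append(cid)
print(sorted(sorted(v) for v in clusters.values()))  # [[1, 2], [3]]
```
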

  2. Linkage between the Danish National Health Service Prescription Database, the Danish Fetal Medicine Database, and other Danish registries as a tool for the study of drug safety in pregnancy

    DEFF Research Database (Denmark)

    Pedersen, Lars Henning; Petersen, Olav Bjørn; Nørgaard, Mette

    2016-01-01

    A linked population-based database is being created in Denmark for research on drug safety during pregnancy. It combines information from the Danish National Health Service Prescription Database (with information on all prescriptions reimbursed in Denmark since 2004), the Danish Fetal Medicine...

  3. Upgrade of laser and electron beam welding database

    CERN Document Server

    Furman, Magdalena

    2014-01-01

    The main purpose of this project was to fix existing issues in, and update, the existing database holding parameters of laser-beam and electron-beam welding machines. Moreover, the database had to be extended to hold the data for the new machines that recently arrived at the workshop. As a solution, the database had to be migrated to the Oracle framework, and a new user interface (using APEX) had to be designed and implemented with integration with the CERN web services (EDMS, Phonebook, JMT, CDD and EDH).

  4. Dynamic Link Inclusion in Online PDF Journals

    OpenAIRE

    Probets, Steve; Brailsford, David; Carr, Les; Hall, Wendy

    1998-01-01

    Two complementary de facto standards for the publication of electronic documents are HTML on the World Wide Web and Adobe's PDF (Portable Document Format) language for use with Acrobat viewers. Both of these formats provide support for hypertext features to be embedded within documents. We present a method which allows links and other hypertext material to be kept in abstract form in separate link databases. The links can then be interpreted or compiled at any stage and applied, in the correct ...

  5. A comparison of database systems for XML-type data.

    Science.gov (United States)

    Risse, Judith E; Leunissen, Jack A M

    2010-01-01

    In the field of bioinformatics, interchangeable data formats based on XML are widely used. XML-type data is also at the core of most web services. With the increasing amount of data stored in XML comes the need for storing and accessing the data. In this paper we analyse the suitability of different database systems for storing and querying large datasets in general and Medline in particular. All reviewed database systems perform well when tested with small to medium sized datasets; however, when the full Medline dataset is queried, a large variation in query times is observed. There is no one system that is vastly superior to the others in this comparison and, depending on the database size and the query requirements, different systems are most suitable. The best all-round solution is the Oracle 11g database system using the new binary storage option. Alias-i's LingPipe is a more lightweight, customizable and sufficiently fast solution. It does, however, require more initial configuration steps. For data with a changing XML structure, Sedna and BaseX as native XML database systems, or MySQL with an XML-type column, are suitable.
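
    As a minimal illustration of handling XML-type data in application code (one alternative to a native XML database), the sketch below parses a small Medline-like fragment and runs an XPath-style query with Python's standard library. The record content is invented for the example.

```python
import xml.etree.ElementTree as ET

# A tiny Medline-like document; real Medline records are far richer.
doc = """
<MedlineCitationSet>
  <MedlineCitation>
    <PMID>12345</PMID>
    <Article><ArticleTitle>XML storage benchmarks</ArticleTitle></Article>
  </MedlineCitation>
  <MedlineCitation>
    <PMID>67890</PMID>
    <Article><ArticleTitle>Native XML databases</ArticleTitle></Article>
  </MedlineCitation>
</MedlineCitationSet>
"""
root = ET.fromstring(doc)

# XPath-style query: collect the titles of all citations, in order.
titles = [t.text for t in root.findall(".//ArticleTitle")]
print(titles)  # ['XML storage benchmarks', 'Native XML databases']
```
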

  6. Building spatio-temporal database model based on ontological approach using relational database environment

    International Nuclear Information System (INIS)

    Mahmood, N.; Burney, S.M.A.

    2017-01-01

    Everything in this world is encapsulated by the fences of space and time. Our daily life activities are closely linked and related to other objects in our vicinity. Therefore, a strong relationship exists between our activities and our current location, time (past, present and future) and the events through which we move. Ontology development and its integration with databases are vital for a true understanding of complex systems involving both spatial and temporal dimensions. In this paper we propose a conceptual framework for building a spatio-temporal database model based on an ontological approach. We use the relational data model for modelling spatio-temporal data content, and present our methodology for spatio-temporal ontological aspects and their transformation into a spatio-temporal database model. We illustrate the implementation of our conceptual model through a case study of cultivated land parcels used for agriculture, to exhibit the spatio-temporal behaviour of agricultural land and related entities. Moreover, the framework provides a generic approach for designing spatio-temporal databases based on ontology. The proposed model is capable of capturing the ontological and, to some extent, epistemological commitments, and of building a spatio-temporal ontology and transforming it into a spatio-temporal data model. Finally, we highlight existing and future research challenges. (author)
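
    One common way to realize such a model in a plain relational database is to attach a valid-time interval to each state of a spatial entity. The sketch below is a generic illustration of that pattern for a cultivated land parcel; the table, columns and data are hypothetical, not the authors' schema.

```python
import sqlite3

# Each row records the state of a land parcel over a valid-time interval.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
CREATE TABLE parcel_state (
    parcel_id   TEXT,
    crop        TEXT,
    geometry    TEXT,   -- e.g. a WKT polygon; plain text in this sketch
    valid_from  TEXT,   -- ISO dates bounding the valid-time interval
    valid_to    TEXT
)""")
cur.executemany("INSERT INTO parcel_state VALUES (?, ?, ?, ?, ?)", [
    ("P1", "wheat", "POLYGON((...))", "2015-01-01", "2015-12-31"),
    ("P1", "maize", "POLYGON((...))", "2016-01-01", "2016-12-31"),
])

# Temporal point query: what was grown on parcel P1 on a given date?
cur.execute("""
    SELECT crop FROM parcel_state
    WHERE parcel_id = 'P1' AND valid_from <= ? AND ? <= valid_to
""", ("2016-06-15", "2016-06-15"))
print(cur.fetchone()[0])  # maize
```
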

  7. Ocean acidification: Linking science to management solutions using the Great Barrier Reef as a case study.

    Science.gov (United States)

    Albright, Rebecca; Anthony, Kenneth R N; Baird, Mark; Beeden, Roger; Byrne, Maria; Collier, Catherine; Dove, Sophie; Fabricius, Katharina; Hoegh-Guldberg, Ove; Kelly, Ryan P; Lough, Janice; Mongin, Mathieu; Munday, Philip L; Pears, Rachel J; Russell, Bayden D; Tilbrook, Bronte; Abal, Eva

    2016-11-01

    Coral reefs are one of the most vulnerable ecosystems to ocean acidification. While our understanding of the potential impacts of ocean acidification on coral reef ecosystems is growing, gaps remain that limit our ability to translate scientific knowledge into management action. To guide solution-based research, we review the current knowledge of ocean acidification impacts on coral reefs alongside management needs and priorities. We use the world's largest continuous reef system, Australia's Great Barrier Reef (GBR), as a case study. We integrate scientific knowledge gained from a variety of approaches (e.g., laboratory studies, field observations, and ecosystem modelling) and scales (e.g., cell, organism, ecosystem) that underpin a systems-level understanding of how ocean acidification is likely to impact the GBR and associated goods and services. We then discuss local and regional management options that may be effective to help mitigate the effects of ocean acidification on the GBR, with likely application to other coral reef systems. We develop a research framework for linking solution-based ocean acidification research to practical management options. The framework assists in identifying effective and cost-efficient options for supporting ecosystem resilience. The framework enables on-the-ground OA management to be the focus, while not losing sight of CO2 mitigation as the ultimate solution. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Elastic-Plastic J-Integral Solutions for Surface Cracks in Tension Using an Interpolation Methodology. Appendix C -- Finite Element Models Solution Database File, Appendix D -- Benchmark Finite Element Models Solution Database File

    Science.gov (United States)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    No closed form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 ≤ a/c ≤ 1, depth: 0.2 ≤ a/B ≤ 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 ≤ E/σys ≤ 1,000, and hardening: 3 ≤ n ≤ 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack mouth opening displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.
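
    The interpolation idea can be illustrated with a generic one-dimensional linear interpolation over a tabulated solution set. The table values below are invented for the example; the authors' actual scheme interpolates over several geometric and material variables at once.

```python
import bisect

# Invented lookup table: J-integral values tabulated against crack
# depth ratio a/B (values are placeholders, not from the report).
a_over_B = [0.2, 0.4, 0.6, 0.8]          # tabulated crack depth ratios
J_values = [10.0, 25.0, 55.0, 120.0]     # illustrative J values

def interp_J(x):
    """Linearly interpolate J at crack depth ratio x within the table."""
    i = bisect.bisect_right(a_over_B, x) - 1
    i = max(0, min(i, len(a_over_B) - 2))   # clamp to the table's segments
    x0, x1 = a_over_B[i], a_over_B[i + 1]
    y0, y1 = J_values[i], J_values[i + 1]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

print(interp_J(0.5))  # midway between 25.0 and 55.0 -> 40.0
```
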

  9. Scale out databases for CERN use cases

    International Nuclear Information System (INIS)

    Baranowski, Zbigniew; Grzybek, Maciej; Canali, Luca; Garcia, Daniel Lanza; Surdy, Kacper

    2015-01-01

    Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system can quickly become a very hard technical challenge, as the size of the active data set and the number of concurrent users increase. Scale-out database technologies are a rapidly developing set of solutions for deploying and managing very large data warehouses on commodity hardware and with open source software. In this paper we will describe the architecture and tests on database systems based on Hadoop and the Cloudera Impala engine. We will discuss the results of our tests, including tests of data loading and integration with existing data sources and in particular with relational databases. We will report on query performance tests done with various data sets of interest at CERN, notably data from the accelerator log database. (paper)

  10. Management of virtualized infrastructure for physics databases

    International Nuclear Information System (INIS)

    Topurov, Anton; Gallerani, Luigi; Chatal, Francois; Piorkowski, Mariusz

    2012-01-01

    Demands for information storage of physics metadata are rapidly increasing together with the requirements for its high availability. Most of the HEP laboratories are struggling to squeeze more from their computer centers, thus focus on virtualizing available resources. CERN started investigating database virtualization in early 2006, first by testing database performance and stability on native Xen. Since then we have been closely evaluating the constantly evolving functionality of virtualisation solutions for database and middle tier together with the associated management applications – Oracle's Enterprise Manager and VM Manager. This session will detail our long experience in dealing with virtualized environments, focusing on newest Oracle OVM 3.0 for x86 and Oracle Enterprise Manager functionality for efficiently managing your virtualized database infrastructure.

  11. O-GLYCBASE version 3.0: a revised database of O-glycosylated proteins

    DEFF Research Database (Denmark)

    Hansen, Jan; Lund, Ole; Nilsson, Jette

    1998-01-01

    O-GLYCBASE is a revised database of information on glycoproteins and their O-linked glycosylation sites. Entries are compiled and revised from the literature and from the sequence databases. Entries include information about species, sequence, glycosylation sites and glycan type, and is fully cr...

  12. A decade of Web Server updates at the Bioinformatics Links Directory: 2003-2012.

    Science.gov (United States)

    Brazas, Michelle D; Yim, David; Yeung, Winston; Ouellette, B F Francis

    2012-07-01

    The 2012 Bioinformatics Links Directory update marks the 10th special Web Server issue from Nucleic Acids Research. Beginning with content from their 2003 publication, the Bioinformatics Links Directory in collaboration with Nucleic Acids Research has compiled and published a comprehensive list of freely accessible, online tools, databases and resource materials for the bioinformatics and life science research communities. The past decade has exhibited significant growth and change in the types of tools, databases and resources being put forth, reflecting both technology changes and the nature of research over that time. With the addition of 90 web server tools and 12 updates from the July 2012 Web Server issue of Nucleic Acids Research, the Bioinformatics Links Directory at http://bioinformatics.ca/links_directory/ now contains an impressive 134 resources, 455 databases and 1205 web server tools, mirroring the continued activity and efforts of our field.

  13. Revisiting Reuse in Main Memory Database Systems

    OpenAIRE

    Dursun, Kayhan; Binnig, Carsten; Cetintemel, Ugur; Kraska, Tim

    2016-01-01

    Reusing intermediates in databases to speed-up analytical query processing has been studied in the past. Existing solutions typically require intermediate results of individual operators to be materialized into temporary tables to be considered for reuse in subsequent queries. However, these approaches are fundamentally ill-suited for use in modern main memory databases. The reason is that modern main memory DBMSs are typically limited by the bandwidth of the memory bus, thus query execution ...

  14. Human Ageing Genomic Resources: new and updated databases

    Science.gov (United States)

    Tacutu, Robi; Thornton, Daniel; Johnson, Emily; Budovsky, Arie; Barardo, Diogo; Craig, Thomas; Diana, Eugene; Lehmann, Gilad; Toren, Dmitri; Wang, Jingwei; Fraifeld, Vadim E

    2018-01-01

    Abstract In spite of a growing body of research and data, human ageing remains a poorly understood process. Over 10 years ago we developed the Human Ageing Genomic Resources (HAGR), a collection of databases and tools for studying the biology and genetics of ageing. Here, we present HAGR’s main functionalities, highlighting new additions and improvements. HAGR consists of six core databases: (i) the GenAge database of ageing-related genes, in turn composed of a dataset of >300 human ageing-related genes and a dataset with >2000 genes associated with ageing or longevity in model organisms; (ii) the AnAge database of animal ageing and longevity, featuring >4000 species; (iii) the GenDR database with >200 genes associated with the life-extending effects of dietary restriction; (iv) the LongevityMap database of human genetic association studies of longevity with >500 entries; (v) the DrugAge database with >400 ageing or longevity-associated drugs or compounds; (vi) the CellAge database with >200 genes associated with cell senescence. All our databases are manually curated by experts and regularly updated to ensure high-quality data. Cross-links across our databases and to external resources help researchers locate and integrate relevant information. HAGR is freely available online (http://genomics.senescence.info/). PMID:29121237

  15. The new Scandinavian Donations and Transfusions database (SCANDAT2)

    DEFF Research Database (Denmark)

    Edgren, Gustaf; Rostgaard, Klaus; Vasan, Senthil K

    2015-01-01

    It is possible to create a binational, nationwide database with almost 50 years of follow-up of blood donors and transfused patients for a range of health outcomes. We aim to use this database for further studies of donor health, transfusion-associated risks, and transfusion-transmitted disease....... AND METHODS: We have previously created the anonymized Scandinavian Donations and Transfusions (SCANDAT) database, containing data on blood donors, blood transfusions, and transfused patients, with complete follow-up of donors and patients for a range of health outcomes. Here we describe the re......-creation of SCANDAT with updated, identifiable data. We collected computerized data on blood donations and transfusions from blood banks covering all of Sweden and Denmark. After data cleaning, two structurally identical databases were created and the entire database was linked with nationwide health outcomes

  16. Federated Database Services for Wind Tunnel Experiment Workflows

    Directory of Open Access Journals (Sweden)

    A. Paventhan

    2006-01-01

    Full Text Available Enabling the full life cycle of scientific and engineering workflows requires robust middleware and services that support effective data management, near-realtime data movement and custom data processing. Many existing solutions exploit the database as a passive metadata catalog. In this paper, we present an approach that makes use of federation of databases to host data-centric wind tunnel application workflows. The user is able to compose customized application workflows based on database services. We provide a reference implementation that leverages typical business tools and technologies: Microsoft SQL Server for database services and Windows Workflow Foundation for workflow services. The application data and user's code are both hosted in federated databases. With the growing interest in XML Web Services in scientific Grids, and with databases beginning to support native XML types and XML Web services, we can expect the role of databases in scientific computation to grow in importance.

  17. Database Constraints Applied to Metabolic Pathway Reconstruction Tools

    Directory of Open Access Journals (Sweden)

    Jordi Vilaplana

    2014-01-01

    Full Text Available Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database. Based on this study, we tried to adjust and tune the configurable parameters of the database server to reach the best performance of the communication data link to/from the database system. Different database technologies were analyzed. We started the study with a public relational SQL database, MySQL. Then, the same database was implemented by a MapReduce-based database named HBase. The results indicated that the standard configuration of MySQL gives an acceptable performance for low or medium size databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes.

  18. Database constraints applied to metabolic pathway reconstruction tools.

    Science.gov (United States)

    Vilaplana, Jordi; Solsona, Francesc; Teixido, Ivan; Usié, Anabel; Karathia, Hiren; Alves, Rui; Mateo, Jordi

    2014-01-01

    Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database. Based on this study, we tried to adjust and tune the configurable parameters of the database server to reach the best performance of the communication data link to/from the database system. Different database technologies were analyzed. We started the study with a public relational SQL database, MySQL. Then, the same database was implemented by a MapReduce-based database named HBase. The results indicated that the standard configuration of MySQL gives an acceptable performance for low or medium size databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes.
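
    As an example of the kind of server-side tuning the study refers to, a MySQL configuration fragment might adjust buffer and table sizes. The parameter names below are real MySQL/InnoDB options, but the values are illustrative placeholders to be sized to the workload, not those used in the paper.

```ini
# Illustrative my.cnf fragment -- values are workload-dependent placeholders.
[mysqld]
innodb_buffer_pool_size = 4G      # cache hot data and indexes in memory
innodb_log_file_size    = 512M    # larger redo logs help write-heavy loads
max_connections         = 200     # limit on concurrent clients
tmp_table_size          = 256M    # ceiling for in-memory temporary tables
max_heap_table_size     = 256M    # must be raised together with tmp_table_size
```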

  19. O-GLYCBASE version 2.0: a revised database of O-glycosylated proteins

    DEFF Research Database (Denmark)

    Hansen, Jan; Lund, Ole; Rapacki, Kristoffer

    1997-01-01

    O-GLYCBASE is an updated database of information on glycoproteins and their O-linked glycosylation sites. Entries are compiled and revised from the literature, and from the SWISS-PROT database. Entries include information about species, sequence, glycosylation sites and glycan type. O-GLYCBASE is...... patterns for the GalNAc, mannose and GlcNAc transferases are shown. The O-GLYCBASE database is available through WWW or by anonymous FTP....

  20. Aspects of the design of distributed databases

    OpenAIRE

    Burlacu Irina-Andreea

    2011-01-01

    Distributed data: data processed by a system can be distributed among several computers, but remain accessible from any of them. A distributed database design problem is presented that involves the development of a global model, a fragmentation, and a data allocation. The student is given a conceptual entity-relationship model for the database and a description of the transactions and a generic network environment. A stepwise solution approach to this problem is shown, based on mean value a...

  1. TRENDS: The aeronautical post-test database management system

    Science.gov (United States)

    Bjorkman, W. S.; Bondi, M. J.

    1990-01-01

    TRENDS, an engineering-test database operating system developed by NASA to support rotorcraft flight tests, is described. Capabilities and characteristics of the system are presented, with examples of its use in recalling and analyzing rotorcraft flight-test data from a TRENDS database. The importance of system user-friendliness in gaining users' acceptance is stressed, as is the importance of integrating supporting narrative data with numerical data in engineering-test databases. Considerations relevant to the creation and maintenance of flight-test databases are discussed, and TRENDS' solutions to database management problems are described. Requirements, constraints, and other considerations which led to the system's configuration are discussed, and some of the lessons learned during TRENDS' development are presented. Potential applications of TRENDS to a wide range of aeronautical and other engineering tests are identified.

  2. The relational clinical database: a possible solution to the star wars in registry systems.

    Science.gov (United States)

    Michels, D K; Zamieroski, M

    1990-12-01

    In summary, having data from other service areas available in a relational clinical database could resolve many of the problems existing in today's registry systems. Uniting sophisticated information systems into a centralized database system could definitely be a corporate asset in managing the bottom line.

  3. Sharing and executing linked data queries in a collaborative environment.

    Science.gov (United States)

    García Godoy, María Jesús; López-Camacho, Esteban; Navas-Delgado, Ismael; Aldana-Montes, José F

    2013-07-01

Life Sciences have emerged as a key domain in the Linked Data community because of the diversity of data semantics and formats available through a great variety of databases and web technologies. Thus, it has been used as the perfect domain for applications in the web of data. Unfortunately, bioinformaticians are not exploiting the full potential of this already available technology, and experts in the Life Sciences find it genuinely difficult to discover, understand and devise how to take advantage of these interlinked (integrated) data. In this article, we present Bioqueries, a wiki-based portal that is aimed at community building around biological Linked Data. This tool has been designed to aid bioinformaticians in developing SPARQL queries to access biological databases exposed as Linked Data, and also to help biologists gain a deeper insight into the potential use of this technology. This public space offers several services and a collaborative infrastructure to stimulate the consumption of biological Linked Data and, therefore, contribute to implementing the benefits of the web of data in this domain. Bioqueries currently contains 215 query entries grouped by database and theme, 230 registered users and 44 endpoints that contain biological Resource Description Framework information. The Bioqueries portal is freely accessible at http://bioqueries.uma.es. Supplementary data are available at Bioinformatics online.
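The sort of entry Bioqueries catalogues pairs a SPARQL query with an endpoint. The sketch below shows, under an invented endpoint URL and a simplified UniProt-style vocabulary, how such a query could be encoded into the HTTP GET request a client sends; it is illustrative and not taken from the portal itself.

```python
from urllib.parse import urlencode

# Sketch of how a catalogued query might be sent to a Linked Data endpoint.
# The endpoint URL and the UniProt-style vocabulary are illustrative only.
SPARQL_QUERY = """\
PREFIX up: <http://purl.uniprot.org/core/>
SELECT ?protein ?name
WHERE {
    ?protein a up:Protein ;
             up:mnemonic ?name .
}
LIMIT 10"""

def build_request_url(endpoint, query):
    """Encode a SPARQL query as the GET request a SPARQL client would issue."""
    return endpoint + "?" + urlencode({"query": query, "format": "json"})

url = build_request_url("http://example.org/sparql", SPARQL_QUERY)
```

A real client would then issue this GET request and parse the JSON result bindings; sharing the query text itself, as Bioqueries does, is what lets other users rerun or adapt it.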

  4. Domain fusion analysis by applying relational algebra to protein sequence and domain databases.

    Science.gov (United States)

    Truong, Kevin; Ikura, Mitsuhiko

    2003-05-06

Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method to perform domain fusion analysis that leverages these efforts will become increasingly powerful. This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From the scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypotheses. Results can be viewed at http://calcium.uhnres.utoronto.ca/pi. As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time.
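The "Rosetta Stone" idea behind domain fusion analysis maps naturally onto SQL. The sketch below is a toy illustration of the approach under invented table, protein and domain names, not the authors' actual schema: a self-join finds a protein that fuses two domains, and each component domain is then traced back to proteins that carry it separately.

```python
import sqlite3

# Toy domain-hit table: one composite protein fuses domA and domB,
# which otherwise occur alone in protX and protY.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE domain_hit (protein TEXT, domain TEXT);
INSERT INTO domain_hit VALUES
    ('fusionC', 'domA'), ('fusionC', 'domB'),  -- a composite protein
    ('protX',   'domA'),                       -- domA alone
    ('protY',   'domB');                       -- domB alone
""")

# protX and protY are predicted as functionally linked because a third
# protein fuses a domain from each: the f1/f2 self-join finds the fused
# domain pair, and x/y trace each domain back to its separate carriers.
rows = con.execute("""
SELECT x.protein, y.protein, f1.domain, f2.domain
FROM domain_hit f1
JOIN domain_hit f2 ON f1.protein = f2.protein AND f1.domain < f2.domain
JOIN domain_hit x  ON x.domain = f1.domain AND x.protein <> f1.protein
JOIN domain_hit y  ON y.domain = f2.domain AND y.protein <> f2.protein
""").fetchall()
```

Because the whole analysis is a handful of joins, rerunning it after a sequence or domain database update is just a matter of re-executing the query, which is the dynamic-update property the abstract highlights.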

  5. ALFRED: An Allele Frequency Database for Microevolutionary Studies

    Directory of Open Access Journals (Sweden)

    Kenneth K Kidd

    2005-01-01

Full Text Available Many kinds of microevolutionary studies require data on multiple polymorphisms in multiple populations. Increasingly, and especially for human populations, multiple research groups collect relevant data and those data are dispersed widely in the literature. ALFRED has been designed to hold data from many sources and make them available over the web. Data are assembled from multiple sources, curated, and entered into the database. Multiple links to other resources are also established by the curators. A variety of search options are available and additional geographically based interfaces are being developed. The database can serve the human anthropologic genetic community by identifying what loci are already typed on many populations, thereby helping to focus efforts on a common set of markers. The database can also serve as a model for databases handling similar DNA polymorphism data for other species.

  6. The Danish Fetal Medicine Database

    Directory of Open Access Journals (Sweden)

    Ekelund CK

    2016-10-01

Full Text Available Charlotte Kvist Ekelund,1 Tine Iskov Kopp,2 Ann Tabor,1 Olav Bjørn Petersen3 1Department of Obstetrics, Center of Fetal Medicine, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark; 2Registry Support Centre (East) – Epidemiology and Biostatistics, Research Centre for Prevention and Health, Glostrup, Denmark; 3Fetal Medicine Unit, Aarhus University Hospital, Aarhus Nord, Denmark Aim: The aim of this study is to set up a database in order to monitor the detection rates and false-positive rates of first-trimester screening for chromosomal abnormalities and prenatal detection rates of fetal malformations in Denmark. Study population: Pregnant women with a first or second trimester ultrasound scan performed at all public hospitals in Denmark are registered in the database. Main variables/descriptive data: Data on maternal characteristics, ultrasonic, and biochemical variables are continuously sent from the fetal medicine units' Astraia databases to the central database via web service. Information about outcome of pregnancy (miscarriage, termination, live birth, or stillbirth) is received from the National Patient Register and National Birth Register and linked via the Danish unique personal registration number. Furthermore, results of all pre- and postnatal chromosome analyses are sent to the database. Conclusion: It has been possible to establish a fetal medicine database, which monitors first-trimester screening for chromosomal abnormalities and second-trimester screening for major fetal malformations with the input from already collected data. The database is valuable to assess the performance at a regional level and to compare Danish performance with international results at a national level. Keywords: prenatal screening, nuchal translucency, fetal malformations, chromosomal abnormalities
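The linkage step described above, joining screening records to registry outcomes through a unique personal identifier, can be sketched as follows. All identifiers and field names here are invented for illustration.

```python
# Toy register linkage via a unique personal identifier, in the spirit of
# linking screening data to birth-register outcomes. IDs are invented.
screening = [
    {"id": "010190-1234", "nuchal_translucency_mm": 1.8},
    {"id": "020291-5678", "nuchal_translucency_mm": 3.6},
]
birth_register = {
    "010190-1234": {"outcome": "live birth"},
    # the second pregnancy has no registered outcome yet
}

def link_records(scans, outcomes):
    """Attach the registered pregnancy outcome to each screening record."""
    linked = []
    for scan in scans:
        record = dict(scan)  # keep the original screening record intact
        record.update(outcomes.get(scan["id"], {"outcome": "unknown"}))
        linked.append(record)
    return linked

linked = link_records(screening, birth_register)
```

Keeping unmatched records with an explicit "unknown" outcome, rather than dropping them, is what lets a monitoring database distinguish missing follow-up from a genuinely negative result.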

  7. HIP2: An online database of human plasma proteins from healthy individuals

    Directory of Open Access Journals (Sweden)

    Shen Changyu

    2008-04-01

Full Text Available Background: With the introduction of increasingly powerful mass spectrometry (MS) techniques for clinical research, several recent large-scale MS proteomics studies have sought to characterize the entire human plasma proteome with a general objective of identifying thousands of proteins leaked from tissues into the circulating blood. Understanding the basic constituents, diversity, and variability of the human plasma proteome is essential to the development of sensitive molecular diagnosis and treatment monitoring solutions for future biomedical applications. Biomedical researchers today, however, do not have an integrated online resource in which they can search for plasma proteins collected from different mass spectrometry platforms, experimental protocols, and search software for healthy individuals. The lack of such a resource for comparisons has made it difficult to interpret proteomics profile changes in patients' plasma and to design protein biomarker discovery experiments. Description: To aid future protein biomarker studies of disease and health from human plasma, we developed an online database, HIP2 (Healthy Human Individual's Integrated Plasma Proteome). The current version contains 12,787 protein entries linked to 86,831 peptide entries identified using different MS platforms. Conclusion: This web-based database will be useful to biomedical researchers involved in biomarker discovery research. This database has been developed to be a comprehensive collection of healthy human plasma proteins, and has protein data captured in a relational database schema built to contain mappings of supporting peptide evidence from several high-quality and high-throughput mass-spectrometry (MS) experimental data sets. Users can search for plasma protein/peptide annotations, peptide/protein alignments, and experimental/sample conditions with options for filter-based retrieval to achieve greater analytical power for discovery and validation.

  8. Scalable Database Design of End-Game Model with Decoupled Countermeasure and Threat Information

    Science.gov (United States)

    2017-11-01

the Army Modular Active Protection System (MAPS) program to provide end-to-end APS modeling and simulation capabilities. The SSES simulation features... research project of scalable database design was initiated in support of SSES modularization efforts with respect to 4 major software components... Abbreviations from the report: Iron Curtain; KE, kinetic energy; MAPS, Modular Active Protective System; OLE DB, object linking and embedding database; RDB, relational database; RPG.

  9. Standardisation of an European end-user nutrient database for nutritional epidemiology: what can we learn from the EPIC Nutrient Database (ENDB) project?

    DEFF Research Database (Denmark)

    Slimani, N.; Deharveng, G.; Unwin, I.

    2007-01-01

    the absence of a reference European nutrient database for international nutritional epidemiology studies, the EPIC Nutrient Database (ENDB) project has been set up to standardise nutrient databases (NDBs) across 10 European countries participating in the EPIC study. This paper reports the main...... problems in harmonising NDBs experienced by end-user in the ENDB project and the solutions adopted to prevent and minimize them, which are also relevant for other large European nutritional studies. Furthermore, it provides end-user recommendations for improving the comparability of European and other NDBs...

  10. Scale out databases for CERN use cases

    CERN Document Server

    Baranowski, Zbigniew; Canali, Luca; Garcia, Daniel Lanza; Surdy, Kacper

    2015-01-01

    Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system can quickly become a very hard technical challenge, as the size of the active data set and the number of concurrent users increase. Scale-out database technologies are a rapidly developing set of solutions for deploying and managing very large data warehouses on commodity hardware and with open source software. In this paper we will describe the architecture and tests on database systems based on Hadoop and the Cloudera Impala engine. We will discuss the results of our tests, including tests of data loading and integration with existing data sources and in particular with relational databases. We will report on query performance tests done with various data sets of interest at CERN, notably data from the accelerator log dat...

  11. Using the structure-function linkage database to characterize functional domains in enzymes.

    Science.gov (United States)

    Brown, Shoshana; Babbitt, Patricia

    2014-12-12

    The Structure-Function Linkage Database (SFLD; http://sfld.rbvi.ucsf.edu/) is a Web-accessible database designed to link enzyme sequence, structure, and functional information. This unit describes the protocols by which a user may query the database to predict the function of uncharacterized enzymes and to correct misannotated functional assignments. The information in this unit is especially useful in helping a user discriminate functional capabilities of a sequence that is only distantly related to characterized sequences in publicly available databases. Copyright © 2014 John Wiley & Sons, Inc.

  12. ColMat web-site linked to the technical and administrative databases

    CERN Document Server

    EuCARD, Collaboration

    2014-01-01

    The EuCARD project has been using various databases to store scientific and contractual information, as well as working documents. This report documents the methods used during the life of the project and the strategy chosen to archive the technical and administrative material after the project completion. Special care is given to provide easy and open access for the foreground produced, especially for the EuCARD-2 community at large, including its network partners worldwide.

  13. YMDB: the Yeast Metabolome Database

    Science.gov (United States)

    Jewison, Timothy; Knox, Craig; Neveu, Vanessa; Djoumbou, Yannick; Guo, An Chi; Lee, Jacqueline; Liu, Philip; Mandal, Rupasri; Krishnamurthy, Ram; Sinelnikov, Igor; Wilson, Michael; Wishart, David S.

    2012-01-01

The Yeast Metabolome Database (YMDB, http://www.ymdb.ca) is a richly annotated ‘metabolomic’ database containing detailed information about the metabolome of Saccharomyces cerevisiae. Modeled closely after the Human Metabolome Database, the YMDB contains >2000 metabolites with links to 995 different genes/proteins, including enzymes and transporters. The information in YMDB has been gathered from hundreds of books, journal articles and electronic databases. In addition to its comprehensive literature-derived data, the YMDB also contains an extensive collection of experimental intracellular and extracellular metabolite concentration data compiled from detailed Mass Spectrometry (MS) and Nuclear Magnetic Resonance (NMR) metabolomic analyses performed in our lab. This is further supplemented with thousands of NMR and MS spectra collected on pure, reference yeast metabolites. Each metabolite entry in the YMDB contains an average of 80 separate data fields including comprehensive compound description, names and synonyms, structural information, physico-chemical data, reference NMR and MS spectra, intracellular/extracellular concentrations, growth conditions and substrates, pathway information, enzyme data, gene/protein sequence data, as well as numerous hyperlinks to images, references and other public databases. Extensive searching, relational querying and data browsing tools are also provided that support text, chemical structure, spectral, molecular weight and gene/protein sequence queries. Because of S. cerevisiae's importance as a model organism for biologists and as a biofactory for industry, we believe this kind of database could have considerable appeal not only to metabolomics researchers, but also to yeast biologists, systems biologists, the industrial fermentation industry, as well as the beer, wine and spirit industry. PMID:22064855

  14. Patterns of Undergraduates' Use of Scholarly Databases in a Large Research University

    Science.gov (United States)

    Mbabu, Loyd Gitari; Bertram, Albert; Varnum, Ken

    2013-01-01

    Authentication data was utilized to explore undergraduate usage of subscription electronic databases. These usage patterns were linked to the information literacy curriculum of the library. The data showed that out of the 26,208 enrolled undergraduate students, 42% of them accessed a scholarly database at least once in the course of the entire…

  15. HOLLYWOOD: a comparative relational database of alternative splicing.

    Science.gov (United States)

    Holste, Dirk; Huo, George; Tung, Vivian; Burge, Christopher B

    2006-01-01

    RNA splicing is an essential step in gene expression, and is often variable, giving rise to multiple alternatively spliced mRNA and protein isoforms from a single gene locus. The design of effective databases to support experimental and computational investigations of alternative splicing (AS) is a significant challenge. In an effort to integrate accurate exon and splice site annotation with current knowledge about splicing regulatory elements and predicted AS events, and to link information about the splicing of orthologous genes in different species, we have developed the Hollywood system. This database was built upon genomic annotation of splicing patterns of known genes derived from spliced alignment of complementary DNAs (cDNAs) and expressed sequence tags, and links features such as splice site sequence and strength, exonic splicing enhancers and silencers, conserved and non-conserved patterns of splicing, and cDNA library information for inferred alternative exons. Hollywood was implemented as a relational database and currently contains comprehensive information for human and mouse. It is accompanied by a web query tool that allows searches for sets of exons with specific splicing characteristics or splicing regulatory element composition, or gives a graphical or sequence-level summary of splicing patterns for a specific gene. A streamlined graphical representation of gene splicing patterns is provided, and these patterns can alternatively be layered onto existing information in the UCSC Genome Browser. The database is accessible at http://hollywood.mit.edu.

  16. ATLAS DDM/DQ2 & NoSQL databases: Use cases and experiences

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    NoSQL databases. This includes distributed file system like HDFS that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value/document stores, like HBase, Cassandra or MongoDB. These databases provide solutions to particular types...

  17. Data Management for the Internet of Things: Design Primitives and Solution

    Science.gov (United States)

    Abu-Elkheir, Mervat; Hayajneh, Mohammad; Ali, Najah Abu

    2013-01-01

The Internet of Things (IoT) is a networking paradigm where interconnected, smart objects continuously generate data and transmit it over the Internet. Many of the IoT initiatives are geared towards manufacturing low-cost and energy-efficient hardware for these objects, as well as the communication technologies that provide object interconnectivity. However, the solutions to manage and utilize the massive volume of data produced by these objects are yet to mature. Traditional database management solutions fall short in satisfying the sophisticated application needs of an IoT network that has a truly global scale. Current solutions for IoT data management address partial aspects of the IoT environment with special focus on sensor networks. In this paper, we survey the data management solutions that are proposed for IoT or subsystems of the IoT. We highlight the distinctive design primitives that we believe should be addressed in an IoT data management solution, and discuss how they are approached by the proposed solutions. We finally propose a data management framework for IoT that takes into consideration the discussed design elements and acts as a seed to a comprehensive IoT data management solution. The framework we propose adopts a federated, data- and source-centric approach to link the diverse Things with their abundance of data to the potential applications and services that are envisioned for IoT. PMID:24240599

  18. Landslide databases for applied landslide impact research: the example of the landslide database for the Federal Republic of Germany

    Science.gov (United States)

    Damm, Bodo; Klose, Martin

    2014-05-01

    running on PostgreSQL/PostGIS. This provides advanced functionality for spatial data analysis and forms the basis for future data provision and visualization using a WebGIS application. Analysis of landslide database contents shows that in most parts of Germany landslides primarily affect transportation infrastructures. Although with distinct lower frequency, recent landslides are also recorded to cause serious damage to hydraulic facilities and waterways, supply and disposal infrastructures, sites of cultural heritage, as well as forest, agricultural, and mining areas. The main types of landslide damage are failure of cut and fill slopes, destruction of retaining walls, street lights, and forest stocks, burial of roads, backyards, and garden areas, as well as crack formation in foundations, sewer lines, and building walls. Landslide repair and mitigation at transportation infrastructures is dominated by simple solutions such as catch barriers or rock fall drapery. These solutions are often undersized and fail under stress. The use of costly slope stabilization or protection systems is proven to reduce these risks effectively over longer maintenance cycles. The right balancing of landslide mitigation is thus a crucial problem in managing landslide risks. Development and analysis of such landslide databases helps to support decision-makers in finding efficient solutions to minimize landslide risks for human beings, infrastructures, and financial assets.

  19. Databases for neurogenetics: introduction, overview, and challenges.

    Science.gov (United States)

    Sobrido, María-Jesús; Cacheiro, Pilar; Carracedo, Angel; Bertram, Lars

    2012-09-01

The importance for research and clinical utility of mutation databases, as well as the issues and difficulties entailed in their construction, is discussed within the Human Variome Project. While general principles and standards can apply to most human diseases, some specific questions arise when dealing with the nature of genetic neurological disorders. So far, publicly accessible mutation databases exist for only about half of the genes causing neurogenetic disorders, and considerable work is clearly still needed to optimize their content. The current landscape, main challenges, some potential solutions, and future perspectives on genetic databases for disorders of the nervous system are reviewed in this special issue of Human Mutation on neurogenetics. © 2012 Wiley Periodicals, Inc.

  20. Linked Health Data: how linked data can help provide better health decisions.

    Science.gov (United States)

    Farinelli, Fernanda; Barcellos de Almeida, Maurício; Linhares de Souza, Yóris

    2015-01-01

This paper provides a brief survey of the use of linked data in healthcare to foster better health decisions and increase health knowledge. We present real cases from the Brazilian experience and emphasize some issues in research. This paper does not intend to be fully comprehensive; rather, we discuss some open issues and research challenges in linked data and the technologies involved. We conclude that even though linked data has been adopted in many countries, some challenges have to be overcome, for example, interoperability between different standards. A solution able to foster semantic interoperability between different standards must be developed. The benefits of linked health data include better diagnostic decision making, more assertive treatments, knowledge acquisition, and improvements in the quality of healthcare services to citizens.

  1. "Pseudo" Faraday cage: a solution for telemetry link interaction between a left ventricular assist device and an implantable cardioverter defibrillator.

    Science.gov (United States)

    Jacob, Sony; Cherian, Prasad K; Ghumman, Waqas S; Das, Mithilesh K

    2010-09-01

Patients implanted with left ventricular assist devices (LVAD) may have implantable cardioverter defibrillators (ICD) implanted for sudden cardiac death prevention. This opens the possibility of device-device communication interactions and thus interference. We present a case of such an interaction that led to ICD communication failure following the activation of an LVAD. In this paper, we describe a practical solution to circumvent the communication interference and review the communication links of ICDs and possible mechanisms of ICD-LVAD interactions.

  2. The MAJORANA Parts Tracking Database

    Science.gov (United States)

    Abgrall, N.; Aguayo, E.; Avignone, F. T.; Barabash, A. S.; Bertrand, F. E.; Brudanin, V.; Busch, M.; Byram, D.; Caldwell, A. S.; Chan, Y.-D.; Christofferson, C. D.; Combs, D. C.; Cuesta, C.; Detwiler, J. A.; Doe, P. J.; Efremenko, Yu.; Egorov, V.; Ejiri, H.; Elliott, S. R.; Esterline, J.; Fast, J. E.; Finnerty, P.; Fraenkle, F. M.; Galindo-Uribarri, A.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, V. E.; Gusev, K.; Hallin, A. L.; Hazama, R.; Hegai, A.; Henning, R.; Hoppe, E. W.; Howard, S.; Howe, M. A.; Keeter, K. J.; Kidd, M. F.; Kochetov, O.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J. Diaz; Leviner, L. E.; Loach, J. C.; MacMullin, J.; Martin, R. D.; Meijer, S. J.; Mertens, S.; Miller, M. L.; Mizouni, L.; Nomachi, M.; Orrell, J. L.; O`Shaughnessy, C.; Overman, N. R.; Petersburg, R.; Phillips, D. G.; Poon, A. W. P.; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, K.; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, K. J.; Snyder, N.; Soin, A.; Suriano, A. M.; Tedeschi, D.; Thompson, J.; Timkin, V.; Tornow, W.; Trimble, J. E.; Varner, R. L.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B. R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A. R.; Yu, C.-H.; Yumatov, V.; Zhitnikov, I.

    2015-04-01

The MAJORANA DEMONSTRATOR is an ultra-low background physics experiment searching for the neutrinoless double beta decay of 76Ge. The MAJORANA Parts Tracking Database is used to record the history of components used in the construction of the DEMONSTRATOR. The tracking implementation takes a novel approach based on the schema-free database technology CouchDB. Transportation, storage, and processes undergone by parts such as machining or cleaning are linked to part records. Tracking parts provides a great logistics benefit and an important quality assurance reference during construction. In addition, the location history of parts provides an estimate of their exposure to cosmic radiation. A web application for data entry and a radiation exposure calculator have been developed as tools for achieving the extreme radio-purity required for this rare decay search.

  3. The MAJORANA Parts Tracking Database

    Energy Technology Data Exchange (ETDEWEB)

    Abgrall, N. [Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA (United States); Aguayo, E. [Pacific Northwest National Laboratory, Richland, WA (United States); Avignone, F.T. [Department of Physics and Astronomy, University of South Carolina, Columbia, SC (United States); Oak Ridge National Laboratory, Oak Ridge, TN (United States); Barabash, A.S. [Institute for Theoretical and Experimental Physics, Moscow (Russian Federation); Bertrand, F.E. [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Brudanin, V. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Busch, M. [Department of Physics, Duke University, Durham, NC (United States); Triangle Universities Nuclear Laboratory, Durham, NC (United States); Byram, D. [Department of Physics, University of South Dakota, Vermillion, SD (United States); Caldwell, A.S. [South Dakota School of Mines and Technology, Rapid City, SD (United States); Chan, Y-D. [Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA (United States); Christofferson, C.D. [South Dakota School of Mines and Technology, Rapid City, SD (United States); Combs, D.C. [Department of Physics, North Carolina State University, Raleigh, NC (United States); Triangle Universities Nuclear Laboratory, Durham, NC (United States); Cuesta, C.; Detwiler, J.A.; Doe, P.J. [Center for Experimental Nuclear Physics and Astrophysics, and Department of Physics, University of Washington, Seattle, WA (United States); Efremenko, Yu. [Department of Physics and Astronomy, University of Tennessee, Knoxville, TN (United States); Egorov, V. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Ejiri, H. [Research Center for Nuclear Physics and Department of Physics, Osaka University, Ibaraki, Osaka (Japan); Elliott, S.R. [Los Alamos National Laboratory, Los Alamos, NM (United States); and others

    2015-04-11

The MAJORANA DEMONSTRATOR is an ultra-low background physics experiment searching for the neutrinoless double beta decay of {sup 76}Ge. The MAJORANA Parts Tracking Database is used to record the history of components used in the construction of the DEMONSTRATOR. The tracking implementation takes a novel approach based on the schema-free database technology CouchDB. Transportation, storage, and processes undergone by parts such as machining or cleaning are linked to part records. Tracking parts provides a great logistics benefit and an important quality assurance reference during construction. In addition, the location history of parts provides an estimate of their exposure to cosmic radiation. A web application for data entry and a radiation exposure calculator have been developed as tools for achieving the extreme radio-purity required for this rare decay search.
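A schema-free part record of the kind CouchDB stores can be sketched as a plain document whose process history drives a surface-exposure estimate. The field names and the toy exposure model below are invented for illustration, not the experiment's actual schema or calculator:

```python
from datetime import date

# A schema-free "document" for a tracked part: nothing constrains which
# fields appear, and the history list links processes to the part record.
part = {
    "_id": "part-000123",
    "material": "copper",
    "history": [
        {"process": "machining", "location": "surface lab",
         "start": date(2013, 1, 10), "end": date(2013, 2, 9)},
        {"process": "storage", "location": "underground",
         "start": date(2013, 2, 9), "end": date(2013, 6, 9)},
    ],
}

# Toy exposure model: only days spent above ground accumulate cosmic-ray dose.
SURFACE_LOCATIONS = {"surface lab"}

def surface_days(doc):
    """Sum the days the part spent at surface locations in its history."""
    return sum(
        (step["end"] - step["start"]).days
        for step in doc["history"]
        if step["location"] in SURFACE_LOCATIONS
    )

days = surface_days(part)
```

The appeal of the schema-free approach is visible here: a new process type or an extra field on one history step needs no migration, yet the location history still supports a simple exposure calculation.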

  4. Bluetooth wireless database for scoliosis clinics.

    Science.gov (United States)

    Lou, E; Fedorak, M V; Hill, D L; Raso, J V; Moreau, M J; Mahood, J K

    2003-05-01

    A database system with Bluetooth wireless connectivity has been developed so that scoliosis clinics can be run more efficiently and data can be mined for research studies without significant increases in equipment cost. The wireless database system consists of a Bluetooth-enabled laptop or PC and a Bluetooth-enabled handheld personal data assistant (PDA). Each patient has a profile in the database, which has all of his or her clinical history. Immediately prior to the examination, the orthopaedic surgeon selects a patient's profile from the database and uploads that data to the PDA over a Bluetooth wireless connection. The surgeon can view the entire clinical history of the patient while in the examination room and, at the same time, enter in any new measurements and comments from the current examination. After seeing the patient, the surgeon synchronises the newly entered information with the database wirelessly and prints a record for the chart. This combination of the database and the PDA both improves efficiency and accuracy and can save significant time, as there is less duplication of work, and no dictation is required. The equipment required to implement this solution is a Bluetooth-enabled PDA and a Bluetooth wireless transceiver for the PC or laptop.
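The synchronisation step described above, merging measurements entered on the handheld back into the central record, can be sketched as follows; the record structure and field names are invented for illustration.

```python
# Central patient record and a new visit entered on the handheld (PDA).
# Structure and field names are hypothetical.
central_record = {
    "patient": "P-042",
    "visits": [{"date": "2002-11-05", "cobb_angle_deg": 24}],
}
pda_entry = {"date": "2003-05-01", "cobb_angle_deg": 27}

def synchronise(record, new_visit):
    """Append the visit from the handheld unless it is already recorded."""
    if new_visit not in record["visits"]:
        record["visits"].append(new_visit)
    return record

synchronise(central_record, pda_entry)
```

The duplicate check makes the sync idempotent, so repeating the wireless transfer (for example after a dropped Bluetooth connection) cannot insert the same examination twice.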

  5. Using Large Diabetes Databases for Research.

    Science.gov (United States)

    Wild, Sarah; Fischbacher, Colin; McKnight, John

    2016-09-01

There are an increasing number of clinical, administrative and trial databases that can be used for research. These are particularly valuable if there are opportunities for linkage to other databases. This paper describes examples of the use of large diabetes databases for research. It reviews the advantages and disadvantages of using large diabetes databases for research and suggests solutions for some challenges. Large, high-quality databases offer potential sources of information for research at relatively low cost. Fundamental issues for using databases for research are the completeness of capture of cases within the population and time period of interest and the accuracy of the diagnosis of diabetes and outcomes of interest. The extent to which people included in the database are representative should be considered if the database is not population based and there is the intention to extrapolate findings to the wider diabetes population. Information on key variables such as date of diagnosis or duration of diabetes may not be available at all, may be inaccurate or may contain a large amount of missing data. Information on key confounding factors is rarely available for the nondiabetic or general population, limiting comparisons with the population of people with diabetes. However, comparisons that allow for differences in distribution of important demographic factors may be feasible using data for the whole population or a matched cohort study design. In summary, diabetes databases can be used to address important research questions. Understanding the strengths and limitations of this approach is crucial to interpret the findings appropriately. © 2016 Diabetes Technology Society.

  6. CCDB: a curated database of genes involved in cervix cancer.

    Science.gov (United States)

    Agarwal, Subhash M; Raghav, Dhwani; Singh, Harinder; Raghava, G P S

    2011-01-01

    The Cervical Cancer gene DataBase (CCDB, http://crdd.osdd.net/raghava/ccdb) is a manually curated catalog of experimentally validated genes that are thought, or known, to be involved in the different stages of cervical carcinogenesis. Despite the large population of women presently affected by this malignancy, no database yet exists that catalogs information on genes associated with cervical cancer. We have therefore compiled 537 genes in CCDB that are linked with cervical cancer causation processes such as methylation, gene amplification, mutation, polymorphism and change in expression level, as evident from the published literature. Each record contains gene details such as architecture (exon-intron structure), location, function, sequences (mRNA/CDS/protein), ontology, interacting partners, homology to other eukaryotic genomes, structure and links to other public databases, thus augmenting CCDB with external data. Manually curated literature references are also provided to support the inclusion of each gene in the database and establish its association with cervix cancer. In addition, CCDB provides information on microRNAs altered in cervical cancer, as well as a search facility for querying, several browse options and an online tool for sequence similarity search, giving researchers easy access to the latest information on genes involved in cervix cancer.

  7. Databases for highway inventories. Proposal for a new model

    Energy Technology Data Exchange (ETDEWEB)

    Perez Casan, J.A.

    2016-07-01

    Database models for highway inventories are based on classical schemes for relational databases: many related tables, in which the database designer establishes, a priori, every detail considered relevant for inventory management. This kind of database presents several problems. First, adapting the model and its applications when new database features appear is difficult. Second, the differing needs of different sets of road-inventory users, such as maintenance management services, road authorities and emergency services, are difficult to fulfil with these schemes. Moreover, this kind of database cannot be adapted to new scenarios, such as other countries and regions that may classify roads or name certain elements differently; the problem is more complex if the language used in these scenarios is not the same as that used in the database design. Finally, technicians need a long time to learn to use the database efficiently. This paper proposes a flexible, multilanguage and multipurpose database model, which gives an effective and simple solution to the aforementioned problems. (Author)
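One common way to obtain the flexibility argued for above is an entity-attribute-value layout with a separate per-language label table, so new element types and new languages need no schema change. A sketch in SQLite (table and column names are illustrative, not taken from the paper):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One row per inventoried element, regardless of its type.
    CREATE TABLE element (id INTEGER PRIMARY KEY, kind TEXT);
    -- Arbitrary attributes, so new features need no schema change.
    CREATE TABLE attribute (element_id INTEGER, name TEXT, value TEXT);
    -- Per-language labels, so the same data serves several languages.
    CREATE TABLE label (term TEXT, lang TEXT, text TEXT);
""")
conn.execute("INSERT INTO element VALUES (1, 'sign')")
conn.execute("INSERT INTO attribute VALUES (1, 'height_m', '2.5')")
conn.executemany("INSERT INTO label VALUES (?, ?, ?)",
                 [("sign", "en", "road sign"), ("sign", "es", "señal")])

def describe(conn, element_id, lang):
    """Render an element in the requested language with all its attributes."""
    kind = conn.execute("SELECT kind FROM element WHERE id=?",
                        (element_id,)).fetchone()[0]
    text = conn.execute("SELECT text FROM label WHERE term=? AND lang=?",
                        (kind, lang)).fetchone()[0]
    attrs = dict(conn.execute(
        "SELECT name, value FROM attribute WHERE element_id=?", (element_id,)))
    return text, attrs
```

Adding a new element type or a new language is then a data operation (new rows), not a schema migration, which is the core of the multipurpose model the paper proposes.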

  8. A blue carbon soil database: Tidal wetland stocks for the US National Greenhouse Gas Inventory

    Science.gov (United States)

    Feagin, R. A.; Eriksson, M.; Hinson, A.; Najjar, R. G.; Kroeger, K. D.; Herrmann, M.; Holmquist, J. R.; Windham-Myers, L.; MacDonald, G. M.; Brown, L. N.; Bianchi, T. S.

    2015-12-01

    Coastal wetlands contain large reservoirs of carbon, and in 2015 the US National Greenhouse Gas Inventory began the work of placing blue carbon within the national regulatory context. The potential value of a wetland carbon stock, in relation to its location, soon could be influential in determining governmental policy and management activities, or in stimulating market-based CO2 sequestration projects. To meet the national need for high-resolution maps, a blue carbon stock database was developed linking National Wetlands Inventory datasets with the USDA Soil Survey Geographic Database. Users of the database can identify the economic potential for carbon conservation or restoration projects within specific estuarine basins, states, wetland types, physical parameters, and land management activities. The database is geared towards both national-level assessments and local-level inquiries. Spatial analysis of the stocks shows high variance within individual estuarine basins, largely dependent on geomorphic position on the landscape, though there are continental-scale trends in the carbon distribution as well. Future plans include linking this database with a sedimentary accretion database to predict carbon flux in US tidal wetlands.

  9. Using relational databases to collect and store discrete-event simulation results

    DEFF Research Database (Denmark)

    Poderys, Justas; Soler, José

    2016-01-01

    …export the results to a data carrier file and then process the results stored in the file using data-processing software. In this work, we propose saving the simulation results directly from a simulation tool to a computer database. We implemented a link between the discrete-event simulation tool and the database and performed a performance evaluation of three different open-source database systems. We show that, with the right choice of database system, simulation results can be collected and exported up to 2.67 times faster, and use 1.78 times less disk space, when compared to using the simulation software's built-in…
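The proposed link, writing each simulation event straight into a database instead of an intermediate results file, can be sketched in a few lines (sqlite3 stands in here for the three server-class systems the paper evaluates; the hook name is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE result
                (run_id INTEGER, sim_time REAL, metric TEXT, value REAL)""")

def record_event(conn, run_id, sim_time, metric, value):
    """Called from the simulation tool's event hook instead of writing a file."""
    conn.execute("INSERT INTO result VALUES (?, ?, ?, ?)",
                 (run_id, sim_time, metric, value))

# Simulated event stream from one run.
for t in range(5):
    record_event(conn, run_id=1, sim_time=t * 0.1, metric="queue_len", value=t)
conn.commit()

# Results are immediately queryable; no export/import step is needed.
rows = conn.execute("SELECT COUNT(*), MAX(value) FROM result").fetchone()
```

Because results land in indexed tables as they are produced, the post-processing step becomes an SQL query rather than a file parse, which is where the reported time and disk-space savings come from.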

  10. PlantCARE, a plant cis-acting regulatory element database

    OpenAIRE

    Rombauts, Stephane; Déhais, Patrice; Van Montagu, Marc; Rouzé, Pierre

    1999-01-01

    PlantCARE is a database of plant cis-acting regulatory elements, enhancers and repressors. Besides the transcription motifs found on a sequence, it also offers a link to the EMBL entry that contains the full gene sequence, as well as a description of the conditions in which a motif becomes functional. The information on these sites is given by matrices, consensus and individual site sequences on particular genes, depending on the available information. PlantCARE is a relational database available…

  11. Simple re-instantiation of small databases using cloud computing.

    Science.gov (United States)

    Tan, Tin Wee; Xie, Chao; De Silva, Mark; Lim, Kuan Siong; Patro, C Pawan K; Lim, Shen Jean; Govindarajan, Kunde Ramamoorthy; Tong, Joo Chuan; Choo, Khar Heng; Ranganathan, Shoba; Khan, Asif M

    2013-01-01

    Small bioinformatics databases, unlike institutionally funded large databases, are vulnerable to discontinuation, and many reported in publications are no longer accessible. This leads to irreproducible scientific work and redundant effort, impeding the pace of scientific progress. We describe a Web-accessible system, available online at http://biodb100.apbionet.org, for archival and future on-demand re-instantiation of small databases within minutes. Depositors can rebuild their databases by downloading a Linux live operating system (http://www.bioslax.com), preinstalled with bioinformatics and UNIX tools. The database and its dependencies can be compressed into an ".lzm" file for deposition. End-users can search for archived databases and activate them on dynamically re-instantiated BioSlax instances, run as virtual machines on the two popular full-virtualization cloud-computing platforms, Xen Hypervisor or vSphere. The system is adaptable to increasing demand for disk storage or computational load and allows database developers to use the re-instantiated databases for integration and development of new databases. Herein, we demonstrate that a relatively inexpensive solution can be implemented for archival of bioinformatics databases and their rapid re-instantiation should the live databases disappear.
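The deposit step, packing a database and its files into one compressed archive that can later be unpacked on a fresh instance, can be approximated with the standard library (the ".lzm" format used by BioSlax is Slax-specific; a plain LZMA-compressed tar is shown here as a stand-in, with invented file names):

```python
import io
import tarfile

def archive_database(files):
    """Pack {filename: bytes} into an LZMA-compressed tar archive in memory."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:xz") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def reinstantiate(blob):
    """Unpack the archive back into {filename: bytes} on a new instance."""
    out = {}
    with tarfile.open(fileobj=io.BytesIO(blob), mode="r:xz") as tar:
        for member in tar.getmembers():
            out[member.name] = tar.extractfile(member).read()
    return out

# Hypothetical deposited database: a schema plus a data file.
db = {"biodb/schema.sql": b"CREATE TABLE gene (id INT);",
      "biodb/data.tsv": b"1\tBRCA1\n"}
restored = reinstantiate(archive_database(db))
```

The round trip (archive, store, re-instantiate byte-identically) is the property the archival system relies on; the real system additionally bundles the operating-system image and tool dependencies.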

  12. The GIOD Project-Globally Interconnected Object Databases

    CERN Document Server

    Bunn, J J; Newman, H B; Wilkinson, R P

    2001-01-01

    The GIOD (Globally Interconnected Object Databases) Project, a joint effort between Caltech and CERN, funded by Hewlett Packard Corporation, has investigated the use of WAN-distributed Object Databases and Mass Storage systems for LHC data. A prototype small-scale LHC data analysis center has been constructed using computing resources at Caltech's Center for Advanced Computing Research (CACR). These resources include a 256-CPU HP Exemplar of ~4600 SPECfp95, a 600-TByte High Performance Storage System (HPSS), and local/wide-area links based on OC3 ATM. Using the Exemplar, a large number of fully simulated CMS events were produced and used to populate an object database with a complete schema for raw, reconstructed and analysis objects. The reconstruction software used for this task was based on early code developed in preparation for the current CMS reconstruction program, ORCA. (6 refs).

  13. Biblio-Link and Pro-Cite: The Searcher's Workstation.

    Science.gov (United States)

    Hoyle, Norman; McNamara, Kathleen

    1987-01-01

    Describes the Biblio-Link and Pro-Cite software packages, which can be used together to create local databases with downloaded records, or to reorganize and repackage downloaded records for client reports. (CLB)

  14. A short-term study of corneal collagen cross-linking with hypo-osmolar riboflavin solution in keratoconic corneas

    Directory of Open Access Journals (Sweden)

    Shao-Feng Gu

    2015-02-01

    Full Text Available AIM: To report the 3mo outcomes of collagen cross-linking (CXL) with a hypo-osmolar riboflavin solution in thin corneas with a thinnest thickness of less than 400 μm after epithelium removal. METHODS: Eight eyes of 6 patients aged 26.2±4.8y were included in the study. All patients underwent CXL using a hypo-osmolar riboflavin solution after de-epithelization. Best-corrected visual acuity, manifest refraction, the thinnest corneal thickness, and endothelial cell density were evaluated before and 3mo after the procedure. RESULTS: The mean thinnest corneal thickness was 408.5±29.0 μm before treatment and decreased to 369.8±24.8 μm after removal of the epithelium. With application of the hypo-osmolar riboflavin solution, the thickness increased to 445.0±26.5 μm before CXL and recovered to 412.5±22.7 μm at 3mo after treatment (P=0.659). Before surgery, the mean K-value at the apex of the keratoconic corneas was 57.6±4.0 diopters, and it decreased slightly (54.7±4.9 diopters) after surgery (P=0.085). Mean best-corrected visual acuity was 0.55±0.23 logarithm of the minimal angle of resolution before surgery and 0.53±0.26 after surgery (P=0.879). The endothelial cell density was 2706.4±201.6 cells/mm2 before treatment and decreased slightly (2641.2±218.2 cells/mm2) at the last follow-up (P=0.002). CONCLUSION: Corneal collagen cross-linking with a hypo-osmolar riboflavin solution in thin corneas appears to be a promising treatment. Further study should be done to evaluate the long-term safety and efficacy of CXL in thin corneas.

  15. Blind links, a big challenge in the linked data idea: Analysis of Persian Subject Headings

    Directory of Open Access Journals (Sweden)

    Atefeh Sharif

    2014-12-01

    Full Text Available In this survey, the linked data concept (exposing, sharing, and connecting pieces of data, information, and knowledge on the Semantic Web) and some potential problems in converting Persian Subject Headings (PSHs) records into linked data are discussed. A data set of 11233 PSH records was searched in three information retrieval systems: the National Library of Iran (NLI) online catalog, the Library of Congress (LC) online catalog, and NOSA books. Correct links between Persian and English subject headings in the 9519 records common to the two catalogs were recorded. The results indicate that the links between Persian and English subjects failed in 20% of records. The maximum error was associated with the anonymous databases (6.7% in the NLI online catalog). It is recommended to preprocess the PSH records before any conversion project; during preprocessing, the potential errors could be identified and corrected.
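The failure-rate figure above comes down to comparing, record by record, the English heading linked in the national catalog against the authoritative heading. A sketch of such a preprocessing pass, with a hypothetical record structure (the survey's actual data layout is not specified):

```python
def audit_links(records):
    """Count records whose Persian-to-English subject link is missing or wrong.

    Each record is a dict with the Persian heading, the English heading
    stored in the national catalog, and the authoritative English heading
    (e.g. from the LC catalog).  Structure is hypothetical.
    """
    failed = [r for r in records
              if not r["linked_english"] or r["linked_english"] != r["lc_english"]]
    rate = len(failed) / len(records)
    return failed, rate

records = [
    {"persian": "...", "linked_english": "Databases", "lc_english": "Databases"},
    {"persian": "...", "linked_english": "",          "lc_english": "Linked data"},
]
failed, rate = audit_links(records)
```

A pass like this surfaces the "blind links" before conversion, which is exactly the preprocessing the authors recommend.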

  16. Performance of popular open source databases for HEP related computing problems

    International Nuclear Information System (INIS)

    Kovalskyi, D; Sfiligoi, I; Wuerthwein, F; Yagil, A

    2014-01-01

    Databases are used in many software components of HEP computing, from monitoring and job scheduling to data storage and processing. It is not always clear at the beginning of a project whether a problem can be handled by a single server, or whether one needs to plan for a multi-server solution. Before a scalable solution is adopted, it helps to know how well it performs in the single-server case, to avoid situations in which a multi-server solution is adopted mostly due to sub-optimal performance per node. This paper presents comparison benchmarks of popular open-source database management systems. As a test application we use a user job monitoring system based on the Glidein workflow management system used in the CMS Collaboration.
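A toy version of such a single-server benchmark, comparing per-row commits against batched inserts for a job-monitoring table, can be written in a few lines (sqlite3 is used purely for illustration; the paper benchmarks server-class systems, and the table layout is invented):

```python
import sqlite3
import time

def bench(batched, n=1000):
    """Insert n job-status rows, either one commit per row or one batch."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE job (id INTEGER, state TEXT)")
    start = time.perf_counter()
    if batched:
        conn.executemany("INSERT INTO job VALUES (?, ?)",
                         [(i, "running") for i in range(n)])
        conn.commit()
    else:
        for i in range(n):
            conn.execute("INSERT INTO job VALUES (?, ?)", (i, "running"))
            conn.commit()  # one transaction per row: the slow path
    elapsed = time.perf_counter() - start
    count = conn.execute("SELECT COUNT(*) FROM job").fetchone()[0]
    return count, elapsed

rows_batched, t_batched = bench(batched=True)
rows_single, t_single = bench(batched=False)
```

Measuring such client-side choices on one node first is the point the paper makes: apparent scaling problems are often per-node tuning problems.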

  17. Database Description - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods (DOI: 10.18908/lsdba.nbdc01194-01-000). Markers and QTLs are curated manually from the published literature; the marker information includes marker sequences and genotyping methods. | LSDB Archive

  18. Database Description - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available SKIP Stemcell Database. Database classification: Human Genes and Diseases; Stemcell. Organism: Homo sapiens (Taxonomy ID: 9606). Database maintenance site: Center for Medical Genetics, School of Medicine, Keio University. Contact address: http://www.skip.med.keio.ac.jp/en/contact/. Web services: not available. User registration: not available.

  19. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Arabidopsis Phenome Database. Database maintenance site: RIKEN BioResource Center (Hiroshi Masuya). Database classification: Plant databases - Arabidopsis thaliana. Organism: Arabidopsis thaliana (Taxonomy ID: 3702). The new Arabidopsis Phenome Database integrates two novel databases: one provides useful materials for experimental research, and the other, the "Database of Curated Plant Phenome", focuses on…

  20. Thermodynamic database for the Co-Pr system

    Directory of Open Access Journals (Sweden)

    S.H. Zhou

    2016-03-01

    Full Text Available In this article, we describe data on (1) compositions for both as-cast and heat-treated specimens, summarized in Table 1; (2) the determined enthalpy of mixing of the liquid phase, listed in Table 2; and (3) a thermodynamic database of the Co-Pr system in TDB format for the research article entitled "Chemical partitioning for the Co-Pr system: First-principles, experiments and energetic calculations to investigate the hard magnetic phase W". Keywords: Thermodynamic database of Co-Pr, Solution calorimeter measurement, Phase diagram Co-Pr

  1. Pulotu: Database of Austronesian Supernatural Beliefs and Practices.

    Science.gov (United States)

    Watts, Joseph; Sheehan, Oliver; Greenhill, Simon J; Gomes-Ng, Stephanie; Atkinson, Quentin D; Bulbulia, Joseph; Gray, Russell D

    2015-01-01

    Scholars have debated naturalistic theories of religion for thousands of years, but only recently have scientists begun to test predictions empirically. Existing databases contain few variables on religion, and are subject to Galton's Problem because they do not sufficiently account for the non-independence of cultures or systematically differentiate the traditional states of cultures from their contemporary states. Here we present Pulotu: the first quantitative cross-cultural database purpose-built to test evolutionary hypotheses of supernatural beliefs and practices. The Pulotu database documents the remarkable diversity of the Austronesian family of cultures, which originated in Taiwan, spread west to Madagascar and east to Easter Island, a region covering over half the world's longitude. The focus of Austronesian beliefs ranges from localised ancestral spirits to powerful creator gods. A wide range of practices also exist, such as headhunting, elaborate tattooing, and the construction of impressive monuments. Pulotu is freely available, currently contains 116 cultures, and has 80 variables describing supernatural beliefs and practices, as well as social and physical environments. One major advantage of Pulotu is that it has separate sections on the traditional states of cultures, the post-contact history of cultures, and the contemporary states of cultures. A second major advantage is that cultures are linked to a language-based family tree, enabling the use of phylogenetic methods, which can be used to address Galton's Problem by accounting for common ancestry, to infer deep prehistory, and to model patterns of trait evolution over time. We illustrate the power of phylogenetic methods by performing an ancestral state reconstruction on the Pulotu variable "headhunting", finding evidence that headhunting was practiced in proto-Austronesian culture. Quantitative cross-cultural databases explicitly linking cultures to a phylogeny have the potential to revolutionise the…
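Ancestral state reconstruction of a binary trait such as "headhunting" can be illustrated with Fitch parsimony on a toy tree. The tree and tip states below are invented for illustration, and Fitch parsimony is a simpler method than the likelihood-based phylogenetic approaches Pulotu's analyses use:

```python
def fitch(tree, tip_states):
    """Bottom-up pass of Fitch parsimony: return each node's state set.

    `tree` maps an internal node to its children; leaves appear only
    in `tip_states`.  Toy data, not Pulotu's real phylogeny.
    """
    sets = {}

    def visit(node):
        if node in tip_states:                      # leaf: observed state
            sets[node] = {tip_states[node]}
        else:                                       # internal node
            child_sets = [visit(c) for c in tree[node]]
            inter = set.intersection(*child_sets)
            # Intersection if the children agree, union otherwise.
            sets[node] = inter if inter else set.union(*child_sets)
        return sets[node]

    visit("root")
    return sets

tree = {"root": ["A", "clade1"], "clade1": ["B", "C"]}
tip_states = {"A": "headhunting", "B": "headhunting", "C": "absent"}
sets = fitch(tree, tip_states)
```

On this toy tree the intermediate clade is ambiguous, but the root resolves to the "headhunting" state, mirroring in miniature the kind of inference behind the proto-Austronesian result.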

  3. Phospho.ELM: a database of phosphorylation sites--update 2011

    DEFF Research Database (Denmark)

    Dinkel, Holger; Chica, Claudia; Via, Allegra

    2011-01-01

    The Phospho.ELM resource (http://phospho.elm.eu.org) is a relational database designed to store in vivo and in vitro phosphorylation data extracted from the scientific literature and phosphoproteomic analyses. The resource has been actively developed for more than 7 years and currently comprises 42… …the sequence alignment used for the score calculation. Finally, special emphasis has been put on linking to external resources such as interaction networks and other databases.

  4. Removal of anionic azo dyes from aqueous solution by functional ionic liquid cross-linked polymer

    International Nuclear Information System (INIS)

    Gao, Hejun; Kan, Taotao; Zhao, Siyuan; Qian, Yixia; Cheng, Xiyuan; Wu, Wenli; Wang, Xiaodong; Zheng, Liqiang

    2013-01-01

    Highlights: • Equilibrium, kinetics and thermodynamics of dye adsorption onto PDVB-IL were investigated. • PDVB-IL has a high adsorption capacity for treating dye solutions. • The higher adsorption capacity is due to the functional groups of PDVB-IL. • The molecular structure of the dyes influences the adsorption capacity. -- Abstract: A novel functional ionic liquid based cross-linked polymer (PDVB-IL) was synthesized from 1-aminoethyl-3-vinylimidazolium chloride and divinylbenzene for use as an adsorbent. The physicochemical properties of PDVB-IL were investigated by Fourier transform infrared spectroscopy, scanning electron microscopy and thermogravimetric analysis. The adsorptive capacity was investigated using the anionic azo dyes orange II, sunset yellow FCF, and amaranth as adsorbates. The maximum adsorption capacity reached 925.09, 734.62, and 547.17 mg/g for orange II, sunset yellow FCF and amaranth at 25 °C, respectively, much better than most other adsorbents reported earlier. The effect of pH was investigated in the range 1-8; a low pH value is found to favor the adsorption of these anionic azo dyes. The adsorption kinetics and isotherms are well fitted by a pseudo-second-order model and a Langmuir model, respectively. The adsorption process is found to be dominated by physisorption. The introduction of functional ionic liquid moieties into a cross-linked poly(divinylbenzene) polymer constitutes a new and efficient kind of adsorbent.
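Both models used for the fits have closed forms, so predicted capacities can be computed directly. A sketch using the reported q_max for orange II; the K_L and k2 values below are invented for illustration, not taken from the paper:

```python
def langmuir(ce, q_max, k_l):
    """Langmuir isotherm: q_e = q_max * K_L * C_e / (1 + K_L * C_e)."""
    return q_max * k_l * ce / (1 + k_l * ce)

def pseudo_second_order(t, q_e, k2):
    """Pseudo-second-order kinetics: q_t = k2 * q_e**2 * t / (1 + k2 * q_e * t)."""
    return k2 * q_e**2 * t / (1 + k2 * q_e * t)

q_max = 925.09   # mg/g, reported maximum capacity for orange II at 25 degC
k_l = 0.05       # L/mg, illustrative value only

# The isotherm rises with equilibrium concentration and saturates toward q_max.
q_low = langmuir(10, q_max, k_l)
q_high = langmuir(1000, q_max, k_l)
```

The saturation of q_e toward q_max at high concentration is the monolayer-coverage behaviour that makes the Langmuir model the natural fit for the reported maximum capacities.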

  5. Access to DNA and protein databases on the Internet.

    Science.gov (United States)

    Harper, R

    1994-02-01

    During the past year, the number of biological databases that can be queried via Internet has dramatically increased. This increase has resulted from the introduction of networking tools, such as Gopher and WAIS, that make it easy for research workers to index databases and make them available for on-line browsing. Biocomputing in the nineties will see the advent of more client/server options for the solution of problems in bioinformatics.

  6. Knowledge databases as instrument for a fast assessment in nuclear emergency management

    Energy Technology Data Exchange (ETDEWEB)

    Raskob, Wolfgang; Moehrle, Stella [Institute for Nuclear and Energy Technologies, Karlsruhe Institute of Technology (KIT), Hermann-von- Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany)

    2014-07-01

    The European project PREPARE (Innovative integrated tools and platforms for radiological emergency preparedness and post-accident response in Europe) aims to close gaps that have been identified in nuclear and radiological preparedness following the first evaluation of the Fukushima disaster. Among others, a work package was established to develop a so-called Analytical Platform exploring the scientific and operational means to improve information collection, information exchange and the evaluation of such types of disasters. As a methodological approach, knowledge databases and case-based reasoning (CBR) will be used. The application of knowledge gained from previous events, or the establishment of scenarios in advance to anticipate possible event developments, is used in many areas, but so far not in nuclear and radiological emergency management and preparedness. In PREPARE, however, knowledge databases and CBR are to be combined by establishing a database that contains historic events and scenarios, their propagation with time, and applied emergency measures, and by using the CBR methodology to find solutions for events that are not part of the database. The objectives are to provide information about consequences and future developments after a nuclear or radiological event, and about emergency measures, which include early, intermediate and late-phase actions. CBR is a methodology for solving new problems by utilizing knowledge of previously experienced problem situations. In order to solve a current problem, similar problems are retrieved from a case base. Their solutions are taken and, if necessary, adapted to the current situation. The suggested solution is revised and, if it is confirmed, stored in the case base. Hence, a CBR system learns over time by storing new cases with their solutions. CBR has many advantages: solutions can be proposed quickly and do not have to be made from scratch, and solutions can be proposed in domains that are not understood completely…
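The retrieve step of the retrieve-adapt-revise-retain loop described above is, at its core, a similarity search over the case base. A minimal sketch; the feature encoding, weights and example cases are invented for illustration, not taken from the PREPARE database:

```python
def similarity(a, b, weights):
    """Weighted overlap between two cases described by categorical features."""
    total = sum(weights.values())
    score = sum(w for f, w in weights.items() if a.get(f) == b.get(f))
    return score / total

def retrieve(case_base, query, weights):
    """Return the stored case most similar to the new event."""
    return max(case_base, key=lambda c: similarity(c["problem"], query, weights))

# Hypothetical feature weights and historic cases.
weights = {"release_type": 3, "weather": 2, "phase": 1}
case_base = [
    {"problem": {"release_type": "noble_gas", "weather": "rain", "phase": "early"},
     "solution": "sheltering"},
    {"problem": {"release_type": "iodine", "weather": "dry", "phase": "early"},
     "solution": "iodine tablets and sheltering"},
]
query = {"release_type": "iodine", "weather": "rain", "phase": "early"}
best = retrieve(case_base, query, weights)
# In full CBR the retrieved solution would now be adapted, revised,
# and, if confirmed, retained in the case base as a new case.
```

Real CBR systems replace the toy overlap measure with domain-specific similarity functions, but the retrieve-then-adapt structure is the same.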

  7. A Database of EPO-Patenting Firms in Denmark

    DEFF Research Database (Denmark)

    Nielsen, Anders Østergaard

    1998-01-01

    The first section gives a brief introduction to the basic stages to be observed by the patent applicant, from idea to granted patent. Section two presents three examples of how patents are registered in the online patent database INPADOC. Section three accounts for the initial analysis of the existing patent stock issued to firms with domicile in Denmark. Sections four and five report the basic characteristics of the EPO-patent sample and the procedures for linking the patent statistics to accounting data at the firm level, and finally they present the basic properties of the resulting database.

  8. NoSQL technologies for the CMS Conditions Database

    CERN Document Server

    Sipos, Roland

    2015-01-01

    With the restart of the LHC in 2015, the growth of the CMS Conditions dataset will continue; the need for consistent and highly available access to the Conditions is therefore a strong motivation to revisit different aspects of the current data storage solutions. We present a study of alternative data storage backends for the Conditions Databases, evaluating some of the most popular NoSQL databases to support a key-value representation of the CMS Conditions. An important detail about the Conditions is that the payloads are stored as BLOBs, and they can reach sizes that may require special treatment (splitting) in these NoSQL databases. As big binary objects may be a bottleneck in several database systems, and also to give an accurate baseline, a testing-framework extension was implemented to measure the characteristics of the handling of arbitrary binary data in these databases. Based on the evaluation, prototypes of a document store, using a column-oriented and plain key-value store, are deployed. An adaptation l…
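The splitting mentioned for oversized payloads can be sketched generically: a BLOB above the store's comfortable value size is broken into chunks stored under derived keys, with a small manifest entry for reassembly. The chunk size, key scheme and payload below are illustrative, not CMS's actual scheme:

```python
CHUNK = 4  # bytes, kept tiny for the example; real stores would use ~1 MB

def put_blob(store, key, blob, chunk=CHUNK):
    """Store a payload as numbered chunks plus a small manifest entry."""
    n = (len(blob) + chunk - 1) // chunk
    for i in range(n):
        store[f"{key}:{i}"] = blob[i * chunk:(i + 1) * chunk]
    store[key] = n  # manifest: how many chunks to reassemble

def get_blob(store, key):
    """Reassemble the original payload from its chunks."""
    return b"".join(store[f"{key}:{i}"] for i in range(store[key]))

store = {}  # stand-in for a key-value database
payload = b"ECAL pedestal payload"  # hypothetical conditions BLOB
put_blob(store, "ecal_pedestals_v3", payload)
```

Keeping each value small sidesteps the large-object bottleneck the testing-framework extension was built to measure, at the cost of one extra read for the manifest.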

  9. PAMDB: a comprehensive Pseudomonas aeruginosa metabolome database.

    Science.gov (United States)

    Huang, Weiliang; Brewer, Luke K; Jones, Jace W; Nguyen, Angela T; Marcu, Ana; Wishart, David S; Oglesby-Sherrouse, Amanda G; Kane, Maureen A; Wilks, Angela

    2018-01-04

    The Pseudomonas aeruginosa Metabolome Database (PAMDB, http://pseudomonas.umaryland.edu) is a searchable, richly annotated metabolite database specific to P. aeruginosa. P. aeruginosa is a soil organism and significant opportunistic pathogen that adapts to its environment through a versatile energy metabolism network. Furthermore, P. aeruginosa is a model organism for the study of biofilm formation, quorum sensing, and bioremediation processes, each of which is dependent on unique pathways and metabolites. The PAMDB is modelled on the Escherichia coli (ECMDB), yeast (YMDB) and human (HMDB) metabolome databases and contains >4370 metabolites and 938 pathways with links to over 1260 genes and proteins. The database information was compiled from electronic databases, journal articles and mass spectrometry (MS) metabolomic data obtained in our laboratories. For each metabolite entered, we provide detailed compound descriptions, names and synonyms, structural and physiochemical information, nuclear magnetic resonance (NMR) and MS spectra, enzyme and pathway information, as well as gene and protein sequences. The database allows extensive searching via chemical names, structure and molecular weight, together with gene, protein and pathway relationships. The PAMDB and its future iterations will provide a valuable resource to biologists, natural product chemists and clinicians in identifying active compounds, potential biomarkers and clinical diagnostics. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  10. Research and Implementation of Distributed Database HBase Monitoring System

    Directory of Open Access Journals (Sweden)

    Guo Lisi

    2017-01-01

    Full Text Available With the arrival of the big data age, the distributed database HBase has become an important tool for storing massive data. The normal operation of the HBase database is an important guarantee of data storage security, so designing a reasonable HBase monitoring system is of great practical significance. In this article, we introduce a solution, comprising performance monitoring and fault alarm function modules, that meets an operator's demand for HBase database monitoring in their actual production projects. We designed a monitoring system which consists of a flexible and extensible monitoring agent, a monitoring server based on the SSM architecture, and a concise monitoring display layer. Moreover, to deal with the problem that pages rendered too slowly in actual operation, we present a solution: reducing the number of SQL queries. It has been proved that reducing SQL queries can effectively improve system performance and user experience. The system works well in monitoring the status of the HBase database, flexibly extending the monitoring indices, and issuing a warning when a fault occurs, so that it is able to improve the working efficiency of the administrator and ensure the smooth operation of the project.
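The agent/alarm split described above reduces, at its core, to periodically sampling metrics and raising an alert whenever a threshold is crossed. A minimal sketch of one polling round; the metric names and thresholds are invented, not taken from the paper's system:

```python
def check_metrics(samples, thresholds):
    """Compare one round of collected metrics against alarm thresholds.

    Returns (metric, observed, threshold) for every metric over its limit,
    which the monitoring server would turn into a fault alarm.
    """
    return [(name, value, thresholds[name])
            for name, value in samples.items()
            if name in thresholds and value > thresholds[name]]

# Hypothetical thresholds configured on the monitoring server.
thresholds = {"regionserver_heap_pct": 90, "request_latency_ms": 500}

# One round of samples collected by the agent.
samples = {"regionserver_heap_pct": 94, "request_latency_ms": 120}
alerts = check_metrics(samples, thresholds)
```

Keeping the threshold table as data rather than code is what makes the monitoring indices "flexibly extensible": adding an index is a configuration change, not an agent change.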

  11. Plant DB link - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Plant DB link - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods | LSDB Archive

  12. Development a GIS Snowstorm Database

    Science.gov (United States)

    Squires, M. F.

    2010-12-01

    This paper describes the development of a GIS Snowstorm Database (GSDB) at NOAA’s National Climatic Data Center. The snowstorm database is a collection of GIS layers and tabular information for 471 snowstorms between 1900 and 2010. Each snowstorm has undergone automated and manual quality control. The beginning and ending date of each snowstorm is specified. The original purpose of this data was to serve as input for NCDC’s new Regional Snowfall Impact Scale (ReSIS). However, this data is being preserved and used to investigate the impacts of snowstorms on society. GSDB is used to summarize the impact of snowstorms on transportation (interstates) and various classes of facilities (roads, schools, hospitals, etc.). GSDB can also be linked to other sources of impacts such as insurance loss information and Storm Data. Thus the snowstorm database is suited for many different types of users including the general public, decision makers, and researchers. This paper summarizes quality control issues associated with using snowfall data, methods used to identify the starting and ending dates of a storm, and examples of the tables that combine snowfall and societal data.

  13. CRAVE: a database, middleware and visualization system for phenotype ontologies.

    Science.gov (United States)

    Gkoutos, Georgios V; Green, Eain C J; Greenaway, Simon; Blake, Andrew; Mallon, Ann-Marie; Hancock, John M

    2005-04-01

    A major challenge in modern biology is to link genome sequence information to organismal function. In many organisms this is being done by characterizing phenotypes resulting from mutations. Efficiently expressing phenotypic information requires combinatorial use of ontologies. However, tools are not currently available to visualize combinations of ontologies. Here we describe CRAVE (Concept Relation Assay Value Explorer), a package allowing storage, active updating and visualization of multiple ontologies. CRAVE is a web-accessible Java application that accesses an underlying MySQL database of ontologies via a Java persistent middleware layer (Chameleon). This maps the database tables into discrete Java classes and creates memory-resident, interlinked objects corresponding to the ontology data. These Java objects are accessed via calls through the middleware's application programming interface. CRAVE allows simultaneous display and linking of multiple ontologies and searching using Boolean and advanced searches.
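    A persistence layer of the kind described (table rows mapped to interlinked, memory-resident objects) can be sketched minimally; this is an illustrative Python sketch of the idea, not the Chameleon API:

```python
# Map ontology-term rows to in-memory objects and wire parent/child links,
# as a persistence middleware like the one described would do.

class Term:
    def __init__(self, term_id, name):
        self.term_id, self.name = term_id, name
        self.children = []   # links to other in-memory Term objects

def load_ontology(rows):
    """rows: (id, name, parent_id or None) tuples, as fetched from a table."""
    terms = {tid: Term(tid, name) for tid, name, _ in rows}
    for tid, _, parent_id in rows:
        if parent_id is not None:
            terms[parent_id].children.append(terms[tid])
    return terms

# Hypothetical three-term ontology fragment
rows = [(1, "phenotype", None), (2, "behaviour", 1), (3, "grooming", 2)]
terms = load_ontology(rows)
print(terms[1].children[0].children[0].name)  # → grooming
```

    Once the objects are resident, an application can traverse and display linked ontologies without issuing further queries.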

  14. WGDB: Wood Gene Database with search interface.

    Science.gov (United States)

    Goyal, Neha; Ginwal, H S

    2014-01-01

    Wood quality can be defined in terms of a particular end use involving several traits. Over the last fifteen years researchers have assessed wood quality traits in forest trees. In loblolly pine, wood quality has been categorized into cell wall biochemical traits and fibre properties, including microfibril angle, density and stiffness [1]. A user-friendly, open-access database named Wood Gene Database (WGDB) has been developed to describe wood genes along with protein information and published research articles. It contains 720 wood genes from species including Pinus and Deodar and from fast-growing trees such as Poplar and Eucalyptus. WGDB is designed to encompass the majority of publicly accessible genes coding for cellulose, hemicellulose and lignin in tree species that are responsive to wood formation and quality. It is an interactive platform for collecting, managing and searching specific wood genes; it also enables data mining related to genomic information, specifically in Arabidopsis thaliana, Populus trichocarpa, Eucalyptus grandis, Pinus taeda, Pinus radiata, Cedrus deodara and Cedrus atlantica. For user convenience, the database is cross-linked with the public databases NCBI, EMBL & Dendrome and with the Google search engine to make it more informative, and it provides the bioinformatics tools BLAST and COBALT. The database is freely available at www.wgdb.in.

  15. ARCPHdb: A comprehensive protein database for SF1 and SF2 helicase from archaea.

    Science.gov (United States)

    Moukhtar, Mirna; Chaar, Wafi; Abdel-Razzak, Ziad; Khalil, Mohamad; Taha, Samir; Chamieh, Hala

    2017-01-01

    Superfamily 1 and Superfamily 2 helicases, two of the largest helicase protein families, play vital roles in many biological processes including replication, transcription and translation. The study of helicase proteins in model archaeal microorganisms has largely contributed to the understanding of their function, architecture and assembly. Based on a large phylogenomics approach, we have identified and classified all SF1 and SF2 protein families in ninety-five sequenced archaeal genomes. Here we developed an online webserver linked to a specialized protein database named ARCPHdb to provide access to the SF1 and SF2 helicase families from archaea. ARCPHdb was implemented using the MySQL relational database. Web interfaces were developed using NetBeans. Data were stored according to UniProt accession numbers, NCBI RefSeq IDs, PDB IDs and Entrez database identifiers. A user-friendly interactive web interface has been developed to browse, search and download archaeal helicase protein sequences, their available 3D structure models, and related documentation available in the literature provided by ARCPHdb. The database provides direct links to matching external databases. ARCPHdb is the first online database to compile all protein information on SF1 and SF2 helicases from archaea in one platform. This database provides an essential information resource for all researchers interested in the field. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Dynamic graph system for a semantic database

    Science.gov (United States)

    Mizell, David

    2015-01-27

    A method and system in a computer system for dynamically providing a graphical representation of a data store of entries via a matrix interface is disclosed. A dynamic graph system provides a matrix interface that exposes to an application program a graphical representation of data stored in a data store such as a semantic database storing triples. To the application program, the matrix interface represents the graph as a sparse adjacency matrix that is stored in compressed form. Each entry of the data store is considered to represent a link between nodes of the graph. Each entry has a first field and a second field identifying the nodes connected by the link and a third field with a value for the link that connects the identified nodes. The first, second, and third fields represent the rows, columns, and elements of the adjacency matrix.
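    The encoding described above can be sketched directly: each triple becomes one nonzero entry of a sparse adjacency matrix held in compressed sparse row (CSR) form. The function name and tuple layout below are assumptions for illustration, not from the patent text:

```python
# row = first field (source node), column = second field (target node),
# element value = third field (the link's value, e.g. a predicate).

def triples_to_csr(triples, n_nodes):
    """Build CSR arrays (indptr, indices, values) from (row, col, val) triples."""
    by_row = [[] for _ in range(n_nodes)]
    for row, col, val in triples:
        by_row[row].append((col, val))
    indptr, indices, values = [0], [], []
    for entries in by_row:
        for col, val in sorted(entries):   # keep columns ordered within a row
            indices.append(col)
            values.append(val)
        indptr.append(len(indices))        # row boundary
    return indptr, indices, values

# Triples: node 0 --"knows"--> node 2, node 1 --"cites"--> node 0
indptr, indices, values = triples_to_csr([(0, 2, "knows"), (1, 0, "cites")], 3)
print(indptr)   # → [0, 1, 2, 2]
print(indices)  # → [2, 0]
print(values)   # → ['knows', 'cites']
```

    CSR storage keeps only the nonzero links, which is why a very sparse semantic graph fits in memory as a compressed matrix.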

  17. Knowledge Discovery in Biological Databases for Revealing Candidate Genes Linked to Complex Phenotypes.

    Science.gov (United States)

    Hassani-Pak, Keywan; Rawlings, Christopher

    2017-06-13

    Genetics and "omics" studies designed to uncover genotype to phenotype relationships often identify large numbers of potential candidate genes, among which the causal genes are hidden. Scientists generally lack the time and technical expertise to review all relevant information available from the literature, from key model species and from a potentially wide range of related biological databases in a variety of data formats with variable quality and coverage. Computational tools are needed for the integration and evaluation of heterogeneous information in order to prioritise candidate genes and components of interaction networks that, if perturbed through potential interventions, have a positive impact on the biological outcome in the whole organism without producing negative side effects. Here we review several bioinformatics tools and databases that play an important role in biological knowledge discovery and candidate gene prioritization. We conclude with several key challenges that need to be addressed in order to facilitate biological knowledge discovery in the future.

  18. GiSAO.db: a database for ageing research

    Directory of Open Access Journals (Sweden)

    Grillari Johannes

    2011-05-01

    Full Text Available Abstract Background Age-related gene expression patterns of Homo sapiens as well as of model organisms such as Mus musculus, Saccharomyces cerevisiae, Caenorhabditis elegans and Drosophila melanogaster are a basis for understanding the genetic mechanisms of ageing. For an effective analysis and interpretation of expression profiles it is necessary to store and manage huge amounts of data in an organized way, so that these data can be accessed and processed easily. Description GiSAO.db (Genes involved in senescence, apoptosis and oxidative stress database is a web-based database system for storing and retrieving ageing-related experimental data. Expression data of genes and miRNAs, annotation data like gene identifiers and GO terms, orthologs data and data of follow-up experiments are stored in the database. A user-friendly web application provides access to the stored data. KEGG pathways were incorporated and links to external databases augment the information in GiSAO.db. Search functions facilitate retrieval of data which can also be exported for further processing. Conclusions We have developed a centralized database that is very well suited for the management of data for ageing research. The database can be accessed at https://gisao.genome.tugraz.at and all the stored data can be viewed with a guest account.

  19. Development of database systems for safety of repositories for disposal of radioactive wastes

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yeong Hun; Han, Jeong Sang; Shin, Hyeon Jun; Ham, Sang Won; Kim, Hye Seong [Yonsei Univ., Seoul (Korea, Republic of)

    1999-03-15

    In this study, a GSIS is developed to maximize the effectiveness of the database system. For this purpose, spatial relations are established among data from various fields in the database, which was developed for the site selection and management of a repository for radioactive waste disposal. By constructing an integration system that can link attribute and spatial data, it is possible to evaluate the safety of a repository effectively and economically. The suitability of integrating the database and GSIS is examined by constructing the database in a test district whose site characteristics are similar to those of a repository for radioactive waste disposal.

  20. Training Database Technology in DBMS MS Access

    Directory of Open Access Journals (Sweden)

    Nataliya Evgenievna Surkova

    2015-05-01

    Full Text Available The article describes the methodological issues of teaching relational database technology and relational database management systems. Microsoft Access is used as the primer DBMS. This methodology allows the formation of general cultural competences, such as mastery of the main methods, ways and means of producing, storing and processing information, and of computer skills as a means of managing information. It also forms professional competences, such as the ability to collect, analyze and process the data necessary for solving professional tasks, and the ability to use modern information technology for analytical and research tasks.

  1. Potential translational targets revealed by linking mouse grooming behavioral phenotypes to gene expression using public databases.

    Science.gov (United States)

    Roth, Andrew; Kyzar, Evan J; Cachat, Jonathan; Stewart, Adam Michael; Green, Jeremy; Gaikwad, Siddharth; O'Leary, Timothy P; Tabakoff, Boris; Brown, Richard E; Kalueff, Allan V

    2013-01-10

    Rodent self-grooming is an important, evolutionarily conserved behavior, highly sensitive to pharmacological and genetic manipulations. Mice with aberrant grooming phenotypes are currently used to model various human disorders. Therefore, it is critical to understand the biology of grooming behavior, and to assess its translational validity to humans. The present in-silico study used publicly available gene expression and behavioral data obtained from several inbred mouse strains in the open-field, light-dark box, elevated plus- and elevated zero-maze tests. As grooming duration differed between strains, our analysis revealed several candidate genes with significant correlations between gene expression in the brain and grooming duration. The Allen Brain Atlas, STRING, GoMiner and Mouse Genome Informatics databases were used to functionally map and analyze these candidate mouse genes against their human orthologs, assessing the strain ranking of their expression and the regional distribution of expression in the mouse brain. This allowed us to identify an interconnected network of candidate genes (which have expression levels that correlate with grooming behavior), display altered patterns of expression in key brain areas related to grooming, and underlie important functions in the brain. Collectively, our results demonstrate the utility of large-scale, high-throughput data-mining and in-silico modeling for linking genomic and behavioral data, as well as their potential to identify novel neural targets for complex neurobehavioral phenotypes, including grooming. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. Corneal Collagen Cross-Linking with Hypoosmolar Riboflavin Solution in Keratoconic Corneas

    Directory of Open Access Journals (Sweden)

    Shaofeng Gu

    2014-01-01

    Full Text Available Purpose. To report the 12-month outcomes of corneal collagen cross-linking (CXL) with a hypoosmolar riboflavin solution and ultraviolet-A (UVA) irradiation in thin corneas. Methods. Eight eyes underwent CXL using a hypoosmolar riboflavin solution after epithelial removal. The corrected distance visual acuity (CDVA), manifest refraction, the mean thinnest corneal thickness (MTCT), and the endothelial cell density (ECD) were evaluated before and 6 and 12 months after CXL. Results. The MTCT was 413.9 ± 12.4 μm before treatment and was reduced to 381.1 ± 7.3 μm after the removal of the epithelium. After CXL, the thickness decreased to 410.3 ± 14.5 μm at the last follow-up. Before treatment, the mean K-value of the apex of the keratoconic corneas was 58.7 ± 3.5 diopters and slightly decreased (57.7 ± 4.9 diopters) at 12 months. The mean CDVA was 0.54 ± 0.23 logarithm of the minimal angle of resolution (logMAR) before treatment and improved to 0.51 ± 0.21 logMAR at the last follow-up. The ECD was 2731.4 ± 191.8 cells/mm2 before treatment and 2733.4 ± 222.6 cells/mm2 at 12 months after treatment. Conclusions. CXL with a hypoosmolar riboflavin solution in thin corneas seems to be a promising method for keratoconic eyes with a mean thinnest corneal thickness of less than 400 μm without epithelium.

  3. Test bed for real-time image acquisition and processing systems based on FlexRIO, CameraLink, and EPICS

    International Nuclear Information System (INIS)

    Barrera, E.; Ruiz, M.; Sanz, D.; Vega, J.; Castro, R.; Juárez, E.; Salvador, R.

    2014-01-01

    Highlights: • The test bed allows for the validation of real-time image processing techniques. • Offers FPGA (FlexRIO) image processing that does not require CPU intervention. • Is fully compatible with the architecture of the ITER Fast Controllers. • Provides flexibility and easy integration in distributed experiments based on EPICS. - Abstract: Image diagnostics are becoming standard in nuclear fusion. At present, images are typically analyzed off-line. However, real-time processing is occasionally required (for instance, for hot-spot detection or pattern recognition tasks), which will be the objective for the next generation of fusion devices. In this paper, a test bed for image generation, acquisition, and real-time processing is presented. The proposed solution is built using a Camera Link simulator, a Camera Link frame-grabber, and a PXIe chassis, and offers a software interface with EPICS. The Camera Link simulator (PCIe card PCIe8 DVa C-Link from Engineering Design Team) generates simulated image data (for example, from video-movies stored in fusion databases) using a Camera Link interface to mimic the frame sequences produced with diagnostic cameras. The Camera Link frame-grabber (FlexRIO Solution from National Instruments) includes a field programmable gate array (FPGA) for image acquisition using a Camera Link interface; the FPGA allows for the codification of ad-hoc image processing algorithms using LabVIEW/FPGA software. The frame grabber is integrated in a PXIe chassis with a system architecture similar to that of the ITER Fast Controllers, and the frame grabber provides a software interface with EPICS to program all of its functionalities, capture the images, and perform the required image processing. The use of these four elements allows for the implementation of a test bed system that permits the development and validation of real-time image processing techniques in an architecture that is fully compatible with that of the ITER Fast Controllers.

  4. Interactive Multi-Instrument Database of Solar Flares

    Science.gov (United States)

    Ranjan, Shubha S.; Spaulding, Ryan; Deardorff, Donald G.

    2018-01-01

    The fundamental motivation of the project is that the scientific output of solar research can be greatly enhanced by better exploitation of the existing solar/heliosphere space-data products jointly with ground-based observations. Our primary focus is on developing a specific innovative methodology based on recent advances in "big data" intelligent databases applied to the growing amount of high-spatial and multi-wavelength resolution, high-cadence data from NASA's missions and supporting ground-based observatories. Our flare database is not simply a manually searchable time-based catalog of events or list of web links pointing to data. It is a preprocessed metadata repository enabling fast search and automatic identification of all recorded flares sharing a specifiable set of characteristics, features, and parameters. The result is a new and unique database of solar flares and data search and classification tools for the Heliophysics community, enabling multi-instrument/multi-wavelength investigations of flare physics and supporting further development of flare-prediction methodologies.

  5. Internet Databases of the Properties, Enzymatic Reactions, and Metabolism of Small Molecules—Search Options and Applications in Food Science

    Directory of Open Access Journals (Sweden)

    Piotr Minkiewicz

    2016-12-01

    Full Text Available Internet databases of small molecules, their enzymatic reactions, and metabolism have emerged as useful tools in food science. Database searching is also introduced as part of chemistry or enzymology courses for food technology students. Such resources support the search for information about single compounds and facilitate secondary analyses of large datasets. Information can be retrieved from databases by searching for the compound name or for its structure, which may be annotated with the help of chemical codes or drawn using molecule-editing software. Data mining options may be enhanced by navigating through a network of links and cross-links between databases. The exemplary databases reviewed in this article belong to two classes: tools concerning small molecules (including general and specialized databases annotating food components) and tools annotating enzymes and metabolism. Some problems associated with database application are also discussed. Data summarized in computer databases may be used for calculation of the daily intake of bioactive compounds, prediction of the metabolism of food components and their biological activity, as well as for prediction of interactions between food components and drugs.
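    The daily-intake calculation mentioned above reduces to a weighted sum over consumed foods; the compounds, concentrations and portion sizes below are illustrative placeholders, not values drawn from any of the reviewed databases:

```python
# Daily intake of one bioactive compound = sum over foods of
# (concentration in the food, mg per 100 g) x (amount eaten, g) / 100.

def daily_intake(concentrations_mg_per_100g, portions_g):
    """Return total daily intake in mg for one compound."""
    return sum(concentrations_mg_per_100g[food] * portions_g[food] / 100.0
               for food in portions_g)

conc = {"apple": 4.0, "tea": 30.0}      # mg of compound per 100 g (illustrative)
eaten = {"apple": 150.0, "tea": 200.0}  # g consumed per day (illustrative)
print(daily_intake(conc, eaten))        # → 66.0 (mg/day: 4*1.5 + 30*2)
```

    With concentrations pulled from a food-composition database and portions from a dietary survey, the same sum scales to whole diets.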

  6. The Coral Trait Database, a curated database of trait information for coral species from the global oceans

    Science.gov (United States)

    Madin, Joshua S.; Anderson, Kristen D.; Andreasen, Magnus Heide; Bridge, Tom C. L.; Cairns, Stephen D.; Connolly, Sean R.; Darling, Emily S.; Diaz, Marcela; Falster, Daniel S.; Franklin, Erik C.; Gates, Ruth D.; Hoogenboom, Mia O.; Huang, Danwei; Keith, Sally A.; Kosnik, Matthew A.; Kuo, Chao-Yang; Lough, Janice M.; Lovelock, Catherine E.; Luiz, Osmar; Martinelli, Julieta; Mizerek, Toni; Pandolfi, John M.; Pochon, Xavier; Pratchett, Morgan S.; Putnam, Hollie M.; Roberts, T. Edward; Stat, Michael; Wallace, Carden C.; Widman, Elizabeth; Baird, Andrew H.

    2016-03-01

    Trait-based approaches advance ecological and evolutionary research because traits provide a strong link to an organism’s function and fitness. Trait-based research might lead to a deeper understanding of the functions of, and services provided by, ecosystems, thereby improving management, which is vital in the current era of rapid environmental change. Coral reef scientists have long collected trait data for corals; however, these are difficult to access and often under-utilized in addressing large-scale questions. We present the Coral Trait Database initiative that aims to bring together physiological, morphological, ecological, phylogenetic and biogeographic trait information into a single repository. The database houses species- and individual-level data from published field and experimental studies alongside contextual data that provide important framing for analyses. In this data descriptor, we release data for 56 traits for 1547 species, and present a collaborative platform on which other trait data are being actively federated. Our overall goal is for the Coral Trait Database to become an open-source, community-led data clearinghouse that accelerates coral reef research.

  7. The Coral Trait Database, a curated database of trait information for coral species from the global oceans.

    Science.gov (United States)

    Madin, Joshua S; Anderson, Kristen D; Andreasen, Magnus Heide; Bridge, Tom C L; Cairns, Stephen D; Connolly, Sean R; Darling, Emily S; Diaz, Marcela; Falster, Daniel S; Franklin, Erik C; Gates, Ruth D; Harmer, Aaron; Hoogenboom, Mia O; Huang, Danwei; Keith, Sally A; Kosnik, Matthew A; Kuo, Chao-Yang; Lough, Janice M; Lovelock, Catherine E; Luiz, Osmar; Martinelli, Julieta; Mizerek, Toni; Pandolfi, John M; Pochon, Xavier; Pratchett, Morgan S; Putnam, Hollie M; Roberts, T Edward; Stat, Michael; Wallace, Carden C; Widman, Elizabeth; Baird, Andrew H

    2016-03-29

    Trait-based approaches advance ecological and evolutionary research because traits provide a strong link to an organism's function and fitness. Trait-based research might lead to a deeper understanding of the functions of, and services provided by, ecosystems, thereby improving management, which is vital in the current era of rapid environmental change. Coral reef scientists have long collected trait data for corals; however, these are difficult to access and often under-utilized in addressing large-scale questions. We present the Coral Trait Database initiative that aims to bring together physiological, morphological, ecological, phylogenetic and biogeographic trait information into a single repository. The database houses species- and individual-level data from published field and experimental studies alongside contextual data that provide important framing for analyses. In this data descriptor, we release data for 56 traits for 1547 species, and present a collaborative platform on which other trait data are being actively federated. Our overall goal is for the Coral Trait Database to become an open-source, community-led data clearinghouse that accelerates coral reef research.

  8. Ei Compendex: A new database makes life easier for engineers

    CERN Multimedia

    2001-01-01

    The Library is expanding its range of databases. The latest arrival, called Ei Compendex, is the world's most comprehensive engineering database, which indexes engineering literature published throughout the world. It also offers bibliographic entries for articles published in scientific journals and for conference proceedings and covers an extensive range of subjects from mechanical engineering to the environment, materials science, solid state physics and superconductivity. Moreover, it is the most relevant quality control and engineering management database. Ei Compendex contains over 4.6 million references from over 2600 journals, conference proceedings and technical reports dating from 1966 to the present. Every year, 220,000 new abstracts are added to the database which is also updated on a weekly basis. In the case of articles published in recent years, it provides an electronic link to the full texts of all major publishers. The database also contains the full texts of Elsevier periodicals (over 250...

  9. Molecular and Clinical Studies of X-linked Deafness Among Pakistani Families

    OpenAIRE

    Waryah, Ali M.; Ahmed, Zubair M.; Choo, Daniel I.; Sisk, Robert A.; Binder, Munir A.; Shahzad, Mohsin; Khan, Shaheen N.; Friedman, Thomas B.; Riazuddin, Sheikh; Riazuddin, Saima

    2011-01-01

    There are 68 sex-linked syndromes that include hearing loss as one feature and five sex-linked nonsyndromic deafness loci listed in the OMIM database. The possibility of additional such sex-linked loci was explored by ascertaining three unrelated Pakistani families (PKDF536, PKDF1132, PKDF740) segregating X-linked recessive deafness. Sequence analysis of POU3F4 (DFN3) in affected members of families PKDF536 and PKDF1132 revealed two novel nonsense mutations, p.Q136X and p.W114X, respectively....

  10. Aerodynamic Analyses and Database Development for Ares I Vehicle First Stage Separation

    Science.gov (United States)

    Pamadi, Bandu N.; Pei, Jing; Pinier, Jeremy T.; Holland, Scott D.; Covell, Peter F.; Klopfer, Goetz, H.

    2012-01-01

    This paper presents the aerodynamic analysis and database development for the first stage separation of the Ares I A106 Crew Launch Vehicle configuration. Separate databases were created for the first stage and upper stage. Each database consists of three components: isolated or free-stream coefficients, power-off proximity increments, and power-on proximity increments. The power-on database consists of three parts, all plumes firing at nominal conditions, the one booster deceleration motor out condition, and the one ullage settling motor out condition. The isolated and power-off incremental databases were developed using wind tunnel test data. The power-on proximity increments were developed using CFD solutions.
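    The three-component build-up described above amounts to a simple sum per flight condition: free-stream value plus power-off and power-on proximity increments. The numbers and names below are illustrative only, not values from the Ares I databases:

```python
# Sketch of the database build-up: combine the three stored components
# into one aerodynamic coefficient for a given Mach/attitude/separation state.

def total_coefficient(isolated, power_off_increment, power_on_increment):
    """Isolated (free-stream) coefficient plus the two proximity increments."""
    return isolated + power_off_increment + power_on_increment

# e.g. a normal-force coefficient at one separation condition (made-up values)
cn = total_coefficient(isolated=0.42,
                       power_off_increment=-0.03,
                       power_on_increment=0.05)
print(round(cn, 4))  # → 0.44
```

    Splitting the database this way lets the wind-tunnel-derived parts (isolated, power-off) be updated independently of the CFD-derived power-on increments.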

  11. Human health risk assessment database, "the NHSRC toxicity value database": supporting the risk assessment process at US EPA's National Homeland Security Research Center.

    Science.gov (United States)

    Moudgal, Chandrika J; Garrahan, Kevin; Brady-Roberts, Eletha; Gavrelis, Naida; Arbogast, Michelle; Dun, Sarah

    2008-11-15

    The toxicity value database of the United States Environmental Protection Agency's (EPA) National Homeland Security Research Center has been in development since 2004. The toxicity value database includes a compilation of agent property, toxicity, dose-response, and health effects data for 96 agents: 84 chemical and radiological agents and 12 biotoxins. The database is populated with multiple toxicity benchmark values and agent property information from secondary sources, with web links to the secondary sources, where available. A selected set of primary literature citations and associated dose-response data are also included. The toxicity value database offers a powerful means to quickly and efficiently gather pertinent toxicity and dose-response data for a number of agents that are of concern to the nation's security. This database, in conjunction with other tools, will play an important role in understanding human health risks, and will provide a means for risk assessors and managers to make quick and informed decisions on the potential health risks and determine appropriate responses (e.g., cleanup) to agent release. A final, stand-alone MS Access working version of the toxicity value database was completed in November 2007.

  12. Human health risk assessment database, 'the NHSRC toxicity value database': Supporting the risk assessment process at US EPA's National Homeland Security Research Center

    International Nuclear Information System (INIS)

    Moudgal, Chandrika J.; Garrahan, Kevin; Brady-Roberts, Eletha; Gavrelis, Naida; Arbogast, Michelle; Dun, Sarah

    2008-01-01

    The toxicity value database of the United States Environmental Protection Agency's (EPA) National Homeland Security Research Center has been in development since 2004. The toxicity value database includes a compilation of agent property, toxicity, dose-response, and health effects data for 96 agents: 84 chemical and radiological agents and 12 biotoxins. The database is populated with multiple toxicity benchmark values and agent property information from secondary sources, with web links to the secondary sources, where available. A selected set of primary literature citations and associated dose-response data are also included. The toxicity value database offers a powerful means to quickly and efficiently gather pertinent toxicity and dose-response data for a number of agents that are of concern to the nation's security. This database, in conjunction with other tools, will play an important role in understanding human health risks, and will provide a means for risk assessors and managers to make quick and informed decisions on the potential health risks and determine appropriate responses (e.g., cleanup) to agent release. A final, stand-alone MS Access working version of the toxicity value database was completed in November 2007.

  13. HIM-herbal ingredients in-vivo metabolism database.

    Science.gov (United States)

    Kang, Hong; Tang, Kailin; Liu, Qi; Sun, Yi; Huang, Qi; Zhu, Ruixin; Gao, Jun; Zhang, Duanfeng; Huang, Chenggang; Cao, Zhiwei

    2013-05-31

    Herbal medicine has long been viewed as a valuable asset for potential new drug discovery, and herbal ingredients' metabolites, especially in vivo metabolites, have often been found to have better pharmacological, pharmacokinetic and even safety profiles than their parent compounds. However, this herbal metabolite information is still scattered and waiting to be collected. The HIM database has manually collected the most comprehensive in-vivo metabolism information available for herbal active ingredients, as well as their corresponding bioactivity, organ and/or tissue distribution, toxicity, ADME and clinical research profiles. Currently HIM contains 361 ingredients and 1104 corresponding in-vivo metabolites from 673 reputable herbs. Tools for structural similarity, substructure search and Lipinski's Rule of Five are also provided. Various links are made to PubChem, PubMed, TCM-ID (Traditional Chinese Medicine Information database) and HIT (Herbal ingredients' targets database). A curated database, HIM, is thus set up for the in vivo metabolite information of the active ingredients of Chinese herbs, together with their corresponding bioactivity, toxicity and ADME profiles. HIM is freely accessible to academic researchers at http://www.bioinformatics.org.cn/.
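    The Lipinski Rule-of-Five tool mentioned above can be sketched as a simple descriptor filter. The thresholds are the standard Lipinski criteria; the function name and descriptor values are illustrative, not taken from HIM:

```python
# A compound is conventionally considered drug-like if it violates
# at most one of Lipinski's four criteria.

def passes_rule_of_five(mol_weight, logp, h_donors, h_acceptors):
    """True if the compound violates at most one Lipinski criterion."""
    violations = sum([
        mol_weight > 500,   # molecular weight over 500 Da
        logp > 5,           # octanol-water partition coefficient over 5
        h_donors > 5,       # more than 5 hydrogen-bond donors
        h_acceptors > 10,   # more than 10 hydrogen-bond acceptors
    ])
    return violations <= 1

# Illustrative descriptor values for a small alkaloid-like ingredient
print(passes_rule_of_five(mol_weight=336.4, logp=3.6, h_donors=0, h_acceptors=4))
# → True
```

    In practice the four descriptors would be precomputed per ingredient (e.g. with a cheminformatics toolkit) and stored alongside the metabolite records.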

  14. Documentation for the U.S. Geological Survey Public-Supply Database (PSDB): A database of permitted public-supply wells, surface-water intakes, and systems in the United States

    Science.gov (United States)

    Price, Curtis V.; Maupin, Molly A.

    2014-01-01

    The U.S. Geological Survey (USGS) has developed a database containing information about wells, surface-water intakes, and distribution systems that are part of public water systems across the United States, its territories, and possessions. Programs of the USGS such as the National Water Census, the National Water Use Information Program, and the National Water-Quality Assessment Program all require a complete and current inventory of public water systems, the sources of water used by those systems, and the size of populations served by the systems across the Nation. Although the U.S. Environmental Protection Agency’s Safe Drinking Water Information System (SDWIS) database already exists as the primary national Federal database for information on public water systems, the Public-Supply Database (PSDB) was developed to add value to SDWIS data with enhanced location and ancillary information, and to provide links to other databases, including the USGS’s National Water Information System (NWIS) database.

  15. Efficient Partitioning of Large Databases without Query Statistics

    Directory of Open Access Journals (Sweden)

    Shahidul Islam KHAN

    2016-11-01

    Full Text Available An efficient way of improving the performance of a database management system is distributed processing. Distribution of data involves fragmentation or partitioning, replication, and allocation processes. Previous research provided partitioning based on empirical data about the type and frequency of the queries. These solutions are not suitable at the initial stage of a distributed database, when query statistics are not yet available. In this paper, I present a fragmentation technique, Matrix based Fragmentation (MMF), which can be applied at the initial stage as well as at later stages of distributed databases. Instead of using empirical data, I have developed a matrix, Modified Create, Read, Update and Delete (MCRUD), to partition a large database properly. Allocation of fragments is done simultaneously in my proposed technique, so using MMF adds no additional complexity for allocating the fragments to the sites of a distributed database, as fragmentation is synchronized with allocation. The performance of a DDBMS can be improved significantly by avoiding frequent remote access and high data transfer among the sites. Results show that the proposed technique can solve the initial partitioning problem of large distributed databases.
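    A matrix-driven partitioning of this kind can be sketched as follows. The CRUD weights, scoring rule and data below are assumptions for illustration, in the spirit of MCRUD rather than the MMF algorithm as published:

```python
# Score each (attribute, site) pair by weighted Create/Read/Update/Delete
# counts and allocate each fragment to its highest-scoring site, so
# fragmentation and allocation happen in one pass.

WEIGHTS = {"C": 3, "R": 1, "U": 2, "D": 3}  # assumed: writes cost more than reads

def allocate_fragments(mcrud):
    """mcrud: {attribute: {site: {'C': n, 'R': n, 'U': n, 'D': n}}}.
    Returns {attribute: best_site}."""
    allocation = {}
    for attribute, sites in mcrud.items():
        def score(site):
            ops = sites[site]
            return sum(WEIGHTS[op] * ops.get(op, 0) for op in WEIGHTS)
        allocation[attribute] = max(sites, key=score)
    return allocation

mcrud = {
    "salary":  {"s1": {"C": 0, "R": 9, "U": 1, "D": 0},
                "s2": {"C": 2, "R": 1, "U": 4, "D": 1}},
    "address": {"s1": {"C": 1, "R": 2, "U": 0, "D": 0},
                "s2": {"C": 0, "R": 1, "U": 0, "D": 0}},
}
print(allocate_fragments(mcrud))  # → {'salary': 's2', 'address': 's1'}
```

    Because the matrix is built from expected application behavior rather than observed query logs, such a scheme works before any query statistics exist.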

  16. Assessment of the SFC database for analysis and modeling

    Science.gov (United States)

    Centeno, Martha A.

    1994-01-01

    SFC is one of the four clusters that make up the Integrated Work Control System (IWCS), which will integrate the shuttle processing databases at Kennedy Space Center (KSC). The IWCS framework will enable communication among the four clusters and add new data collection protocols. The Shop Floor Control (SFC) module has been operational for two and a half years; however, at this stage, automatic links to the other 3 modules have not been implemented yet, except for a partial link to IOS (CASPR). SFC revolves around a DB/2 database with PFORMS acting as the database management system (DBMS). PFORMS is an off-the-shelf DB/2 application that provides a set of data entry screens and query forms. The main dynamic entity in the SFC and IOS database is a task; thus, the physical storage location and update privileges are driven by the status of the WAD. As we explored the SFC values, we realized that there was much to do before actually engaging in continuous analysis of the SFC data. Halfway into this effort, it was realized that full-scale analysis would have to be a future third phase of this effort. So, we concentrated on getting to know the contents of the database, and on establishing an initial set of tools to start the continuous analysis process. Specifically, we set out to: (1) provide specific procedures for statistical models, so as to enhance the TP-OAO office analysis and modeling capabilities; (2) design a data exchange interface; (3) prototype the interface to provide inputs to SCRAM; and (4) design a modeling database. These objectives were set with the expectation that, if met, they would provide former TP-OAO engineers with tools that would help them demonstrate the importance of process-based analyses. The latter, in return, will help them obtain the cooperation of various organizations in charting out their individual processes.

  17. Teaching Database Design with Constraint-Based Tutors

    Science.gov (United States)

    Mitrovic, Antonija; Suraweera, Pramuditha

    2016-01-01

    Design tasks are difficult to teach, due to large, unstructured solution spaces, underspecified problems, non-existent problem solving algorithms and stopping criteria. In this paper, we comment on our approach to develop KERMIT, a constraint-based tutor that taught database design. In later work, we re-implemented KERMIT as EER-Tutor, and…

  18. Pulotu: Database of Austronesian Supernatural Beliefs and Practices

    Science.gov (United States)

    Watts, Joseph; Sheehan, Oliver; Greenhill, Simon J.; Gomes-Ng, Stephanie; Atkinson, Quentin D.; Bulbulia, Joseph; Gray, Russell D.

    2015-01-01

    Scholars have debated naturalistic theories of religion for thousands of years, but only recently have scientists begun to test predictions empirically. Existing databases contain few variables on religion, and are subject to Galton’s Problem because they do not sufficiently account for the non-independence of cultures or systematically differentiate the traditional states of cultures from their contemporary states. Here we present Pulotu: the first quantitative cross-cultural database purpose-built to test evolutionary hypotheses of supernatural beliefs and practices. The Pulotu database documents the remarkable diversity of the Austronesian family of cultures, which originated in Taiwan, spread west to Madagascar and east to Easter Island–a region covering over half the world’s longitude. The focus of Austronesian beliefs range from localised ancestral spirits to powerful creator gods. A wide range of practices also exist, such as headhunting, elaborate tattooing, and the construction of impressive monuments. Pulotu is freely available, currently contains 116 cultures, and has 80 variables describing supernatural beliefs and practices, as well as social and physical environments. One major advantage of Pulotu is that it has separate sections on the traditional states of cultures, the post-contact history of cultures, and the contemporary states of cultures. A second major advantage is that cultures are linked to a language-based family tree, enabling the use of phylogenetic methods, which can be used to address Galton’s Problem by accounting for common ancestry, to infer deep prehistory, and to model patterns of trait evolution over time. We illustrate the power of phylogenetic methods by performing an ancestral state reconstruction on the Pulotu variable “headhunting”, finding evidence that headhunting was practiced in proto-Austronesian culture. Quantitative cross-cultural databases explicitly linking cultures to a phylogeny have the potential to

  19. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Trypanosomes Database Update History of This Database Date Update contents 2014/05/07 The contact information is corrected. The features and manner of utilization of the database are corrected. 2014/02/04 Trypanosomes Database English archive site is opened. 2011/04/04 Trypanosomes Database ( http://www.tanpaku.org/tdb/ ) is opened. About This Database Database Description Download License Update History of This Database Site Policy | Contact Us

  20. ESO telbib: Linking In and Reaching Out

    Science.gov (United States)

    Grothkopf, U.; Meakins, S.

    2015-04-01

    Measuring an observatory's research output is an integral part of its science operations. Like many other observatories, ESO tracks scholarly papers that use observational data from ESO facilities and uses state-of-the-art tools to create, maintain, and further develop the Telescope Bibliography database (telbib). While telbib started out as a stand-alone tool mostly used to compile lists of papers, it has by now developed into a multi-faceted, interlinked system. The core of the telbib database is links between scientific papers and observational data generated by the La Silla Paranal Observatory residing in the ESO archive. This functionality has also been deployed for ALMA data. In addition, telbib reaches out to several other systems, including ESO press releases, the NASA ADS Abstract Service, databases at the CDS Strasbourg, and impact scores at Altmetric.com. We illustrate these features to show how the interconnected telbib system enhances the content of the database as well as the user experience.

  1. Ontology-Based Querying with Bio2RDF's Linked Open Data.

    Science.gov (United States)

    Callahan, Alison; Cruz-Toledo, José; Dumontier, Michel

    2013-04-15

    A key activity for life scientists in this post "-omics" age involves searching for and integrating biological data from a multitude of independent databases. However, our ability to find relevant data is hampered by non-standard web and database interfaces backed by an enormous variety of data formats. This heterogeneity presents an overwhelming barrier to the discovery and reuse of resources which have been developed at great public expense. To address this issue, the open-source Bio2RDF project promotes a simple convention to integrate diverse biological data using Semantic Web technologies. However, querying Bio2RDF remains difficult due to the lack of uniformity in the representation of Bio2RDF datasets. We describe an update to Bio2RDF that includes tighter integration across 19 new and updated RDF datasets. All available open-source scripts were first consolidated to a single GitHub repository and then redeveloped using a common API that generates normalized IRIs using a centralized dataset registry. We then mapped dataset-specific types and relations to the Semanticscience Integrated Ontology (SIO) and demonstrate simplified federated queries across multiple Bio2RDF endpoints. This coordinated release marks an important milestone for the Bio2RDF open source linked data framework. Principally, it improves the quality of linked data in the Bio2RDF network and makes it easier to access or recreate the linked data locally. We hope to continue improving the Bio2RDF network of linked data by identifying priority databases and increasing the vocabulary coverage to additional dataset vocabularies beyond SIO.
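    The normalized-IRI convention that underpins this integration can be sketched in a few lines. Bio2RDF IRIs follow the pattern http://bio2rdf.org/prefix:identifier; the small registry below is a toy stand-in for Bio2RDF's centralized dataset registry, and the listed prefixes are an assumed subset.

```python
# Toy stand-in for Bio2RDF's centralized dataset registry (assumed subset of prefixes).
REGISTRY = {"uniprot", "drugbank", "omim"}

def bio2rdf_iri(prefix: str, identifier: str) -> str:
    """Build a normalized Bio2RDF IRI, refusing unregistered dataset prefixes."""
    if prefix not in REGISTRY:
        raise ValueError(f"unregistered dataset prefix: {prefix}")
    return f"http://bio2rdf.org/{prefix}:{identifier}"

print(bio2rdf_iri("drugbank", "DB00001"))  # http://bio2rdf.org/drugbank:DB00001
```

    Rewriting every dataset's record identifiers into this single scheme is what makes federated queries across endpoints possible: two datasets that mention the same record produce the same IRI.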

  2. Serial killers: the worldwide database

    Directory of Open Access Journals (Sweden)

    Gaetano Parente

    2016-07-01

    Full Text Available The complex and multifaceted study of serial killers is made partly difficult by the way these deviant offenders have evolved in shrewdness (concerning staging) and in mobility. Despite the important theories proposed by several scholars, it is still particularly common, in serial murder cases, for links among homicides committed by the same person in different parts of the world to go unnoticed. It is therefore crucial to develop a worldwide database that allows all police forces to access information collected at the crime scenes of murders that are particularly bizarre or committed without any apparent motive. It will then be up to the profiler, through ad hoc and technologically advanced tools, to collect this crime-scene information, which would be made available to all police forces thanks to the worldwide database.

  3. High-Performance Secure Database Access Technologies for HEP Grids

    Energy Technology Data Exchange (ETDEWEB)

    Matthew Vranicar; John Weicher

    2006-04-17

    secure authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving a weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems’ security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for the tight integration of the secure authorization layer with the database server engine resulting in both improved performance and improved security. Phase I has focused on the development of a proof-of-concept prototype using Argonne National Laboratory’s (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project’s current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.

  4. High-Performance Secure Database Access Technologies for HEP Grids

    International Nuclear Information System (INIS)

    Vranicar, Matthew; Weicher, John

    2006-01-01

    authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving a weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems' security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for the tight integration of the secure authorization layer with the database server engine resulting in both improved performance and improved security. Phase I has focused on the development of a proof-of-concept prototype using Argonne National Laboratory's (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project's current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.

  5. A C programmer's view of a relational database

    International Nuclear Information System (INIS)

    Clifford, T.; Katz, R.; Griffiths, C.

    1989-01-01

    The AGS Distributed Control System (AGSDCS) uses a relational database (Interbase) for the storage of all data on the host system network. This includes the static data which describes the components of the accelerator complex, as well as data for application program setup and data records that are used in analysis. By creating a mapping of each relation in the database to a C record and providing general tools for relation (record) access, all the data in the database is available in a natural fashion (in structures) to all the C programs on any of the nodes of the control system. In this paper the correspondence between the Interbase relations and the C structure is detailed with examples of C typedefs and relation definitions. It is also shown how the relations can be put into memory and linked (related) together when fast access is needed by programs. 1 ref., 2 tabs

  6. A C programmer's view of a relational database

    International Nuclear Information System (INIS)

    Clifford, T.; Katz, R.; Griffiths, C.

    1990-01-01

    The AGS Distributed Control System (AGSDCS) uses a relational database (Interbase) for the storage of all data on the host system network. This includes the static data which describes the components of the accelerator complex, as well as data for application-program setup and data records that are used in analysis. By creating a mapping of each relation in the database to a C record and providing general tools for relation (record) access, all the data in the database is available in a natural fashion to all the C programs on any of the nodes of the control system. In this paper the correspondence between the Interbase relations and the C structure is detailed with examples of C 'typedefs' and relation definitions. It is also shown how the relations can be put into memory and linked (related) together when fast access is needed by programs. (orig.)
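    The pattern these two papers describe, mapping each relation to a record type and then linking related rows in memory for fast access, can be sketched as follows. This is an illustration only: it uses sqlite3 in place of Interbase, Python dataclasses in place of C structs, and invented table names not taken from the papers.

```python
import sqlite3
from dataclasses import dataclass, field

@dataclass
class Magnet:
    """In-memory record mirroring one row of the 'magnet' relation."""
    name: str
    current: float
    readings: list = field(default_factory=list)  # linked child records

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE magnet(name TEXT PRIMARY KEY, current REAL);
    CREATE TABLE reading(magnet TEXT REFERENCES magnet(name), value REAL);
    INSERT INTO magnet VALUES ('Q1', 12.5);
    INSERT INTO reading VALUES ('Q1', 12.4), ('Q1', 12.6);
""")

# Load each relation into memory, then relate the records via the shared key,
# so subsequent lookups never touch the database.
magnets = {n: Magnet(n, c) for n, c in db.execute("SELECT name, current FROM magnet")}
for name, value in db.execute("SELECT magnet, value FROM reading"):
    magnets[name].readings.append(value)

print(magnets["Q1"].readings)  # [12.4, 12.6]
```

    The payoff is the one the abstract names: once the relations are in memory and linked, programs traverse plain structures instead of issuing queries.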

  7. A Database Approach to Distributed State Space Generation

    NARCIS (Netherlands)

    Blom, Stefan; Lisser, Bert; van de Pol, Jan Cornelis; Weber, M.

    2007-01-01

    We study distributed state space generation on a cluster of workstations. It is explained why state space partitioning by a global hash function is problematic when states contain variables from unbounded domains, such as lists or other recursive datatypes. Our solution is to introduce a database

  8. A Database Approach to Distributed State Space Generation

    NARCIS (Netherlands)

    Blom, Stefan; Lisser, Bert; van de Pol, Jan Cornelis; Weber, M.; Cerna, I.; Haverkort, Boudewijn R.H.M.

    2008-01-01

    We study distributed state space generation on a cluster of workstations. It is explained why state space partitioning by a global hash function is problematic when states contain variables from unbounded domains, such as lists or other recursive datatypes. Our solution is to introduce a database
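    The database solution both abstracts allude to can be sketched as follows. When states contain values from unbounded recursive domains, hashing their contents gives no stable, balanced partition; instead, a table assigns each distinct state a compact index, and work is partitioned by that index. This is an illustrative sketch, not the paper's implementation.

```python
class StateDB:
    """Assigns each distinct state a compact, stable index."""

    def __init__(self):
        self._index = {}   # state -> number
        self._states = []  # number -> state

    def put(self, state) -> int:
        """Return the state's index, inserting it if unseen."""
        if state not in self._index:
            self._index[state] = len(self._states)
            self._states.append(state)
        return self._index[state]

db = StateDB()
workers = 4
# A recursive datatype (a cons-list) encoded as nested tuples.
s = ("cons", 1, ("cons", 2, ("nil",)))
idx = db.put(s)
print(idx, idx % workers)  # compact index, and the worker that owns this state
```

    Partitioning by `idx % workers` distributes states evenly regardless of how large or deeply nested their contents are, which is exactly what a content hash over unbounded terms fails to guarantee.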

  9. Effects of the steric hindrance of micropores in the hyper-cross-linked polymeric adsorbent on the adsorption of p-nitroaniline in aqueous solution

    International Nuclear Information System (INIS)

    Xiao, Guqing; Wen, Ruimin; Wei, Dongmei; Wu, Dan

    2014-01-01

    Graphical abstract: The hyper-cross-linked polymeric adsorbents (GQ-05 and GQ-03) with different steric hindrance of micropores were designed. The adsorption capacity and adsorption rate of PNA onto the two adsorbents followed the order GQ-05 > GQ-03. The steric hindrance of micropores was a crucial factor for the adsorption capacity and adsorption rate order. - Highlights: • Two adsorbents with different steric hindrance of micropores were designed. • The adsorption capacity and adsorption rate followed the order GQ-05 > GQ-03. • The steric hindrance of micropores was a crucial factor for the order. - Abstract: A hyper-cross-linked polymeric adsorbent with “-CH2-phenol-CH2-” as the cross-linked bridge (denoted GQ-05), and another hyper-cross-linked polymeric adsorbent with “-CH2-p-cresol-CH2-” as the cross-linked bridge (denoted GQ-03) were synthesized to reveal the effect of the steric hindrance of micropores in the hyper-cross-linked polymeric adsorbent on the adsorption capacity and adsorption rate of p-nitroaniline (PNA) from aqueous solution. The results of adsorption kinetics indicated the order of the adsorption rate GQ-05 > GQ-03. The pseudo-first-order rate equation could describe the entire adsorption process of PNA onto GQ-05, while the equation characterized the adsorption process of GQ-03 in two stages. The order of the adsorption capacity GQ-05 > GQ-03 was demonstrated by thermodynamic analysis and dynamic adsorption. The steric hindrance of micropores in the hyper-cross-linked polymeric adsorbent was a crucial factor for the order of the adsorption capacity and adsorption rate.
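    The pseudo-first-order rate law named in the abstract has the closed form q(t) = qe·(1 − exp(−k1·t)). The short sketch below evaluates it; the parameter values are illustrative assumptions, not measured values from the study, chosen only to show how a larger rate constant k1 reproduces the faster uptake reported for GQ-05.

```python
import math

def pseudo_first_order(qe: float, k1: float, t: float) -> float:
    """Adsorbed amount q(t) under the pseudo-first-order rate law.

    qe: equilibrium capacity (e.g. mg/g); k1: rate constant (e.g. 1/min).
    """
    return qe * (1.0 - math.exp(-k1 * t))

# Assumed (illustrative) parameters for the two adsorbents at t = 30 min.
gq05 = pseudo_first_order(qe=200.0, k1=0.05, t=30.0)
gq03 = pseudo_first_order(qe=160.0, k1=0.02, t=30.0)
print(round(gq05, 1), round(gq03, 1))
```

    Fitting k1 and qe to measured uptake curves, rather than assuming them, is how the study's two-stage behaviour for GQ-03 would show up: no single (qe, k1) pair fits the whole curve.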

  10. Development of an Engineering Soil Database

    Science.gov (United States)

    2017-12-27

    ERDC TR-17-15, Rapid Airfield Damage Recovery (RADR) Program: Development of an Engineering Soil Database. Distribution is unlimited. The US Army Engineer Research and Development Center (ERDC) solves the nation’s toughest engineering and environmental challenges. ERDC develops innovative solutions in civil and military engineering, geospatial sciences, water resources, and environmental sciences.

  11. Readmissions after stroke: linked data from the Australian Stroke Clinical Registry and hospital databases.

    Science.gov (United States)

    Kilkenny, Monique F; Dewey, Helen M; Sundararajan, Vijaya; Andrew, Nadine E; Lannin, Natasha; Anderson, Craig S; Donnan, Geoffrey A; Cadilhac, Dominique A

    2015-07-20

    To assess the feasibility of linking a national clinical stroke registry with hospital admissions and emergency department data; and to determine factors associated with hospital readmission after stroke or transient ischaemic attack (TIA) in Australia. Data from the Australian Stroke Clinical Registry (AuSCR) at a single Victorian hospital were linked to coded, routinely collected hospital datasets for admissions (Victorian Admitted Episodes Dataset) and emergency presentations (Victorian Emergency Minimum Dataset) in Victoria from 15 June 2009 to 31 December 2010, using stepwise deterministic data linkage techniques. Association of patient characteristics, social circumstances, processes of care and discharge outcomes with all-cause readmissions within 1 year from time of hospital discharge after an index admission for stroke or TIA. Of 788 patients registered in the AuSCR, 46% (359/781) were female, 83% (658/788) had a stroke, and the median age was 76 years. Data were successfully linked for 782 of these patients (99%). Within 1 year of their index stroke or TIA event, 42% of patients (291/685) were readmitted, with 12% (35/286) readmitted due to a stroke or TIA. Factors significantly associated with 1-year hospital readmission were two or more presentations to an emergency department before the index event (adjusted odds ratio [aOR], 1.57; 95% CI, 1.02-2.43), higher Charlson comorbidity index score (aOR, 1.19; 95% CI, 1.07-1.32) and diagnosis of TIA on the index admission (aOR, 2.15; 95% CI, 1.30-3.56). Linking clinical registry data with routinely collected hospital data for stroke and TIA is feasible in Victoria. Using these linked data, we found that readmission to hospital is common in this patient group and is related to their comorbid conditions.
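    The "stepwise deterministic data linkage" used above can be sketched simply: candidate record pairs are compared in passes of decreasing strictness, and a pair links on the first pass whose key fields all agree. The pass definitions and field names below are illustrative assumptions, not the study's actual linkage keys.

```python
# Illustrative linkage passes, strictest first (field names are invented).
PASSES = [
    ("medicare_id", "dob", "sex"),   # strict pass: unique identifier + demographics
    ("surname", "dob", "postcode"),  # fallback pass: demographics only
]

def link(rec_a: dict, rec_b: dict):
    """Return the index of the first pass on which all keys match, else None."""
    for i, keys in enumerate(PASSES):
        if all(rec_a.get(k) is not None and rec_a.get(k) == rec_b.get(k)
               for k in keys):
            return i
    return None

a = {"medicare_id": "123", "dob": "1940-03-01", "sex": "F",
     "surname": "Smith", "postcode": "3000"}
b = {"medicare_id": None, "dob": "1940-03-01", "sex": "F",
     "surname": "Smith", "postcode": "3000"}
print(link(a, b))  # 1: no identifier on record b, so it links on the fallback pass
```

    Reporting which pass produced each link (as registry linkage studies typically do) lets reviewers judge match quality: pass-0 links are near-certain, later passes progressively less so.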

  12. Primary Numbers Database for ATLAS Detector Description Parameters

    CERN Document Server

    Vaniachine, A; Malon, D; Nevski, P; Wenaus, T

    2003-01-01

    We present the design and the status of the database for detector description parameters in ATLAS experiment. The ATLAS Primary Numbers are the parameters defining the detector geometry and digitization in simulations, as well as certain reconstruction parameters. Since the detailed ATLAS detector description needs more than 10,000 such parameters, a preferred solution is to have a single verified source for all these data. The database stores the data dictionary for each parameter collection object, providing schema evolution support for object-based retrieval of parameters. The same Primary Numbers are served to many different clients accessing the database: the ATLAS software framework Athena, the Geant3 heritage framework Atlsim, the Geant4 developers framework FADS/Goofy, the generator of XML output for detector description, and several end-user clients for interactive data navigation, including web-based browsers and ROOT. The choice of the MySQL database product for the implementation provides addition...

  13. AgdbNet – antigen sequence database software for bacterial typing

    Directory of Open Access Journals (Sweden)

    Maiden Martin CJ

    2006-06-01

    Full Text Available Abstract Background Bacterial typing schemes based on the sequences of genes encoding surface antigens require databases that provide a uniform, curated, and widely accepted nomenclature of the variants identified. Due to the differences in typing schemes, imposed by the diversity of genes targeted, creating these databases has typically required the writing of one-off code to link the database to a web interface. Here we describe agdbNet, widely applicable web database software that facilitates simultaneous BLAST querying of multiple loci using either nucleotide or peptide sequences. Results Databases are described by XML files that are parsed by a Perl CGI script. Each database can have any number of loci, which may be defined by nucleotide and/or peptide sequences. The software is currently in use on at least five public databases for the typing of Neisseria meningitidis, Campylobacter jejuni and Streptococcus equi and can be set up to query internal isolate tables or suitably-configured external isolate databases, such as those used for multilocus sequence typing. The style of the resulting website can be fully configured by modifying stylesheets and through the use of customised header and footer files that surround the output of the script. Conclusion The software provides a rapid means of setting up customised Internet antigen sequence databases. The flexible configuration options enable typing schemes with differing requirements to be accommodated.
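    The abstract notes that each agdbNet database is described by an XML file parsed by a Perl CGI script. A minimal sketch of parsing such a description follows; the XML schema here is invented for illustration (the real agdbNet format is not given in the abstract), though porA and fetA are genuine Neisseria meningitidis typing loci.

```python
import xml.etree.ElementTree as ET

# Hypothetical database description in the spirit of agdbNet's XML files.
XML = """
<database name="neisseria">
  <locus id="porA" type="nucleotide"/>
  <locus id="fetA" type="peptide"/>
</database>
"""

root = ET.fromstring(XML)
# Each database may define any number of loci, nucleotide or peptide.
loci = [(locus.get("id"), locus.get("type")) for locus in root.findall("locus")]
print(root.get("name"), loci)
```

    Driving the site from a declarative description like this is what lets one codebase serve typing schemes with very different loci, as the abstract describes.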

  14. Integration of curated databases to identify genotype-phenotype associations

    Directory of Open Access Journals (Sweden)

    Li Jianrong

    2006-10-01

    Full Text Available Abstract Background The ability to rapidly characterize an unknown microorganism is critical in both responding to infectious disease and biodefense. To do this, we need some way of anticipating an organism's phenotype based on the molecules encoded by its genome. However, the link between molecular composition (i.e. genotype) and phenotype for microbes is not obvious. While there have been several studies that address this challenge, none have yet proposed a large-scale method integrating curated biological information. Here we utilize a systematic approach to discover genotype-phenotype associations that combines phenotypic information from a biomedical informatics database, GIDEON, with the molecular information contained in National Center for Biotechnology Information's Clusters of Orthologous Groups database (NCBI COGs). Results Integrating the information in the two databases, we are able to correlate the presence or absence of a given protein in a microbe with its phenotype as measured by certain morphological characteristics or survival in a particular growth media. With a 0.8 correlation score threshold, 66% of the associations found were confirmed by the literature and at a 0.9 correlation threshold, 86% were positively verified. Conclusion Our results suggest possible phenotypic manifestations for proteins biochemically associated with sugar metabolism and electron transport. Moreover, we believe our approach can be extended to linking pathogenic phenotypes with functionally related proteins.
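    The screening step described above reduces to correlating a protein family's presence/absence vector against a phenotype vector across organisms and keeping pairs above a threshold. The sketch below uses the abstract's 0.8 cutoff; the correlation measure (Pearson) and the toy data are illustrative assumptions.

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length numeric vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Rows: organisms. Columns: has-protein (from COGs), shows-phenotype (from GIDEON).
has_cog =   [1, 1, 0, 0, 1, 0]
phenotype = [1, 1, 0, 0, 1, 1]

r = pearson(has_cog, phenotype)
print(round(r, 2), r >= 0.8)
```

    In this toy example one organism shows the phenotype without the protein, so r falls just below the 0.8 cutoff and the pair would be filtered out; perfectly co-occurring pairs score 1.0 and survive.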

  15. The IPE Database: providing information on plant design, core damage frequency and containment performance

    International Nuclear Information System (INIS)

    Lehner, J.R.; Lin, C.C.; Pratt, W.T.; Su, T.; Danziger, L.

    1996-01-01

    A database, called the IPE Database has been developed that stores data obtained from the Individual Plant Examinations (IPEs) which licensees of nuclear power plants have conducted in response to the Nuclear Regulatory Commission's (NRC) Generic Letter GL88-20. The IPE Database is a collection of linked files which store information about plant design, core damage frequency (CDF), and containment performance in a uniform, structured way. The information contained in the various files is based on data contained in the IPE submittals. The information extracted from the submittals and entered into the IPE Database can be manipulated so that queries regarding individual or groups of plants can be answered using the IPE Database

  16. The NatCarb geoportal: Linking distributed data from the Carbon Sequestration Regional Partnerships

    Science.gov (United States)

    Carr, T.R.; Rich, P.M.; Bartley, J.D.

    2007-01-01

    The Department of Energy (DOE) Carbon Sequestration Regional Partnerships are generating the data for a "carbon atlas" of key geospatial data (carbon sources, potential sinks, etc.) required for rapid implementation of carbon sequestration on a broad scale. The NATional CARBon Sequestration Database and Geographic Information System (NatCarb) provides Web-based, nation-wide data access. Distributed computing solutions link partnerships and other publicly accessible repositories of geological, geophysical, natural resource, infrastructure, and environmental data. Data are maintained and enhanced locally, but assembled and accessed through a single geoportal. NatCarb, as a first attempt at a national carbon cyberinfrastructure (NCCI), assembles the data required to address technical and policy challenges of carbon capture and storage. We present a path forward to design and implement a comprehensive and successful NCCI. ?? 2007 The Haworth Press, Inc. All rights reserved.

  17. PACSY, a relational database management system for protein structure and chemical shift analysis.

    Science.gov (United States)

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L

    2012-10-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.

  18. PACSY, a relational database management system for protein structure and chemical shift analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Woonghee, E-mail: whlee@nmrfam.wisc.edu [University of Wisconsin-Madison, National Magnetic Resonance Facility at Madison, and Biochemistry Department (United States); Yu, Wookyung [Center for Proteome Biophysics, Pusan National University, Department of Physics (Korea, Republic of); Kim, Suhkmann [Pusan National University, Department of Chemistry and Chemistry Institute for Functional Materials (Korea, Republic of); Chang, Iksoo [Center for Proteome Biophysics, Pusan National University, Department of Physics (Korea, Republic of); Lee, Weontae, E-mail: wlee@spin.yonsei.ac.kr [Yonsei University, Structural Biochemistry and Molecular Biophysics Laboratory, Department of Biochemistry (Korea, Republic of); Markley, John L., E-mail: markley@nmrfam.wisc.edu [University of Wisconsin-Madison, National Magnetic Resonance Facility at Madison, and Biochemistry Department (United States)

    2012-10-15

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.

  19. PACSY, a relational database management system for protein structure and chemical shift analysis

    Science.gov (United States)

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo

    2012-01-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu. PMID:22903636

  20. PACSY, a relational database management system for protein structure and chemical shift analysis

    International Nuclear Information System (INIS)

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L.

    2012-01-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.

  1. Survey on utilization of database for research and development of global environmental industry technology; Chikyu kankyo sangyo gijutsu kenkyu kaihatsu no tame no database nado no riyo ni kansuru chosa

    Energy Technology Data Exchange (ETDEWEB)

    1993-03-01

    To optimize networks and database systems for promotion of industry technologies contributing to the solution of the global environmental problem, studies are made on reusable information resources and their utilization methods. Reusable information resources include external databases and network systems for researchers' information exchange and for computer use. The external databases include commercial databases and academic databases. As commercial databases, 6 agents and 13 service systems are selected. As academic databases, there are NACSIS-IR and the databases connected with INTERNET in the U.S. These are used in connection with the UNIX academic research network called INTERNET. For connection with INTERNET, a commercial UNIX network service called IIJ, which started service in April 1993, can be used. However, personal computer communication networks are used for the time being. 6 figs., 4 tabs.

  2. YMDB 2.0: a significantly expanded version of the yeast metabolome database.

    Science.gov (United States)

    Ramirez-Gaona, Miguel; Marcu, Ana; Pon, Allison; Guo, An Chi; Sajed, Tanvir; Wishart, Noah A; Karu, Naama; Djoumbou Feunang, Yannick; Arndt, David; Wishart, David S

    2017-01-04

    YMDB or the Yeast Metabolome Database (http://www.ymdb.ca/) is a comprehensive database containing extensive information on the genome and metabolome of Saccharomyces cerevisiae. Initially released in 2012, the YMDB has gone through a significant expansion and a number of improvements over the past 4 years. This manuscript describes the most recent version of YMDB (YMDB 2.0). More specifically, it provides an updated description of the database that was previously described in the 2012 NAR Database Issue and it details many of the additions and improvements made to the YMDB over that time. Some of the most important changes include a 7-fold increase in the number of compounds in the database (from 2007 to 16 042), a 430-fold increase in the number of metabolic and signaling pathway diagrams (from 66 to 28 734), a 16-fold increase in the number of compounds linked to pathways (from 742 to 12 733), a 17-fold increase in the number of compounds with nuclear magnetic resonance or MS spectra (from 783 to 13 173) and an increase in both the number of data fields and the number of links to external databases. In addition to these database expansions, a number of improvements to YMDB's web interface and its data visualization tools have been made. These additions and improvements should greatly improve the ease, the speed and the quantity of data that can be extracted, searched or viewed within YMDB. Overall, we believe these improvements should not only improve the understanding of the metabolism of S. cerevisiae, but also allow more in-depth exploration of its extensive metabolic networks, signaling pathways and biochemistry. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. The PEP-II/BaBar Project-Wide Database using World Wide Web and Oracle*Case

    International Nuclear Information System (INIS)

    Chan, A.; Crane, G.; MacGregor, I.; Meyer, S.

    1995-12-01

    The PEP-II/BaBar Project Database is a tool for monitoring the technical and documentation aspects of the accelerator and detector construction. It holds the PEP-II/BaBar design specifications, fabrication and installation data in one integrated system. Key pieces of the database include the machine parameter list, components fabrication and calibration data, survey and alignment data, property control, CAD drawings, publications and documentation. This central Oracle database on a UNIX server is built using Oracle*Case tools. Users at the collaborating laboratories mainly access the data using World Wide Web (WWW). The Project Database is being extended to link to legacy databases required for the operations phase

  4. GENISES: A GIS Database for the Yucca Mountain Site Characterization Project

    International Nuclear Information System (INIS)

    Beckett, J.

    1991-01-01

    This paper provides a general description of the Geographic Nodal Information Study and Evaluation System (GENISES) database design. The GENISES database is the Geographic Information System (GIS) component of the Yucca Mountain Site Characterization Project Technical Database (TDB). The GENISES database has been developed and is maintained by EG&G Energy Measurements, Inc., Las Vegas, NV (EG&G/EM). As part of the Yucca Mountain Project (YMP) Site Characterization Technical Data Management System, GENISES provides a repository for geographically oriented technical data. The primary objective of the GENISES database is to support the Yucca Mountain Site Characterization Project with an effective tool for describing, analyzing, and archiving geo-referenced data. The database design provides the maximum efficiency in input/output, data analysis, data management and information display. This paper provides the systematic approach or plan for the GENISES database design and operation. The paper also discusses the techniques used for data normalization or the decomposition of complex data structures as they apply to GIS databases. ARC/INFO and INGRES files are linked or joined by establishing "relate" fields through the common attribute names. Thus, through these keys, ARC can allow access to normalized INGRES files, greatly reducing redundancy and the size of the database.

  5. Automated tools for cross-referencing large databases. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Clapp, N E; Green, P L; Bell, D [and others]

    1997-05-01

    A Cooperative Research and Development Agreement (CRADA) was funded with TRESP Associates, Inc., to develop a limited prototype software package operating on one platform (e.g., a personal computer, small workstation, or other selected device) to demonstrate the concepts of using an automated database application to improve the process of detecting fraud and abuse of the welfare system. An analysis was performed on Tennessee's welfare administration system. This analysis was undertaken to determine if the incidence of welfare waste, fraud, and abuse could be reduced and if the administrative process could be improved to reduce benefits overpayment errors. The analysis revealed a general inability to obtain timely data to support the verification of a welfare recipient's economic status and eligibility for benefits. It has been concluded that the provision of more modern computer-based tools and the establishment of electronic links to other state and federal data sources could increase staff efficiency, reduce the incidence of out-of-date information provided to welfare assistance staff, and make much of the new data required available in real time. Electronic data links have been proposed to allow near-real-time access to data residing in databases located in other states and at federal agency data repositories. The ability to provide these improvements to the local office staff would require the provision of additional computers, software, and electronic data links within each of the offices and the establishment of approved methods of accessing remote databases and transferring potentially sensitive data. In addition, investigations will be required to ascertain if existing laws would allow such data transfers, and if not, what changed or new laws would be required. The benefits, in both cost and efficiency, to the state of Tennessee of having electronically-enhanced welfare system administration and control are expected to result in a rapid return of investment.

  6. Expert Oracle database architecture Oracle database programming 9i, 10g, and 11g : Techniques and solution

    CERN Document Server

    Kyte, Thomas

    2010-01-01

    Now in its second edition, this best-selling book by Tom Kyte of Ask Tom fame continues to bring you some of the best thinking on how to apply Oracle Database to produce scalable applications that perform well and deliver correct results. Tom has a simple philosophy: you can treat Oracle as a black box and just stick data into it or you can understand how it works and exploit it as a powerful computing environment. If you choose the latter, then you'll find that there are few information management problems that you cannot solve quickly and elegantly. This fully revised second edition covers t

  7. Solutions in radiology services management: a literature review.

    Science.gov (United States)

    Pereira, Aline Garcia; Vergara, Lizandra Garcia Lupi; Merino, Eugenio Andrés Díaz; Wagner, Adriano

    2015-01-01

    The present study was aimed at reviewing the literature to identify solutions for problems observed in radiology services. A basic, qualitative, exploratory literature review was carried out in the Scopus and SciELO databases, utilizing the Mendeley and Adobe Illustrator CC software. In the databases, 565 papers were identified, 120 of them available as free PDFs. Problems observed in the radiology sector are related to procedure scheduling, humanization, lack of training, poor knowledge and use of management techniques, and interaction with users. Design management provides the services with interesting solutions such as Benchmarking, CRM, the Lean Approach, Service Blueprinting, continued education, among others. Literature review is an important tool to identify problems and their respective solutions. However, considering the small number of studies approaching management of radiology services, this is a great field of research for the development of deeper studies.

  8. Update History of This Database - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Arabidopsis Phenome Database Update History of This Database Date Update contents 2017/02/27 Arabidopsis Phenome Database English archive site is opened. - Arabidopsis Phenome Database (http://jphenome.info/?page_id=95) is opened. About This Database Database Description Download License Update History of This Database Site Policy | Contact Us Update History of This Database - Arabidopsis Phenome Database | LSDB Archive ...

  9. NoSQL technologies for the CMS Conditions Database

    Science.gov (United States)

    Sipos, Roland

    2015-12-01

    With the restart of the LHC in 2015, the growth of the CMS Conditions dataset will continue; the need for consistent and highly available access to the Conditions therefore makes it worthwhile to revisit different aspects of the current data storage solutions. We present a study of alternative data storage backends for the Conditions Databases, evaluating some of the most popular NoSQL databases to support a key-value representation of the CMS Conditions. The definition of the database infrastructure is based on the need to store the conditions as BLOBs. Because of this, each condition can reach a size that may require special treatment (splitting) in these NoSQL databases. As big binary objects may be problematic in several database systems, and also to give an accurate baseline, a testing framework extension was implemented to measure the characteristics of the handling of arbitrary binary data in these databases. Based on the evaluation, prototypes of a document store and of column-oriented and plain key-value stores were deployed. An adaptation layer to access the backends in the CMS Offline software was developed to provide transparent support for these NoSQL databases in the CMS context. Additional data modelling approaches and considerations in the software layer, as well as deployment and automation of the databases, are also covered in the research. In this paper we present the results of the evaluation as well as a performance comparison of the prototypes studied.
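The "splitting" the abstract mentions can be illustrated with a short sketch (not CMS code): a large conditions BLOB is cut into fixed-size chunks so each stored value stays under a key-value store's per-value limit, with chunk keys sharing a common prefix. The chunk size and key layout below are assumptions for illustration:

```python
CHUNK = 4  # bytes per chunk; a real store would use something like 1 MB

def split_blob(key, blob, chunk=CHUNK):
    # One entry per chunk, keyed by "<base key>/<chunk index>".
    return {f"{key}/{i}": blob[i * chunk:(i + 1) * chunk]
            for i in range((len(blob) + chunk - 1) // chunk)}

def join_blob(key, store):
    # Reassemble by walking chunk indices until one is missing.
    i, parts = 0, []
    while f"{key}/{i}" in store:
        parts.append(store[f"{key}/{i}"])
        i += 1
    return b"".join(parts)

store = split_blob("cond/ecal/v1", b"0123456789")
print(sorted(store))  # ['cond/ecal/v1/0', 'cond/ecal/v1/1', 'cond/ecal/v1/2']
print(join_blob("cond/ecal/v1", store))  # b'0123456789'
```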

  10. Report from the 2nd Workshop on Extremely Large Databases

    Directory of Open Access Journals (Sweden)

    Jacek Becla

    2009-03-01

    Full Text Available The complexity and sophistication of large scale analytics in science and industry have advanced dramatically in recent years. Analysts are struggling to use complex techniques such as time series analysis and classification algorithms because their familiar, powerful tools are not scalable and cannot effectively use scalable database systems. The 2nd Extremely Large Databases (XLDB) workshop was organized to understand these issues, examine their implications, and brainstorm possible solutions. The design of SciDB, a new open-source science database that emerged from the first workshop in this series, was also debated. This paper is the final report of the discussions and activities at this workshop.

  11. Moving to Google Cloud: Renovation of Global Borehole Temperature Database for Climate Research

    Science.gov (United States)

    Xiong, Y.; Huang, S.

    2013-12-01

    Borehole temperature comprises an independent archive of information on climate change which is complementary to the instrumental and other proxy climate records. With support from the international geothermal community, a global database of borehole temperatures has been constructed for the specific purpose of the study on climate change. Although this database has become an important data source in climate research, there are certain limitations partially because the framework of the existing borehole temperature database was hand-coded some twenty years ago. A database renovation work is now underway to take the advantages of the contemporary online database technologies. The major intended improvements include 1) dynamically linking a borehole site to Google Earth to allow for inspection of site specific geographical information; 2) dynamically linking an original key reference of a given borehole site to Google Scholar to allow for a complete list of related publications; and 3) enabling site selection and data download based on country, coordinate range, and contributor. There appears to be a good match between the enhancement requirements for this database and the functionalities of the newly released Google Fusion Tables application. Google Fusion Tables is a cloud-based service for data management, integration, and visualization. This experimental application can consolidate related online resources such as Google Earth, Google Scholar, and Google Drive for sharing and enriching an online database. It is user friendly, allowing users to apply filters and to further explore the internet for additional information regarding the selected data. The users also have ways to map, to chart, and to calculate on the selected data, and to download just the subset needed. The figure below is a snapshot of the database currently under Google Fusion Tables renovation. We invite contribution and feedback from the geothermal and climate research community to make the

  12. An Interactive Database of Cocaine-Responsive Gene Expression

    Directory of Open Access Journals (Sweden)

    Willard M. Freeman

    2002-01-01

    Full Text Available The postgenomic era of large-scale gene expression studies is inundating drug abuse researchers and many other scientists with findings related to gene expression. This information is distributed across many different journals, and requires laborious literature searches. Here, we present an interactive database that combines existing information related to cocaine-mediated changes in gene expression in an easy-to-use format. The database is limited to statistically significant changes in mRNA or protein expression after cocaine administration. The Flash-based program is integrated into a Web page, and organizes changes in gene expression based on neuroanatomical region, general function, and gene name. Accompanying each gene is a description of the gene, links to the original publications, and a link to the appropriate OMIM (Online Mendelian Inheritance in Man) entry. The nature of this review allows for timely modifications and rapid inclusion of new publications, and should help researchers build second-generation hypotheses on the role of gene expression changes in the physiology and behavior of cocaine abuse. Furthermore, this method of organizing large volumes of scientific information can easily be adapted to assist researchers in fields outside of drug abuse.

  13. A hybrid ICT-solution for smart meter data analytics

    DEFF Research Database (Denmark)

    Liu, Xiufeng; Nielsen, Per Sieverts

    2016-01-01

    data processing, and using the machine learning toolkit, MADlib, for doing in-database data analytics in PostgreSQL database. This paper evaluates the key technologies of the proposed ICT-solution, and the results show the effectiveness and efficiency of using the system for both batch and online...

  14. Building a columnar database on shared main memory-based storage

    OpenAIRE

    Tinnefeld, Christian

    2014-01-01

    In the field of disk-based parallel database management systems exists a great variety of solutions based on a shared-storage or a shared-nothing architecture. In contrast, main memory-based parallel database management systems are dominated solely by the shared-nothing approach as it preserves the in-memory performance advantage by processing data locally on each server. We argue that this unilateral development is going to cease due to the combination of the following three trends: a) Nowad...

  15. A Taxonomic Search Engine: federating taxonomic databases using web services.

    Science.gov (United States)

    Page, Roderic D M

    2005-03-09

    The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. The Taxonomic Search Engine is available at http://darwin.zoology.gla.ac.uk/~rpage/portal/ and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names.
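The federated pattern behind the TSE can be sketched in a few lines: query several independent name databases and summarise the answers in one consistent format. The sources below are stubbed as plain dicts (not the real ITIS/NCBI web services), and the identifiers are illustrative:

```python
# Stub "databases" standing in for the remote taxonomic services TSE queries.
SOURCES = {
    "ITIS": {"Homo sapiens": "180092"},
    "NCBI": {"Homo sapiens": "9606", "Mus musculus": "10090"},
}

def federated_search(name):
    # Ask every source, normalise each hit into the same record shape.
    hits = []
    for source, db in SOURCES.items():
        if name in db:
            hits.append({"source": source, "name": name, "id": db[name]})
    return hits

print(federated_search("Homo sapiens"))   # one record per source that knows the name
print(federated_search("Mus musculus"))   # only NCBI answers here
```

A real implementation would issue the queries concurrently and map each source's native response into the shared schema, but the merge step is the same.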

  16. Cross-linked polyelectrolyte multilayers for marine antifouling applications

    NARCIS (Netherlands)

    Zhu, X.; Janczewski, D.; Lee, S.S.C.; Teo, S.L-M.; Vancso, Gyula J.

    2013-01-01

    A polyionic multilayer film was fabricated by layer-by-layer (LbL) sequential deposition followed by cross-linking under mild conditions on a substrate surface to inhibit marine fouling. A novel polyanion, featuring methyl ester groups for an easy cross-linking was used as a generic solution for

  17. Digging in to Link Analysis Researches in Iran and all around the World: a Review Article

    Directory of Open Access Journals (Sweden)

    Fatemeh Nooshinfard

    2011-10-01

    Full Text Available Increasing websites quantity, specially scientific websites, there were many researches with concern of link analysis using webometrics by librarian and other scholars in different academic majors around the world. The purpose of this article was link analysis of all link analysis related papers from the beginning to February 19th 2009. The research based on Weiner, Amick, and Lee searching model in 2008, this study included 96 refereed papers extracted from international databases like Springer, Proquest, Sage, Emerald, IEEE, Science Direct and national databases such as Magiran and SID. These papers were studied focusing on their different parts like authors, affiliated organizations, purpose, methods, tools, keywords, date of publishing, publication, indexing databases and their suggestions. Moreover, analyzing those papers and studying any related models were the other purposes of the current article. The findings have been categorized and analyses in ten different sections.

  18. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  19. Cross-link guided molecular modeling with ROSETTA.

    Directory of Open Access Journals (Sweden)

    Abdullah Kahraman

    Full Text Available Chemical cross-links identified by mass spectrometry generate distance restraints that reveal low-resolution structural information on proteins and protein complexes. The technology to reliably generate such data has become mature and robust enough to shift the focus to the question of how these distance restraints can be best integrated into molecular modeling calculations. Here, we introduce three workflows for incorporating distance restraints generated by chemical cross-linking and mass spectrometry into ROSETTA protocols for comparative and de novo modeling and protein-protein docking. We demonstrate that the cross-link validation and visualization software Xwalk facilitates successful cross-link data integration. Besides the protocols we introduce XLdb, a database of chemical cross-links from 14 different publications with 506 intra-protein and 62 inter-protein cross-links, where each cross-link can be mapped on an experimental structure from the Protein Data Bank. Finally, we demonstrate on a protein-protein docking reference data set the impact of virtual cross-links on protein docking calculations and show that an inter-protein cross-link can reduce on average the RMSD of a docking prediction by 5.0 Å. The methods and results presented here provide guidelines for the effective integration of chemical cross-link data in molecular modeling calculations and should advance the structural analysis of particularly large and transient protein complexes via hybrid structural biology methods.
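The distance restraints described above reduce, at their simplest, to a geometric test: the linked residues' C-alpha atoms must lie within the cross-linker's maximum span. The cutoff and coordinates below are illustrative (roughly the scale used for common lysine-reactive cross-linkers), not values from the paper:

```python
import math

MAX_SPAN = 30.0  # angstroms; assumed cutoff for a lysine-reactive linker

def satisfied(ca1, ca2, max_span=MAX_SPAN):
    # A cross-link restraint holds if the C-alpha--C-alpha distance
    # is within the linker's maximum span.
    d = math.dist(ca1, ca2)
    return d <= max_span, d

ok, d = satisfied((0.0, 0.0, 0.0), (10.0, 15.0, 20.0))
print(ok, round(d, 1))  # True 26.9
```

Tools like Xwalk additionally trace solvent-accessible paths rather than straight lines, which is why validated cross-links carry more information than this straight-line check suggests.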

  20. Update History of This Database - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us SKIP Stemcell Database Update History of This Database Date Update contents 2017/03/13 SKIP Stemcell Database English archive site is opened. 2013/03/29 SKIP Stemcell Database ( https://www.skip.med.keio.ac.jp/SKIPSearch/top?lang=en ) is opened. About This Database Database Description Download License Update History of This Database Site Policy | Contact Us Update History of This Database - SKIP Stemcell Database | LSDB Archive ...

  1. Implementation of SQLite database support in program gama-local

    Directory of Open Access Journals (Sweden)

    Vaclav Petras

    2012-03-01

    Full Text Available The program gama-local is a part of the GNU Gama project and allows adjustment of local geodetic networks. Before this project, the program gama-local supported only XML as an input. I designed and implemented support for the SQLite database, and thanks to this extension gama-local can read input data from the SQLite database. This article is focused on the specifics of the use of callback functions in C++ using the native SQLite C/C++ Application Programming Interface. The article provides a solution for the safe calling of callback functions written in C++. Callback functions are called from the C library, and the C library itself is used by a C++ program. The provided solution combines several programming techniques which are described in detail, so this article can serve as a cookbook even for beginner programmers. This project was accomplished within my bachelor thesis.
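The callback pattern the article deals with, a C library invoking a function supplied by the host program during query evaluation, can be shown in miniature with Python's `sqlite3` module (which plays the role of the C++ wrapper; the function name below is made up). This is an analogy to, not a reproduction of, the article's C++ trampoline:

```python
import sqlite3

con = sqlite3.connect(":memory:")

def half(x):
    # Called back by the SQLite engine for each row it evaluates.
    return x / 2.0

# Register the callback with the C library under the SQL name "half".
con.create_function("half", 1, half)

con.execute("CREATE TABLE obs (dist REAL)")
con.executemany("INSERT INTO obs VALUES (?)", [(10.0,), (7.0,)])
result = con.execute("SELECT half(dist) FROM obs ORDER BY dist DESC").fetchall()
print(result)  # [(5.0,), (3.5,)]
```

In the C++ case the article addresses, the extra difficulty is that the C library knows nothing about C++ exceptions or member functions, so the registered callback must be a plain C-compatible function that forwards to the C++ object.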

  2. The UKNG database: a simple audit tool for interventional neuroradiology

    International Nuclear Information System (INIS)

    Millar, J.S.; Burke, M.

    2007-01-01

    The UK Neurointerventional Group (UKNG) has developed a unified database for the purposes of recording, analysis and clinical audit of neuroangiography and neurointerventional procedures. It has been in use since January 2002. The database utilizes an Access platform (Microsoft) comprising separate but linked programs for data collection and analysis. The program that analyses aneurysm therapy has been designed to mirror the criteria used in the International Subarachnoid Aneurysm Trial (ISAT). Data entered into the main database immediately update the analysis program producing clinical outcome scores in the form of a report. Our local database (Wessex) now contains records on more than 1,750 patients including nearly 350 aneurysm coilings and a total of approximately 500 neurointerventional, vascular procedures. Every time a new piece of information is added to the main database the reporting database is automatically updated which allows 'real-time' audit and analysis of one's clinical practice. The clinical outcome scores for aneurysm treatment are presented in such a way that we can directly compare our results with the 'Clinical Standard' set by ISAT. This database provides a unique opportunity to monitor and review practice at national level. The UKNG wishes to share this database with the wider neurointerventional community and a copy of the software can be obtained free of charge from the authors. (orig.)

  3. The UKNG database: a simple audit tool for interventional neuroradiology

    Energy Technology Data Exchange (ETDEWEB)

    Millar, J.S.; Burke, M. [Southampton General Hospital, Departments of Neuroradiology and IT, Wessex Neurological Centre, Southampton (United Kingdom)

    2007-06-15

    The UK Neurointerventional Group (UKNG) has developed a unified database for the purposes of recording, analysis and clinical audit of neuroangiography and neurointerventional procedures. It has been in use since January 2002. The database utilizes an Access platform (Microsoft) comprising separate but linked programs for data collection and analysis. The program that analyses aneurysm therapy has been designed to mirror the criteria used in the International Subarachnoid Aneurysm Trial (ISAT). Data entered into the main database immediately update the analysis program producing clinical outcome scores in the form of a report. Our local database (Wessex) now contains records on more than 1,750 patients including nearly 350 aneurysm coilings and a total of approximately 500 neurointerventional, vascular procedures. Every time a new piece of information is added to the main database the reporting database is automatically updated which allows 'real-time' audit and analysis of one's clinical practice. The clinical outcome scores for aneurysm treatment are presented in such a way that we can directly compare our results with the 'Clinical Standard' set by ISAT. This database provides a unique opportunity to monitor and review practice at national level. The UKNG wishes to share this database with the wider neurointerventional community and a copy of the software can be obtained free of charge from the authors. (orig.)

  4. A Database for Climatic Conditions around Europe for Promoting GSHP Solutions

    Directory of Open Access Journals (Sweden)

    Michele De Carli

    2018-02-01

    Full Text Available Weather plays an important role in the energy use of buildings. For this reason, it is necessary to define the proper boundary conditions in terms of the different parameters affecting energy and comfort in buildings. They are also the basis for determining the ground temperature at different locations, as well as the potential for using geothermal energy. This paper presents a database of climates in Europe that has been used in a freeware tool developed as part of the H2020 research project named “Cheap-GSHPs”. The standard Köppen-Geiger climate classification has been matched with the weather data provided by the ENERGYPLUS and METEONORM software databases. The Test Reference Years of more than 300 locations have been considered. These locations have been labelled according to the degree-days for heating and cooling, as well as by the Köppen-Geiger scale. A comprehensive data set of weather conditions in Europe has been created and used as input for a GSHP sizing software, helping the user select the weather conditions closest to the location of interest. The proposed method is based on lapse rates and has been tested at two locations in Switzerland and Ireland. It has proved quite valid for the project purposes, considering the spatial distribution and density of available data and the lower computing load, in particular for locations where altitude is the main factor controlling temperature variations.
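Two of the quantities the database relies on, degree-days and lapse-rate temperature corrections, are simple to compute. The sketch below uses an 18 °C heating base and the 6.5 K/km standard-atmosphere lapse rate as assumed defaults; the paper does not state which values the tool uses:

```python
BASE = 18.0    # degrees C, assumed heating base temperature
LAPSE = 0.0065 # K per metre, standard-atmosphere lapse rate (assumed)

def heating_degree_days(daily_means, base=BASE):
    # Sum of daily shortfalls below the base temperature.
    return sum(max(0.0, base - t) for t in daily_means)

def adjust_for_altitude(t_station, alt_station, alt_site, lapse=LAPSE):
    # Shift a station temperature to a site at a different altitude:
    # higher sites are cooler by the lapse rate times the height difference.
    return t_station - lapse * (alt_site - alt_station)

print(heating_degree_days([5.0, 12.0, 20.0]))            # 19.0
print(round(adjust_for_altitude(10.0, 200.0, 1200.0), 2))  # 3.5
```

This is the kind of correction that lets a database of some 300 reference locations serve arbitrary sites, provided altitude dominates the local temperature variation.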

  5. Network worlds : from link analysis to virtual places.

    Energy Technology Data Exchange (ETDEWEB)

    Joslyn, C. (Cliff)

    2002-01-01

    Significant progress is being made in knowledge systems through recent advances in the science of very large networks. Attention is now turning in many quarters to the potential impact on counter-terrorism methods. After reviewing some of these advances, we will discuss the difference between such 'network analytic' approaches, which focus on large, homogeneous graph structures, and what we are calling 'link analytic' approaches, which focus on somewhat smaller graphs with heterogeneous link types. We use this venue to begin the process of rigorously defining link analysis methods, especially the concept of chaining of views of multidimensional databases. We conclude with some speculation on potential connections to virtual world architectures.

  6. Enzyme-linked immunosorbent assay gliadin assessment in processed food products available for persons with celiac disease: a feasibility study for developing a gluten-free food database.

    Science.gov (United States)

    Agakidis, Charalampos; Karagiozoglou-Lampoudi, Thomais; Kalaitsidou, Marina; Papadopoulos, Theodoros; Savvidou, Afroditi; Daskalou, Efstratia; Dimitrios, Triantafyllou

    2011-12-01

    Inappropriate food labeling and unwillingness of food companies to officially register their own gluten-free products in the Greek National Food Intolerance Database (NFID) result in a limited range of processed food products available for persons with celiac disease (CDP). The objective of the study was to evaluate the feasibility of developing a gluten-free food product database based on the assessment of the gluten content in processed foods available for CDP. Gluten was assessed in 41 processed food products available for CDP. Group A consisted of 26 products for CDP included in the NFID, and group B contained 15 food products for CDP not registered in the NFID but listed in the safe lists of the local Celiac Association (CA). High-sensitivity ω-gliadin enzyme-linked immunosorbent assay (ELISA) was used for analysis. Gluten was lower than 20 ppm in 37 of 41 analyzed products (90.2%): in 24 of 26 (92.3%) products in group A and in 13 of 15 (86.7%) products in group B (P = .61). No significant difference was found between the 2 groups regarding gluten content. No product in either group contained gluten in excess of 100 ppm. Most of the analyzed products included in the Greek NFID or listed in the lists of the local CA, even those not officially labeled "gluten free," can be safely consumed by CDP. The use of commercially available ω-gliadin ELISA is able to identify those products that contain inappropriate levels of gluten, making it feasible to develop an integrated gluten-free processed food database.
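The thresholds discussed above (20 ppm and 100 ppm) amount to a simple classification rule when building such a database. A sketch, using the Codex-style cutoffs; the category labels and sample values are illustrative, not from the study:

```python
def classify(gluten_ppm):
    # Below 20 ppm: may be labeled gluten-free (the study's main cutoff).
    if gluten_ppm < 20:
        return "gluten-free"
    # 20-100 ppm: above the gluten-free limit but below 100 ppm,
    # the upper bound no product in the study exceeded.
    if gluten_ppm <= 100:
        return "low gluten"
    return "not suitable"

samples = {"crackers A": 4.0, "flour mix B": 35.0, "bread C": 250.0}
print({name: classify(ppm) for name, ppm in samples.items()})
# {'crackers A': 'gluten-free', 'flour mix B': 'low gluten', 'bread C': 'not suitable'}
```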

  7. Data Model and Relational Database Design for Highway Runoff Water-Quality Metadata

    Science.gov (United States)

    Granato, Gregory E.; Tessler, Steven

    2001-01-01

    A National highway and urban runoff water-quality metadatabase was developed by the U.S. Geological Survey in cooperation with the Federal Highway Administration as part of the National Highway Runoff Water-Quality Data and Methodology Synthesis (NDAMS). The database was designed to catalog available literature and to document results of the synthesis in a format that would facilitate current and future research on highway and urban runoff. This report documents the design and implementation of the NDAMS relational database, which was designed to provide a catalog of available information and the results of an assessment of the available data. All the citations and the metadata collected during the review process are presented in a stratified metadatabase that contains citations for relevant publications, abstracts (or previa), and report-review metadata for a sample of selected reports that document results of runoff quality investigations. The database is referred to as a metadatabase because it contains information about available data sets rather than a record of the original data. The database contains the metadata needed to evaluate and characterize how valid, current, complete, comparable, and technically defensible published and available information may be when evaluated for application to the different data-quality objectives as defined by decision makers. This database is a relational database, in that all information is ultimately linked to a given citation in the catalog of available reports. The main database file contains 86 tables consisting of 29 data tables, 11 association tables, and 46 domain tables. The data tables all link to a particular citation, and each data table is focused on one aspect of the information collected in the literature search and the evaluation of available information. This database is implemented in the Microsoft (MS) Access database software because it is widely used within and outside of government and is familiar to many
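    The citation-centric design described above (every data table ultimately linked back to a citation in the catalog) can be sketched as a tiny relational schema. The table and column names below are invented for illustration and are not those of the actual NDAMS database:

```python
import sqlite3

# Minimal sketch of a citation-centric metadatabase: every data table
# carries a foreign key back to the citation catalog, mirroring the
# NDAMS design in spirit (names here are illustrative only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE citation (
    citation_id INTEGER PRIMARY KEY,
    title TEXT,
    year INTEGER
);
CREATE TABLE report_review (
    review_id INTEGER PRIMARY KEY,
    citation_id INTEGER REFERENCES citation(citation_id),
    data_quality_note TEXT
);
""")
conn.execute("INSERT INTO citation VALUES (1, 'Example runoff study', 1998)")
conn.execute("INSERT INTO report_review VALUES (1, 1, 'comparable methods')")

# Any review row can be traced back to its source citation.
rows = conn.execute("""
    SELECT c.title, r.data_quality_note
    FROM report_review r JOIN citation c USING (citation_id)
""").fetchall()
```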

  8. Hydroxyl radical induced cross-linking of cytosine and tyrosine in nucleohistone

    International Nuclear Information System (INIS)

    Gajewski, E.; Dizdaroglu, M.

    1990-01-01

    Hydroxyl radical induced formation of a DNA-protein cross-link involving cytosine and tyrosine in nucleohistone in buffered aqueous solution is reported. The technique of gas chromatography-mass spectrometry was used for this investigation. A γ-irradiated aqueous mixture of cytosine and tyrosine was first investigated in order to obtain gas chromatographic-mass spectrometric properties of possible cytosine-tyrosine cross-links. One cross-link was observed, and its structure was identified as the product from the formation of a covalent bond between carbon 6 of cytosine and carbon 3 of tyrosine. With the use of gas chromatography-mass spectrometry with selected-ion monitoring, this cytosine-tyrosine cross-link was identified in acidic hydrolysates of calf thymus nucleohistone γ-irradiated in N2O-saturated aqueous solution. The yield of this DNA-protein cross-link in nucleohistone was found to be a linear function of the radiation dose in the range of 100-500 Gy (J·kg⁻¹). This yield amounted to 0.05 nmol·J⁻¹. Mechanisms underlying the formation of the cytosine-tyrosine cross-link in nucleohistone were proposed to involve radical-radical and/or radical addition reactions of hydroxyl adduct radicals of cytosine and tyrosine moieties, forming a covalent bond between carbon 6 of cytosine and carbon 3 of tyrosine. When oxygen was present in irradiated solutions, no cytosine-tyrosine cross-links were observed
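    Since 1 Gy is 1 J absorbed per kg, the reported radiation-chemical yield of 0.05 nmol·J⁻¹ translates directly into a cross-link amount per kg of solution that is linear in dose. A small worked sketch of that arithmetic:

```python
# Radiation-chemical yield reported for the cytosine-tyrosine
# cross-link: 0.05 nmol per joule of absorbed energy.
G_CROSSLINK = 0.05  # nmol / J

def crosslink_yield(dose_gy: float) -> float:
    """Cross-link amount in nmol per kg of irradiated solution.

    1 Gy = 1 J/kg, and the reported yield is linear over 100-500 Gy.
    """
    return G_CROSSLINK * dose_gy

# At 100, 300 and 500 Gy: 5, 15 and 25 nmol/kg respectively.
yields = {d: crosslink_yield(d) for d in (100, 300, 500)}
```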

  9. HCVpro: Hepatitis C virus protein interaction database

    KAUST Repository

    Kwofie, Samuel K.

    2011-12-01

    It is essential to catalog characterized hepatitis C virus (HCV) protein-protein interaction (PPI) data and the associated plethora of vital functional information to augment the search for therapies, vaccines and diagnostic biomarkers. In furtherance of these goals, we have developed the hepatitis C virus protein interaction database (HCVpro) by integrating manually verified hepatitis C virus-virus and virus-human protein interactions curated from literature and databases. HCVpro is a comprehensive and integrated HCV-specific knowledgebase housing consolidated information on PPIs, functional genomics and molecular data obtained from a variety of virus databases (VirHostNet, VirusMint, HCVdb and euHCVdb), and from BIND and other relevant biology repositories. HCVpro is further populated with information on hepatocellular carcinoma (HCC) related genes that are mapped onto their encoded cellular proteins. Incorporated proteins have been mapped onto Gene Ontologies, canonical pathways, Online Mendelian Inheritance in Man (OMIM) and extensively cross-referenced to other essential annotations. The database is enriched with exhaustive reviews on structure and functions of HCV proteins, current state of drug and vaccine development and links to recommended journal articles. Users can query the database using specific protein identifiers (IDs), chromosomal locations of a gene, interaction detection methods, indexed PubMed sources as well as HCVpro, BIND and VirusMint IDs. The use of HCVpro is free and the resource can be accessed via http://apps.sanbi.ac.za/hcvpro/ or http://cbrc.kaust.edu.sa/hcvpro/. © 2011 Elsevier B.V.

  10. ODG: Omics database generator - a tool for generating, querying, and analyzing multi-omics comparative databases to facilitate biological understanding.

    Science.gov (United States)

    Guhlin, Joseph; Silverstein, Kevin A T; Zhou, Peng; Tiffin, Peter; Young, Nevin D

    2017-08-10

    Rapid generation of omics data in recent years has resulted in vast amounts of disconnected datasets without systemic integration and knowledge building, while individual groups have made customized, annotated datasets available on the web with few ways to link them to in-lab datasets. With so many research groups generating their own data, the ability to relate it to the larger genomic and comparative genomic context is becoming increasingly crucial to make full use of the data. The Omics Database Generator (ODG) allows users to create customized databases that utilize published genomics data integrated with experimental data which can be queried using a flexible graph database. When provided with omics and experimental data, ODG will create a comparative, multi-dimensional graph database. ODG can import definitions and annotations from other sources such as InterProScan, the Gene Ontology, ENZYME, UniPathway, and others. This annotation data can be especially useful for studying new or understudied species for which transcripts have only been predicted, and rapidly give additional layers of annotation to predicted genes. In better studied species, ODG can perform syntenic annotation translations or rapidly identify characteristics of a set of genes or nucleotide locations, such as hits from an association study. ODG provides a web-based user-interface for configuring the data import and for querying the database. Queries can also be run from the command-line and the database can be queried directly through programming language hooks available for most languages. ODG supports most common genomic formats as well as a generic, easy-to-use tab-separated value format for user-provided annotations. ODG is a user-friendly database generation and query tool that adapts to the supplied data to produce a comparative genomic database or multi-layered annotation database. 
ODG provides rapid comparative genomic annotation and is therefore particularly useful for non-model or
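    The kind of graph traversal such a tool performs, e.g. carrying annotation across an ortholog edge for an understudied gene, can be illustrated with plain dictionaries. This is a toy model of the idea, not ODG's actual API or data model:

```python
# Toy multi-omics graph: nodes are identifiers, edges are typed links
# (gene -> GO term, gene -> ortholog). Purely illustrative.
edges = {
    ("geneA", "annotated_with"): ["GO:0008150"],
    ("geneA", "ortholog_of"): ["geneB"],
    ("geneB", "annotated_with"): ["GO:0003674"],
}

def neighbors(node, relation):
    return edges.get((node, relation), [])

def annotation_with_orthologs(gene):
    """Collect a gene's own annotations plus those of its orthologs,
    sketching the 'annotation translation' idea described above."""
    terms = list(neighbors(gene, "annotated_with"))
    for ortho in neighbors(gene, "ortholog_of"):
        terms.extend(neighbors(ortho, "annotated_with"))
    return terms
```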

  11. Business Intelligence Integrated Solutions

    Directory of Open Access Journals (Sweden)

    Cristescu Marian Pompiliu

    2017-12-01

    Full Text Available A Business Intelligence solution concerns simple, real-time access to complete information about the business, shown in a relevant format of the report, graphic or dashboard type, in order to help the taking of strategic decisions regarding the direction in which the company goes. Business Intelligence does not produce data, but uses the data produced by the company’s applications. BI solutions extract their data from ERP (Enterprise Resource Planning), CRM (Customer Relationship Management), HCM (Human Capital Management), Retail, eCommerce or other databases used in the company.
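    The pattern described, a BI layer that produces no data of its own but aggregates records pulled from operational systems, can be sketched in a few lines. The source names and fields below are hypothetical:

```python
from collections import defaultdict

# Records pulled from two operational systems (stubbed in memory here;
# in practice these would come from ERP/CRM database queries).
erp_orders = [{"region": "north", "amount": 1200.0},
              {"region": "south", "amount": 800.0}]
crm_orders = [{"region": "north", "amount": 300.0}]

def sales_by_region(*sources):
    """Aggregate sales across all source systems for a dashboard view."""
    totals = defaultdict(float)
    for source in sources:
        for record in source:
            totals[record["region"]] += record["amount"]
    return dict(totals)

dashboard = sales_by_region(erp_orders, crm_orders)
```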

  12. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with creation of database design for a standard kindergarten, installation of the designed database into the database system Oracle Database 10g Express Edition and demonstration of the administration tasks in this database system. The database design was verified by means of an access application developed for the purpose.

  13. A Generic Data Harmonization Process for Cross-linked Research and Network Interaction. Construction and Application for the Lung Cancer Phenotype Database of the German Center for Lung Research.

    Science.gov (United States)

    Firnkorn, D; Ganzinger, M; Muley, T; Thomas, M; Knaup, P

    2015-01-01

    Joint data analysis is a key requirement in medical research networks. Data are available in heterogeneous formats at each network partner and their harmonization is often rather complex. The objective of our paper is to provide a generic approach for the harmonization process in research networks. We applied the process when harmonizing data from three sites for the Lung Cancer Phenotype Database within the German Center for Lung Research. We developed a spreadsheet-based solution as a tool to support the harmonization process for lung cancer data and a data integration procedure based on Talend Open Studio. The harmonization process consists of eight steps describing a systematic approach for defining and reviewing source data elements and standardizing common data elements. The steps for defining common data elements and harmonizing them with local data definitions are repeated until consensus is reached. Application of this process for building the phenotype database led to a common basic data set on lung cancer with 285 structured parameters. The Lung Cancer Phenotype Database was realized as an i2b2 research data warehouse. Data harmonization is a challenging task requiring informatics skills as well as domain knowledge. Our approach facilitates data harmonization by providing guidance through a uniform process that can be applied in a wide range of projects.
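    The core harmonization step, mapping each site's local field names and value codes onto an agreed common data element, can be sketched as a lookup-driven transform. The element, field names and codes below are invented examples, not those of the lung cancer data set:

```python
# Agreed common data element and per-site mappings (hypothetical).
COMMON_ELEMENT = "smoking_status"
SITE_MAPPINGS = {
    "site_a": {"field": "smoker", "values": {"y": "current", "n": "never"}},
    "site_b": {"field": "tobacco_use", "values": {"1": "current", "0": "never"}},
}

def harmonize(site: str, record: dict) -> dict:
    """Translate one local record into the common data element."""
    mapping = SITE_MAPPINGS[site]
    raw = record[mapping["field"]]
    return {COMMON_ELEMENT: mapping["values"][raw]}

row_a = harmonize("site_a", {"smoker": "y"})
row_b = harmonize("site_b", {"tobacco_use": "0"})
```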

  14. Relational Database for the Geology of the Northern Rocky Mountains - Idaho, Montana, and Washington

    Science.gov (United States)

    Causey, J. Douglas; Zientek, Michael L.; Bookstrom, Arthur A.; Frost, Thomas P.; Evans, Karl V.; Wilson, Anna B.; Van Gosen, Bradley S.; Boleneus, David E.; Pitts, Rebecca A.

    2008-01-01

    A relational database was created to prepare and organize geologic map-unit and lithologic descriptions for input into a spatial database for the geology of the northern Rocky Mountains, a compilation of forty-three geologic maps for parts of Idaho, Montana, and Washington in U.S. Geological Survey Open File Report 2005-1235. Not all of the information was transferred to and incorporated in the spatial database due to physical file limitations. This report releases that part of the relational database that was completed for that earlier product. In addition to descriptive geologic information for the northern Rocky Mountains region, the relational database contains a substantial bibliography of geologic literature for the area. The relational database nrgeo.mdb (linked below) is available in Microsoft Access version 2000, a proprietary database program. The relational database contains data tables and other tables used to define terms, relationships between the data tables, and hierarchical relationships in the data; forms used to enter data; and queries used to extract data.

  15. Assessment of respiratory disorders in relation to solution gas flaring activities in Alberta

    International Nuclear Information System (INIS)

    1998-02-01

    A study was conducted by Alberta Health to address the issue of whether or not flaring of solution gas has a negative impact on human health. The Flaring Working Group of the Clean Air Strategic Alliance initiated this study which focused on the assessment of the relationship between human health disorders (such as asthma, bronchitis, pneumonia and upper respiratory infections) and solution gas flaring activities in rural, urban and aboriginal populations. The personal exposure to flaring emissions was estimated by physical proximity to the source of emissions. A small area was studied in which geographical variations in human health disorders were compared to geographical variations of socioeconomic and environmental factors. Data was gathered from 1989 to 1996 to evaluate long term average conditions and changes over the time period investigated. Notwithstanding physicians' claims of increased rates of respiratory infections and hospitalization attributed to solution gas flaring, the study found no evidence linking respiratory infections and solution gas flaring. This was the conclusion regardless of the measure of health outcomes, the rural-urban status, ethnicity, or age. Nevertheless, the study recommended identification of bio-markers of exposure and effect reflective of the compounds of interest, and the development of a responsive and comprehensive geographic information database that would allow data linkage at all geographic levels for different periods of time. refs., 10 tabs., 15 figs., 1 appendix
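    The exposure proxy used in the study, physical proximity to the emission source, amounts to a distance computation plus a banding rule. A hedged sketch: the great-circle formula is standard, but the band cutoffs are invented for illustration and are not the study's:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def exposure_band(dist_km):
    """Classify proximity to a flaring site (cutoffs are illustrative)."""
    if dist_km < 2:
        return "high"
    elif dist_km < 10:
        return "medium"
    return "low"
```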

  16. RTDB: A memory resident real-time object database

    International Nuclear Information System (INIS)

    Nogiec, Jerzy M.; Desavouret, Eugene

    2003-01-01

    RTDB is a fast, memory-resident object database with built-in support for distribution. It constitutes an attractive alternative for architecting real-time solutions with multiple, possibly distributed, processes or agents sharing data. RTDB offers both direct and navigational access to stored objects, with local and remote random access by object identifiers, and immediate direct access via object indices. The database supports transparent access to objects stored in multiple collaborating dispersed databases and includes a built-in cache mechanism that allows for keeping local copies of remote objects, with specifiable invalidation deadlines. Additional features of RTDB include a trigger mechanism on objects that allows for issuing events or activating handlers when objects are accessed or modified and a very fast, attribute based search/query mechanism. The overall architecture and application of RTDB in a control and monitoring system are presented
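    Two of the features described, a local cache of remote objects with an invalidation deadline, and a trigger fired on access, can be sketched together in a small class. This is an illustrative re-implementation of the ideas, not RTDB's actual API:

```python
import time

class CachedStore:
    """Toy memory-resident object store with TTL-based invalidation
    and on-access triggers, sketching the RTDB features above."""

    def __init__(self):
        self._cache = {}       # oid -> (value, expires_at)
        self._triggers = {}    # oid -> callback

    def put(self, oid, value, ttl_s=60.0):
        self._cache[oid] = (value, time.monotonic() + ttl_s)

    def on_access(self, oid, callback):
        self._triggers[oid] = callback

    def get(self, oid):
        value, expires_at = self._cache[oid]
        if time.monotonic() > expires_at:
            raise KeyError(f"{oid}: cached copy past invalidation deadline")
        if oid in self._triggers:
            self._triggers[oid](oid)  # fire access trigger
        return value

store = CachedStore()
seen = []
store.put("magnet/1", {"current_a": 95.0})
store.on_access("magnet/1", seen.append)
obj = store.get("magnet/1")
```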

  17. ArcGIS 9.3 ed i database spaziali: gli scenari di utilizzo

    Directory of Open Access Journals (Sweden)

    Francesco Bartoli

    2009-03-01

    Full Text Available ArcGIS 9.3 and spatial databases: usage scenarios. The latest news from ESRI suggests that it will soon be possible to link to the PostgreSQL database. This brings the PostGIS geometry model together with SDO_GEOMETRY of the Oracle database, a hierarchical and spatial design database. This has a direct impact on small and medium-sized enterprises and on the business models of local governments. ArcSDE would be replaced by ZigGIS 2.0, providing greater offerings to the GIS community. Harnessing this system will take advantage of human resources to aid in the design of potent conceptual data models. Further funds are still required to promote the product under a prominent license.

  18. Linking the Congenital Heart Surgery Databases of the Society of Thoracic Surgeons and the Congenital Heart Surgeons’ Society: Part 1—Rationale and Methodology

    Science.gov (United States)

    Jacobs, Jeffrey P.; Pasquali, Sara K.; Austin, Erle; Gaynor, J. William; Backer, Carl; Hirsch-Romano, Jennifer C.; Williams, William G.; Caldarone, Christopher A.; McCrindle, Brian W.; Graham, Karen E.; Dokholyan, Rachel S.; Shook, Gregory J.; Poteat, Jennifer; Baxi, Maulik V.; Karamlou, Tara; Blackstone, Eugene H.; Mavroudis, Constantine; Mayer, John E.; Jonas, Richard A.; Jacobs, Marshall L.

    2014-01-01

    Purpose The Society of Thoracic Surgeons Congenital Heart Surgery Database (STS-CHSD) is the largest Registry in the world of patients who have undergone congenital and pediatric cardiac surgical operations. The Congenital Heart Surgeons’ Society Database (CHSS-D) is an Academic Database designed for specialized detailed analyses of specific congenital cardiac malformations and related treatment strategies. The goal of this project was to create a link between the STS-CHSD and the CHSS-D in order to facilitate studies not possible using either individual database alone and to help identify patients who are potentially eligible for enrollment in CHSS studies. Methods Centers were classified on the basis of participation in the STS-CHSD, the CHSS-D, or both. Five matrices, based on CHSS inclusionary criteria and STS-CHSD codes, were created to facilitate the automated identification of patients in the STS-CHSD who meet eligibility criteria for the five active CHSS studies. The matrices were evaluated with a manual adjudication process and were iteratively refined. The sensitivity and specificity of the original matrices and the refined matrices were assessed. Results In January 2012, a total of 100 centers participated in the STS-CHSD and 74 centers participated in the CHSS. A total of 70 centers participate in both and 40 of these 70 agreed to participate in this linkage project. The manual adjudication process and the refinement of the matrices resulted in an increase in the sensitivity of the matrices from 93% to 100% and an increase in the specificity of the matrices from 94% to 98%. Conclusion Matrices were created to facilitate the automated identification of patients potentially eligible for the five active CHSS studies using the STS-CHSD. These matrices have a sensitivity of 100% and a specificity of 98%. In addition to facilitating identification of patients potentially eligible for enrollment in CHSS studies, these matrices will allow (1) estimation of
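    Sensitivity and specificity, the measures used to evaluate the matrices against manual adjudication, are simple ratios over the adjudicated counts. A sketch with invented counts chosen to reproduce the refined matrices' 100% / 98%:

```python
def sens_spec(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical adjudicated counts (not the study's actual numbers),
# consistent with the reported 100% sensitivity and 98% specificity.
sens, spec = sens_spec(tp=50, fn=0, tn=98, fp=2)
```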

  19. Free-radical-induced chain scission and cross-linking of polymers in aqueous solution. An overview

    International Nuclear Information System (INIS)

    Von Sonntag, C.

    2002-01-01

    Complete text of publication follows. In the radiolysis of N2O-saturated aqueous solutions, OH radicals are generated. In their reactions with polymers, they give rise to polymer-derived radicals. The kinetics of the formation and decay of these radicals are reviewed. The rate of reaction of a polymer with a reactive free radical is noticeably lower than that of an equivalent concentration of monomer due to the non-random distribution of the reaction sites. Once a larger number of radicals are formed on one polymer molecule, e.g. upon pulse radiolysis, close-by radicals recombine more rapidly while the more distant ones survive for much longer times than an equivalent concentration of freely diffusing radicals. Intermolecular cross-linking (between two polymer chains, increase in molecular weight) and intramolecular cross-linking (formation of small loops, no increase in polymer weight) are competing processes, and their relative yields thus depend on the dose rate and polymer concentration. Hydrogen-transfer reactions within the polymer, e.g. transformation of a secondary radical into a tertiary one, are common and facilitated by the high local density of reactive sites. Due to repulsive forces, the lifetime of radicals of charged polymers is substantially increased. This enables even relatively slow β-fragmentation reactions to become of importance. In the case of poly(methacrylic acid), where β-fragmentation is comparatively fast, this even leads to an unzipping, and as a consequence of the efficient release of methacrylic acid the situation of equilibrium polymerization is approached. Heterolytic β-fragmentation is possible when adequate leaving groups are available, e.g. in polynucleotides and DNA. In the presence of O2, chain scission occurs via oxyl radicals as intermediates. Some implications for technical applications are discussed

  20. The SIB Swiss Institute of Bioinformatics' resources: focus on curated databases

    OpenAIRE

    Bultet, Lisandra Aguilar; Aguilar Rodriguez, Jose; Ahrens, Christian H; Ahrne, Erik Lennart; Ai, Ni; Aimo, Lucila; Akalin, Altuna; Aleksiev, Tyanko; Alocci, Davide; Altenhoff, Adrian; Alves, Isabel; Ambrosini, Giovanna; Pedone, Pascale Anderle; Angelina, Paolo; Anisimova, Maria

    2016-01-01

    The SIB Swiss Institute of Bioinformatics (www.isb-sib.ch) provides world-class bioinformatics databases, software tools, services and training to the international life science community in academia and industry. These solutions allow life scientists to turn the exponentially growing amount of data into knowledge. Here, we provide an overview of SIB's resources and competence areas, with a strong focus on curated databases and SIB's most popular and widely used resources. In particular, SIB'...

  1. SoyDB: a knowledge database of soybean transcription factors

    Directory of Open Access Journals (Sweden)

    Valliyodan Babu

    2010-01-01

    Full Text Available Abstract Background Transcription factors play the crucial role of regulating gene expression and influence almost all biological processes. Systematically identifying and annotating transcription factors can greatly aid further understanding their functions and mechanisms. In this article, we present SoyDB, a user-friendly database containing comprehensive knowledge of soybean transcription factors. Description The soybean genome was recently sequenced by the Department of Energy-Joint Genome Institute (DOE-JGI) and is publicly available. Mining of this sequence identified 5,671 soybean genes as putative transcription factors. These genes were comprehensively annotated as an aid to the soybean research community. We developed SoyDB - a knowledge database for all the transcription factors in the soybean genome. The database contains protein sequences, predicted tertiary structures, putative DNA binding sites, domains, homologous templates in the Protein Data Bank (PDB), protein family classifications, multiple sequence alignments, consensus protein sequence motifs, web logo of each family, and web links to the soybean transcription factor database PlantTFDB, known EST sequences, and other general protein databases including Swiss-Prot, Gene Ontology, KEGG, EMBL, TAIR, InterPro, SMART, PROSITE, NCBI, and Pfam. The database can be accessed via an interactive and convenient web server, which supports full-text search, PSI-BLAST sequence search, database browsing by protein family, and automatic classification of a new protein sequence into one of 64 annotated transcription factor families by hidden Markov models. Conclusions A comprehensive soybean transcription factor database was constructed and made publicly accessible at http://casp.rnet.missouri.edu/soydb/.

  2. Continuous country-wide rainfall observation using a large network of commercial microwave links: Challenges, solutions and applications

    Science.gov (United States)

    Chwala, Christian; Boose, Yvonne; Smiatek, Gerhard; Kunstmann, Harald

    2017-04-01

    Commercial microwave link (CML) networks have proven to be a valuable source for rainfall information over the last years. However, up to now, analysis of CML data was always limited to certain snapshots of data for historic periods due to limited data access. With the real-time availability of CML data in Germany (Chwala et al. 2016) this situation has improved significantly. We are continuously acquiring and processing data from 3000 CMLs in Germany in near real-time with one minute temporal resolution. Currently the data acquisition system is being extended to 10000 CMLs so that the whole of Germany is covered and a continuous country-wide rainfall product can be provided. In this contribution we will elaborate on the challenges and solutions regarding data acquisition, data management and robust processing. We will present the details of our data acquisition system that we run operationally at the network of the CML operator Ericsson Germany to solve the problem of limited data availability. Furthermore we will explain the implementation of our database, its web-frontend for easy data access and present our data processing algorithms. Finally we will showcase an application of our data in hydrological modeling and its potential usage to improve radar QPE. Bibliography: Chwala, C., Keis, F., and Kunstmann, H.: Real-time data acquisition of commercial microwave link networks for hydrometeorological applications, Atmos. Meas. Tech., 9, 991-999, doi:10.5194/amt-9-991-2016, 2016
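    The physical basis of CML rainfall retrieval is the near power-law relation between specific rain attenuation k (dB/km) and rain rate R (mm/h), k = a·R^b. A minimal inversion sketch; the coefficients below are rough illustrative values for a link around 23 GHz, not those used by the authors:

```python
def rain_rate(attenuation_db: float, length_km: float,
              a: float = 0.12, b: float = 1.05) -> float:
    """Invert k = a * R**b for a path-averaged rain rate in mm/h.

    attenuation_db: rain-induced attenuation over the whole path (dB).
    length_km: path length of the microwave link (km).
    a, b: power-law coefficients (frequency/polarization dependent;
          the defaults here are illustrative assumptions).
    """
    k = attenuation_db / length_km       # specific attenuation, dB/km
    return (k / a) ** (1.0 / b)

# Example: 3 dB of rain-induced attenuation on a 5 km link.
r = rain_rate(3.0, 5.0)
```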

  3. A59 waste repackaging database (AWARD)

    International Nuclear Information System (INIS)

    Keel, A.

    1993-06-01

    This paper sets out the requirements for AWARD (the A59 Waste Repackaging Database), a computer-based system to record LLW sorting and repackaging information from the North Cave Line in A59. A solution will be developed on the basis of this document. AWARD will record and store details entered from waste sorting and LLW repackaging operations. This document will be used as the basis of the development of the host computer system. (Author)

  4. Database Security for an Integrated Solution to Automate Sales Processes in Banking

    OpenAIRE

    Alexandra Maria Ioana FLOREA

    2013-01-01

    In order to maintain a competitive edge in a very active banking market, the implementation of a web-based solution to standardize, optimize and manage the flow of sales / pre-sales and generate new leads is requested by a company. This article presents the realization of a development framework for software interoperability in banking financial institutions and an integrated solution for achieving sales process automation in banking. The paper focuses on presenting the requirements for ...

  5. Real time data acquisition of a countrywide commercial microwave link network

    Science.gov (United States)

    Chwala, Christian; Keis, Felix; Kunstmann, Harald

    2015-04-01

    Research in recent years has shown that data from commercial microwave link networks can provide very valuable precipitation information. Since these networks comprise the backbone of the cell phone network, they provide countrywide coverage. However acquiring the necessary data from the network operators is still difficult. Data is usually made available for researchers with a large time delay and often on an irregular basis. This of course hinders the exploitation of commercial microwave link data in operational applications like QPE forecasts running at national meteorological services. To overcome this, we have developed custom software in joint cooperation with our industry partner Ericsson. The software is installed on a dedicated server at Ericsson and is capable of acquiring data from the countrywide microwave link network in Germany. In its current first operational testing phase, data from several hundred microwave links in southern Germany is recorded. All data is instantaneously sent to our server where it is stored and organized in an emerging database. Time resolution for the Ericsson data is one minute. The custom acquisition software, however, is capable of processing higher sampling rates. Additionally we acquire and manage 1 Hz data from four microwave links operated by the skiing resort in Garmisch-Partenkirchen. We will present the concept of the data acquisition and show details of the custom-built software. Additionally we will showcase the accessibility and basic processing of real time microwave link data via our database web frontend.

  6. Update History of This Database - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update History of This Database. 2010/03/29: Yeast Interacting Proteins Database English archive site is opened. 2000/12/4: Yeast Interacting Proteins Database ( http://itolab.cb.k.u-tokyo.ac.jp/Y2H/ ) is released.

  7. Technical evaluation of methods for identifying chemotherapy-induced febrile neutropenia in healthcare claims databases

    OpenAIRE

    Weycker Derek; Sofrygin Oleg; Seefeld Kim; Deeter Robert G; Legg Jason; Edelsberg John

    2013-01-01

    Abstract Background Healthcare claims databases have been used in several studies to characterize the risk and burden of chemotherapy-induced febrile neutropenia (FN) and effectiveness of colony-stimulating factors against FN. The accuracy of methods previously used to identify FN in such databases has not been formally evaluated. Methods Data comprised linked electronic medical records from Geisinger Health System and healthcare claims data from Geisinger Health Plan. Subjects were classifie...

  8. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information of database: Database name: RMOS. Contact: Shoshi Kikuchi (…arch Unit), E-mail: . Database classification: Plant databases - Rice Microarray Data and other Gene Expression Databases. Organism: Oryza sativa (Taxonomy ID: 4530). Database description: The Ric… Referenced databases: Rice Expression Database (RED), Rice full-length cDNA Database (KOME), Rice Genome Integrated Map Database (INE), Rice Mutant Panel Database (Tos17), Rice Genome Annotation Database

  9. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Kwon, Young Min; Lee, Young Bum; Chang, Won Pyo; Hahn, Do Hee

    2001-10-01

    KALIMER Database is an advanced database for integrated management of Liquid Metal Reactor design technology development using web applications. The KALIMER design database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation system, and Reserved Documents. The Results Database holds research results from phase II of the mid-term and long-term nuclear R&D program for Liquid Metal Reactor design technology development. IOC is a linkage control system between subprojects, used to share and integrate research results for KALIMER. The 3D CAD database provides a schematic design overview of KALIMER. The Team Cooperation system informs team members about research cooperation and meetings. Finally, KALIMER Reserved Documents was developed to manage collected data and other documents since project accomplishment. This report describes the features of the hardware and software and the database design methodology for KALIMER

  10. Infant feeding practices within a large electronic medical record database.

    Science.gov (United States)

    Bartsch, Emily; Park, Alison L; Young, Jacqueline; Ray, Joel G; Tu, Karen

    2018-01-02

    The emerging adoption of the electronic medical record (EMR) in primary care enables clinicians and researchers to efficiently examine epidemiological trends in child health, including infant feeding practices. We completed a population-based retrospective cohort study of 8815 singleton infants born at term in Ontario, Canada, April 2002 to March 2013. Newborn records were linked to the Electronic Medical Record Administrative data Linked Database (EMRALD™), which uses patient-level information from participating family practice EMRs across Ontario. We assessed exclusive breastfeeding patterns using an automated electronic search algorithm, with manual review of EMRs when the latter was not possible. We examined the rate of breastfeeding at visits corresponding to 2, 4 and 6 months of age, as well as sociodemographic factors associated with exclusive breastfeeding. Of the 8815 newborns, 1044 (11.8%) lacked breastfeeding information in their EMR. Rates of exclusive breastfeeding were 39.5% at 2 months, 32.4% at 4 months and 25.1% at 6 months. At age 6 months, exclusive breastfeeding rates were highest among mothers aged ≥40 vs. database.

  11. A Taxonomic Search Engine: Federating taxonomic databases using web services

    Directory of Open Access Journals (Sweden)

    Page Roderic DM

    2005-03-01

    Full Text Available Abstract Background The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. Results The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. Conclusion The Taxonomic Search Engine is available at http://darwin.zoology.gla.ac.uk/~rpage/portal/ and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names.

  12. Dehydration of an ethanol/water azeotrope through alginate-DNA membranes cross-linked with metal ions by pervaporation.

    Science.gov (United States)

    Uragami, Tadashi; Banno, Masashi; Miyata, Takashi

    2015-12-10

    To obtain high-performance dehydration membranes for an ethanol/water azeotrope, dried blend membranes prepared from mixtures of sodium alginate (Alg-Na) and sodium deoxyribonucleate (DNA-Na) were cross-linked by immersion in a methanol solution of CaCl2 or MgCl2. In the dehydration of an ethanol/water azeotropic mixture by pervaporation, the effects of immersion time in the methanol solution of CaCl2 or MgCl2 on the permeation rate and water/ethanol selectivity through Alg-DNA/Ca(2+) and Alg-DNA/Mg(2+) cross-linked membranes were investigated. The Alg-DNA/Mg(2+) cross-linked membrane immersed for 12 h in a methanol solution of MgCl2 exhibited the highest water/ethanol selectivity. This results from depressed swelling of the membranes owing to the formation of a cross-linked structure. However, excess immersion in the solution containing the cross-linker led to an increase in the hydrophobicity of the cross-linked membrane. Therefore, the water/ethanol selectivity of Alg-DNA/Mg(2+) cross-linked membranes subjected to excess immersion in the cross-linking solution was lowered. The relationship between the structure of the Alg-DNA/Ca(2+) and Alg-DNA/Mg(2+) cross-linked membranes and their permeation and separation characteristics during pervaporation of an ethanol/water azeotropic mixture is discussed in detail. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Performance emulation and parameter estimation for nonlinear fibre-optic links

    DEFF Research Database (Denmark)

    Piels, Molly; Porto da Silva, Edson; Zibar, Darko

    2016-01-01

    Fibre-optic communication systems, especially when operating in the nonlinear regime, generally do not perform exactly as theory would predict. A number of methods for data-based evaluation of nonlinear fibre-optic link parameters, both for accurate performance emulation and optimization...

  14. Solutions in radiology services management: a literature review

    Directory of Open Access Journals (Sweden)

    Aline Garcia Pereira

    2015-10-01

    Full Text Available Abstract Objective: The present study was aimed at reviewing the literature to identify solutions for problems observed in radiology services. Materials and Methods: Basic, qualitative, exploratory literature review of the Scopus and SciELO databases, using the Mendeley and Adobe Illustrator CC software. Results: In the databases, 565 papers – 120 of them freely available as PDFs – were identified. Problems observed in the radiology sector are related to procedure scheduling, humanization, lack of training, poor knowledge and use of management techniques, and interaction with users. Design management provides services with interesting solutions such as benchmarking, CRM, the Lean approach, service blueprinting, and continuing education, among others. Conclusion: Literature review is an important tool to identify problems and their respective solutions. However, considering the small number of studies approaching the management of radiology services, this remains a promising field for deeper research.

  15. Solutions in radiology services management: a literature review*

    Science.gov (United States)

    Pereira, Aline Garcia; Vergara, Lizandra Garcia Lupi; Merino, Eugenio Andrés Díaz; Wagner, Adriano

    2015-01-01

    Objective The present study was aimed at reviewing the literature to identify solutions for problems observed in radiology services. Materials and Methods Basic, qualitative, exploratory literature review of the Scopus and SciELO databases, using the Mendeley and Adobe Illustrator CC software. Results In the databases, 565 papers – 120 of them freely available as PDFs – were identified. Problems observed in the radiology sector are related to procedure scheduling, humanization, lack of training, poor knowledge and use of management techniques, and interaction with users. Design management provides services with interesting solutions such as benchmarking, CRM, the Lean approach, service blueprinting, and continuing education, among others. Conclusion Literature review is an important tool to identify problems and their respective solutions. However, considering the small number of studies approaching the management of radiology services, this remains a promising field for deeper research. PMID:26543281

  16. libChEBI: an API for accessing the ChEBI database.

    Science.gov (United States)

    Swainston, Neil; Hastings, Janna; Dekker, Adriano; Muthukrishnan, Venkatesh; May, John; Steinbeck, Christoph; Mendes, Pedro

    2016-01-01

    ChEBI is a database and ontology of chemical entities of biological interest. It is widely used as a source of identifiers to facilitate unambiguous reference to chemical entities within biological models, databases, ontologies and literature. ChEBI contains a wealth of chemical data, covering over 46,500 distinct chemical entities, and related data such as chemical formula, charge, molecular mass, structure, synonyms and links to external databases. Furthermore, ChEBI is an ontology, and thus provides meaningful links between chemical entities. Unlike many other resources, ChEBI is fully human-curated, providing a reliable, non-redundant collection of chemical entities and related data. While ChEBI is supported by a web service for programmatic access and a number of download files, it does not have an API library to facilitate the use of ChEBI and its data in cheminformatics software. To provide this missing functionality, libChEBI, a comprehensive API library for accessing ChEBI data, is introduced. libChEBI is available in Java, Python and MATLAB versions from http://github.com/libChEBI, and provides full programmatic access to all data held within the ChEBI database through a simple and documented API. libChEBI is reliant upon the (automated) download and regular update of flat files that are held locally. As such, libChEBI can be embedded in both on- and off-line software applications. libChEBI allows better support of ChEBI and its data in the development of new cheminformatics software. Covering three key programming languages, it allows for the entirety of the ChEBI database to be accessed easily and quickly through a simple API. All code is open access and freely available.
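The flat-file-backed design described for libChEBI can be illustrated with a small sketch: parse a locally cached dump once, then serve entity lookups from memory, so the same code works on- and off-line. The file layout, field order and function names below are invented for illustration; this is not the actual libChEBI API or ChEBI flat-file format.

```python
# Hypothetical sketch of a flat-file-backed entity lookup, in the spirit
# of libChEBI. The tab-separated layout and all names are invented.
import csv
import io

SAMPLE_DUMP = """\
CHEBI:15377\twater\tH2O\t18.010
CHEBI:17234\tglucose\tC6H12O6\t180.156
"""

def load_entities(handle):
    """Build an in-memory index keyed by ChEBI identifier."""
    index = {}
    for chebi_id, name, formula, mass in csv.reader(handle, delimiter="\t"):
        index[chebi_id] = {"name": name, "formula": formula, "mass": float(mass)}
    return index

# In a real setting the handle would be an (automatically updated) local
# file; here a string stands in for it.
entities = load_entities(io.StringIO(SAMPLE_DUMP))
print(entities["CHEBI:15377"]["formula"])  # H2O
```

Once the index is built, every lookup is a dictionary access, which is what lets such a library be embedded in both on- and off-line applications.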

  17. Karst database development in Minnesota: Design and data assembly

    Science.gov (United States)

    Gao, Y.; Alexander, E.C.; Tipping, R.G.

    2005-01-01

    The Karst Feature Database (KFD) of Minnesota is a relational GIS-based Database Management System (DBMS). Previous karst feature datasets used inconsistent attributes to describe karst features in different areas of Minnesota. Existing metadata were modified and standardized to form comprehensive metadata for all the karst features in Minnesota. Microsoft Access 2000 and ArcView 3.2 were used to develop this working database. Existing county and sub-county karst feature datasets have been assembled into the KFD, which is capable of visualizing and analyzing the entire data set. By November 17, 2002, 11,682 karst features were stored in the KFD of Minnesota. Data tables are stored in a Microsoft Access 2000 DBMS and linked to corresponding ArcView applications. The current KFD of Minnesota has been moved from a Windows NT server to a Windows 2000 Citrix server accessible to researchers and planners through networked interfaces. ?? Springer-Verlag 2005.

  18. DMPD: Nuclear receptors in macrophages: a link between metabolism and inflammation. [Dynamic Macrophage Pathway CSML Database

    Lifescience Database Archive (English)

    Full Text Available 18022390 Nuclear receptors in macrophages: a link between metabolism and inflammation. Szanto A, Roszer T. FEBS Lett. 2008 Jan 9;582(1):106-16. Epub 2007 Nov 20. PubmedID 18022390 Title Nuclear receptors in macrophages: a link between metabolism and inflammation.

  19. STITCH 2: an interaction network database for small molecules and proteins

    DEFF Research Database (Denmark)

    Kuhn, Michael; Szklarczyk, Damian; Franceschini, Andrea

    2010-01-01

    Over the last years, the publicly available knowledge on interactions between small molecules and proteins has been steadily increasing. To create a network of interactions, STITCH aims to integrate the data dispersed over the literature and various databases of biological pathways, drug-target relationships and binding affinities. In STITCH 2, the number of relevant interactions is increased by incorporation of BindingDB, PharmGKB and the Comparative Toxicogenomics Database. The resulting network can be explored interactively or used as the basis for large-scale analyses. To facilitate links to other chemical databases, we adopt InChIKeys that allow identification of chemicals with a short, checksum-like string. STITCH 2.0 connects proteins from 630 organisms to over 74,000 different chemicals, including 2200 drugs. STITCH can be accessed at http://stitch.embl.de/.
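The InChIKeys that STITCH adopts have a fixed shape: 14 hash letters for the molecular skeleton, a hyphen, 10 letters covering the remaining structural layers plus version, a hyphen, and a single protonation-state letter. A minimal syntactic check is easy to sketch; note that such a check can only validate the format, not verify that the hash actually corresponds to a structure.

```python
# Syntactic validator for the standard 27-character InChIKey layout
# (14 letters, hyphen, 10 letters, hyphen, 1 letter).
import re

INCHIKEY_RE = re.compile(r"^[A-Z]{14}-[A-Z]{10}-[A-Z]$")

def looks_like_inchikey(s: str) -> bool:
    """Format check only; cannot confirm the key matches a real structure."""
    return bool(INCHIKEY_RE.fullmatch(s))

print(looks_like_inchikey("XLYOFNOQVPJJNP-UHFFFAOYSA-N"))  # water's key -> True
print(looks_like_inchikey("not-an-inchikey"))              # False
```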

  20. ECOS E-MATRIX Methane and Volatile Organic Carbon (VOC) Emissions Best Practices Database

    Energy Technology Data Exchange (ETDEWEB)

    Parisien, Lia [The Environmental Council Of The States, Washington, DC (United States)

    2016-01-31

    This final scientific/technical report on the ECOS e-MATRIX Methane and Volatile Organic Carbon (VOC) Emissions Best Practices Database provides a disclaimer and acknowledgement, table of contents, executive summary, description of project activities, and briefing/technical presentation link.

  1. Atlantic Canada's energy research and development website and database

    International Nuclear Information System (INIS)

    2005-01-01

    Petroleum Research Atlantic Canada maintains a website devoted to energy research and development in Atlantic Canada. The site can be viewed on the world wide web at www.energyresearch.ca. It includes a searchable database with information about researchers in Nova Scotia, their projects, and published materials on issues related to hydrocarbons, alternative energy technologies, energy efficiency, climate change, environmental impacts and policy. The website also includes links to research funding agencies, external related databases and related energy organizations around the world. Nova Scotia-based users are invited to submit their academic, private or public research to the site. Before being uploaded into the database, all new information is reviewed and processed by a site administrator. Users are asked to identify their areas of interest according to the following research categories: alternative or renewable energy technologies; climate change; coal; computer applications; economics; energy efficiency; environmental impacts; geology; geomatics; geophysics; health and safety; human factors; hydrocarbons; meteorology and oceanology (metocean) activities; petroleum operations in deep and shallow waters; policy; and power generation and supply. The database can be searched in five ways: by topic, researcher, publication, project or funding agency. refs., tabs., figs

  2. Linking Hospital and Tax data to support research on the economic impacts of hospitalization

    Directory of Open Access Journals (Sweden)

    Claudia Sanmartin

    2017-04-01

    This project has created a unique linked database that will support research on the economic consequences of ‘health shocks’ for individuals and their families, and the implications for income, labour and health policies. This database represents a new and unique resource that will fill an important national data gap, and enable a wide range of relevant research.

  3. Linkage between the Danish National Health Service Prescription Database, the Danish Fetal Medicine Database, and other Danish registries as a tool for the study of drug safety in pregnancy

    Directory of Open Access Journals (Sweden)

    Pedersen LH

    2016-05-01

    Full Text Available Lars H Pedersen,1,2 Olav B Petersen,1,2 Mette Nørgaard,3 Charlotte Ekelund,4 Lars Pedersen,3 Ann Tabor,4 Henrik T Sørensen3 1Department of Clinical Medicine, Aarhus University, 2Department of Obstetrics and Gynecology, 3Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, 4Department of Fetal Medicine, Rigshospitalet, Copenhagen, Denmark Abstract: A linked population-based database is being created in Denmark for research on drug safety during pregnancy. It combines information from the Danish National Health Service Prescription Database (with information on all prescriptions reimbursed in Denmark since 2004), the Danish Fetal Medicine Database, the Danish National Registry of Patients, and the Medical Birth Registry. The new linked database will provide validated information on malformations diagnosed both prenatally and postnatally. The cohort from 2008 to 2014 will comprise 589,000 pregnancies with information on 424,000 pregnancies resulting in live-born children, ~420,000 pregnancies undergoing prenatal ultrasound scans, 65,000 miscarriages, and 92,000 terminations. It will be updated yearly with information on ~80,000 pregnancies. The cohort will enable identification of drug exposures associated with severe malformations, not only based on malformations diagnosed after birth but also including those having led to termination of pregnancy or miscarriage. Such combined data will provide a unique source of information for research on the safety of medications used during pregnancy. Keywords: malformations, teratology, therapeutic drug monitoring, epidemiological methods, registries
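At its core, the registry linkage described above is a join of extracts from different sources on a shared person-level key. A toy sketch of that idea, with invented field names and identifiers rather than the real Danish registry formats:

```python
# Hedged sketch of record linkage across registries: rows are joined on a
# shared (pseudonymized) personal identifier. All data here is invented.
prescriptions = [
    {"pid": "p1", "drug": "drugA", "dispensed": "2010-03-02"},
    {"pid": "p2", "drug": "drugB", "dispensed": "2011-07-15"},
]
births = [
    {"pid": "p1", "delivery": "2010-09-20", "outcome": "live birth"},
]

def link_on_pid(left, right):
    """Inner join of two registry extracts on the shared identifier."""
    by_pid = {}
    for row in right:
        by_pid.setdefault(row["pid"], []).append(row)
    return [
        {**l, **r}
        for l in left
        for r in by_pid.get(l["pid"], [])
    ]

linked = link_on_pid(prescriptions, births)
print(len(linked))  # 1: only p1 appears in both registries
```

Real linkages add validation steps and handle one-to-many matches (e.g. several prescriptions per pregnancy), but the join-on-key structure is the same.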

  4. Database Security for an Integrated Solution to Automate Sales Processes in Banking

    Directory of Open Access Journals (Sweden)

    Alexandra Maria Ioana FLOREA

    2013-05-01

    Full Text Available In order to maintain a competitive edge in a very active banking market, companies request the implementation of a web-based solution to standardize, optimize and manage the flow of sales/pre-sales and to generate new leads. This article presents the realization of a development framework for software interoperability in banking financial institutions and an integrated solution for achieving sales process automation in banking. The paper focuses on the requirements for security and confidentiality of stored data, and on the techniques and procedures identified to implement those requirements.

  5. PMF5.0 vs. CMB8.2: An inter-comparison study based on the new European SPECIEUROPE database

    Science.gov (United States)

    Bove, Maria Chiara; Massabò, Dario; Prati, Paolo

    2018-03-01

    Receptor models are tools widely adopted in source apportionment studies. We describe here an experiment in which we integrated two different approaches, i.e. Positive Matrix Factorization (PMF) and Chemical Mass Balance (CMB), to apportion a set of PM10 (i.e. particulate matter with aerodynamic diameter lower than 10 μm) concentration values. The study was performed in the city of Genoa (Italy): a sampling campaign was carried out collecting daily PM10 samples for about two months at an urban background site. PM10 was collected on quartz fiber filters by a low-volume sampler. A quite complete speciation of the PM samples was obtained via Energy Dispersive X-Ray Fluorescence (ED-XRF, for elements), ion chromatography (IC, for major ions and levoglucosan), and thermo-optical analysis (TOT, for organic and elemental carbon). The chemical analyses provided the input database for source apportionment by both PMF and CMB. Source profiles were calculated directly from the input data by PMF, while in the CMB runs they were first calculated by averaging the profiles of similar sources collected in the European database SPECIEUROPE. Differences between the two receptor models emerged in particular for PM10 sources linked to very local processes. For this reason, PMF source profiles were adopted in refined CMB runs, thus testing a new hybrid approach. Finally, PMF and the "tuned" CMB showed better agreement, even if some discrepancies could not be completely resolved. In this work, we compared the results coming from the latest available PMF and CMB versions applied to a set of PM10 samples. Input profiles used in the CMB analysis were obtained by averaging the profiles of the new European SPECIEUROPE database. The main differences between PMF and CMB results were linked to very local processes: we obtained the best solution by integrating the two approaches, feeding some output PMF profiles into the CMB runs.
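The CMB side of such a study is, at its core, a least-squares fit: measured species concentrations are modelled as a linear combination of fixed source profiles, and the source contributions are the unknowns. A toy two-source example, with invented numbers rather than data from the study, solving the normal equations by hand:

```python
# Minimal sketch of the Chemical Mass Balance idea. Profiles and measured
# concentrations are invented toy values constructed so that the exact
# contributions are (2.0, 1.0); real CMB codes also weight by measurement
# uncertainties and report fit diagnostics.

profiles = [  # rows: species, columns: sources (fraction of source mass)
    [0.30, 0.05],
    [0.10, 0.40],
    [0.02, 0.20],
]
measured = [0.65, 0.60, 0.24]  # ambient concentration of each species

def cmb_two_sources(F, c):
    """Solve the 2-source normal equations (F^T F) s = F^T c directly."""
    a11 = sum(f[0] * f[0] for f in F)
    a12 = sum(f[0] * f[1] for f in F)
    a22 = sum(f[1] * f[1] for f in F)
    b1 = sum(f[0] * ci for f, ci in zip(F, c))
    b2 = sum(f[1] * ci for f, ci in zip(F, c))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

s1, s2 = cmb_two_sources(profiles, measured)
print(round(s1, 2), round(s2, 2))  # 2.0 1.0
```

The hybrid approach in the abstract amounts to replacing the averaged SPECIEUROPE columns of `F` with profiles estimated by PMF and re-running this fit.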

  6. Database Description - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description General information of database Database name: RPD Alternative name: Rice Proteome Database … Institute of Crop Science, National Agriculture and Food Research Organization, Setsuko Komatsu E-mail: Database classification: Proteomics Resources; Plant databases - Rice Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description: The Rice Proteome Database contains information on proteins … and entered in the Rice Proteome Database. The database is searchable by keyword …

  7. DisFace: A Database of Human Facial Disorders

    Directory of Open Access Journals (Sweden)

    Paramjit Kaur

    2017-10-01

    Full Text Available The face is an integral part of the human body by which an individual communicates in society. Its importance is highlighted by the fact that a person deprived of a face cannot sustain themselves in the living world. In the past few decades, the human face has gained the attention of several researchers, whether in relation to facial anthropometry, facial disorders, face transplantation or face reconstruction. Several studies have also shown the correlation between neuropsychiatric disorders and the human face, and how face recognition abilities are correlated with these disorders. Currently, several databases exist which contain facial images of individuals captured from different sources. The advantage of these databases is that their images can be used for testing and training purposes. However, to date no database exists which provides not only facial images of individuals but also the literature concerning the human face, a list of genes controlling the human face, a list of facial disorders, and the various tools which work on facial images. Thus, the current research aims at developing a database of human facial disorders using a bioinformatics approach. The database will contain information about facial diseases, medications, symptoms, findings, etc. The information will be extracted from several other databases like OMIM, PubChem, Radiopaedia, MedlinePlus, FDA, etc., and links to them will also be provided. Initially, diseases specific to the human face have been obtained from an already created, published corpus of literature using a text-mining approach; the becas tool was used for this task. A dataset will be created and stored in the form of a database containing a cross-referenced index of human facial diseases, medications, symptoms, signs, etc. Thus, a database on the human face with complete existing information about human facial disorders will be developed. The novelty of the …

  8. Implementing SaaS Solution for CRM

    OpenAIRE

    Adriana LIMBĂŞAN; Lucia RUSU

    2011-01-01

    Great innovations in virtualization and distributed computing have accelerated interest in cloud computing (IaaS, PaaS, SaaS, and so on). This paper presents a SaaS prototype for Customer Relationship Management of a real estate company. Starting from several approaches to e-marketing and from SaaS features and architectures, we adopted a model for a CRM solution using a SaaS Level 2 architecture and a distributed database. Based on the system objective and functionality, we developed a modular solution f...

  9. A spatial database for landslides in northern Bavaria: A methodological approach

    Science.gov (United States)

    Jäger, Daniel; Kreuzer, Thomas; Wilde, Martina; Bemm, Stefan; Terhorst, Birgit

    2018-04-01

    Landslide databases provide essential information for hazard modeling, damage to buildings and infrastructure, mitigation, and research needs. This study presents the development of a landslide database system named WISL (Würzburg Information System on Landslides), currently storing detailed landslide data for northern Bavaria, Germany, in order to enable scientific queries as well as comparisons with other regional landslide inventories. WISL is based on free open-source software solutions (PostgreSQL, PostGIS), ensuring good interoperability of the various software components and enabling further extensions with specific adaptations of self-developed software. Apart from that, WISL was designed to be particularly compatible for easy communication with other databases. As a central prerequisite for standardized, homogeneous data acquisition in the field, a customized data sheet for landslide description was compiled. This sheet also serves as an input mask for all data registration procedures in WISL. A variety of "in-database" solutions for landslide analysis provides the necessary scalability for the database, enabling operations at the local server. In its current state, WISL already enables extensive analyses and queries. This paper presents an example analysis of landslides in Oxfordian limestones in the northeastern Franconian Alb, northern Bavaria. The results reveal widely differing landslides in terms of geometry and size. Further queries related to landslide activity classify the majority of the landslides as currently inactive; however, they clearly possess a certain potential for remobilization. Along with some active mass movements, a significant percentage of landslides potentially endangers residential areas or infrastructure. The main aspect of future enhancements of the WISL database relates to data extensions in order to increase research possibilities, as well as to transferring the system to other regions and countries.
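The "in-database" analyses described for WISL can be imitated with SQLite for illustration; the table and column names below are invented, not the actual WISL schema, and SQLite stands in for PostgreSQL/PostGIS. The point is that aggregation (counts and areas per activity class) happens inside the database rather than in client code:

```python
# Sketch of server-side aggregation over a landslide inventory table.
# Schema and values are invented for illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE landslide (
        id INTEGER PRIMARY KEY,
        lithology TEXT,
        activity TEXT,      -- e.g. 'active', 'inactive'
        area_m2 REAL
    )
""")
conn.executemany(
    "INSERT INTO landslide (lithology, activity, area_m2) VALUES (?, ?, ?)",
    [
        ("Oxfordian limestone", "inactive", 12000.0),
        ("Oxfordian limestone", "active", 3500.0),
        ("claystone", "inactive", 800.0),
    ],
)

# Let the database, not the client, do the aggregation, as a scalable
# server-side query would.
rows = conn.execute(
    "SELECT activity, COUNT(*), SUM(area_m2) "
    "FROM landslide GROUP BY activity ORDER BY activity"
).fetchall()
for activity, n, total in rows:
    print(activity, n, total)
```

In the real system, PostGIS additionally allows spatial predicates (e.g. landslides intersecting a settlement polygon) to run in the same server-side fashion.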

  10. Hmrbase: a database of hormones and their receptors

    Science.gov (United States)

    Rashid, Mamoon; Singla, Deepak; Sharma, Arun; Kumar, Manish; Raghava, Gajendra PS

    2009-01-01

    Background Hormones are signaling molecules that play vital roles in various life processes, like growth and differentiation, physiology, and reproduction. These molecules are mostly secreted by endocrine glands, and transported to target organs through the bloodstream. Deficient, or excessive, levels of hormones are associated with several diseases such as cancer, osteoporosis, diabetes etc. Thus, it is important to collect and compile information about hormones and their receptors. Description This manuscript describes a database called Hmrbase which has been developed for managing information about hormones and their receptors. It is a highly curated database for which information has been collected from the literature and the public databases. The current version of Hmrbase contains comprehensive information about ~2000 hormones, e.g., about their function, source organism, receptors, mature sequences, structures etc. Hmrbase also contains information about ~3000 hormone receptors, in terms of amino acid sequences, subcellular localizations, ligands, and post-translational modifications etc. One of the major features of this database is that it provides data about ~4100 hormone-receptor pairs. A number of online tools have been integrated into the database, to provide the facilities like keyword search, structure-based search, mapping of a given peptide(s) on the hormone/receptor sequence, sequence similarity search. This database also provides a number of external links to other resources/databases in order to help in the retrieving of further related information. Conclusion Owing to the high impact of endocrine research in the biomedical sciences, the Hmrbase could become a leading data portal for researchers. The salient features of Hmrbase are hormone-receptor pair-related information, mapping of peptide stretches on the protein sequences of hormones and receptors, Pfam domain annotations, categorical browsing options, online data submission, Drug

  11. Database Description - ConfC | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description General information of database Database name: ConfC Alternative name: … Tamotsu Noguchi Tel: 042-495-8736 E-mail: Database classification: Structure Databases - Protein structure; Structure Databases - Small molecules; Structure Databases - Nucleic acid structure Database services - Need for user registration - About This Database Database Description Download License Update History of This Database Site Policy | Contact Us

  12. Designing the database for a reliability aware Model-Based System Engineering process

    International Nuclear Information System (INIS)

    Cressent, Robin; David, Pierre; Idasiak, Vincent; Kratz, Frederic

    2013-01-01

    This article outlines the need for a reliability database to implement model-based descriptions of component failure modes and dysfunctional behaviors. We detail the requirements such a database should honor and describe our own solution: the Dysfunctional Behavior Database (DBD). Through the description of its meta-model, the benefits of integrating the DBD into the system design process are highlighted. The main advantages depicted are the possibility to manage feedback knowledge at various granularity and semantic levels and to drastically ease the interactions between system engineering activities and reliability studies. The compliance of the DBD with other reliability databases such as FIDES is presented and illustrated. - Highlights: ► Model-Based System Engineering is more and more used in industry. ► This results in a need for a reliability database able to deal with model-based descriptions of dysfunctional behavior. ► The Dysfunctional Behavior Database aims to fulfill that need. ► It helps with feedback management thanks to its structured meta-model. ► The DBD can profit from other reliability databases such as FIDES.

  13. Soliton solutions of the (2 + 1)-dimensional Harry Dym equation via Darboux transformation

    International Nuclear Information System (INIS)

    Halim, A.A.

    2008-01-01

    This work introduces soliton solutions for the (2 + 1)-dimensional Harry Dym equation using the Darboux transformation. The link between the (2 + 1)-dimensional Harry Dym equation and the linear system associated with the modified Kadomtsev-Petviashvili equation is used. Namely, soliton solutions for the linear system associated with the latter equation are produced using the Darboux transformation. These solutions are then inserted into the mentioned link to produce soliton solutions for the (2 + 1)-dimensional Harry Dym equation.

  14. MOSAIC: An organic geochemical and sedimentological database for marine surface sediments

    Science.gov (United States)

    Tavagna, Maria Luisa; Usman, Muhammed; De Avelar, Silvania; Eglinton, Timothy

    2015-04-01

    MOSAIC is programmed with PostgreSQL, an open-source database management system. In order to locate the data geographically, each datum is associated with a latitude, longitude and depth, facilitating the creation of a geospatial database which can easily be interfaced to a Geographic Information System (GIS). In order to make the database broadly accessible, an HTML/PHP-based website will ultimately be created and linked to the database. Consulting the website will allow both data visualization and export of data in txt format for use with common software (e.g. ODV, Excel, Matlab, Python, Word, PPT, Illustrator…). At this early stage, MOSAIC contains approximately 10,000 analyses conducted on more than 1800 samples collected from over 1600 different geographical locations around the world. Through participation of the international research community, MOSAIC will rapidly develop into a rich archive and versatile tool for investigating the distribution and composition of organic matter accumulating in seafloor sediments. The present contribution will outline the structure of MOSAIC, provide examples of data output, and solicit feedback on desirable features to be included in the database and associated software tools.

  15. Connecting geoscience systems and data using Linked Open Data in the Web of Data

    Science.gov (United States)

    Ritschel, Bernd; Neher, Günther; Iyemori, Toshihiko; Koyama, Yukinobu; Yatagai, Akiyo; Murayama, Yasuhiro; Galkin, Ivan; King, Todd; Fung, Shing F.; Hughes, Steve; Habermann, Ted; Hapgood, Mike; Belehaki, Anna

    2014-05-01

    Linked Data or Linked Open Data (LOD), in the realm of free and publicly accessible data, is one of the most promising and most used semantic Web frameworks connecting various types of data and vocabularies, including geoscience and related domains. The semantic Web extension to the commonly used World Wide Web is based on the meaning of entities and relationships, or in other words, the classes and properties used for data in a global data and information space, the Web of Data. LOD data is referenced and mashed up via URIs and is retrievable using simple parameter-controlled HTTP requests leading to a result which is human-understandable or machine-readable. Furthermore, the publishing and mash-up of data in the semantic Web realm is realized by specific Web standards defined for the Web of Data, such as RDF, RDFS, OWL and SPARQL. Semantic Web based mash-up is the Web method to aggregate and reuse various contents from different sources, such as using FOAF as a model and vocabulary for the description of persons and organizations - in our case - related to geoscience projects, instruments, observations, data and so on. Using the example of three different geoscience data and information management systems, namely ESPAS, IUGONET and GFZ ISDC, and the associated science data and related metadata (better called context data), the concept of the mash-up of systems and data using the semantic Web approach and the Linked Open Data framework is described in this publication. Because the three systems are based on different data models, data storage structures and technical implementations, an extra semantic Web layer on top of the existing interfaces is used for mash-up solutions. In order to satisfy the semantic Web standards, data transition processes are necessary, such as the transfer of content stored in relational databases or mapped in XML documents into SPARQL-capable databases or endpoints using D2R or XSLT.
In addition, the use of mapped and/or merged domain
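
The parameter-controlled HTTP retrieval described above can be sketched as follows; the endpoint URL is purely illustrative (not one of the ESPAS/IUGONET/GFZ ISDC endpoints), and the query uses FOAF as the abstract does:

```python
from urllib.parse import urlencode

# A SPARQL query over the FOAF vocabulary to list persons and their names.
query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?person ?name WHERE {
    ?person a foaf:Person ;
            foaf:name ?name .
} LIMIT 10
"""

def sparql_request_url(endpoint, query, fmt="application/sparql-results+json"):
    """Build the parameter-controlled HTTP GET request that retrieves
    machine-readable results from a SPARQL endpoint."""
    return endpoint + "?" + urlencode({"query": query, "format": fmt})

# Hypothetical endpoint; a real LOD service would answer this request
# with JSON-formatted result bindings.
url = sparql_request_url("http://example.org/sparql", query)
print(url)
```

The same URL-plus-parameters pattern is what makes LOD resources addressable and mashable across systems.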

  16. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

    Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs).The book first takes a look at ANSI database standards and DBMS applications and components. Discussion focus on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders on logical database, interrogation, and phy

  17. Database architecture evolution: Mammals flourished long before dinosaurs became extinct

    NARCIS (Netherlands)

    S. Manegold (Stefan); M.L. Kersten (Martin); P.A. Boncz (Peter)

    2009-01-01

    The holy grail for database architecture research is to find a solution that is Scalable & Speedy, to run on anything from small ARM processors up to globally distributed compute clusters, Stable & Secure, to service a broad user community, Small & Simple, to be comprehensible to a small

  18. Development of thermodynamic databases for geochemical calculations

    Energy Technology Data Exchange (ETDEWEB)

    Arthur, R.C. [Monitor Scientific, L.L.C., Denver, Colorado (United States); Sasamoto, Hiroshi; Shibata, Masahiro; Yui, Mikazu [Japan Nuclear Cycle Development Inst., Tokai, Ibaraki (Japan); Neyama, Atsushi [Computer Software Development Corp., Tokyo (Japan)

    1999-09-01

    experimental and field observations that constrain these data are consistently evaluated within this modeling framework. The accuracy of the data in SPRONS.JNC is evaluated in the present study and elsewhere by comparison of calculated equilibrium constants with their experimental counterparts at pressures and temperatures that span much of the subcritical and supercritical regions of H₂O stability. Additional experimental investigations of mineral solubilities and aqueous reactions, particularly between 0 and 100°C, are needed to further assess, and refine if necessary, the reliability of these databases. Field studies on phase equilibria in near-surface geological environments may be useful for this purpose because associated reaction times are greater than can be accommodated experimentally. The effects on mineral-solution equilibria of metastability and solid solution, and differences in the crystallinity and state of order/disorder in minerals, must be determined, however, before reliable thermodynamic properties can be retrieved from field investigations. (author)

  19. Development of thermodynamic databases for geochemical calculations

    International Nuclear Information System (INIS)

    Arthur, R.C.; Sasamoto, Hiroshi; Shibata, Masahiro; Yui, Mikazu; Neyama, Atsushi

    1999-09-01

    experimental and field observations that constrain these data are consistently evaluated within this modeling framework. The accuracy of the data in SPRONS.JNC is evaluated in the present study and elsewhere by comparison of calculated equilibrium constants with their experimental counterparts at pressures and temperatures that span much of the subcritical and supercritical regions of H₂O stability. Additional experimental investigations of mineral solubilities and aqueous reactions, particularly between 0 and 100°C, are needed to further assess, and refine if necessary, the reliability of these databases. Field studies on phase equilibria in near-surface geological environments may be useful for this purpose because associated reaction times are greater than can be accommodated experimentally. The effects on mineral-solution equilibria of metastability and solid solution, and differences in the crystallinity and state of order/disorder in minerals, must be determined, however, before reliable thermodynamic properties can be retrieved from field investigations. (author)

  20. The database for accelerator control in the CERN PS Complex

    International Nuclear Information System (INIS)

    Cuperus, J.H.

    1987-01-01

    The use of a database started 7 years ago as an effort to separate logic from data, so that programs and routines can perform a larger number of operations on data structures without knowing a priori the contents of those structures. It is of great help in coping with the complexities of a system controlling many linked accelerators and storage rings.

  1. Implementing a modular framework in a conditions database explorer for ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Simoes, J; Amorim, A; Batista, J; Lopes, L; Neves, R; Pereira, P [SIM and FCUL, University of Lisbon, Campo Grande, P-1749-016 Lisbon (Portugal); Kolos, S [University of California, Irvine, California 92697-4575 (United States); Soloviev, I [Petersburg Nuclear Physics Institute, Gatchina, St-Petersburg RU-188350 (Russian Federation)], E-mail: jalmeida@mail.cern.ch, E-mail: Antonio.Amorim@sim.fc.ul.pt

    2008-07-15

    The ATLAS conditions databases will be used to manage information of quite diverse nature and level of complexity. The usage of a relational database manager like Oracle, together with the object managers POOL and OKS developed in-house, poses special difficulties in browsing the available data while understanding its structure in a general way. This is particularly relevant for the database browser projects where it is difficult to link with the class defining libraries generated by general frameworks such as Athena. A modular approach to tackle these problems is presented here. The database infrastructure is under development using the LCG COOL infrastructure, and provides a powerful information sharing gateway upon many different systems. The nature of the stored information ranges from temporal series of simple values up to very complex objects describing the configuration of systems like ATLAS' TDAQ infrastructure, including also associations to large objects managed outside of the database infrastructure. An important example of this architecture is the Online Objects Extended Database BrowsEr (NODE), which is designed to access and display all data, available in the ATLAS Monitoring Data Archive (MDA), including histograms and data tables. To deal with the special nature of the monitoring objects, a plugin from the MDA framework to the Time managed science Instrument Databases (TIDB2) is used. The database browser is extended, in particular to include operations on histograms such as display, overlap, comparisons as well as commenting and local storage.

  2. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  3. Experimental Database with Baseline CFD Solutions: 2-D and Axisymmetric Hypersonic Shock-Wave/Turbulent-Boundary-Layer Interactions

    Science.gov (United States)

    Marvin, Joseph G.; Brown, James L.; Gnoffo, Peter A.

    2013-01-01

    A database compilation of hypersonic shock-wave/turbulent boundary layer experiments is provided. The experiments selected for the database are either 2D or axisymmetric, and include both compression corner and impinging type SWTBL interactions. The strength of the interactions range from attached to incipient separation to fully separated flows. The experiments were chosen based on criterion to ensure quality of the datasets, to be relevant to NASA's missions and to be useful for validation and uncertainty assessment of CFD Navier-Stokes predictive methods, both now and in the future. An emphasis on datasets selected was on surface pressures and surface heating throughout the interaction, but include some wall shear stress distributions and flowfield profiles. Included, for selected cases, are example CFD grids and setup information, along with surface pressure and wall heating results from simulations using current NASA real-gas Navier-Stokes codes by which future CFD investigators can compare and evaluate physics modeling improvements and validation and uncertainty assessments of future CFD code developments. The experimental database is presented tabulated in the Appendices describing each experiment. The database is also provided in computer-readable ASCII files located on a companion DVD.

  4. Construction of a bibliographic information database for the nuclear engineering

    International Nuclear Information System (INIS)

    Kim, Tae Whan; Lim, Yeon Soo; Kwac, Dong Chul

    1991-12-01

    The major goal of the project is to develop a nuclear science database of materials that have been published in Korea and to establish a network system that will provide relevant information to people in the nuclear industry by linking this system with the proposed National Science Technical Information Network. This project aims to establish a database consisting of about 1,000 research reports that were prepared by KAERI from 1979 to 1990. The contents of the project are as follows: 1. Materials Selection and Collection 2. Index and Abstract Preparation 3. Data Input and Transmission. This project is intended to achieve the goal of maximum utilization of nuclear information in Korea. (Author)

  5. A geospatial database model for the management of remote sensing datasets at multiple spectral, spatial, and temporal scales

    Science.gov (United States)

    Ifimov, Gabriela; Pigeau, Grace; Arroyo-Mora, J. Pablo; Soffer, Raymond; Leblanc, George

    2017-10-01

    In this study the development and implementation of a geospatial database model for the management of multiscale datasets encompassing airborne imagery and associated metadata is presented. To develop the multi-source geospatial database we have used a Relational Database Management System (RDBMS) on a Structure Query Language (SQL) server which was then integrated into ArcGIS and implemented as a geodatabase. The acquired datasets were compiled, standardized, and integrated into the RDBMS, where logical associations between different types of information were linked (e.g. location, date, and instrument). Airborne data, at different processing levels (digital numbers through geocorrected reflectance), were implemented in the geospatial database where the datasets are linked spatially and temporally. An example dataset consisting of airborne hyperspectral imagery, collected for inter and intra-annual vegetation characterization and detection of potential hydrocarbon seepage events over pipeline areas, is presented. Our work provides a model for the management of airborne imagery, which is a challenging aspect of data management in remote sensing, especially when large volumes of data are collected.
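
A minimal sketch of the kind of logical associations the abstract describes (location, date, instrument, processing level); the table and column names, and the sensor name, are invented for illustration and are not the authors' actual geodatabase schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE acquisition (
    id INTEGER PRIMARY KEY,
    acquired_on TEXT,      -- date of the airborne campaign
    instrument  TEXT,      -- sensor used for the flight
    site        TEXT       -- surveyed location
);
CREATE TABLE product (
    id INTEGER PRIMARY KEY,
    acquisition_id INTEGER REFERENCES acquisition(id),
    level TEXT,            -- processing level: digital numbers ... reflectance
    path  TEXT             -- file location of the raster
);
""")
con.execute("INSERT INTO acquisition VALUES (1, '2016-07-12', 'hyperspectral-A', 'pipeline-area-1')")
con.execute("INSERT INTO product VALUES (1, 1, 'L1-DN', '/data/raw/a.img')")
con.execute("INSERT INTO product VALUES (2, 1, 'L2-reflectance', '/data/refl/a.img')")

# Products at different processing levels are linked spatially and
# temporally through their shared acquisition row.
rows = con.execute("""
    SELECT a.acquired_on, a.instrument, p.level
    FROM product p JOIN acquisition a ON p.acquisition_id = a.id
    ORDER BY p.id
""").fetchall()
print(rows)
```

The join is the relational expression of "logical associations between different types of information": one campaign row ties together every derived product.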

  6. A comprehensive aligned nifH gene database: a multipurpose tool for studies of nitrogen-fixing bacteria.

    Science.gov (United States)

    Gaby, John Christian; Buckley, Daniel H

    2014-01-01

    We describe a nitrogenase gene sequence database that facilitates analysis of the evolution and ecology of nitrogen-fixing organisms. The database contains 32 954 aligned nitrogenase nifH sequences linked to phylogenetic trees and associated sequence metadata. The database includes 185 linked multigene entries including full-length nifH, nifD, nifK and 16S ribosomal RNA (rRNA) gene sequences. Evolutionary analyses enabled by the multigene entries support an ancient horizontal transfer of nitrogenase genes between Archaea and Bacteria and provide evidence that nifH has a different history of horizontal gene transfer from the nifDK enzyme core. Further analyses show that lineages in nitrogenase cluster I and cluster III have different rates of substitution within nifD, suggesting that nifD is under different selection pressure in these two lineages. Finally, we find that the genetic divergence of nifH and 16S rRNA genes does not correlate well at sequence dissimilarity values used commonly to define microbial species, as strains having <3% sequence dissimilarity in their 16S rRNA genes can have up to 23% dissimilarity in nifH. The nifH database has a number of uses including phylogenetic and evolutionary analyses, the design and assessment of primers/probes and the evaluation of nitrogenase sequence diversity. Database URL: http://www.css.cornell.edu/faculty/buckley/nifh.htm.
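
The dissimilarity comparison mentioned above (e.g. under 3% in 16S rRNA versus up to 23% in nifH) reduces to a simple pairwise calculation over aligned sequences; this toy sketch uses invented fragments, not database entries:

```python
def percent_dissimilarity(a, b):
    """Pairwise percent dissimilarity of two aligned, equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    mismatches = sum(x != y for x, y in zip(a, b))
    return 100.0 * mismatches / len(a)

# Two toy aligned fragments differing at 2 of 20 positions.
s1 = "ATGGCTAAGGGTAACATCGT"
s2 = "ATGGCTAAGCGTAACATCGA"
print(percent_dissimilarity(s1, s2))
```

A real analysis would compute this over the curated alignment for each gene and compare the resulting 16S rRNA and nifH values per strain pair.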

  7. Analysis of functionality free CASE-tools databases design

    Directory of Open Access Journals (Sweden)

    A. V. Gavrilov

    2016-01-01

    Full Text Available The introduction of database-design CASE technologies into the educational process requires significant costs from the institution for the purchase of software. A possible solution could be the use of free software counterparts. At the same time, this kind of substitution should be based on a balanced comparison of the functional characteristics and operational features of these programs. The purpose of the article is a review of free and non-profit CASE tools for database design, as well as their classification based on an analysis of functionality. Materials from the official websites of the tool developers were used in writing this article. Evaluation of the functional characteristics of CASE tools for database design was made purely empirically, through direct work with the software products. The analysis of functionality allows distinguishing two categories of CASE tools for database design. The first category includes systems with a basic set of features and tools. The most important basic functions of these systems are: managing connections to database servers; visual tools to create and modify database objects (tables, views, triggers, procedures); the ability to enter and edit data in table mode; user and privilege management tools; an SQL-code editor; and means of data export/import. CASE systems of the first category can be used to design and develop simple databases and manage data, as well as for administering a database server. A distinctive feature of the second category of CASE tools for database design (full-featured systems) is the presence of a visual designer that allows building the database model and automatically creating the database on the server from this model. CASE systems of this category can be used for the design and development of databases of any structural complexity, as well as for database server administration. The article concluded that the

  8. Religion as problem, religion as solution: religious buffers of the links between religious/spiritual struggles and well-being/mental health.

    Science.gov (United States)

    Abu-Raiya, Hisham; Pargament, Kenneth I; Krause, Neal

    2016-05-01

    Previous studies have established robust links between religious/spiritual struggles (r/s struggles) and poorer well-being and psychological distress. A critical issue involves identifying the religious factors that buffer this relationship. This is the first study to empirically address this question. Specifically, it examines four religious factors (i.e., religious commitment, life sanctification, religious support, religious hope) as potential buffers of the links between r/s struggle and one indicator of subjective well-being (i.e., happiness) and one indicator of psychological distress (i.e., depressive symptoms). We utilized a cross-sectional design and a nationally representative sample of American adults (N = 2140) dealing with a wide range of major life stressors. We found that the interactions between r/s struggle and all potential moderators were significant in predicting happiness and/or depression. The linkage between r/s struggle and lower levels of happiness was moderated by higher levels of each of the four proposed religious buffers. Religious commitment and life sanctification moderated the ties between r/s struggles and depressive symptoms. The findings underscore the multifaceted character of religion: Paradoxically, religion may be a source of solutions to problems that may be an inherent part of religious life.

  9. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  10. Fast Fingerprint Database Maintenance for Indoor Positioning Based on UGV SLAM

    Directory of Open Access Journals (Sweden)

    Jian Tang

    2015-03-01

    Full Text Available Indoor positioning technology has become more and more important in the last two decades. Utilizing Received Signal Strength Indicator (RSSI) fingerprints of Signals of OPportunity (SOP) is a promising alternative navigation solution. However, as the RSSIs vary during operation due to their physical nature and are easily affected by environmental change, one challenge of the indoor fingerprinting method is maintaining the RSSI fingerprint database in a timely and effective manner. In this paper, a solution for rapidly updating the fingerprint database is presented, based on a self-developed Unmanned Ground Vehicle (UGV) platform NAVIS. Several SOP sensors were installed on NAVIS for collecting indoor fingerprint information, including a digital compass collecting magnetic field intensity, a light sensor collecting light intensity, and a smartphone which collects the access point number and RSSIs of the pre-installed WiFi network. The NAVIS platform generates a map of the indoor environment and collects the SOPs during mapping, and then the SOP fingerprint database is interpolated and updated in real time. Field tests were carried out to evaluate the effectiveness and efficiency of the proposed method. The results showed that the fingerprint databases can be quickly created and updated with a higher sampling frequency (5 Hz) and denser reference points compared with traditional methods, and the indoor map can be generated without prior information. Moreover, environmental changes could also be detected quickly for fingerprint indoor positioning.
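
A hedged sketch of the idea of a fingerprint database refreshed in place from a new survey pass; the structure, field names, and readings below are invented for illustration and are not the NAVIS implementation:

```python
# Fingerprint database keyed by reference point, storing the three SOP
# readings the abstract mentions: WiFi RSSI, magnetic field, light intensity.
fingerprint_db = {}

def update_fingerprint(point, wifi_rssi, magnetic, light):
    """Insert or overwrite the SOP fingerprint for one reference point."""
    fingerprint_db[point] = {
        "wifi_rssi": wifi_rssi,   # dBm per access point
        "magnetic": magnetic,     # magnetic field intensity
        "light": light,           # light intensity
    }

# Initial survey pass, then a later pass overwriting the same reference
# point after an environmental change shifted the RSSI values.
update_fingerprint((3.0, 5.5), {"ap1": -48, "ap2": -71}, 42.1, 310)
update_fingerprint((3.0, 5.5), {"ap1": -55, "ap2": -69}, 41.8, 305)
print(fingerprint_db[(3.0, 5.5)]["wifi_rssi"]["ap1"])
```

Because each reference point is a key, a fresh UGV pass simply overwrites stale fingerprints rather than rebuilding the database from scratch.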

  11. Building a genome database using an object-oriented approach.

    Science.gov (United States)

    Barbasiewicz, Anna; Liu, Lin; Lang, B Franz; Burger, Gertraud

    2002-01-01

    GOBASE is a relational database that integrates data associated with mitochondria and chloroplasts. The most important data in GOBASE, i.e., molecular sequences and taxonomic information, are obtained from the public sequence data repository at the National Center for Biotechnology Information (NCBI), and are validated by our experts. Maintaining a curated genomic database comes with a towering labor cost, due to the sheer volume of available genomic sequences and the plethora of annotation errors and omissions in records retrieved from public repositories. Here we describe our approach to increasing automation of the database population process, thereby reducing manual intervention. As a first step, we used the Unified Modeling Language (UML) to construct a list of potential errors. Each case was evaluated independently, an expert solution was devised, and the case was represented as a diagram. Subsequently, the UML diagrams were used as templates for writing object-oriented automation programs in the Java programming language.

  12. Ten Years Experience In Geo-Databases For Linear Facilities Risk Assessment (Lfra)

    Science.gov (United States)

    Oboni, F.

    2003-04-01

    Keywords: geo-environmental, database, ISO14000, management, decision-making, risk, pipelines, roads, railroads, loss control, SAR, hazard identification ABSTRACT: During the past decades, characterized by the development of the Risk Management (RM) culture, a variety of different RM models have been proposed by governmental agencies in various parts of the world. The most structured models appear to have originated in the field of environmental RM. These models are briefly reviewed in the first section of the paper, focusing on the difference between Hazard Management and Risk Management and on the need to use databases in order to allow retrieval of specific information and effective updating. The core of the paper reviews a number of different RM approaches, based on extensions of geo-databases, specifically developed for linear facilities (LF) in transportation corridors since the early 90s in Switzerland, Italy, Canada, the US and South America. The applications are compared in terms of methodology, capabilities and the resources necessary for their implementation. The paper then turns to the level of detail that applications and related data have to attain. Common pitfalls related to decision making based on hazards rather than on risks are discussed. The last sections describe the next generation of linear facility RA applications, including examples of results and a discussion of future methodological research. It is shown that geo-databases should be linked to loss control and accident reports in order to maximize their benefits. The links between RA and ISO 14000 (environmental management code) are explicitly considered.

  13. The CERN accelerator measurement database: on the road to federation

    International Nuclear Information System (INIS)

    Roderick, C.; Billen, R.; Gourber-Pace, M.; Hoibian, N.; Peryt, M.

    2012-01-01

    The Measurement database, acting as short-term central persistence and front-end of the CERN accelerator Logging Service, receives billions of time-series data per day for 200,000+ signals. A variety of data acquisition systems on hundreds of front-end computers publish source data that eventually end up being logged in the Measurement database. As part of a federated approach to data management, information about source devices is defined in a Configuration database, whilst the signals to be logged are defined in the Measurement database. A mapping, which is often complex and subject to change/extension, is required in order to subscribe to the source devices and write the published data to the corresponding named signals. Since 2005, this mapping was done by means of dozens of XML files, which were manually maintained by multiple persons, resulting in a configuration that was error prone. In 2010 this configuration was fully centralized in the Measurement database itself, significantly reducing the complexity and the number of actors in the process. Furthermore, logging processes immediately pick up modified configurations via JMS-based notifications sent directly from the database. This paper will describe the architecture and benefits of the current implementation, as well as the next steps on the road to a fully federated solution. (authors)
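
The centralized device-to-signal mapping can be pictured as a single lookup table consulted when writing published data to named signals; the device, field, and signal names below are invented for illustration, not actual CERN identifiers:

```python
# One central mapping table replaces the dozens of hand-edited XML files:
# each published (device, field) pair resolves to exactly one logged signal.
mapping = {
    ("BPM.21.ACQ", "position_h"): "CPS.BPM21.POS.H",
    ("BPM.21.ACQ", "position_v"): "CPS.BPM21.POS.V",
}

def signal_for(device, field):
    """Resolve a published (device, field) pair to its logged signal name."""
    try:
        return mapping[(device, field)]
    except KeyError:
        # An unmapped publication is now a single, detectable error rather
        # than a silent inconsistency spread across many XML files.
        raise LookupError(f"no signal configured for {device}/{field}")

print(signal_for("BPM.21.ACQ", "position_h"))
```

Holding this table in the database itself is what lets logging processes pick up a modified configuration as soon as a change notification arrives.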

  14. ARACHNID: A prototype object-oriented database tool for distributed systems

    Science.gov (United States)

    Younger, Herbert; Oreilly, John; Frogner, Bjorn

    1994-01-01

    This paper discusses the results of a Phase 2 SBIR project sponsored by NASA and performed by MIMD Systems, Inc. A major objective of this project was to develop specific concepts for improved performance in accessing large databases. An object-oriented and distributed approach was used for the general design, while a geographical decomposition was used as a specific solution. The resulting software framework is called ARACHNID. The Faint Source Catalog developed by NASA was the initial database testbed. This is a database of many gigabytes, where an order of magnitude improvement in query speed is being sought. This database contains faint infrared point sources obtained from telescope measurements of the sky. A geographical decomposition of this database is an attractive approach to dividing it into pieces. Each piece can then be searched on individual processors, with only a weak data linkage between the processors being required. As a further demonstration of the concepts implemented in ARACHNID, a tourist information system is discussed. This version of ARACHNID is the commercial result of the project. It is a distributed, networked database application where speed, maintenance, and reliability are important considerations. This paper focuses on the design concepts and technologies that form the basis for ARACHNID.
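
The geographical decomposition idea can be sketched as partitioning a source catalogue into right-ascension bands so that a positional query touches only one partition; this is an illustrative toy with invented data, not the ARACHNID implementation:

```python
# Split the sky into right-ascension bands; each band could live on its own
# processor and be searched independently (the "weak linkage" in the text).
N_BANDS = 8
partitions = [[] for _ in range(N_BANDS)]

def band(ra_deg):
    """Index of the right-ascension band covering this coordinate."""
    return int(ra_deg // (360 / N_BANDS)) % N_BANDS

def insert(ra_deg, dec_deg, flux):
    partitions[band(ra_deg)].append((ra_deg, dec_deg, flux))

def query(ra_deg, radius=1.0):
    # Only the partition holding this right ascension is scanned; sources
    # near a band boundary would also require the neighbouring band.
    return [s for s in partitions[band(ra_deg)]
            if abs(s[0] - ra_deg) <= radius]

insert(10.2, -5.0, 0.8)
insert(200.4, 12.0, 1.3)
print(query(10.0))
```

Because a positional query rarely spans bands, most searches run against a small fraction of the catalogue, which is where the order-of-magnitude speedup is sought.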

  15. The Linked CENTURY Study: linking three decades of clinical and public health data to examine disparities in childhood obesity.

    Science.gov (United States)

    Hawkins, Summer Sherburne; Gillman, Matthew W; Rifas-Shiman, Sheryl L; Kleinman, Ken P; Mariotti, Megan; Taveras, Elsie M

    2016-03-09

    Despite the need to identify the causes of disparities in childhood obesity, the existing epidemiologic studies of early life risk factors have several limitations. We report on the construction of the Linked CENTURY database, incorporating CENTURY (Collecting Electronic Nutrition Trajectory Data Using Records of Youth) Study data with birth certificates; and discuss the potential implications of combining clinical and public health data sources in examining the etiology of disparities in childhood obesity. We linked the existing CENTURY Study, a database of 269,959 singleton children from birth to age 18 years with measured heights and weights, with each child's Massachusetts birth certificate, which captures information on their mothers' pregnancy history and detailed socio-demographic information of both mothers and fathers. Overall, 74.2 % were matched, resulting in 200,343 children in the Linked CENTURY Study with 1,580,597 well child visits. Among this cohort, 94.0 % (188,334) of children have some father information available on the birth certificate and 60.9 % (121,917) of children have at least one other sibling in the dataset. Using maternal race/ethnicity from the birth certificate as an indicator of children's race/ethnicity, 75.7 % of children were white, 11.6 % black, 4.6 % Hispanic, and 5.7 % Asian. Based on socio-demographic information from the birth certificate, 20.0 % of mothers were non-US born, 5.9 % smoked during pregnancy, 76.3 % initiated breastfeeding, and 11.0 % of mothers had their delivery paid for by public health insurance. Using clinical data from the CENTURY Study, 22.7 % of children had a weight-for-length ≥95th percentile between 1 and 24 months and 12.0 % of children had a body mass index ≥95th percentile at ages 5 and 17 years. By linking routinely-collected data sources, it is possible to address research questions that could not be answered with either source alone. Linkage between a clinical

  16. D11.1.1: ANAC web-site linked to the technical and administrative databases

    CERN Document Server

    Szeberenyi, A

    2013-01-01

    The EuCARD project has been using various databases to store scientific and contractual information, as well as working documents. This report documents the methods used during the life of the project and the strategy chosen to archive the technical and administrative material after the project completion. Special care is given to provide easy and open access for the foreground produced, especially for the EuCARD-2 community at large, including its network partners worldwide.

  17. Evaluation of unique identifiers used for citation linking [version 1; referees: 1 approved, 2 approved with reservations

    Directory of Open Access Journals (Sweden)

    Heidi Holst Madsen

    2016-06-01

    Full Text Available Unique identifiers (UIDs) are seen as an effective tool to create links between identical publications in databases or to identify duplicates in a database. The purpose of the present study is to investigate how well UIDs work for citation linking. We have two objectives: to explore the coverage, precision, and characteristics of publications matched versus not matched with UIDs as the match key; and to illustrate how publication sets formed by using UIDs as the match key may affect the bibliometric indicators: number of publications, number of citations, and the average number of citations per publication. The objectives are addressed in a literature review and a case study. The literature review shows that only a few studies evaluate how well UIDs work as a match key. From the literature we identify four error types: duplicate digital object identifiers (DOIs), incorrect DOIs in reference lists and databases, DOIs not registered by the database where a bibliometric analysis is performed, and erroneous optical or special character recognition. The case study explores the use of UIDs in the integration between the databases Pure and SciVal. Specifically, journal publications in English are matched between the two databases. We find all error types except erroneous optical or special character recognition in our publication sets. In particular, the duplicate DOIs constitute a problem for the calculation of bibliometric indicators, as both keeping the duplicates to improve the reliability of citation counts and deleting them to improve the reliability of publication counts will distort the calculation of the average number of citations per publication. The use of UIDs as a match key in citation linking is implemented in many settings, and the availability of UIDs may become critical for the inclusion of a publication or a database in a bibliometric analysis.
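
A toy illustration of DOI-based matching between two databases and of how a duplicate DOI distorts the indicators discussed above; the records and citation counts are invented:

```python
from collections import Counter

# One database keyed by DOI, the other a list of (DOI, citations) records
# containing a duplicate DOI -- the first error type named in the abstract.
pure = {"10.1000/a": {"title": "Paper A"},
        "10.1000/b": {"title": "Paper B"},
        "10.1000/c": {"title": "Paper C"}}
scival = [("10.1000/a", 12), ("10.1000/b", 3), ("10.1000/b", 3)]

# DOI as the match key between the two sources.
matched = [(doi, cites) for doi, cites in scival if doi in pure]
dupes = [doi for doi, n in Counter(d for d, _ in scival).items() if n > 1]

# Keeping duplicates inflates citation counts; de-duplicating fixes the
# publication count -- either way the average is distorted.
citations = sum(c for _, c in matched)            # counts 10.1000/b twice
publications = len({d for d, _ in matched})       # de-duplicated count
print(dupes, citations / publications)
```

With the duplicate kept, the average is 18 citations over 2 distinct publications (9.0) instead of the true 15 over 2 (7.5), which is exactly the distortion the study describes.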

  18. AMYPdb: A database dedicated to amyloid precursor proteins

    Directory of Open Access Journals (Sweden)

    Delamarche Christian

    2008-06-01

    Full Text Available Abstract Background Misfolding and aggregation of proteins into ordered fibrillar structures is associated with a number of severe pathologies, including Alzheimer's disease, prion diseases, and type II diabetes. The rapid accumulation of knowledge about the sequences and structures of these proteins allows the use of in silico methods to investigate the molecular mechanisms of their abnormal conformational changes and assembly. However, such an approach requires the collection of accurate data, which are inconveniently dispersed among several generalist databases. Results We therefore created a free online knowledge database (AMYPdb) dedicated to amyloid precursor proteins, and we have performed large-scale sequence analysis of the included data. Currently, AMYPdb integrates data on 31 families, including 1,705 proteins from nearly 600 organisms. It displays links to more than 2,300 bibliographic references and 1,200 3D structures. A Wiki system is available to insert data into the database, providing a sharing and collaboration environment. We generated and analyzed 3,621 amino acid sequence patterns, reporting highly specific patterns for each amyloid family, along with patterns likely to be involved in protein misfolding and aggregation. Conclusion AMYPdb is a comprehensive online database aiming at the centralization of bioinformatic data regarding all amyloid proteins and their precursors. Our sequence pattern discovery and analysis approach unveiled protein regions of significant interest. AMYPdb is freely accessible.

  19. Development of SRS.php, a Simple Object Access Protocol-based library for data acquisition from integrated biological databases.

    Science.gov (United States)

    Barbosa-Silva, A; Pafilis, E; Ortega, J M; Schneider, R

    2007-12-11

    Data integration has become an important task for biological database providers. The current model for data exchange among different sources simplifies the manner in which distinct information is accessed by users. The evolution of data representation from HTML to XML enabled programs, instead of humans, to interact with biological databases. We present here SRS.php, a PHP library that can interact with the Sequence Retrieval System (SRS) data integration platform. The library has been written using SOAP definitions, and permits programmatic communication with SRS through web services. The interactions are performed by invoking the methods described in the WSDL and exchanging XML messages. The current functions available in the library have been built to access specific data stored in any of the 90 different databases (such as UNIPROT, KEGG and GO) using the same query syntax format. The inclusion of the described functions in the source of scripts written in PHP enables them to act as web-service clients to the SRS server. The functions permit one to query the whole content of any SRS database, to list specific records in these databases, to get specific fields from the records, and to link any record among any pair of linked databases. The case study presented exemplifies the library's usage to retrieve information regarding registries of a Plant Defense Mechanisms database. The Plant Defense Mechanisms database is currently being developed, and the proposed use of the SRS.php library is to enable data acquisition for the warehousing tasks related to its setup and maintenance.
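
The SOAP interaction pattern the library relies on — build an XML envelope, invoke a WSDL-described method, exchange XML messages — can be illustrated with a small envelope builder. This is a generic Python sketch rather than the PHP library itself; the method name `getEntry`, the parameter names, and the `urn:srs` namespace are hypothetical, not taken from the actual SRS WSDL.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_envelope(method, params, ns="urn:srs"):
    """Build a SOAP 1.1 request envelope for a remote method call.

    `method`, `params` and `ns` would come from the service's WSDL;
    the names used here are illustrative only.
    """
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{ns}}}{method}")
    for name, value in params.items():
        ET.SubElement(call, f"{{{ns}}}{name}").text = str(value)
    return ET.tostring(env, encoding="unicode")
```

In practice the envelope would be POSTed to the SRS endpoint with a `text/xml` content type, and the response body parsed the same way.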

  20. Database Description - GETDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name GETDB Alternative n...ame Gal4 Enhancer Trap Insertion Database DOI 10.18908/lsdba.nbdc00236-000 Creator Creator Name: Shigeo Haya... Chuo-ku, Kobe 650-0047 Tel: +81-78-306-3185 FAX: +81-78-306-3183 E-mail: Database classification Expression... Invertebrate genome database Organism Taxonomy Name: Drosophila melanogaster Taxonomy ID: 7227 Database des...riginal website information Database maintenance site Drosophila Genetic Resource

  1. JICST Factual Database: JICST Chemical Substance Safety Regulation Database

    Science.gov (United States)

    Abe, Atsushi; Sohma, Tohru

    JICST Chemical Substance Safety Regulation Database is based on the Database of Safety Laws for Chemical Compounds constructed by the Japan Chemical Industry Ecology-Toxicology & Information Center (JETOC), sponsored by the Science and Technology Agency in 1987. JICST has modified the JETOC database system, added data and started the online service through JOIS-F (JICST Online Information Service-Factual database) in January 1990. The JICST database comprises eighty-three laws and fourteen hundred compounds. The authors outline the database, data items, files and search commands. An example of an online session is presented.

  2. A white-box anomaly-based framework for database leakage detection

    NARCIS (Netherlands)

    Costante, E.; den Hartog, J.; Petkovic, M.; Etalle, S.; Pechenizkiy, M.

    2017-01-01

    Data leakage is at the heart of most privacy breaches worldwide. In this paper we present a white-box approach to detect potential data leakage by spotting anomalies in database transactions. We refer to our solution as white-box because it builds self-explanatory profiles that are easy to

  3. Recirculating cooling water solute depletion models

    International Nuclear Information System (INIS)

    Price, W.T.

    1990-01-01

    Chromates have been used for years to inhibit copper corrosion in the plant Recirculating Cooling Water (RCW) system. However, chromates have become an environmental problem in recent years both in the chromate removal plant (X-616) operation and from cooling tower drift. In response to this concern, PORTS is replacing chromates with Betz Dianodic II, a combination of phosphates, BZT, and a dispersant. This changeover started with the X-326 system in 1989. In order to control chemical concentrations in X-326 and in systems linked to it, we needed to be able to predict solute concentrations in advance of the changeover. Failure to predict and control these concentrations can result in wasted chemicals, equipment fouling, or increased corrosion. Consequently, Systems Analysis developed two solute concentration models. The first simulation represents the X-326 RCW system by itself; and models the depletion of a solute once the feed has stopped. The second simulation represents the X-326, X-330, and the X-333 systems linked together by blowdown. This second simulation represents the concentration of a solute in all three systems simultaneously. 4 figs
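
The depletion behaviour the report models can be sketched with a first-order single-tank mass balance: once chemical feed stops, a well-mixed volume V losing water at blowdown rate Q depletes exponentially. This is a plausible simplification for illustration, not the report's actual simulations, and the parameter names are assumptions.

```python
import math

def solute_concentration(c0, blowdown_rate, volume, t):
    """Solute concentration in a well-mixed recirculating system after feed stops.

    Mass balance with blowdown flow Q from volume V:
        V dC/dt = -Q * C   =>   C(t) = C0 * exp(-Q * t / V)
    """
    return c0 * math.exp(-blowdown_rate * t / volume)

def time_to_reach(c0, c_target, blowdown_rate, volume):
    """Time for the solute to deplete from c0 down to c_target."""
    return volume / blowdown_rate * math.log(c0 / c_target)
```

A multi-system version (X-326, X-330, X-333 linked by blowdown) would chain such tanks, with one system's blowdown acting as another's feed.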

  4. Measuring Journal Linking Success from a Discovery Service

    Directory of Open Access Journals (Sweden)

    Kenyon Stuart

    2015-03-01

    Full Text Available Online linking to full text via third-party link-resolution services, such as Serials Solutions 360 Link or Ex Libris’ SFX, has become a popular method of providing access for users in academic libraries. This article describes several attempts made over the course of the past three years at the University of Michigan to gather data on linkage failure: the method used, the limiting factors, the changes made in methods, an analysis of the data collected, and a report of steps taken locally because of the studies. It is hoped that the experiences at one institution may be applicable more broadly and, perhaps, produce a stronger data-driven effort at improving linking services.

  5. Application of a fast sorting algorithm to the assignment of mass spectrometric cross-linking data.

    Science.gov (United States)

    Petrotchenko, Evgeniy V; Borchers, Christoph H

    2014-09-01

    Cross-linking combined with MS involves enzymatic digestion of cross-linked proteins and identifying cross-linked peptides. Assignment of cross-linked peptide masses requires a search of all possible binary combinations of peptides from the cross-linked proteins' sequences, which becomes impractical with increasing complexity of the protein system and/or if digestion enzyme specificity is relaxed. Here, we describe the application of a fast sorting algorithm to search large sequence databases for cross-linked peptide assignments based on mass. This same algorithm has been used previously for assigning disulfide-bridged peptides (Choi et al., ), but has not previously been applied to cross-linking studies. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
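
The core idea — replacing an all-pairs search over binary peptide combinations with a sort plus binary search — can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the linker-mass offset and mass tolerance are assumed parameters.

```python
import bisect

def find_crosslink_pairs(peptide_masses, observed_mass, linker_mass=0.0, tol=0.01):
    """Find peptide pairs whose summed mass (plus the cross-linker mass)
    matches an observed cross-link mass.

    Sorting once and binary-searching for each complement replaces the
    quadratic scan over all binary combinations of peptides.
    """
    ms = sorted(peptide_masses)
    want = observed_mass - linker_mass
    pairs = []
    for i, m in enumerate(ms):
        # search only from index i onward so each unordered pair appears once
        lo = bisect.bisect_left(ms, want - m - tol, i)
        hi = bisect.bisect_right(ms, want - m + tol, i)
        pairs.extend((m, ms[j]) for j in range(lo, hi))
    return pairs
```

The same sorted-complement lookup works for disulfide-bridged peptides, where the "linker" contribution is the mass change of the bridge.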

  6. SM-ROM-GL (Strong Motion Romania Ground Level) Database

    Directory of Open Access Journals (Sweden)

    Ioan Sorin BORCIA

    2015-07-01

    Full Text Available The SM-ROM-GL database includes data obtained by the processing of records performed at ground level by the Romanian seismic networks, namely INCERC, NIEP, NCSRR and ISPH-GEOTEC, during recent seismic events with moment magnitude Mw ≥ 5 and epicenters located in Romania. All the available seismic records were re-processed using the same basic software and the same procedures and options (filtering and baseline correction), in order to obtain a consistent dataset. The database stores computed parameters of seismic motions, i.e. peak values: PGA, PGV, PGD; effective peak values: EPA, EPV, EPD; control periods; spectral values of absolute acceleration, relative velocity and relative displacement; as well as instrumental intensity (as defined by Sandi and Borcia in 2011). The fields in the database include: coding of seismic events, stations and records, a number of associated fields (seismic event source parameters, geographical coordinates of seismic stations), links to the corresponding ground motion records, charts of the response spectra of absolute acceleration, relative velocity, relative displacement and instrumental intensity, as well as some other representative parameters of seismic motions. The design of the SM-ROM-GL database allows for easy maintenance, such that elementary knowledge of Microsoft Access 2000 is sufficient for its operation.

  7. Mining Electronic Health Records using Linked Data.

    Science.gov (United States)

    Odgers, David J; Dumontier, Michel

    2015-01-01

    Meaningful Use guidelines have pushed the United States Healthcare System to adopt electronic health record systems (EHRs) at an unprecedented rate. Hospitals and medical centers are providing access to clinical data via clinical data warehouses such as i2b2, or Stanford's STRIDE database. In order to realize the potential of using these data for translational research, clinical data warehouses must be interoperable with standardized health terminologies, biomedical ontologies, and growing networks of Linked Open Data such as Bio2RDF. Applying the principles of Linked Data, we transformed a de-identified version of STRIDE into a semantic clinical data warehouse containing visits, labs, diagnoses, prescriptions, and annotated clinical notes. We demonstrate the utility of this system through basic cohort selection, phenotypic profiling, and identification of disease genes. This work is significant in that it demonstrates the feasibility of using semantic web technologies to directly exploit existing biomedical ontologies and Linked Open Data.
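
Cohort selection over triple-structured data can be sketched with a tiny in-memory stand-in for an RDF store. The property names (`hasDiagnosis`, `hasLabTest`) and the patient identifiers are hypothetical; a real deployment such as the one described would run SPARQL queries against an RDF triple store rather than this toy class.

```python
class TripleStore:
    """Minimal in-memory subject-property-object store."""

    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        """Return all triples matching the given pattern (None = wildcard)."""
        return [(ts, tp, to) for ts, tp, to in self.triples
                if (s is None or ts == s)
                and (p is None or tp == p)
                and (o is None or to == o)]

def select_cohort(store, diagnosis, lab):
    """Patients that have a given diagnosis AND a given lab test on record."""
    with_dx = {s for s, _, _ in store.query(p="hasDiagnosis", o=diagnosis)}
    with_lab = {s for s, _, _ in store.query(p="hasLabTest", o=lab)}
    return sorted(with_dx & with_lab)
```

The wildcard pattern query mirrors the basic graph-pattern matching that SPARQL performs; linking to external ontologies amounts to using shared identifiers as the objects of such triples.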

  8. Download - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Trypanosomes Database Download First of all, please read the license of this database. Data ...1.4 KB) Simple search and download Download via FTP FTP server is sometimes jammed. If it is, access [here]. About This Database Data...base Description Download License Update History of This Database Site Policy | Contact Us Download - Trypanosomes Database | LSDB Archive ...

  9. License - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Arabidopsis Phenome Database License License to Use This Database Last updated : 2017/02/27 You may use this database...cense specifies the license terms regarding the use of this database and the requirements you must follow in using this database.... The license for this database is specified in the Creative ...Commons Attribution-Share Alike 4.0 International . If you use data from this database, please be sure attribute this database...ative Commons Attribution-Share Alike 4.0 International is found here . With regard to this database, you ar

  10. Ebolavirus Database: Gene and Protein Information Resource for Ebolaviruses

    Directory of Open Access Journals (Sweden)

    Rayapadi G. Swetha

    2016-01-01

    Full Text Available Ebola Virus Disease (EVD) is a life-threatening haemorrhagic fever in humans. Even though there are many reports on EVD, the protein precursor functions and virulence factors of ebolaviruses remain poorly understood. Comparative analyses of Ebolavirus genomes will help in the identification of these important features. This prompted us to develop the Ebolavirus Database (EDB), and we have provided links to various tools that will aid researchers to locate important regions in both the genomes and the proteomes of ebolaviruses. The genomic analyses of ebolaviruses will provide important clues for locating the essential and core functional genes. The aim of EDB is to act as an integrated resource for ebolaviruses, and we strongly believe that the database will be a useful tool for clinicians, microbiologists, health care workers, and bioscience researchers.

  11. GDR (Genome Database for Rosaceae): integrated web-database for Rosaceae genomics and genetics data.

    Science.gov (United States)

    Jung, Sook; Staton, Margaret; Lee, Taein; Blenda, Anna; Svancara, Randall; Abbott, Albert; Main, Dorrie

    2008-01-01

    The Genome Database for Rosaceae (GDR) is a central repository of curated and integrated genetics and genomics data of Rosaceae, an economically important family which includes apple, cherry, peach, pear, raspberry, rose and strawberry. GDR contains annotated databases of all publicly available Rosaceae ESTs, the genetically anchored peach physical map, Rosaceae genetic maps and comprehensively annotated markers and traits. The ESTs are assembled to produce unigene sets of each genus and the entire Rosaceae. Other annotations include putative function, microsatellites, open reading frames, single nucleotide polymorphisms, gene ontology terms and anchored map position where applicable. Most of the published Rosaceae genetic maps can be viewed and compared through CMap, the comparative map viewer. The peach physical map can be viewed using WebFPC/WebChrom, and also through our integrated GDR map viewer, which serves as a portal to the combined genetic, transcriptome and physical mapping information. ESTs, BACs, markers and traits can be queried by various categories and the search result sites are linked to the mapping visualization tools. GDR also provides online analysis tools such as a batch BLAST/FASTA server for the GDR datasets, a sequence assembly server and microsatellite and primer detection tools. GDR is available at http://www.rosaceae.org.

  12. A Framing Link Based Tabu Search Algorithm for Large-Scale Multidepot Vehicle Routing Problems

    Directory of Open Access Journals (Sweden)

    Xuhao Zhang

    2014-01-01

    Full Text Available A framing link (FL) based tabu search algorithm is proposed in this paper for the large-scale multidepot vehicle routing problem (LSMDVRP). Framing links are generated during continuous optimization of current solutions and then taken as skeletons so as to improve optimum-seeking ability, speed up the process of optimization, and obtain better results. Based on the comparison between pre- and postmutation routes in the current solution, different parts are extracted. In the current optimization period, links involved in the optimal solution are regarded as candidates for the FL base. Multiple optimization periods exist in the whole algorithm, and there are several potential FLs in each period. If the update condition is satisfied, the FL base is updated, new FLs are added into the current route, and the next period starts. Through adjusting the borderline of the multidepot sharing area with dynamic parameters, the authors define candidate selection principles for three kinds of customer connections, respectively. Link split and the roulette approach are employed to choose FLs. 18 LSMDVRP instances in three groups are studied and new optimal solution values for nine of them are obtained, with higher computation speed and reliability.

  13. Requirements and specifications for a particle database

    International Nuclear Information System (INIS)

    2015-01-01

    One of the tasks of WPEC Subgroup 38 (SG38) is to design a database structure for storing the particle information needed for nuclear reaction databases and transport codes. Since the same particle may appear many times in a reaction database (produced by many different reactions on different targets), one of the long-term goals for SG38 is to move towards a central database of particle information to reduce redundancy and ensure consistency among evaluations. The database structure must be general enough to describe all relevant particles and their properties, including mass, charge, spin and parity, half-life, decay properties, and so on. Furthermore, it must be broad enough to handle not only excited nuclear states but also excited atomic states that can de-excite through atomic relaxation. Databases built with this hierarchy will serve as central repositories for particle information that can be linked to from codes and other databases. It is hoped that the final product is general enough for use in other projects besides SG38. While this is called a 'particle database', the definition of a particle (as described in Section 2) is very broad. The database must describe nucleons, nuclei, excited nuclear states (and possibly atomic states) in addition to fundamental particles like photons, electrons, muons, etc. Under this definition the list of possible particles becomes quite large. To help organize them the database will need a way of grouping related particles (e.g., all the isotopes of an element, or all the excited levels of an isotope) together into particle 'groups'. The database will also need a way to classify particles that belong to the same 'family' (such as 'leptons', 'baryons', etc.). Each family of particles may have special requirements as to what properties are required. One important function of the particle database will be to provide an easy way for codes and external databases to look up any particle stored inside. 
In order to make access as

  14. License - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us SKIP Stemcell Database License License to Use This Database Last updated : 2017/03/13 You may use this database...specifies the license terms regarding the use of this database and the requirements you must follow in using this database.... The license for this database is specified in the Creative Common...s Attribution-Share Alike 4.0 International . If you use data from this database, please be sure attribute this database...al ... . The summary of the Creative Commons Attribution-Share Alike 4.0 International is found here . With regard to this database

  15. Joint Hybrid Backhaul and Access Links Design in Cloud-Radio Access Networks

    KAUST Repository

    Dhifallah, Oussama Najeeb

    2015-09-06

    The cloud-radio access network (CRAN) is expected to be the core network architecture for next generation mobile radio systems. In this paper, we consider the downlink of a CRAN formed of one central processor (the cloud) and several base stations (BSs), where each BS is connected to the cloud via either a wireless or capacity-limited wireline backhaul link. The paper addresses the joint design of the hybrid backhaul links (i.e., designing the wireline and wireless backhaul connections from the cloud to the BSs) and the access links (i.e., determining the sparse beamforming solution from the BSs to the users). The paper formulates the hybrid backhaul and access link design problem by minimizing the total network power consumption. The paper solves the problem using a two-stage heuristic algorithm. At the first stage, the sparse beamforming solution is found using a weighted mixed ℓ1/ℓ2-norm minimization approach; the correlation matrix of the quantization noise of the wireline backhaul links is computed using classical rate-distortion theory. At the second stage, the transmit powers of the wireless backhaul links are found by solving a power minimization problem subject to quality-of-service constraints, based on the principle of conservation of rate, utilizing the rates found in the first stage. Simulation results suggest that the performance of the proposed algorithm approaches the global optimum solution, especially at high signal-to-interference-plus-noise ratio (SINR).

  16. The DFBS Spectroscopic Database and the Armenian Virtual Observatory

    Directory of Open Access Journals (Sweden)

    Areg M Mickaelian

    2009-05-01

    Full Text Available The Digitized First Byurakan Survey (DFBS) is the digitized version of the famous Markarian Survey. It is the largest low-dispersion spectroscopic survey of the sky, covering 17,000 square degrees at galactic latitudes |b|>15. DFBS provides images and extracted spectra for all objects present in the FBS plates. Programs were developed to compute the astrometric solution, extract spectra, and apply wavelength and photometric calibration for objects. A DFBS database and catalog have been assembled containing data for nearly 20,000,000 objects. A classification scheme for the DFBS spectra is being developed. The Armenian Virtual Observatory is based on the DFBS database and other large-area surveys and catalogue data.

  17. Vertical partitioning of relational OLTP databases using integer programming

    DEFF Research Database (Denmark)

    Amossen, Rasmus Resen

    2010-01-01

    A way to optimize performance of relational row store databases is to reduce the row widths by vertically partitioning tables into table fractions in order to minimize the number of irrelevant columns/attributes read by each transaction. This paper considers vertical partitioning algorithms... for relational row-store OLTP databases with an H-store-like architecture, meaning that we would like to maximize the number of single-sited transactions. We present a model for the vertical partitioning problem that, given a schema together with a vertical partitioning and a workload, estimates the costs... applied to the TPC-C benchmark and the heuristic is shown to obtain solutions with costs close to the ones found using the quadratic program...
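
The cost-model idea — score a candidate vertical partitioning against a workload, then search for the cheapest split — can be illustrated by exhaustively evaluating two-way partitions of a small table. The cost function used here (each query reads the full width of every fragment containing a column it references) is a simplified assumption, not the paper's estimator, and real tables need heuristics rather than enumeration.

```python
def partition_cost(fragments, workload, width):
    """Cost of a vertical partitioning: each query pays the full row width of
    every fragment that contains at least one column it references."""
    cost = 0
    for cols, freq in workload:
        needed = set(cols)
        for frag in fragments:
            if needed & set(frag):
                cost += freq * sum(width[c] for c in frag)
    return cost

def best_two_way_partition(columns, workload, width):
    """Exhaustively evaluate all two-way vertical partitions of a table."""
    n = len(columns)
    best_cost, best_frags = None, None
    # mask < 2**(n-1) keeps the last column in fragment b, skipping mirror splits
    for mask in range(1, 2 ** (n - 1)):
        a = [c for i, c in enumerate(columns) if (mask >> i) & 1]
        b = [c for i, c in enumerate(columns) if not (mask >> i) & 1]
        cost = partition_cost([a, b], workload, width)
        if best_cost is None or cost < best_cost:
            best_cost, best_frags = cost, [a, b]
    return best_cost, best_frags
```

With disjoint query column sets, the enumeration recovers the intuitive split that keeps each query inside a single fragment, which is the row-store analogue of maximizing single-sited transactions.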

  18. KALIMER database development

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    The KALIMER database is an advanced database for integrated management of liquid metal reactor design technology development, built using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R&D program. IOC is a linkage control system between subprojects used to share and integrate the research results for KALIMER. The 3D CAD database gives a schematic overview of the KALIMER design structure. And the reserved documents database was developed to manage the documents and reports produced over the course of the project.

  19. KALIMER database development

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    The KALIMER database is an advanced database for integrated management of liquid metal reactor design technology development, built using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R&D program. IOC is a linkage control system between subprojects used to share and integrate the research results for KALIMER. The 3D CAD database gives a schematic overview of the KALIMER design structure. And the reserved documents database was developed to manage the documents and reports produced over the course of the project.

  20. Linking project-based mechanisms with domestic greenhouse gas emissions trading schemes

    International Nuclear Information System (INIS)

    Bygrave, S.; Bosi, M.

    2004-01-01

    Although there are a number of possible links between emission trading and project-based mechanisms, the focus of this paper is on linking domestic GHG emission trading schemes with: (1) domestic and (2) international (JI and CDM) GHG reduction project activities. The objective is to examine some of the challenges in linking DETs and project-based mechanisms, as well as some possible solutions to address these challenges. The link between JI / CDM and intergovernmental international emissions trading (i.e. Article 17 of the Kyoto Protocol) is defined by the Kyoto Protocol, and therefore is not covered in this paper. The paper is written in the context of: (a) countries adhering to the Kyoto Protocol and elaborating their strategies to meet their GHG emission commitments, including through the use of the emissions trading and project-based mechanisms. For example, the European Union (EU) will be commencing a GHG Emissions Trading Scheme in January 2005, and recently, the Council of Ministers and the European Parliament agreed on a text for an EU Linking Directive allowing the use of JI and CDM emission units in the EU Emission Trading Scheme (EU-ETS); and (b) all countries (and/or regions within countries) with GHG emission obligations that may choose to use domestic emissions trading and project-based mechanisms to meet their GHG commitments. The paper includes the following elements: (1) an overview of the different flexibility mechanisms (i.e. GHG emissions trading and PBMs), including a brief description and comparisons between the mechanisms (Section 3); (2) an exploration of the issues that emerge when project-based mechanisms link with domestic emissions trading schemes, as well as possible solutions to address some of the challenges raised (Section 4); (3) a case study examining the EU-ETS and the EU Linking Directive on project-based mechanisms, in particular how the EU is addressing relevant linking issues in a practical context (Section 5); (4) a

  1. Novel Schiff base (DBDDP) selective detection of Fe (III): Dispersed in aqueous solution and encapsulated in silica cross-linked micellar nanoparticles in living cell.

    Science.gov (United States)

    Gai, Fangyuan; Yin, Li; Fan, Mengmeng; Li, Ling; Grahn, Johnny; Ao, Yuhui; Yang, Xudong; Wu, Xuming; Liu, Yunling; Huo, Qisheng

    2018-03-15

    This work demonstrated the synthesis of (4E)-4-(4-(diphenylamino)benzylideneamino)-1,2-dihydro-1,5-dimethyl-2-phenylpyrazol-3-one (DBDDP) for Fe (III) detection in aqueous media and in the core of silica cross-linked micellar nanoparticles in living cells. Free DBDDP exhibited fluorescence enhancement due to Fe (III)-promoted hydrolysis in a mixed aqueous solution, while the DBDDP-doped silica cross-linked micellar nanoparticles (DBDDP-SCMNPs) exhibited electron-transfer-based fluorescence quenching by Fe (III) in living cells. The quenched fluorescence of DBDDP-SCMNPs and the concentration of Fe (III) exhibited a linear correlation, in accordance with the Stern-Volmer equation. Moreover, DBDDP-SCMNPs showed a low limit of detection (LOD) of 0.1 ppm and excellent selectivity against other metal ions. Due to their good solubility and biocompatibility, DBDDP-SCMNPs could be applied as fluorescence-quenching nanosensors in living cells. Copyright © 2017 Elsevier Inc. All rights reserved.
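
The linear correlation invoked above is the Stern-Volmer relation, F0/F = 1 + K_SV·[Q], so (F0/F − 1) plotted against quencher concentration [Q] is a line through the origin with slope K_SV. A minimal fitting sketch (synthetic data, not the paper's measurements):

```python
def stern_volmer_ksv(quencher_conc, intensities, f0):
    """Fit the Stern-Volmer constant K_SV from quenching data.

    Stern-Volmer: F0 / F = 1 + K_SV * [Q], so (F0/F - 1) vs [Q] is a line
    through the origin; least squares gives K_SV = sum(x*y) / sum(x*x).
    """
    xs = quencher_conc
    ys = [f0 / f - 1.0 for f in intensities]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
```

Deviations of the fitted line from the measured points would indicate a departure from simple dynamic quenching, which is why the paper's observed linearity supports the electron-transfer quenching mechanism.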

  2. NoSQL databases

    OpenAIRE

    Mrozek, Jakub

    2012-01-01

    This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...

  3. Radiation-induced linking reactions in polyethylene

    International Nuclear Information System (INIS)

    Zoepfl, F.J.

    1983-01-01

    Three types of measurements are reported relating to chemical reactions in polyethylene induced by ionizing radiation: 1) viscometric and low-angle laser light scattering measurements to determine the effect of a radical scavenger on the yield of links; 2) calorimetric measurements to determine the effect of radiation-induced linking on the melting behavior of polyethylene; and 3) high-resolution solution carbon 13 nuclear magnetic resonance (NMR) spectrometry measurements to determine the nature of the links and the method of their formation. The NMR results present the first direct detection of radiation-induced long-chain branching (Y links) in polyethylene, and place an apparent upper limit on the yield of H-shaped crosslinks that are formed when polyethylene is irradiated to low absorbed doses. The effect of radiation-induced linking on the melting behavior of polyethylene was examined using differential scanning calorimetry (DSC). It was found that radiation-induced links do not change the heat of fusion of polyethylene crystals, but decrease the melt entropy and increase the fold surface free energy per unit area of the crystals. The carbon 13 NMR results demonstrate that long-chain branches (Y links) are formed much more frequently than H-shaped crosslinks at low absorbed doses. The Y links are produced by reactions of alkyl free radicals with terminal vinyl groups in polyethylene

  4. Autism genetic database (AGD): a comprehensive database including autism susceptibility gene-CNVs integrated with known noncoding RNAs and fragile sites

    Directory of Open Access Journals (Sweden)

    Talebizadeh Zohreh

    2009-09-01

    Full Text Available Abstract Background Autism is a highly heritable complex neurodevelopmental disorder; therefore, identifying its genetic basis has been challenging. To date, numerous susceptibility genes and chromosomal abnormalities have been reported in association with autism, but most discoveries either fail to be replicated or account for a small effect. Thus, in most cases the underlying causative genetic mechanisms are not fully understood. In the present work, the Autism Genetic Database (AGD) was developed as a literature-driven, web-based, and easy to access database designed with the aim of creating a comprehensive repository for all the currently reported genes and genomic copy number variations (CNVs) associated with autism, in order to further facilitate the assessment of these autism susceptibility genetic factors. Description AGD is a relational database that organizes data resulting from exhaustive literature searches for reported susceptibility genes and CNVs associated with autism. Furthermore, genomic information about human fragile sites and noncoding RNAs was also downloaded and parsed from miRBase, snoRNA-LBME-db, piRNABank, and the MIT/ICBP siRNA database. A web client genome browser enables viewing of the features while a web client query tool provides access to more specific information for the features. When applicable, links to external databases including GenBank, PubMed, miRBase, snoRNA-LBME-db, piRNABank, and the MIT siRNA database are provided. Conclusion AGD comprises a comprehensive list of susceptibility genes and copy number variations reported to date in association with autism, as well as all known human noncoding RNA genes and fragile sites. Such a unique and inclusive autism genetic database will facilitate the evaluation of autism susceptibility factors in relation to known human noncoding RNAs and fragile sites, impacting on human diseases. As a result, this new autism database offers a valuable tool for the research

  5. SYNTHESIS AND CATALYTIC PROPERTIES OF CROSS-LINKED HYDROPHOBICALLY ASSOCIATING POLY(ALKYLMETHYLDIALLYLAMMONIUM BROMIDES)

    NARCIS (Netherlands)

    WANG, GJ; ENGBERTS, JBFN

    1994-01-01

    Cross-linked, hydrophobically associating homo- and copolymers were synthesized by free-radical cyclo(co)polymerization of alkylmethyldiallylammonium bromide monomers with a small amount of N,N'-methylenebisacrylamide in aqueous solution using ammonium persulfate as the initiator. The cross-linked

  6. Solute-vacancy binding in aluminum

    International Nuclear Information System (INIS)

    Wolverton, C.

    2007-01-01

    Previous efforts to understand solute-vacancy binding in aluminum alloys have been hampered by a scarcity of reliable, quantitative experimental measurements. Here, we report a large database of solute-vacancy binding energies determined from first-principles density functional calculations. The calculated binding energies agree well with accurate measurements where available, and provide an accurate predictor of solute-vacancy binding in other systems. We find: (i) some common solutes in commercial Al alloys (e.g., Cu and Mg) possess either very weak (Cu), or even repulsive (Mg), binding energies. Hence, we assert that some previously reported large binding energies for these solutes are erroneous. (ii) Large binding energies are found for Sn, Cd and In, confirming the proposed mechanism for the reduced natural aging in Al-Cu alloys containing microalloying additions of these solutes. (iii) In addition, we predict that similar reduction in natural aging should occur with additions of Si, Ge and Au. (iv) Even larger binding energies are found for other solutes (e.g., Pb, Bi, Sr, Ba), but these solutes possess essentially no solubility in Al. (v) We have explored the physical effects controlling solute-vacancy binding in Al. We find that there is a strong correlation between binding energy and solute size, with larger solute atoms possessing a stronger binding with vacancies. (vi) Most transition-metal 3d solutes do not bind strongly with vacancies, and some are even energetically strongly repelled from vacancies, particularly for the early 3d solutes, Ti and V

  7. A linked GeoData map for enabling information access

    Science.gov (United States)

    Powell, Logan J.; Varanka, Dalia E.

    2018-01-10

    Overview. The Geospatial Semantic Web (GSW) is an emerging technology that uses the Internet for more effective knowledge engineering and information extraction. Among the aims of the GSW are to structure the semantic specifications of data to reduce ambiguity and to link those data more efficiently. The data are stored as triples, the basic data unit in graph databases, which are similar to the vector data model of geographic information systems (GIS); that is, a node-edge-node model that forms a graph of semantically related information. The GSW is supported by emerging technologies such as linked geospatial data, described below, that enable it to store and manage geographical data that require new cartographic methods for visualization. This report describes a map that can interact with linked geospatial data using a simulation of a data query approach called the browsable graph to find information that is semantically related to a subject of interest, visualized using the Data Driven Documents (D3) library. Such a semantically enabled map functions as a map knowledge base (MKB) (Varanka and Usery, 2017). An MKB differs from a database in an important way. The central element of a triple, alternatively called the edge or property, is composed of a logic formalization that structures the relation between the first and third parts, the nodes or objects. Node-edge-node represents the graphic form of the triple, and the subject-property-object terms represent the data structure. Object classes connect to build a federated graph, similar to a network in visual form. Because the triple property is a logical statement (a predicate), the data graph represents logical propositions or assertions accepted to be true about the subject matter. These logical formalizations can be manipulated to calculate new triples, representing inferred logical assertions, from the existing data. To demonstrate an MKB system, a technical proof-of-concept is developed that uses geographically
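The node-edge-node triple model and the one-hop "browsable graph" query described in this record can be sketched in a few lines of plain Python; the place names and properties below are invented for illustration and are not the report's actual MKB data.

```python
# Minimal sketch of the node-edge-node triple model and a "browsable graph"
# query, using plain Python tuples. All names are illustrative only.

# Each triple is (subject, property, object).
triples = [
    ("AppalachianTrail", "crosses", "PotomacRiver"),
    ("PotomacRiver", "flowsInto", "ChesapeakeBay"),
    ("AppalachianTrail", "locatedIn", "UnitedStates"),
]

def browse(subject, graph):
    """Return every (property, object) edge leaving `subject` --
    the one-hop neighbourhood a browsable-graph query would visualize."""
    return [(p, o) for (s, p, o) in graph if s == subject]

print(browse("AppalachianTrail", triples))
```

Following an object node of one result as the subject of the next call is exactly the "browsing" traversal the record describes.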

  8. Implementing SaaS Solution for CRM

    Directory of Open Access Journals (Sweden)

    Adriana LIMBASAN

    2011-01-01

    Full Text Available The greatest innovations in virtualization and distributed computing have accelerated interest in cloud computing (IaaS, PaaS, SaaS, and so on). This paper presents a SaaS prototype for Customer Relationship Management of a real estate company. Starting from several approaches to e-marketing and from SaaS features and architectures, we adopted a model for a CRM solution using a SaaS Level 2 architecture and a distributed database. Based on the system objectives and functionality, we developed a modular solution for solving CRM and e-marketing targets in real estate companies.

  9. DianaHealth.com, an On-Line Database Containing Appraisals of the Clinical Value and Appropriateness of Healthcare Interventions: Database Development and Retrospective Analysis.

    Science.gov (United States)

    Bonfill, Xavier; Osorio, Dimelza; Solà, Ivan; Pijoan, Jose Ignacio; Balasso, Valentina; Quintana, Maria Jesús; Puig, Teresa; Bolibar, Ignasi; Urrútia, Gerard; Zamora, Javier; Emparanza, José Ignacio; Gómez de la Cámara, Agustín; Ferreira-González, Ignacio

    2016-01-01

    To describe the development of a novel on-line database aimed to serve as a source of information concerning healthcare interventions appraised for their clinical value and appropriateness by several initiatives worldwide, and to present a retrospective analysis of the appraisals already included in the database. Database development and a retrospective analysis. The database DianaHealth.com is already on-line and it is regularly updated, independent, open access and available in English and Spanish. Initiatives are identified in medical news, in article references, and by contacting experts in the field. We include appraisals in the form of clinical recommendations, expert analyses, conclusions from systematic reviews, and original research that label any health care intervention as low-value or inappropriate. We obtain the information necessary to classify the appraisals according to type of intervention, specialties involved, publication year, authoring initiative, and key words. The database is accessible through a search engine which retrieves a list of appraisals and a link to the website where they were published. DianaHealth.com also provides a brief description of the initiatives and a section where users can report new appraisals or suggest new initiatives. From January 2014 to July 2015, the on-line database included 2940 appraisals from 22 initiatives: eleven campaigns gathering clinical recommendations from scientific societies, five sets of conclusions from literature review, three sets of recommendations from guidelines, two collections of articles on low clinical value in medical journals, and an initiative of our own. We have developed an open access on-line database of appraisals about healthcare interventions considered of low clinical value or inappropriate. DianaHealth.com could help physicians and other stakeholders make better decisions concerning patient care and healthcare systems sustainability. Future efforts should be focused on

  10. DianaHealth.com, an On-Line Database Containing Appraisals of the Clinical Value and Appropriateness of Healthcare Interventions: Database Development and Retrospective Analysis.

    Directory of Open Access Journals (Sweden)

    Xavier Bonfill

    Full Text Available To describe the development of a novel on-line database aimed to serve as a source of information concerning healthcare interventions appraised for their clinical value and appropriateness by several initiatives worldwide, and to present a retrospective analysis of the appraisals already included in the database. Database development and a retrospective analysis. The database DianaHealth.com is already on-line and it is regularly updated, independent, open access and available in English and Spanish. Initiatives are identified in medical news, in article references, and by contacting experts in the field. We include appraisals in the form of clinical recommendations, expert analyses, conclusions from systematic reviews, and original research that label any health care intervention as low-value or inappropriate. We obtain the information necessary to classify the appraisals according to type of intervention, specialties involved, publication year, authoring initiative, and key words. The database is accessible through a search engine which retrieves a list of appraisals and a link to the website where they were published. DianaHealth.com also provides a brief description of the initiatives and a section where users can report new appraisals or suggest new initiatives. From January 2014 to July 2015, the on-line database included 2940 appraisals from 22 initiatives: eleven campaigns gathering clinical recommendations from scientific societies, five sets of conclusions from literature review, three sets of recommendations from guidelines, two collections of articles on low clinical value in medical journals, and an initiative of our own. We have developed an open access on-line database of appraisals about healthcare interventions considered of low clinical value or inappropriate. DianaHealth.com could help physicians and other stakeholders make better decisions concerning patient care and healthcare systems sustainability.
Future efforts should be

  11. DMPD: Toll-like receptor 3: a link between toll-like receptor, interferon and viruses. [Dynamic Macrophage Pathway CSML Database

    Lifescience Database Archive (English)

    Full Text Available Toll-like receptor 3: a link between toll-like receptor, interferon and viruses. PubmedID 15031527

  12. Saving Large Semantic Data in Cloud: A Survey of the Main DBaaS Solutions

    Directory of Open Access Journals (Sweden)

    Bogdan IANCU

    2018-01-01

    Full Text Available In the last decades, the evolution of ICT has been spectacular, having a major impact on all the other sectors of activity. New technologies have emerged, coming up with solutions to existing problems and opening up new opportunities. This article discusses solutions that combine big data, semantic web and cloud computing technologies. The authors analyze various possibilities of storing large volumes of data in triplestore databases, which are currently the storage of choice for semantic web data. The paper first presents the existing solutions for installing triplestores on premises and then focuses on triplestores as DBaaS (in the cloud). Comparative analyses are made between the various identified solutions. This paper provides useful means for choosing the most appropriate database solution for semantic web data representation, both on premises and as DBaaS.

  13. Reliability of capacitors for DC-link applications - An overview

    DEFF Research Database (Denmark)

    Wang, Huai; Blaabjerg, Frede

    2013-01-01

    DC-link capacitors are an important part of the majority of power electronic converters, contributing to cost, size and failure rate on a considerable scale. From the capacitor users' viewpoint, this paper presents a review of the improvement of DC-link reliability in power electronic converters from two aspects: 1) reliability-oriented DC-link design solutions; 2) condition monitoring of DC-link capacitors during operation. Failure mechanisms, failure modes and lifetime models of capacitors suitable for these applications are also discussed as a basis for understanding the physics-of-failure. This review serves to provide a clear picture of the state-of-the-art research in this area and to identify the corresponding challenges and future research directions for capacitors and their DC-link applications.

  14. Distributed Access View Integrated Database (DAVID) system

    Science.gov (United States)

    Jacobs, Barry E.

    1991-01-01

    The Distributed Access View Integrated Database (DAVID) System, which was adopted by the Astrophysics Division for their Astrophysics Data System, is a solution to the system heterogeneity problem. The heterogeneous components of the Astrophysics problem are outlined. The Library and Library Consortium levels of the DAVID approach are described, followed by the 'books' and 'kits' level and the Universal Object Typer Management System level. The relation of the DAVID project to the Small Business Innovative Research (SBIR) program is explained.

  15. Description of OPRA: A Danish database designed for the analyses of risk factors associated with 30-day hospital readmission of people aged 65+ years.

    Science.gov (United States)

    Pedersen, Mona K; Nielsen, Gunnar L; Uhrenfeldt, Lisbeth; Rasmussen, Ole S; Lundbye-Christensen, Søren

    2017-08-01

    To describe the construction of the Older Person at Risk Assessment (OPRA) database, the ability to link this database with existing data sources obtained from Danish nationwide population-based registries and to discuss its research potential for the analyses of risk factors associated with 30-day hospital readmission. We reviewed Danish nationwide registries to obtain information on demographic and social determinants as well as information on health and health care use in a population of hospitalised older people. The sample included all people aged 65+ years discharged from Danish public hospitals in the period from 1 January 2007 to 30 September 2010. We used personal identifiers to link and integrate the data from all events of interest with the outcome measures in the OPRA database. The database contained records of the patients, admissions and variables of interest. The cohort included 1,267,752 admissions for 479,854 unique people. The rate of 30-day all-cause acute readmission was 18.9% (n=239,077) and the overall 30-day mortality was 5.0% (n=63,116). The OPRA database provides the possibility of linking data on health and life events in a population of people moving into retirement and ageing. Construction of the database makes it possible to outline individual life and health trajectories over time, transcending organisational boundaries within health care systems. The OPRA database is multi-component and multi-disciplinary in orientation and has been prepared to be used in a wide range of subgroup analyses, including different outcome measures and statistical methods.
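The linkage idea behind the OPRA database, joining registry records on a personal identifier and flagging 30-day readmissions per discharge, can be sketched as follows; the field names, dates and window logic are illustrative assumptions, not the database's actual schema.

```python
# Hypothetical sketch of identifier-based record linkage: admissions are
# grouped by personal identifier and each admission is flagged as a 30-day
# readmission if it follows a recent discharge of the same person.
from datetime import date

admissions = [
    {"pid": "A1", "admitted": date(2007, 1, 10), "discharged": date(2007, 1, 15)},
    {"pid": "A1", "admitted": date(2007, 2, 1),  "discharged": date(2007, 2, 4)},
    {"pid": "B2", "admitted": date(2007, 3, 1),  "discharged": date(2007, 3, 2)},
]

def flag_readmissions(rows, window_days=30):
    """Mark an admission as a readmission if the same person was
    discharged within `window_days` before this admission date."""
    last_discharge = {}
    for r in sorted(rows, key=lambda r: (r["pid"], r["admitted"])):
        prev = last_discharge.get(r["pid"])
        r["readmission"] = (
            prev is not None and (r["admitted"] - prev).days <= window_days
        )
        last_discharge[r["pid"]] = r["discharged"]
    return rows

flagged = flag_readmissions(admissions)
print([r["readmission"] for r in flagged])  # one flag per admission
```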

  16. Database Dump - fRNAdb | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available fRNAdb Database Dump. Data name: Database Dump; DOI: 10.18908/lsdba.nbdc00452-002. Data: tab-separated text. File name: Database_Dump; File URL: ftp://ftp.biosciencedbc.jp/archive/frnadb/LATEST/Database_Dump; File size: 673 MB. Number of data entries: 4 files.

  17. Implementation of linked data in the life sciences at BioHackathon 2011.

    Science.gov (United States)

    Aoki-Kinoshita, Kiyoko F; Kinjo, Akira R; Morita, Mizuki; Igarashi, Yoshinobu; Chen, Yi-An; Shigemoto, Yasumasa; Fujisawa, Takatomo; Akune, Yukie; Katoda, Takeo; Kokubu, Anna; Mori, Takaaki; Nakao, Mitsuteru; Kawashima, Shuichi; Okamoto, Shinobu; Katayama, Toshiaki; Ogishima, Soichi

    2015-01-01

    Linked Data has gained some attention recently in the life sciences as an effective way to provide and share data. As a part of the Semantic Web, data are linked so that a person or machine can explore the web of data. Resource Description Framework (RDF) is the standard means of implementing Linked Data. In the process of generating RDF data, not only are data simply linked to one another, the links themselves are characterized by ontologies, thereby allowing the types of links to be distinguished. Although there is a high labor cost for data providers to define an ontology, the merit lies in the higher level of interoperability with data analysis and visualization software. This increase in interoperability facilitates the multi-faceted retrieval of data, and the appropriate data can be quickly extracted and visualized. Such retrieval is usually performed using the SPARQL (SPARQL Protocol and RDF Query Language) query language, which is used to query RDF data stores. For the database provider, such interoperability will surely lead to an increase in the number of users. This manuscript describes the experiences and discussions shared among participants of the week-long BioHackathon 2011, who went through the development of RDF representations of their own data and developed specific RDF and SPARQL use cases. Advice on the considerations involved in developing RDF representations of data is provided for bioinformaticians who are considering making their data available and interoperable. Participants of the BioHackathon 2011 were able to produce RDF representations of their data and gain a better understanding of the requirements for producing such data in a period of just five days. We summarize the work accomplished with the hope that it will be useful for researchers involved in developing laboratory databases or data analysis, and for those who are considering such technologies as RDF and Linked Data.
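The SPARQL-style retrieval the record describes, matching triple patterns with variables against an RDF graph, can be illustrated with a toy matcher; the protein/gene triples and the `match` helper are invented for this sketch, and real deployments would use an actual RDF store and SPARQL engine.

```python
# Toy illustration of the SPARQL idea: match a (s, p, o) pattern containing
# variables against an RDF-style set of triples. Data values are invented.

triples = {
    ("P04637", "encodedBy", "TP53"),
    ("P04637", "foundIn", "Homo_sapiens"),
    ("P38398", "encodedBy", "BRCA1"),
}

def match(pattern, graph):
    """Match one (s, p, o) pattern; strings starting with '?' are
    variables. Yields one binding dict per matching triple, like a
    SPARQL SELECT result row."""
    for triple in graph:
        binding = {}
        for want, got in zip(pattern, triple):
            if want.startswith("?"):
                binding[want] = got
            elif want != got:
                break
        else:
            yield binding

# Analogue of: SELECT ?protein WHERE { ?protein :encodedBy :TP53 }
rows = list(match(("?protein", "encodedBy", "TP53"), triples))
print(rows)  # [{'?protein': 'P04637'}]
```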

  18. Toward An Unstructured Mesh Database

    Science.gov (United States)

    Rezaei Mahdiraji, Alireza; Baumann, Peter

    2014-05-01

    -incidence relationships. We instrument the ImG model with sets of optional and application-specific constraints which can be used to check the validity of meshes for a specific class of object such as manifold, pseudo-manifold, and simplicial manifold. We conducted experiments to measure the performance of the graph database solution in processing mesh queries and compared it with the GrAL mesh library and the PostgreSQL database on synthetic and real mesh datasets. The experiments show that each system performs well on specific types of mesh queries; e.g., graph databases perform well on global path-intensive queries. In the future, we will investigate database operations for the ImG model and design a mesh query language.
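The kind of global, path-intensive query on which the record reports graph databases doing well can be sketched as a breadth-first traversal of a cell-adjacency graph; the tiny synthetic mesh below is an invented stand-in for the paper's datasets.

```python
# Minimal sketch of a path-intensive mesh query: breadth-first search for
# the shortest path between two cells in a cell-adjacency graph.
from collections import deque

# Adjacency of mesh cells sharing a face (a tiny synthetic mesh).
adjacent = {
    "c0": ["c1"],
    "c1": ["c0", "c2"],
    "c2": ["c1", "c3"],
    "c3": ["c2"],
}

def shortest_path(start, goal, adj):
    """Return one shortest cell-to-cell path, or None if disconnected."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adj.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_path("c0", "c3", adjacent))  # ['c0', 'c1', 'c2', 'c3']
```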

  19. DOT Online Database

    Science.gov (United States)

    Online database of Department of Transportation documents, including Advisory Circulars and data collection and distribution policies, accessible through a full-text web search. Document database website provided by MicroSearch.

  20. TIA: algorithms for development of identity-linked SNP islands for analysis by massively parallel DNA sequencing.

    Science.gov (United States)

    Farris, M Heath; Scott, Andrew R; Texter, Pamela A; Bartlett, Marta; Coleman, Patricia; Masters, David

    2018-04-11

    Single nucleotide polymorphisms (SNPs) located within the human genome have been shown to have utility as markers of identity in the differentiation of DNA from individual contributors. Massively parallel DNA sequencing (MPS) technologies and human genome SNP databases allow for the design of suites of identity-linked target regions, amenable to sequencing in a multiplexed and massively parallel manner. Therefore, tools are needed for leveraging the genotypic information found within SNP databases for the discovery of genomic targets that can be evaluated on MPS platforms. The SNP island target identification algorithm (TIA) was developed as a user-tunable system to leverage SNP information within databases. Using data within the 1000 Genomes Project SNP database, human genome regions were identified that contain globally ubiquitous identity-linked SNPs and that were responsive to targeted resequencing on MPS platforms. Algorithmic filters were used to exclude target regions that did not conform to user-tunable SNP island target characteristics. To validate the accuracy of TIA for discovering these identity-linked SNP islands within the human genome, SNP island target regions were amplified from 70 contributor genomic DNA samples using the polymerase chain reaction. Multiplexed amplicons were sequenced using the Illumina MiSeq platform, and the resulting sequences were analyzed for SNP variations. 166 putative identity-linked SNPs were targeted in the identified genomic regions. Of the 309 SNPs that provided discerning power across individual SNP profiles, 74 previously undefined SNPs were identified during evaluation of targets from individual genomes. Overall, DNA samples of 70 individuals were uniquely identified using a subset of the suite of identity-linked SNP islands. 
TIA offers a tunable genome search tool for the discovery of targeted genomic regions that are scalable in the population frequency and numbers of SNPs contained within the SNP island regions
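The user-tunable filtering step attributed to TIA, discarding candidate regions that fail SNP-island criteria, might look roughly like this; the thresholds, field names and data are assumptions for illustration only, not TIA's actual parameters.

```python
# Hypothetical sketch of tunable SNP-island filtering: keep only candidate
# regions whose SNP count and minimum allele frequency meet thresholds.

regions = [
    {"name": "chr1:100-300",  "snps": 6, "min_freq": 0.42},
    {"name": "chr2:500-700",  "snps": 2, "min_freq": 0.48},
    {"name": "chr5:900-1100", "snps": 5, "min_freq": 0.10},
]

def filter_islands(candidates, min_snps=3, min_allele_freq=0.25):
    """Drop regions that fail the (user-tunable) SNP-island criteria."""
    return [
        r for r in candidates
        if r["snps"] >= min_snps and r["min_freq"] >= min_allele_freq
    ]

print([r["name"] for r in filter_islands(regions)])  # ['chr1:100-300']
```

Raising or lowering the two keyword arguments mimics the "user-tunable" behaviour the abstract describes.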

  1. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  2. IAEA/NDS requirements related to database software

    International Nuclear Information System (INIS)

    Pronyaev, V.; Zerkin, V.

    2001-01-01

    Full text: The Nuclear Data Section of the IAEA disseminates data to NDS users through the Internet or on CD-ROMs and diskettes. The OSU Web-server on DEC Alpha with Open VMS and Oracle/DEC DBMS provides, via CGI scripts and FORTRAN retrieval programs, access to the main nuclear databases supported by the networks of Nuclear Reactions Data Centres and Nuclear Structure and Decay Data Centres (CINDA, EXFOR, ENDF, NSR, ENSDF). For Web access to data from other libraries and files, hyper-links to the files stored in ASCII text or other formats are used. Databases on CD-ROM are usually provided with some retrieval system. They are distributed in run-time mode and comply with all license requirements for software used in their development. Although major development work is now done on PCs with MS-Windows and Linux, NDS may not at present, due to some institutional conditions, use these platforms for organizing Web access to the data. Starting at the end of 1999, the NDS, in co-operation with other data centers, began to work out a strategy for migrating the main network nuclear databases onto platforms other than DEC Alpha/Open VMS/DBMS. Because the different co-operating centers have their own preferences for hardware and software, the requirement to provide maximum platform independence for nuclear databases is the most important and desirable feature. This requirement determined several standards for nuclear database software development. Taking into account the present state and future development, these standards can be formulated as follows: 1. All numerical data (experimental, evaluated, recommended values and their uncertainties) prepared for inclusion in an IAEA/NDS nuclear database should be submitted in the form of ASCII text files and will be kept at the NDS as a master file. 2. Databases with complex structure should be submitted in the form of files with standard SQL statements describing all their components. All extensions of standard SQL
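Standard 2 above, a database submitted as a plain-text file of standard SQL statements that any platform can replay, can be demonstrated with Python's built-in SQLite engine; the table layout and values are invented, not an actual EXFOR or CINDA schema.

```python
# Sketch of a platform-independent database submission: the whole database
# is an ASCII file of standard SQL statements, replayed on any SQL engine.
import sqlite3

DDL = """
CREATE TABLE exfor_entry (
    entry_id        TEXT PRIMARY KEY,
    reaction        TEXT,
    energy_mev      REAL,
    cross_section_b REAL
);
INSERT INTO exfor_entry VALUES ('E0001', 'Fe-56(n,p)', 14.1, 0.112);
"""

conn = sqlite3.connect(":memory:")   # any SQL engine would do
conn.executescript(DDL)              # replay the ASCII master file
row = conn.execute(
    "SELECT reaction, cross_section_b FROM exfor_entry"
).fetchone()
print(row)  # ('Fe-56(n,p)', 0.112)
```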

  3. Data-Mining Techniques in Detecting Factors Linked to Academic Achievement

    Science.gov (United States)

    Martínez Abad, Fernando; Chaparro Caso López, Alicia A.

    2017-01-01

    In light of the emergence of statistical analysis techniques based on data mining in education sciences, and the potential they offer to detect non-trivial information in large databases, this paper presents a procedure used to detect factors linked to academic achievement in large-scale assessments. The study is based on a non-experimental,…

  4. Riboflavin/UVA Collagen Cross-Linking-Induced Changes in Normal and Keratoconus Corneal Stroma

    Science.gov (United States)

    Hayes, Sally; Boote, Craig; Kamma-Lorger, Christina S.; Rajan, Madhavan S.; Harris, Jonathan; Dooley, Erin; Hawksworth, Nicholas; Hiller, Jennifer; Terill, Nick J.; Hafezi, Farhad; Brahma, Arun K.; Quantock, Andrew J.; Meek, Keith M.

    2011-01-01

    Purpose To determine the effect of ultraviolet-A collagen cross-linking with hypo-osmolar and iso-osmolar riboflavin solutions on stromal collagen ultrastructure in normal and keratoconus ex vivo human corneas. Methods Using small-angle X-ray scattering, measurements of collagen D-periodicity, fibril diameter and interfibrillar spacing were made at 1 mm intervals across six normal post-mortem corneas (two above physiological hydration (swollen) and four below (unswollen)) and two post-transplant keratoconus corneal buttons (one swollen; one unswollen), before and after hypo-osmolar cross-linking. The same parameters were measured in three other unswollen normal corneas before and after iso-osmolar cross-linking and in three pairs of swollen normal corneas, in which only the left was cross-linked (with iso-osmolar riboflavin). Results Hypo-osmolar cross-linking resulted in an increase in corneal hydration in all corneas. In the keratoconus corneas and unswollen normal corneas, this was accompanied by an increase in collagen interfibrillar spacing (p…). Conclusions The changes observed after cross-linking with hypo-osmolar or iso-osmolar riboflavin solutions are more likely a consequence of treatment-induced changes in tissue hydration rather than cross-linking. PMID:21850225

  5. Missing "Links" in Bioinformatics Education: Expanding Students' Conceptions of Bioinformatics Using a Biodiversity Database of Living and Fossil Reef Corals

    Science.gov (United States)

    Nehm, Ross H.; Budd, Ann F.

    2006-01-01

    NMITA is a reef coral biodiversity database that we use to introduce students to the expansive realm of bioinformatics beyond genetics. We introduce a series of lessons that have students use this database, thereby accessing real data that can be used to test hypotheses about biodiversity and evolution while targeting the "National Science …

  6. Geoscientific (GEO) database of the Andra Meuse / Haute-Marne research center

    International Nuclear Information System (INIS)

    Tabani, P.; Hemet, P.; Hermand, G.; Delay, J.; Auriere, C.

    2010-01-01

    Document available in extended abstract form only. The GEO database (geo-scientific database of the Meuse/Haute-Marne Center) is a tool developed by Andra with a view to grouping, in a secure computerised form, all data related to the acquisition of in situ and laboratory measurements made on solid and fluid samples. This database has three main functions: - Acquisition and management of data and computer files related to geological, geomechanical, hydrogeological and geochemical measurements on solid and fluid samples and in situ measurements (logging, on-sample measurements, geological logs, etc). - Consultation by the staff on Andra's intranet network for selective viewing of data linked to a borehole and/or a sample and for making computations and graphs on sets of laboratory measurements related to a sample. - Physical management of fluid and solid samples stored in a 'core library' in order to localize a sample, follow up its movement out of the 'core library' to an organization, and carry out regular inventories. The GEO database is a relational Oracle database. It is installed on a data server which stores information and manages the users' transactions. Users can consult, download and exploit data from any computer connected to the Andra network or the Internet. Access rights are managed through a login/password. Four geo-scientific applications are linked to the GEO database; they are: - The Geosciences portal: a web Intranet application accessible from the ANDRA network. It does not require any particular installation on the client side and is accessible through a web browser. A SQL Server Express database manages the users and access rights to the application. This application is used for the acquisition of hydrogeological and geochemical data collected in the field and on fluid samples, as well as data related to scientific work carried out at surface level or in drifts

  7. Cadastral Database Positional Accuracy Improvement

    Science.gov (United States)

    Hashim, N. M.; Omar, A. H.; Ramli, S. N. M.; Omar, K. M.; Din, N.

    2017-10-01

    Positional Accuracy Improvement (PAI) is the refining process of the geometry features in a geospatial dataset to improve their actual positions. The actual position relates to the absolute position in a specific coordinate system and the relation to the neighbourhood features. With the growth of spatial-based technology, especially Geographical Information Systems (GIS) and Global Navigation Satellite Systems (GNSS), PAI campaigns are inevitable, especially for legacy cadastral databases. Integration of a legacy dataset and a higher-accuracy dataset such as GNSS observations is a potential solution for improving the legacy dataset. However, merely integrating both datasets will lead to a distortion of the relative geometry. The improved dataset should be further treated to minimize inherent errors and to fit the new accurate dataset. The main focus of this study is to describe a method of angular-based Least Square Adjustment (LSA) for the PAI process of a legacy dataset. The existing high-accuracy dataset known as the National Digital Cadastral Database (NDCDB) is then used as a benchmark to validate the results. It was found that the proposed technique is highly feasible for positional accuracy improvement of legacy spatial datasets.
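As a much-simplified analogue of the least-squares adjustment described above (translation-only rather than the paper's angular LSA), the best-fitting shift from legacy points to benchmark points is the mean of the coordinate differences; all coordinates below are invented.

```python
# Toy least-squares positional adjustment: estimate the translation that
# best fits legacy points to high-accuracy benchmark points. For a pure
# translation model, the least-squares solution is the mean difference.

legacy    = [(100.0, 200.0), (150.0, 250.0), (120.0, 230.0)]
benchmark = [(102.0, 198.5), (152.1, 248.4), (121.9, 228.6)]

def lsa_translation(src, dst):
    """Least-squares (dx, dy) minimizing sum of squared residuals."""
    n = len(src)
    dx = sum(b[0] - a[0] for a, b in zip(src, dst)) / n
    dy = sum(b[1] - a[1] for a, b in zip(src, dst)) / n
    return dx, dy

dx, dy = lsa_translation(legacy, benchmark)
improved = [(x + dx, y + dy) for x, y in legacy]  # adjusted legacy dataset
print(round(dx, 2), round(dy, 2))  # 2.0 -1.5
```

A full PAI adjustment would add rotation, scale and angular observations, but the normal-equation idea is the same.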

  8. Database setup insuring radiopharmaceuticals traceability

    International Nuclear Information System (INIS)

    Robert, N.; Salmon, F.; Clermont-Gallerande, H. de; Celerier, C.

    2002-01-01

    Organising a radiopharmacy and ensuring proper traceability of radiopharmaceutical medicines raises numerous problems, especially for departments that are not assisted by global management network systems. Our work has been to find a solution enabling the use of off-the-shelf software to cover those needs. We have set up a PC database run by the Microsoft software ACCESS 97. Its use consists in: saving data related to the reception and removal of generators, isotopes and kits, as well as the results of quality control; and transferring data collected from the software that is connected to the activimeter (elution and preparation registers, prescription book). By relating all the saved data, ACCESS makes it possible to combine all information in order to process requests. At this stage, it is possible to edit all regular registers (prescription book, generator and radionuclide follow-up, blood-derived medicines traceability) and to quickly retrieve the patients who have received a particular radiopharmaceutical, or the radiopharmaceutical that has been given to a particular patient. This user-friendly database provides considerable support to nuclear medicine departments that do not possess any network management for their radiopharmaceutical activity. (author)

  9. The Amma-Sat Database

    Science.gov (United States)

    Ramage, K.; Desbois, M.; Eymard, L.

    2004-12-01

    The African Monsoon Multidisciplinary Analysis (AMMA) project is a French initiative which aims at identifying and analysing in detail the multidisciplinary and multi-scale processes that lead to a better understanding of the physical mechanisms linked to the African Monsoon. The main components of the African Monsoon are: Atmospheric Dynamics, the Continental Water Cycle, Atmospheric Chemistry, and Oceanic and Continental Surface Conditions. Satellites contribute to various objectives of the project, both for process analysis and for large-scale, long-term studies: some series of satellites (METEOSAT, NOAA, etc.) have been flown for more than 20 years, ensuring good-quality monitoring of some of the West African atmosphere and surface characteristics. Moreover, several recent missions and several projects will strongly improve and complement this survey. The AMMA project offers an opportunity to develop the exploitation of satellite data and to foster collaboration between specialist and non-specialist users. For this purpose, databases are being developed to collect all past and future satellite data related to the African Monsoon. It will then be possible to compare different types of data at different resolutions, and to validate satellite data with in situ measurements or numerical simulations. The AMMA-SAT database's main goal is to offer easy access to satellite data to the AMMA scientific community. The database contains geophysical products estimated from operational or research algorithms, covering the different components of the AMMA project. Nevertheless, the choice has been made to group data by pertinent scales rather than by theme. For this purpose, five regions of interest were defined to extract the data: an area covering the tropical Atlantic and Africa for large-scale studies, an area covering West Africa for mesoscale studies, and three local areas surrounding sites of in situ observations.
Within each of these regions satellite data are projected on

  10. "Mr. Database" : Jim Gray and the History of Database Technologies.

    Science.gov (United States)

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e.g. leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  11. Mathematics for Databases

    NARCIS (Netherlands)

    ir. Sander van Laar

    2007-01-01

    A formal description of a database consists of the description of the relations (tables) of the database together with the constraints that must hold on the database. Furthermore the contents of a database can be retrieved using queries. These constraints and queries for databases can very well be

  12. Inverse kinematics algorithm for a six-link manipulator using a polynomial expression

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1987-01-01

    This report is concerned with the forward and inverse kinematics problem relevant to a six-link robot manipulator. In order to derive the kinematic relationships between links, the vector rotation operator was applied instead of the conventional homogeneous transformation. The exact algorithm for solving the inverse problem was obtained by transforming the kinematics equations into a polynomial. As shown in test calculations, the accuracies of numerical solutions obtained by means of the present approach are found to be quite high. The proposed algorithm permits finding all feasible solutions to the given inverse problem. (author)
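The idea of reducing inverse kinematics to a closed-form solution of a polynomial can be illustrated on a much simpler 2-link planar arm, where the quadratic to be solved is hidden in the law of cosines. This is only an illustrative sketch, not the paper's six-link algorithm; all function names are hypothetical:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar 2-link arm.

    Returns both (elbow-up, elbow-down) joint-angle solutions,
    illustrating how IK reduces to solving a polynomial
    (here, a quadratic hidden in the law of cosines).
    """
    r2 = x * x + y * y
    # cos(theta2) from the law of cosines
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        return []  # target out of reach
    solutions = []
    for sign in (+1.0, -1.0):  # elbow-up / elbow-down branches
        s2 = sign * math.sqrt(1.0 - c2 * c2)
        theta2 = math.atan2(s2, c2)
        theta1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)
        solutions.append((theta1, theta2))
    return solutions

def fk(theta1, theta2, l1, l2):
    """Forward kinematics, used to verify each IK solution."""
    return (l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2),
            l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2))
```

As in the paper's approach, the sketch enumerates all feasible branches rather than a single solution, and each candidate can be checked against forward kinematics.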

  13. BioMart Central Portal: an open database network for the biological community

    Science.gov (United States)

    Guberman, Jonathan M.; Ai, J.; Arnaiz, O.; Baran, Joachim; Blake, Andrew; Baldock, Richard; Chelala, Claude; Croft, David; Cros, Anthony; Cutts, Rosalind J.; Di Génova, A.; Forbes, Simon; Fujisawa, T.; Gadaleta, E.; Goodstein, D. M.; Gundem, Gunes; Haggarty, Bernard; Haider, Syed; Hall, Matthew; Harris, Todd; Haw, Robin; Hu, S.; Hubbard, Simon; Hsu, Jack; Iyer, Vivek; Jones, Philip; Katayama, Toshiaki; Kinsella, R.; Kong, Lei; Lawson, Daniel; Liang, Yong; Lopez-Bigas, Nuria; Luo, J.; Lush, Michael; Mason, Jeremy; Moreews, Francois; Ndegwa, Nelson; Oakley, Darren; Perez-Llamas, Christian; Primig, Michael; Rivkin, Elena; Rosanoff, S.; Shepherd, Rebecca; Simon, Reinhard; Skarnes, B.; Smedley, Damian; Sperling, Linda; Spooner, William; Stevenson, Peter; Stone, Kevin; Teague, J.; Wang, Jun; Wang, Jianxin; Whitty, Brett; Wong, D. T.; Wong-Erasmus, Marie; Yao, L.; Youens-Clark, Ken; Yung, Christina; Zhang, Junjun; Kasprzyk, Arek

    2011-01-01

    BioMart Central Portal is a first-of-its-kind, community-driven effort to provide unified access to dozens of biological databases spanning genomics, proteomics, model organisms, cancer data, ontology information and more. Anybody can contribute an independently maintained resource to the Central Portal, allowing it to be exposed to and shared with the research community, and linking it with the other resources in the portal. Users can take advantage of the common interface to quickly utilize different sources without learning a new system for each. The system also simplifies cross-database searches that might otherwise require several complicated steps. Several integrated tools streamline common tasks, such as converting between ID formats and retrieving sequences. The combination of a wide variety of databases, an easy-to-use interface, robust programmatic access and the array of tools make Central Portal a one-stop shop for biological data querying. Here, we describe the structure of Central Portal and show example queries to demonstrate its capabilities. Database URL: http://central.biomart.org. PMID:21930507

  14. Physico-chemical/biological properties of tripolyphosphate cross-linked chitosan based nanofibers

    Energy Technology Data Exchange (ETDEWEB)

    Sarkar, Soumi Dey [School of Medical Science and Technology, Indian Institute of Technology Kharagpur, Kharagpur-721302 (India); Farrugia, Brooke L.; Dargaville, Tim R. [Institute of Health and Biomedical Innovation, Queensland University of Technology, Kelvin Groove, Queensland-4059 (Australia); Dhara, Santanu, E-mail: sdhara@smst.iitkgp.ernet.in [School of Medical Science and Technology, Indian Institute of Technology Kharagpur, Kharagpur-721302 (India)

    2013-04-01

    In this study, a chitosan-PEO blend, prepared in 15 M acetic acid, was electrospun into nanofibers (∼ 78 nm diameter) with bead-free morphology. While investigating the physico-chemical parameters of the blend solutions, the effect of yield stress on chitosan-based nanofiber fabrication was clearly evidenced. Architectural stability of the nanofiber mat in aqueous medium was achieved by ionotropic cross-linking of chitosan by tripolyphosphate (TPP) ions. The TPP cross-linked nanofiber mat showed swelling up to ∼ 300% in 1 h and ∼ 40% degradation during a 30-day study period. 3T3 fibroblast cells showed good attachment, proliferation and viability on TPP-treated chitosan-based nanofiber mats. The results indicate the non-toxic nature of TPP cross-linked chitosan-based nanofibers and their potential to be explored as a tissue engineering matrix. - Highlights: ► Chitosan-based nanofiber fabrication through electrospinning. ► Roles of solution viscosity and yield stress on the spinnability of chitosan evidenced. ► Tripolyphosphate (TPP) cross-linking rendered structural stability to the nanofibers. ► TPP cross-linking also improved the cellular response on chitosan-based nanofibers. ► Thus, chitosan-based nanofibers are suitable for tissue engineering applications.

  15. Physico-chemical/biological properties of tripolyphosphate cross-linked chitosan based nanofibers

    International Nuclear Information System (INIS)

    Sarkar, Soumi Dey; Farrugia, Brooke L.; Dargaville, Tim R.; Dhara, Santanu

    2013-01-01

    In this study, a chitosan-PEO blend, prepared in 15 M acetic acid, was electrospun into nanofibers (∼ 78 nm diameter) with bead-free morphology. While investigating the physico-chemical parameters of the blend solutions, the effect of yield stress on chitosan-based nanofiber fabrication was clearly evidenced. Architectural stability of the nanofiber mat in aqueous medium was achieved by ionotropic cross-linking of chitosan by tripolyphosphate (TPP) ions. The TPP cross-linked nanofiber mat showed swelling up to ∼ 300% in 1 h and ∼ 40% degradation during a 30-day study period. 3T3 fibroblast cells showed good attachment, proliferation and viability on TPP-treated chitosan-based nanofiber mats. The results indicate the non-toxic nature of TPP cross-linked chitosan-based nanofibers and their potential to be explored as a tissue engineering matrix. - Highlights: ► Chitosan-based nanofiber fabrication through electrospinning. ► Roles of solution viscosity and yield stress on the spinnability of chitosan evidenced. ► Tripolyphosphate (TPP) cross-linking rendered structural stability to the nanofibers. ► TPP cross-linking also improved the cellular response on chitosan-based nanofibers. ► Thus, chitosan-based nanofibers are suitable for tissue engineering applications.

  16. Numerical kinematic transformation calculations for a parallel link manipulator

    International Nuclear Information System (INIS)

    Killough, S.M.

    1993-01-01

    Parallel link manipulators are often considered for particular robotic applications because of the unique advantages they provide. Unfortunately, they have significant disadvantages with respect to calculating the kinematic transformations because of the high-order equations that must be solved. Presented is a manipulator design that exploits the mechanical advantages of parallel links yet also has a corresponding numerical kinematic solution that can be solved in real time on common microcomputers

  17. Uncertainty in geochemical modelling of CO2 and calcite dissolution in NaCl solutions due to different modelling codes and thermodynamic databases

    International Nuclear Information System (INIS)

    Haase, Christoph; Dethlefsen, Frank; Ebert, Markus; Dahmke, Andreas

    2013-01-01

    Highlights: • CO2 and calcite dissolution is calculated. • The codes PHREEQC, Geochemist’s Workbench, EQ3/6, and FactSage are used. • Comparison with Duan and Li (2008) shows the lowest deviation using phreeqc.dat and wateq4f.dat. • Using Pitzer databases does not improve the accuracy of the calculations. • Uncertainty in dissolved CO2 is largest using the geochemical models. - Abstract: A prognosis of the geochemical effects of CO2 storage induced by the injection of CO2 into geologic reservoirs or by CO2 leakage into the overlying formations can be performed by numerical modelling (non-invasive) and field experiments. Until now the research has focused on the geochemical processes of the CO2 reacting with the minerals of the storage formation, which mostly consists of quartzitic sandstones. Regarding the safety assessment, the reactions between the CO2 and the overlying formations in the case of a CO2 leakage are of equal importance as the reactions in the storage formation. In particular, limestone formations can react very sensitively to CO2 intrusion. The thermodynamic parameters necessary to model these reactions are not determined explicitly through experiments over the total range of temperature and pressure conditions and are thus extrapolated by the simulation code. The differences in the calculated results lead to different calcite and CO2 solubilities and can influence the safety issues. This uncertainty study is performed by comparing the computed results, applying the geochemical modelling software codes The Geochemist’s Workbench, EQ3/6, PHREEQC and FactSage/ChemApp and their thermodynamic databases. The input parameters (1) total concentration of the solution, (2) temperature and (3) fugacity are varied within typical values for CO2 reservoirs, overlying formations and close-to-surface aquifers. The most sensitive input parameter in the system H2O–CO2–NaCl–CaCO3 for the calculated range of dissolved calcite and CO2 is the

  18. BioModels Database: a repository of mathematical models of biological processes.

    Science.gov (United States)

    Chelliah, Vijayalakshmi; Laibe, Camille; Le Novère, Nicolas

    2013-01-01

    BioModels Database is a public online resource that allows storing and sharing of published, peer-reviewed quantitative, dynamic models of biological processes. The model components and behaviour are thoroughly checked to correspond to the original publication and manually curated to ensure reliability. Furthermore, the model elements are annotated with terms from controlled vocabularies as well as linked to relevant external data resources. This greatly helps in model interpretation and reuse. Models are accepted in SBML and CellML formats, stored in SBML format, and are available for download in various other common formats such as BioPAX, Octave, SciLab, VCML, XPP and PDF, in addition to SBML. The reaction network diagram of the models is also available in several formats. BioModels Database features a search engine, which provides simple and more advanced searches. Features such as online simulation and creation of smaller models (submodels) from the selected model elements of a larger one are provided. BioModels Database can be accessed both via a web interface and programmatically via web services. New models are available in BioModels Database at regular releases, about every 4 months.

  19. Java EE 7 recipes a problem-solution approach

    CERN Document Server

    Juneau, Josh

    2013-01-01

    Java EE 7 Recipes takes an example-based approach in showing how to program Enterprise Java applications in many different scenarios. Be it a small-business web application, or an enterprise database application, Java EE 7 Recipes provides effective and proven solutions to accomplish just about any task that you may encounter. You can feel confident using the reliable solutions that are demonstrated in this book in your personal or corporate environment. The solutions in Java EE 7 Recipes are built using the most current Java Enterprise specifications, including EJB 3.2, JSF 2.2, Expression La

  20. Crowdsourcing-Assisted Radio Environment Database for V2V Communication

    Directory of Open Access Journals (Sweden)

    Keita Katagiri

    2018-04-01

    In order to realize reliable Vehicle-to-Vehicle (V2V) communication systems for autonomous driving, recognition of radio propagation characteristics becomes an important technology. However, in current wireless distributed network systems, it is difficult to accurately estimate the radio propagation characteristics because of the locality of the radio propagation caused by surrounding buildings and geographical features. In this paper, we propose a measurement-based radio environment database for improving the accuracy of radio environment estimation in V2V communication systems. The database first gathers measurement datasets of the received signal strength indicator (RSSI) together with the transmission/reception locations from V2V systems. Using these datasets, average received power maps linked to the transmitter and receiver locations are generated. We have performed measurement campaigns of V2V communications in a real environment to observe RSSI for the database construction. Our results show that the proposed method achieves higher accuracy of radio propagation estimation than conventional path loss model-based estimation.
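The core of such a measurement-based database, pooling crowdsourced RSSI reports by quantized transmitter/receiver locations and answering queries with the per-cell average, can be sketched in a few lines. The grid size and class names are illustrative assumptions, not taken from the paper:

```python
from collections import defaultdict

CELL = 10.0  # grid resolution in metres (illustrative choice)

def cell(pos):
    """Quantize an (x, y) position in metres to a grid cell."""
    return (int(pos[0] // CELL), int(pos[1] // CELL))

class RadioEnvironmentMap:
    """Toy measurement-based radio map: average RSSI per
    (transmitter-cell, receiver-cell) pair, built up as
    crowdsourced V2V measurements are reported."""

    def __init__(self):
        self._sum = defaultdict(float)
        self._n = defaultdict(int)

    def report(self, tx_pos, rx_pos, rssi_dbm):
        """Add one crowdsourced measurement to the database."""
        key = (cell(tx_pos), cell(rx_pos))
        self._sum[key] += rssi_dbm
        self._n[key] += 1

    def estimate(self, tx_pos, rx_pos):
        """Average measured RSSI for this TX/RX cell pair, or None
        if no data exist yet (a real system would then fall back
        to a path-loss model)."""
        key = (cell(tx_pos), cell(rx_pos))
        if self._n[key] == 0:
            return None
        return self._sum[key] / self._n[key]
```

The measurement-based estimate captures the local shadowing that a distance-only path-loss model misses, which is the accuracy gain the paper reports.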

  1. An integrable, web-based solution for easy assessment of video-recorded performances

    DEFF Research Database (Denmark)

    Subhi, Yousif; Todsen, Tobias; Konge, Lars

    2014-01-01

    , and access to this information should be restricted to select personnel. A local software solution may also ease the need for customization to local needs and integration into existing user databases or project management software. We developed an integrable web-based solution for easy assessment of video...

  2. De-anonymizing Genomic Databases Using Phenotypic Traits

    Directory of Open Access Journals (Sweden)

    Humbert Mathias

    2015-06-01

    People increasingly have their genomes sequenced and some of them share their genomic data online. They do so for various purposes, including to find relatives and to help advance genomic research. An individual’s genome carries very sensitive, private information such as its owner’s susceptibility to diseases, which could be used for discrimination. Therefore, genomic databases are often anonymized. However, an individual’s genotype is also linked to visible phenotypic traits, such as eye or hair color, which can be used to re-identify users in anonymized public genomic databases, thus raising severe privacy issues. For instance, an adversary can identify a target’s genome using her known phenotypic traits and subsequently infer her susceptibility to Alzheimer’s disease. In this paper, we quantify, based on various phenotypic traits, the extent of this threat in several scenarios by implementing de-anonymization attacks on a genomic database of OpenSNP users sequenced by 23andMe. Our experimental results show that the proportion of correct matches reaches 23% with a supervised approach in a database of 50 participants. Our approach outperforms the baseline by a factor of four, in terms of the proportion of correct matches, in most scenarios. We also evaluate the adversary’s ability to predict individuals’ predisposition to Alzheimer’s disease, and we observe that the inference error can be halved compared to the baseline. We also analyze the effect of the number of known phenotypic traits on the success rate of the attack. As progress is made in genomic research, especially for genotype-phenotype associations, the threat presented in this paper will become more serious.
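The linking attack can be caricatured in a few lines: score each anonymized genotype by how many of its genotype-predicted phenotypic traits agree with the target's observed traits, and pick the best match. This is a toy sketch with invented names; the paper's actual attack uses probabilistic genotype-phenotype models rather than exact agreement counts:

```python
def match_score(observed_traits, predicted_traits):
    """Number of phenotypic traits (eye colour, hair colour, ...)
    on which the target's observed traits agree with the traits
    predicted from an anonymized genotype."""
    return sum(1 for trait, value in observed_traits.items()
               if predicted_traits.get(trait) == value)

def deanonymize(observed_traits, database):
    """Return the database entry whose genotype-predicted traits
    best match the target: a toy version of the linking attack."""
    return max(database,
               key=lambda rec: match_score(observed_traits, rec["predicted"]))
```

As the paper's analysis of trait counts suggests, the more traits the adversary observes, the more discriminating this score becomes.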

  3. Integrated Controlling System and Unified Database for High Throughput Protein Crystallography Experiments

    International Nuclear Information System (INIS)

    Gaponov, Yu.A.; Igarashi, N.; Hiraki, M.; Sasajima, K.; Matsugaki, N.; Suzuki, M.; Kosuge, T.; Wakatsuki, S.

    2004-01-01

    An integrated controlling system and a unified database for high throughput protein crystallography experiments have been developed. Main features of protein crystallography experiments (purification, crystallization, crystal harvesting, data collection, data processing) were integrated into the software under development. All information necessary to perform protein crystallography experiments is stored (except raw X-ray data that are stored in a central data server) in a MySQL relational database. The database contains four mutually linked hierarchical trees describing protein crystals, data collection of protein crystal and experimental data processing. A database editor was designed and developed. The editor supports basic database functions to view, create, modify and delete user records in the database. Two search engines were realized: direct search of necessary information in the database and object oriented search. The system is based on TCP/IP secure UNIX sockets with four predefined sending and receiving behaviors, which support communications between all connected servers and clients with remote control functions (creating and modifying data for experimental conditions, data acquisition, viewing experimental data, and performing data processing). Two secure login schemes were designed and developed: a direct method (using the developed Linux clients with secure connection) and an indirect method (using the secure SSL connection using secure X11 support from any operating system with X-terminal and SSH support). A part of the system has been implemented on a new MAD beam line, NW12, at the Photon Factory Advanced Ring for general user experiments

  4. The Protein Identifier Cross-Referencing (PICR) service: reconciling protein identifiers across multiple source databases

    Directory of Open Access Journals (Sweden)

    Leinonen Rasko

    2007-10-01

    Abstract Background Each major protein database uses its own conventions when assigning protein identifiers. Resolving the various, potentially unstable, identifiers that refer to identical proteins is a major challenge. This is a common problem when attempting to unify datasets that have been annotated with proteins from multiple data sources, or when querying data providers with one flavour of protein identifiers when the source database uses another. Partial solutions for protein identifier mapping exist, but they are limited to specific species or techniques and to a very small number of databases. As a result, we have not found a solution that is generic enough and broad enough in mapping scope to suit our needs. Results We have created the Protein Identifier Cross-Reference (PICR) service, a web application that provides interactive and programmatic (SOAP and REST) access to a mapping algorithm that uses the UniProt Archive (UniParc) as a data warehouse to offer protein cross-references based on 100% sequence identity to proteins from over 70 distinct source databases loaded into UniParc. Mappings can be limited by source database, taxonomic ID and activity status in the source database. Users can copy/paste or upload files containing protein identifiers or sequences in FASTA format to obtain mappings using the interactive interface. Search results can be viewed in simple or detailed HTML tables or downloaded as comma-separated values (CSV) or Microsoft Excel (XLS) files suitable for use in a local database or a spreadsheet. Alternatively, a SOAP interface is available to integrate PICR functionality in other applications, as is a lightweight REST interface. Conclusion We offer a publicly available service that can interactively map protein identifiers and protein sequences to the majority of commonly used protein databases. Programmatic access is available through a standards-compliant SOAP interface or a lightweight REST interface. The PICR
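The sequence-identity principle behind the mapping can be sketched as grouping accessions by exact sequence, then answering mapping queries from those groups, optionally filtered by source database. This is a simplified model of the PICR/UniParc idea; the record layout and function names are invented for illustration:

```python
from collections import defaultdict

def build_crossref(records):
    """Group protein identifiers by exact (100%) sequence identity,
    the same principle PICR applies via UniParc: every accession
    whose sequence is identical falls in the same cluster."""
    by_seq = defaultdict(list)
    for source_db, accession, sequence in records:
        by_seq[sequence].append((source_db, accession))
    return by_seq

def map_identifier(by_seq, sequence, target_db=None):
    """All cross-references for a sequence, optionally limited to
    one source database (PICR offers a similar filter)."""
    hits = by_seq.get(sequence, [])
    if target_db is not None:
        hits = [h for h in hits if h[0] == target_db]
    return hits
```

Keying on the sequence itself sidesteps the instability of accession numbers, which is exactly why PICR warehouses sequences rather than chains of identifier aliases.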

  5. Database development and management

    CERN Document Server

    Chao, Lee

    2006-01-01

    Introduction to Database Systems: Functions of a Database; Database Management System; Database Components; Database Development Process. Conceptual Design and Data Modeling: Introduction to Database Design Process; Understanding Business Process; Entity-Relationship Data Model; Representing Business Process with Entity-Relationship Model. Table Structure and Normalization: Introduction to Tables; Table Normalization; Transforming Data Models to Relational Databases. DBMS Selection; Transforming Data Models to Relational Databases; Enforcing Constraints; Creating Database for Business Process. Physical Design and Database

  6. Database reliability engineering designing and operating resilient database systems

    CERN Document Server

    Campbell, Laine

    2018-01-01

    The infrastructure-as-code revolution in IT is also affecting database administration. With this practical book, developers, system administrators, and junior to mid-level DBAs will learn how the modern practice of site reliability engineering applies to the craft of database architecture and operations. Authors Laine Campbell and Charity Majors provide a framework for professionals looking to join the ranks of today’s database reliability engineers (DBRE). You’ll begin by exploring core operational concepts that DBREs need to master. Then you’ll examine a wide range of database persistence options, including how to implement key technologies to provide resilient, scalable, and performant data storage and retrieval. With a firm foundation in database reliability engineering, you’ll be ready to dive into the architecture and operations of any modern database. This book covers: Service-level requirements and risk management Building and evolving an architecture for operational visibility ...

  7. Hibernate Recipes A Problem-Solution Approach

    CERN Document Server

    Mak, Gary

    2010-01-01

    Hibernate continues to be the most popular out-of-the-box framework solution for Java Persistence and data/database accessibility techniques and patterns. It is used for e-commerce-based web applications as well as heavy-duty transactional systems for the enterprise. Gary Mak, the author of the best-selling Spring Recipes, now brings you Hibernate Recipes. This book contains a collection of code recipes and templates for learning and building Hibernate solutions for you and your clients. This book is your pragmatic day-to-day reference and guide for doing all things involving Hibernate. There

  8. Report from the 3rd Workshop on Extremely Large Databases

    Directory of Open Access Journals (Sweden)

    Jacek Becla

    2010-02-01

    Academic and industrial users are increasingly facing the challenge of petabytes of data, but managing and analyzing such large data sets still remains a daunting task. Both the database and the map/reduce communities worldwide are working on addressing these issues. The 3rd Extremely Large Databases workshop was organized to examine the needs of scientific communities beginning to face these issues, to reach out to European communities working on extremely large scale data challenges, and to brainstorm possible solutions. The science benchmark that emerged from the 2nd workshop in this series was also debated. This paper is the final report of the discussions and activities at this workshop.

  9. Phonetic search methods for large speech databases

    CERN Document Server

    Moyal, Ami; Tetariy, Ella; Gishri, Michal

    2013-01-01

    “Phonetic Search Methods for Large Databases” focuses on Keyword Spotting (KWS) within large speech databases. The brief will begin by outlining the challenges associated with Keyword Spotting within large speech databases using dynamic keyword vocabularies. It will then continue by highlighting the various market segments in need of KWS solutions, as well as the specific requirements of each market segment. The work also includes a detailed description of the complexity of the task and the different methods that are used, including the advantages and disadvantages of each method and an in-depth comparison. The main focus will be on the Phonetic Search method and its efficient implementation. This will include a literature review of the various methods used for the efficient implementation of Phonetic Search Keyword Spotting, with an emphasis on the authors’ own research, which entails a comparative analysis of the Phonetic Search method which includes algorithmic details. This brief is useful for resea...

  10. Solving Relational Database Problems with ORDBMS in an Advanced Database Course

    Science.gov (United States)

    Wang, Ming

    2011-01-01

    This paper introduces how to use the object-relational database management system (ORDBMS) to solve relational database (RDB) problems in an advanced database course. The purpose of the paper is to provide a guideline for database instructors who desire to incorporate the ORDB technology in their traditional database courses. The paper presents…

  11. BENEFITS OF LINKED DATA FOR INTEROPERABILITY DURING CRISIS MANAGEMENT

    Directory of Open Access Journals (Sweden)

    R. Roller

    2015-08-01

    Floods represent a permanent risk to the Netherlands in general and to its power supply in particular. Data sharing is essential within this crisis scenario, as a power cut affects a great variety of interdependent sectors. Currently used data sharing systems have been shown to hamper interoperability between stakeholders, since they lack flexibility and there is no consensus on term definitions and interpretations. The study presented in this paper addresses these challenges by proposing a new data sharing solution based on Linked Data, a method of interlinking data points in a structured way on the web. A conceptual model for two data sharing parties in a flood-caused power cut crisis management scenario was developed, to which relevant data were linked. The analysis revealed that the presented data sharing solution burdens its users with extra costs in the short run, but saves resources in the long run by overcoming the interoperability problems of the legacy systems. The more stakeholders adopt Linked Data, the stronger its benefits for data sharing will become.
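The interlinking idea can be sketched with plain (subject, predicate, object) triples: two stakeholders publish their data independently, and a shared URI joins the datasets automatically once the graphs are merged. This is a minimal sketch, not the paper's conceptual model; the URIs and predicates are invented:

```python
def merge_graphs(*graphs):
    """Union of RDF-style (subject, predicate, object) triples from
    several stakeholders; shared URIs link the datasets with no
    schema negotiation between the parties."""
    merged = set()
    for g in graphs:
        merged.update(g)
    return merged

def query(graph, s=None, p=None, o=None):
    """Simple triple-pattern match; None acts as a wildcard."""
    return [(s2, p2, o2) for (s2, p2, o2) in graph
            if s in (None, s2) and p in (None, p2) and o in (None, o2)]
```

For example, if a grid operator publishes a substation's status and a water authority publishes which flood zone contains it, a query on the shared substation URI reaches both facts without either party adapting its system, which is the interoperability gain the paper argues for.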

  12. Generalized Database Management System Support for Numeric Database Environments.

    Science.gov (United States)

    Dominick, Wayne D.; Weathers, Peggy G.

    1982-01-01

    This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…

  13. NORPERM, the Norwegian Permafrost Database - a TSP NORWAY IPY legacy

    Science.gov (United States)

    Juliussen, H.; Christiansen, H. H.; Strand, G. S.; Iversen, S.; Midttømme, K.; Rønning, J. S.

    2010-10-01

    NORPERM, the Norwegian Permafrost Database, was developed at the Geological Survey of Norway during the International Polar Year (IPY) 2007-2009 as the main data legacy of the IPY research project Permafrost Observatory Project: A Contribution to the Thermal State of Permafrost in Norway and Svalbard (TSP NORWAY). Its structural and technical design is described in this paper along with the ground temperature data infrastructure in Norway and Svalbard, focussing on the TSP NORWAY permafrost observatory installations in the North Scandinavian Permafrost Observatory and Nordenskiöld Land Permafrost Observatory, being the primary data providers of NORPERM. Further developments of the database, possibly towards a regional database for the Nordic area, are also discussed. The purpose of NORPERM is to store ground temperature data safely and in a standard format for use in future research. The IPY data policy of open, free, full and timely release of IPY data is followed, and the borehole metadata description follows the Global Terrestrial Network for Permafrost (GTN-P) standard. NORPERM is purely a temperature database, and the data is stored in a relational database management system and made publicly available online through a map-based graphical user interface. The datasets include temperature time series from various depths in boreholes and from the air, snow cover, ground-surface or upper ground layer recorded by miniature temperature data-loggers, and temperature profiles with depth in boreholes obtained by occasional manual logging. All the temperature data from the TSP NORWAY research project is included in the database, totalling 32 temperature time series from boreholes, 98 time series of micrometeorological temperature conditions, and 6 temperature depth profiles obtained by manual logging in boreholes. The database content will gradually increase as data from previous and future projects are added. Links to near real-time permafrost temperatures, obtained
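A ground-temperature database of this kind can be sketched with a relational schema holding borehole metadata plus a table of timestamped temperatures at depth, from which both time series and depth profiles can be queried. This is an illustrative sqlite sketch with made-up values, not NORPERM's actual schema:

```python
import sqlite3

# Illustrative schema: borehole metadata plus a temperature table
# that serves both time series (fixed depth, varying time) and
# depth profiles (fixed time, varying depth).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE borehole (
    id INTEGER PRIMARY KEY,
    name TEXT, latitude REAL, longitude REAL
);
CREATE TABLE temperature (
    borehole_id INTEGER REFERENCES borehole(id),
    measured_at TEXT,      -- ISO 8601 timestamp
    depth_m REAL,          -- sensor depth below surface
    temp_c REAL
);
""")
# Hypothetical example records (values invented for illustration).
conn.execute("INSERT INTO borehole VALUES (1, 'BH-01', 78.0, 16.0)")
rows = [(1, "2009-01-01T00:00", 2.0, -6.1),
        (1, "2009-01-01T00:00", 10.0, -5.3)]
conn.executemany("INSERT INTO temperature VALUES (?, ?, ?, ?)", rows)

# Temperature-depth profile for one manual logging time.
profile = conn.execute(
    "SELECT depth_m, temp_c FROM temperature "
    "WHERE borehole_id = 1 AND measured_at = '2009-01-01T00:00' "
    "ORDER BY depth_m").fetchall()
```

Keeping every reading in one long table, rather than one table per logger, is what lets a map-based interface serve both kinds of dataset from the same store.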

  14. Design and development of linked data from the National Map

    Science.gov (United States)

    Usery, E. Lynn; Varanka, Dalia E.

    2012-01-01

    The development of linked data on the World-Wide Web provides the opportunity for the U.S. Geological Survey (USGS) to supply its extensive volumes of geospatial data, information, and knowledge in a machine interpretable form and reach users and applications that heretofore have been unavailable. To pilot a process to take advantage of this opportunity, the USGS is developing an ontology for The National Map and converting selected data from nine research test areas to a Semantic Web format to support machine processing and linked data access. In a case study, the USGS has developed initial methods for legacy vector and raster formatted geometry, attributes, and spatial relationships to be accessed in a linked data environment maintaining the capability to generate graphic or image output from semantic queries. The description of an initial USGS approach to developing ontology, linked data, and initial query capability from The National Map databases is presented.

  15. Growth hormone treatment in aarskog syndrome: analysis of the KIGS (Pharmacia International Growth Database) data.

    NARCIS (Netherlands)

    Darendeliler, F.; Larsson, P.; Neyzi, O.; Price, A.D.; Hagenas, L.; Sipila, I.; Lindgren, A.; Otten, B.J.; Bakker, B.

    2003-01-01

    Aarskog syndrome is an X-linked disorder characterized by faciogenital dysplasia and short stature. The present study set out to determine the effect of growth hormone (GH) therapy in patients with Aarskog syndrome enrolled in KIGS--the Pharmacia International Growth Database. Twenty-one patients

  16. License - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available This database is released under a Creative Commons Attribution-Share Alike 2.1 Japan license. If you use data from this database, please be sure to attribute it as the Trypanosomes Database (LSDB Archive).

  17. InverPep: A database of invertebrate antimicrobial peptides.

    Science.gov (United States)

    Gómez, Esteban A; Giraldo, Paula; Orduz, Sergio

    2017-03-01

    The aim of this work was to construct InverPep, a database specialised in experimentally validated antimicrobial peptides (AMPs) from invertebrates. AMP data contained in InverPep were manually curated from other databases and the scientific literature. MySQL was integrated with the development platform Laravel; this framework allows PHP code to be integrated with HTML, and was used to design the InverPep web page's interface. InverPep contains 18 separate fields, including InverPep code, phylum and species source, peptide name, sequence, peptide length, secondary structure, molar mass, charge, isoelectric point, hydrophobicity, Boman index, aliphatic index and percentage of hydrophobic amino acids. CALCAMPI, an algorithm to calculate the physicochemical properties of multiple peptides simultaneously, was programmed in the Perl language. To date, InverPep contains 702 experimentally validated AMPs from invertebrate species. All of the peptides contain information associated with their source, physicochemical properties, secondary structure, biological activity and links to external literature. Most AMPs in InverPep have a length between 10 and 50 amino acids, a positive charge, a Boman index between 0 and 2 kcal/mol, and 30-50% hydrophobic amino acids. InverPep includes 33 AMPs not reported in other databases. In addition, CALCAMPI and a statistical analysis of the InverPep data are presented. The InverPep database is available in English and Spanish. InverPep is a useful database to study invertebrate AMPs and its information could be used for the design of new peptides. The user-friendly interface of InverPep and its information can be freely accessed via a web-based browser at http://ciencias.medellin.unal.edu.co/gruposdeinvestigacion/prospeccionydisenobiomoleculas/InverPep/public/home_en. Copyright © 2016 International Society for Chemotherapy of Infection and Cancer. Published by Elsevier Ltd. All rights reserved.
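Descriptors like those stored per peptide (length, charge, percentage of hydrophobic amino acids) can be computed from a sequence in a few lines. This is a simplified sketch: the residue sets below are a common convention, not necessarily the ones CALCAMPI uses, and the charge model ignores pH and terminal groups:

```python
# Hydrophobic residues per a common convention (an assumption;
# CALCAMPI may use a different set).
HYDROPHOBIC = set("AVILMFWYC")
# Simplified side-chain charges at neutral pH (His, termini ignored).
POSITIVE = set("KR")
NEGATIVE = set("DE")

def peptide_properties(seq):
    """Length, rough net charge and hydrophobic fraction for a
    peptide sequence: the kind of descriptors stored for each AMP."""
    seq = seq.upper()
    length = len(seq)
    charge = (sum(aa in POSITIVE for aa in seq)
              - sum(aa in NEGATIVE for aa in seq))
    hydrophobic_pct = 100.0 * sum(aa in HYDROPHOBIC for aa in seq) / length
    return {"length": length, "net_charge": charge,
            "hydrophobic_pct": round(hydrophobic_pct, 1)}
```

Computing such descriptors for every entry is what enables the database-wide statistics the abstract reports (typical length 10-50, positive charge, 30-50% hydrophobic residues).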

  18. Schwarzschild Solution: A Historical Perspective

    Science.gov (United States)

    Bartusiak, Marcia

    2016-03-01

    While eighteenth-century Newtonians had imagined a precursor to the black hole, the modern version has its roots in the first full solution to Einstein's equations of general relativity, derived by the German astronomer Karl Schwarzschild on a World War I battlefront just weeks after Einstein introduced his completed theory in November 1915. This talk will demonstrate how Schwarzschild's solution is linked to the black hole and how it took more than half a century for the physics community to accept that such a bizarre celestial object could exist in the universe.

  19. Building a SuAVE browse interface to R2R's Linked Data

    Science.gov (United States)

    Clark, D.; Stocks, K. I.; Arko, R. A.; Zaslavsky, I.; Whitenack, T.

    2017-12-01

    The Rolling Deck to Repository program (R2R) is creating and evaluating a new browse portal based on the SuAVE platform and the R2R linked data graph. R2R manages the underway sensor data collected by the fleet of US academic research vessels and provides a discovery and access point to those data at its website, www.rvdata.us. R2R has a database-driven search interface, but seeks a more capable and extensible browse interface built on its substantial linked data resources. R2R's Linked Data graph organizes its data holdings around key concepts (e.g. cruise, vessel, device type, operator, award, organization, publication), anchored by persistent identifiers where feasible. The "Survey Analysis via Visual Exploration" or SuAVE platform (suave.sdsc.edu) is a system for online publication, sharing, and analysis of images and metadata. It has been implemented as an interface to diverse data collections, but had not previously been driven by linked data. SuAVE supports several features of interest to R2R, including faceted searching, collaborative annotations, efficient subsetting, Google Maps-like navigation over an image gallery, and several types of data analysis. Our initial SuAVE-based implementation used a CSV export from the R2R PostGIS-enabled PostgreSQL database. This served to demonstrate the utility of SuAVE but was static and required reloading as R2R data holdings grew. We are now working to implement a SPARQL-based (SPARQL Protocol and RDF Query Language) service that directly leverages the R2R Linked Data graph and offers the ability to subset and/or customize output. We will show examples of SuAVE faceted searches on R2R linked data concepts, and discuss our experience to date with this work in progress.
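
    A faceted search over a linked data graph of this kind typically reduces to generating a SPARQL query per facet selection. The sketch below shows what such a query builder might look like; the r2r: vocabulary terms and class names are illustrative placeholders, not the actual R2R ontology.

```python
# Sketch of building a faceted SPARQL query over a cruise/vessel/device graph.
# The r2r: namespace and property names are hypothetical placeholders.

R2R_PREFIX = "PREFIX r2r: <http://example.org/r2r/vocab#>"  # placeholder namespace

def cruise_facet_query(device_type=None, limit=100):
    """Return a SPARQL query listing cruises, optionally filtered by a
    device-type facet selection."""
    pattern = [
        "?cruise a r2r:Cruise ;",
        "        r2r:onVessel ?vessel ;",
        "        r2r:usedDevice ?device .",
        "?device r2r:deviceType ?type .",
    ]
    if device_type is not None:
        # A facet selection becomes a FILTER clause on the bound variable.
        pattern.append(f'FILTER(?type = "{device_type}")')
    body = "\n  ".join(pattern)
    return f"{R2R_PREFIX}\nSELECT ?cruise ?vessel ?type\nWHERE {{\n  {body}\n}}\nLIMIT {limit}"

if __name__ == "__main__":
    print(cruise_facet_query(device_type="echosounder"))
```

    Generating queries this way, rather than exporting a static CSV, keeps the browse interface current as the underlying graph grows, which is the motivation the abstract gives for moving to a SPARQL service.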

  20. A review of accessibility of administrative healthcare databases in the Asia-Pacific region.

    Science.gov (United States)

    Milea, Dominique; Azmi, Soraya; Reginald, Praveen; Verpillat, Patrice; Francois, Clement

    2015-01-01

    We describe and compare the availability and accessibility of administrative healthcare databases (AHDB) in several Asia-Pacific countries: Australia, Japan, South Korea, Taiwan, Singapore, China, Thailand, and Malaysia. The study included hospital records, reimbursement databases, prescription databases, and data linkages. Databases were first identified through PubMed, Google Scholar, and the ISPOR database register. Database custodians were contacted. Six criteria were used to assess the databases and provided the basis for a tool to categorise databases into seven levels ranging from least accessible (Level 1) to most accessible (Level 7). We also categorised overall data accessibility for each country as high, medium, or low based on accessibility of databases as well as the number of academic articles published using the databases. Fifty-four administrative databases were identified. Only a limited number of databases allowed access to raw data and were at Level 7 [Medical Data Vision EBM Provider, Japan Medical Data Centre (JMDC) Claims database and Nihon-Chouzai Pharmacy Claims database in Japan, and Medicare, Pharmaceutical Benefits Scheme (PBS), Centre for Health Record Linkage (CHeReL), HealthLinQ, Victorian Data Linkages (VDL), SA-NT DataLink in Australia]. At Levels 3-6 were several databases from Japan [Hamamatsu Medical University Database, Medi-Trend, Nihon University School of Medicine Clinical Data Warehouse (NUSM)], Australia [Western Australia Data Linkage (WADL)], Taiwan [National Health Insurance Research Database (NHIRD)], South Korea [Health Insurance Review and Assessment Service (HIRA)], and Malaysia [United Nations University (UNU)-Casemix]. Countries were categorised as having a high level of data accessibility (Australia, Taiwan, and Japan), medium level of accessibility (South Korea), or a low level of accessibility (Thailand, China, Malaysia, and Singapore). In some countries, data may be available but accessibility was restricted
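
    The review's two-step scheme (score each database into an accessibility level, then roll levels and publication counts up to a country rating) can be sketched as a small function. The thresholds below are invented for illustration only; the paper's six assessment criteria and exact cut-offs are not reproduced here.

```python
# Illustrative sketch of the review's tiering idea: databases are assigned
# accessibility Levels 1-7, and a country's overall rating is derived from
# its most accessible databases plus its publication record.
# Thresholds are hypothetical, not those used by the authors.

def country_accessibility(levels, n_publications):
    """Categorise a country's data accessibility as 'high', 'medium', or 'low'
    from its databases' levels (1 = least accessible, 7 = most accessible)."""
    best = max(levels, default=1)  # the most accessible database dominates
    if best >= 6 and n_publications >= 50:
        return "high"
    if best >= 4:
        return "medium"
    return "low"

if __name__ == "__main__":
    # e.g. a country with Level-7 raw-data access and many published studies
    print(country_accessibility([7, 5, 3], n_publications=120))
```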