WorldWideScience

Sample records for database cortad version

  1. The Coral Reef Temperature Anomaly Database (CoRTAD) Version 5 - Global, 4 km Sea Surface Temperature and Related Thermal Stress Metrics for 1982-2012 (NCEI Accession 0126774)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Version 5 of the Coral Reef Temperature Anomaly Database (CoRTAD) is a global, 4 km, sea surface temperature (SST) and related thermal stress metrics dataset for...

  2. The Coral Reef Temperature Anomaly Database (CoRTAD) Version 3 - Global, 4 km Sea Surface Temperature and Related Thermal Stress Metrics for 1982-2009 (NODC Accession 0068999)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Coral Reef Temperature Anomaly Database (CoRTAD) is a collection of sea surface temperature (SST) and related thermal stress metrics, developed specifically for...

  3. The Coral Reef Temperature Anomaly Database (CoRTAD) Version 2 - Global, 4 km Sea Surface Temperature and Related Thermal Stress Metrics for 1982-2008 (NODC Accession 0054501)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Coral Reef Temperature Anomaly Database (CoRTAD) is a collection of sea surface temperature (SST) and related thermal stress metrics, developed specifically for...

  4. The Coral Reef Temperature Anomaly Database (CoRTAD) Version 1 - Global, 4 km, Sea Surface Temperature and Related Thermal Stress Metrics for 1985-2005 (NODC Accession 0044419)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Coral Reef Temperature Anomaly Database (CoRTAD) is a collection of sea surface temperature (SST) and related thermal stress metrics, developed specifically for...

  6. The Coral Reef Temperature Anomaly Database (CoRTAD) Version 4 - Global, 4 km Sea Surface Temperature and Related Thermal Stress Metrics for 1981-10-31 to 2010-12-31 (NODC Accession 0087989)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Coral Reef Temperature Anomaly Database (CoRTAD) is a collection of sea surface temperature (SST) and related thermal stress metrics, developed specifically for...

  8. Schema Versioning for Multitemporal Relational Databases.

    Science.gov (United States)

    De Castro, Cristina; Grandi, Fabio; Scalas, Maria Rita

    1997-01-01

    Investigates new design options for extended schema versioning support for multitemporal relational databases. Discusses the improved functionalities they may provide. Outlines options and basic motivations for the new design solutions, as well as techniques for the management of proposed schema versioning solutions, includes algorithms and…

  9. Full Data of Yeast Interacting Proteins Database (Original Version) - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available Yeast Interacting Proteins Database. Data name: Full Data of Yeast Interacting Proteins Database (Original Version). DOI: 10.18908/lsdba.nbdc00742-004. Description of data contents: the entire data in the Yeast Interacting Proteins Database. Several sources were drawn on, including YPD (Yeast Proteome Database) and systematic names in the SGD (Saccharomyces Genome Database; http://www.yeastgenome.org/).

  10. The Mars Climate Database (MCD version 5.2)

    Science.gov (United States)

    Millour, E.; Forget, F.; Spiga, A.; Navarro, T.; Madeleine, J.-B.; Montabone, L.; Pottier, A.; Lefevre, F.; Montmessin, F.; Chaufray, J.-Y.; Lopez-Valverde, M. A.; Gonzalez-Galindo, F.; Lewis, S. R.; Read, P. L.; Huot, J.-P.; Desjean, M.-C.; MCD/GCM development Team

    2015-10-01

    The Mars Climate Database (MCD) is a database of meteorological fields derived from General Circulation Model (GCM) numerical simulations of the Martian atmosphere and validated using available observational data. The MCD includes complementary post-processing schemes such as high spatial resolution interpolation of environmental data and means of reconstructing the variability thereof. We have just completed (March 2015) the generation of a new version of the MCD, MCD version 5.2.

  11. X-ray Photoelectron Spectroscopy Database (Version 4.1)

    Science.gov (United States)

    SRD 20 X-ray Photoelectron Spectroscopy Database (Version 4.1) (Web, free access)   The NIST XPS Database gives access to energies of many photoelectron and Auger-electron spectral lines. The database contains over 22,000 line positions, chemical shifts, doublet splittings, and energy separations of photoelectron and Auger-electron lines.

  12. Solid Waste Projection Model: Database (Version 1.3)

    International Nuclear Information System (INIS)

    Blackburn, C.L.

    1991-11-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.3 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement

  13. Assessment of radionuclide databases in CAP88 mainframe version 1.0 and Windows-based version 3.0.

    Science.gov (United States)

    LaBone, Elizabeth D; Farfán, Eduardo B; Lee, Patricia L; Jannik, G Timothy; Donnelly, Elizabeth H; Foley, Trevor Q

    2009-09-01

    In this study the radionuclide databases for two versions of the Clean Air Act Assessment Package-1988 (CAP88) computer model were assessed in detail. CAP88 estimates radiation dose and the risk of health effects to human populations from radionuclide emissions to air. This program is used by several U.S. Department of Energy (DOE) facilities to comply with National Emission Standards for Hazardous Air Pollutants regulations. CAP88 Mainframe, referred to as version 1.0 on the U.S. Environmental Protection Agency Web site (http://www.epa.gov/radiation/assessment/CAP88/), was the very first CAP88 version, released in 1988. Some DOE facilities, including the Savannah River Site, still employ this version (1.0), while others use the more user-friendly personal computer Windows-based version 3.0 released in December 2007. Version 1.0 uses the program RADRISK, based on International Commission on Radiological Protection Publication 30, as its radionuclide database. Version 3.0 uses half-life, dose, and risk factor values based on Federal Guidance Report 13. Differences in these values could cause different results for the same input exposure data (the same scenario), depending on which version of CAP88 is used. Consequently, the differences between the two versions are being assessed in detail at Savannah River National Laboratory. The version 1.0 and 3.0 database files contain 496 and 838 radionuclides, respectively, and though one would expect the newer version to include all 496 radionuclides, 35 radionuclides listed in version 1.0 are not included in version 3.0. The majority of these have either extremely short or extremely long half-lives or are no longer in production; however, some of the short-lived radionuclides might produce progeny of great interest at DOE sites. In addition, 122 radionuclides were found to have different half-lives in the two versions, with 21 differing by more than 3 percent and 12 by more than 10 percent.
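    The cross-version check described above (nuclides missing from the newer release, and half-lives differing beyond a threshold) can be sketched as a simple table comparison. The nuclide names and half-life values below are illustrative stand-ins, not CAP88 data:

    ```python
    # Sketch of the cross-version comparison: given two radionuclide tables
    # mapping nuclide -> half-life, find nuclides missing from the newer table
    # and count those whose half-lives differ by more than each threshold.
    # The sample values are illustrative only.

    def compare_half_lives(v1, v3, thresholds=(0.03, 0.10)):
        missing = sorted(set(v1) - set(v3))          # in v1 but dropped from v3
        diffs = {}
        for nuclide in set(v1) & set(v3):
            rel = abs(v1[nuclide] - v3[nuclide]) / v1[nuclide]
            if rel > 0:
                diffs[nuclide] = rel                 # relative half-life change
        counts = {t: sum(1 for r in diffs.values() if r > t) for t in thresholds}
        return missing, diffs, counts

    # Hypothetical half-lives in days (illustrative, not CAP88 values).
    v1 = {"Co-60": 1925.0, "Cs-137": 11000.0, "Na-24": 0.63}
    v3 = {"Co-60": 1925.3, "Cs-137": 10976.0}

    missing, diffs, counts = compare_half_lives(v1, v3)
    print(missing)   # → ['Na-24']
    print(counts)    # → {0.03: 0, 0.1: 0}
    ```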

  14. Analysis of Handling Processes of Record Versions in NoSQL Databases

    Directory of Open Access Journals (Sweden)

    Yu. A. Grigorev

    2015-01-01

    Full Text Available This article investigates the processes of handling record versions in NoSQL databases. The goal of this work is to develop a model which enables users both to handle record versions and to work with a record simultaneously. This model allows us to estimate both the distribution of the time users spend handling record versions and the distribution of the record version count. With eventual consistency (W=R=1), several users may update the same record simultaneously. In this case, several versions of records with the same key will be stored in the database. When reading, the user obtains all versions, handles them, and saves a new version, while older versions are deleted. According to the model, the user's time for handling the record versions consists of two parts: a random handling time for each version and a random deliberation time for processing the result. Record saving and deletion times are much smaller than handling time, so they are ignored in the model. The paper offers two model variants. In the first variant, a client's handling time for one record is calculated as the sum of the random handling times of the individual versions, based on the count of record versions. This variant ignores the fact that the handling time of record versions may depend on the number of updates performed by other users between the current client's sequential updates of the record. The second variant therefore takes this dependence into consideration. The developed models were implemented in the GPSS environment. Model experiments were conducted with different client counts and different ratios between the handling time of one record and the result deliberation time. The analysis showed that despite the resemblance of the model variants, the difference in how the average record version count and handling time change is significant. In the second variant, dependences of the average count of record versions in the database and
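    The first model variant above, where a client's total time is the sum of one handling time per stored version plus a single deliberation time, can be sketched as a small Monte Carlo estimate. The exponential distributions and parameter values are assumptions for illustration; the paper's actual model was built in GPSS:

    ```python
    # Monte Carlo sketch of the first model variant: total processing time =
    # sum of per-version handling times + one deliberation time. The choice of
    # exponential distributions and the mean values are illustrative assumptions.
    import random

    def total_processing_time(n_versions, mean_handle=1.0, mean_deliberate=2.0,
                              rng=random):
        handle = sum(rng.expovariate(1.0 / mean_handle) for _ in range(n_versions))
        deliberate = rng.expovariate(1.0 / mean_deliberate)
        return handle + deliberate

    def estimate_mean(n_versions, trials=10000, seed=42):
        rng = random.Random(seed)
        total = sum(total_processing_time(n_versions, rng=rng) for _ in range(trials))
        return total / trials

    # With 3 stored versions, the expected total is about 3*1.0 + 2.0 = 5.0.
    print(estimate_mean(3))
    ```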

  15. Solid Waste Projection Model: Database (Version 1.4)

    International Nuclear Information System (INIS)

    Blackburn, C.; Cillan, T.

    1993-09-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.4 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement. Those interested in using the SWPM database should refer to the SWPM Database User's Guide. This document is available from the PNL Task M Project Manager (D. L. Stiles, 509-372-4358), the PNL Task L Project Manager (L. L. Armacost, 509-372-4304), the WHC Restoration Projects Section Manager (509-372-1443), or the WHC Waste Characterization Manager (509-372-1193)

  16. PR-EDB: Power Reactor Embrittlement Database Version 3

    International Nuclear Information System (INIS)

    Wang, Jy-An John; Subramani, Ranjit

    2008-01-01

    The aging and degradation of light-water reactor pressure vessels is of particular concern because of their relevance to plant integrity and the magnitude of the expected irradiation embrittlement. The radiation embrittlement of reactor pressure vessel materials depends on many factors, such as neutron fluence, flux, and energy spectrum, irradiation temperature, and preirradiation material history and chemical compositions. These factors must be considered to reliably predict pressure vessel embrittlement and to ensure the safe operation of the reactor. Large amounts of data from surveillance capsules are needed to develop a generally applicable damage prediction model that can be used for industry standards and regulatory guides. Furthermore, the investigations of regulatory issues such as vessel integrity over plant life, vessel failure, and sufficiency of current codes, Standard Review Plans (SRPs), and Guides for license renewal can be greatly expedited by the use of a well-designed computerized database. The Power Reactor Embrittlement Database (PR-EDB) is such a comprehensive collection of data for U.S. designed commercial nuclear reactors. The current version of the PR-EDB lists the test results of 104 heat-affected-zone (HAZ) materials, 115 weld materials, and 141 base materials, including 103 plates, 35 forgings, and 3 correlation monitor materials that were irradiated in 321 capsules from 106 commercial power reactors. The data files are given in dBASE format and can be accessed with any personal computer using the Windows operating system. 'User-friendly' utility programs have been written to investigate radiation embrittlement using this database. Utility programs allow the user to retrieve, select and manipulate specific data, display data to the screen or printer, and fit and plot Charpy impact data. The PR-EDB Version 3.0 upgrades Version 2.0. The package was developed based on the Microsoft .NET framework technology and uses Microsoft Access for

  17. PR-EDB: Power Reactor Embrittlement Database - Version 3

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Jy-An John [ORNL; Subramani, Ranjit [ORNL

    2008-03-01

    The aging and degradation of light-water reactor pressure vessels is of particular concern because of their relevance to plant integrity and the magnitude of the expected irradiation embrittlement. The radiation embrittlement of reactor pressure vessel materials depends on many factors, such as neutron fluence, flux, and energy spectrum, irradiation temperature, and preirradiation material history and chemical compositions. These factors must be considered to reliably predict pressure vessel embrittlement and to ensure the safe operation of the reactor. Large amounts of data from surveillance capsules are needed to develop a generally applicable damage prediction model that can be used for industry standards and regulatory guides. Furthermore, the investigations of regulatory issues such as vessel integrity over plant life, vessel failure, and sufficiency of current codes, Standard Review Plans (SRPs), and Guides for license renewal can be greatly expedited by the use of a well-designed computerized database. The Power Reactor Embrittlement Database (PR-EDB) is such a comprehensive collection of data for U.S. designed commercial nuclear reactors. The current version of the PR-EDB lists the test results of 104 heat-affected-zone (HAZ) materials, 115 weld materials, and 141 base materials, including 103 plates, 35 forgings, and 3 correlation monitor materials that were irradiated in 321 capsules from 106 commercial power reactors. The data files are given in dBASE format and can be accessed with any personal computer using the Windows operating system. "User-friendly" utility programs have been written to investigate radiation embrittlement using this database. Utility programs allow the user to retrieve, select and manipulate specific data, display data to the screen or printer, and fit and plot Charpy impact data. The PR-EDB Version 3.0 upgrades Version 2.0. The package was developed based on the Microsoft .NET framework technology and uses Microsoft Access for

  18. Solid waste projection model: Database user's guide (Version 1.0)

    International Nuclear Information System (INIS)

    Carr, F.; Stiles, D.

    1991-01-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for preparing to use Version 1 of the SWPM database, for entering and maintaining data, and for performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions, and does not provide instructions in the use of Paradox, the database management system in which the SWPM database is established. 3 figs., 1 tab

  19. Solid Waste Projection Model: Database user's guide (Version 1.3)

    International Nuclear Information System (INIS)

    Blackburn, C.L.

    1991-11-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for preparing to use Version 1.3 of the SWPM database, for entering and maintaining data, and for performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions and does not provide instruction in the use of Paradox, the database management system in which the SWPM database is established

  20. Global Mammal Parasite Database version 2.0.

    Science.gov (United States)

    Stephens, Patrick R; Pappalardo, Paula; Huang, Shan; Byers, James E; Farrell, Maxwell J; Gehman, Alyssa; Ghai, Ria R; Haas, Sarah E; Han, Barbara; Park, Andrew W; Schmidt, John P; Altizer, Sonia; Ezenwa, Vanessa O; Nunn, Charles L

    2017-05-01

    Illuminating the ecological and evolutionary dynamics of parasites is one of the most pressing issues facing modern science, and is critical for basic science, the global economy, and human health. Extremely important to this effort are data on the disease-causing organisms of wild animal hosts (including viruses, bacteria, protozoa, helminths, arthropods, and fungi). Here we present an updated version of the Global Mammal Parasite Database, a database of the parasites of wild ungulates (artiodactyls and perissodactyls), carnivores, and primates, and make it available for download as complete flat files. The updated database has more than 24,000 entries in the main data file alone, representing data from over 2700 literature sources. We include data on sampling method and sample sizes when reported, as well as both "reported" and "corrected" (i.e., standardized) binomials for each host and parasite species. Also included are current higher taxonomies and data on transmission modes used by the majority of species of parasites in the database. In the associated metadata we describe the methods used to identify sources and extract data from the primary literature, how entries were checked for errors, methods used to georeference entries, and how host and parasite taxonomies were standardized across the database. We also provide definitions of the data fields in each of the four files that users can download. © 2017 by the Ecological Society of America.

  1. CHIANTI—AN ATOMIC DATABASE FOR EMISSION LINES. XII. VERSION 7 OF THE DATABASE

    International Nuclear Information System (INIS)

    Landi, E.; Del Zanna, G.; Mason, H. E.; Young, P. R.; Dere, K. P.

    2012-01-01

    The CHIANTI spectral code consists of an atomic database and a suite of computer programs to calculate the optically thin spectrum of astrophysical objects and carry out spectroscopic plasma diagnostics. The database includes atomic energy levels, wavelengths, radiative transition probabilities, collision excitation rate coefficients, and ionization and recombination rate coefficients, as well as data to calculate free-free, free-bound, and two-photon continuum emission. Version 7 has been released, which includes several new ions, significant updates to existing ions, as well as Chianti-Py, the implementation of CHIANTI software in the Python programming language. All data and programs are freely available at http://www.chiantidatabase.org, while the Python interface to CHIANTI can be found at http://chiantipy.sourceforge.net.

  2. CHIANTI-AN ATOMIC DATABASE FOR EMISSION LINES. XII. VERSION 7 OF THE DATABASE

    Energy Technology Data Exchange (ETDEWEB)

    Landi, E. [Department of Atmospheric, Oceanic and Space Sciences, University of Michigan, Ann Arbor, MI 48109 (United States); Del Zanna, G.; Mason, H. E. [Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Young, P. R. [College of Science, George Mason University, 4400 University Drive, Fairfax, VA, 22030 (United States); Dere, K. P. [School of Physics, Astronomy and Computational Sciences, MS 6A2, George Mason University, 4400 University Drive, Fairfax, VA 22030 (United States)

    2012-01-10

    The CHIANTI spectral code consists of an atomic database and a suite of computer programs to calculate the optically thin spectrum of astrophysical objects and carry out spectroscopic plasma diagnostics. The database includes atomic energy levels, wavelengths, radiative transition probabilities, collision excitation rate coefficients, and ionization and recombination rate coefficients, as well as data to calculate free-free, free-bound, and two-photon continuum emission. Version 7 has been released, which includes several new ions, significant updates to existing ions, as well as Chianti-Py, the implementation of CHIANTI software in the Python programming language. All data and programs are freely available at http://www.chiantidatabase.org, while the Python interface to CHIANTI can be found at http://chiantipy.sourceforge.net.

  3. The Consolidated Human Activity Database — Master Version (CHAD-Master) Technical Memorandum

    Science.gov (United States)

    This technical memorandum contains information about the Consolidated Human Activity Database -- Master version, including CHAD contents, inventory of variables: Questionnaire files and Event files, CHAD codes, and references.

  4. Model Adequacy Analysis of Matching Record Versions in Nosql Databases

    Directory of Open Access Journals (Sweden)

    E. V. Tsviashchenko

    2015-01-01

    Full Text Available The article investigates a model of matching record versions. The goal of this work is to analyse the model's adequacy. The model allows estimating the distribution of a user's processing time for record versions and the distribution of the record version count. The second variant of the model was used, according to which the time a client takes to process record versions depends explicitly on the number of updates performed by other users between the current client's sequential updates. To test the model's adequacy, a real experiment was conducted on a cloud cluster of 10 virtual nodes provided by DigitalOcean. Ubuntu Server 14.04 was used as the operating system (OS), and the NoSQL system Riak was chosen for the experiments. Riak versions 2.0 and later provide the "dotted version vectors" (DVV) option, an extension of the classic vector clock. Their use guarantees that the number of versions simultaneously stored in the DB will not exceed the number of clients operating on a record in parallel, which is very important when conducting experiments. The application was developed with the Java library provided by Riak, and the processes run directly on the nodes. Two records were used in the experiment: Z, the record whose versions are handled by clients, and RZ, a service record containing record update counters. The application algorithm can be briefly described as follows: every client reads the versions of record Z, processes its updates using the RZ record counters, and saves the treated record in the database while old versions are deleted from the DB. Then the client rereads the RZ record and increments the update counters for the other clients. After that, the client rereads the Z record, saves the necessary statistics, and deliberates on the results of processing. If a conflict emerges because of simultaneous updates of the RZ record, the client obtains all versions of that
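    The DVV guarantee mentioned above rests on causal dominance between version clocks: a stored version can be discarded only when another version's clock descends from it, so concurrent writers each leave at most one surviving sibling. A minimal sketch of that comparison logic (illustrative only, not the Riak client API):

    ```python
    # Sketch of vector-clock dominance: each client increments its own counter
    # on write; two versions are siblings (both kept) only when neither clock
    # dominates the other. Names and structure are illustrative.

    def descends(a, b):
        """True if clock a dominates (or equals) clock b."""
        return all(a.get(k, 0) >= v for k, v in b.items())

    def merge_versions(versions):
        """Drop any version whose clock is strictly dominated by another's."""
        keep = []
        for i, (clock_i, _) in enumerate(versions):
            dominated = any(
                j != i and descends(clock_j, clock_i) and clock_j != clock_i
                for j, (clock_j, _) in enumerate(versions)
            )
            if not dominated:
                keep.append(versions[i])
        return keep

    v1 = ({"a": 2, "b": 1}, "value-from-a")   # a's latest write
    v2 = ({"a": 1, "b": 2}, "value-from-b")   # concurrent write by b
    v3 = ({"a": 1, "b": 1}, "old-value")      # common ancestor of both

    # The ancestor is discarded; the two concurrent siblings both survive.
    print(merge_versions([v1, v2, v3]))
    ```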

  5. NODC Standard Product: World Ocean Database 1998 version 2 (5 disc set) (NODC Accession 0098461)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Since the first release of WOD98, the staff of the Ocean Climate Laboratory have performed additional quality control on the database. Version 2.0 also includes...

  6. Fish Karyome version 2.1: a chromosome database of fishes and other aquatic organisms.

    Science.gov (United States)

    Nagpure, Naresh Sahebrao; Pathak, Ajey Kumar; Pati, Rameshwar; Rashid, Iliyas; Sharma, Jyoti; Singh, Shri Prakash; Singh, Mahender; Sarkar, Uttam Kumar; Kushwaha, Basdeo; Kumar, Ravindra; Murali, S

    2016-01-01

    Voluminous information is available on karyological studies of fishes; however, limited efforts have been made to compile and curate the available karyological data in digital form. The 'Fish Karyome' database was a preliminary attempt to compile and digitize the available karyological information on finfishes of the Indian subcontinent, but it was limited in that it covered only Indian finfishes, with limited search options. In response to user feedback and given its utility in fish cytogenetic studies, the Fish Karyome database was upgraded using Linux, Apache, MySQL and PHP (LAMP) technologies. In the present version, the scope of the system was increased by compiling and curating the chromosomal information available worldwide on fishes and other aquatic organisms, such as echinoderms, molluscs and arthropods, especially those of aquaculture importance. Fish Karyome version 2.1 presently covers 866 chromosomal records for 726 species, supported by 253 published articles, and the information is updated regularly. The database provides information on chromosome number and morphology, sex chromosomes, chromosome banding, molecular cytogenetic markers, etc., supported by fish and karyotype images through interactive tools. It also enables users to browse and view chromosomal information by habitat, family, conservation status and chromosome number. The system also displays chromosome numbers in model organisms, protocols for chromosome preparation and allied techniques, and a glossary of cytogenetic terms. A data submission facility is provided through a data submission panel. The database can serve as a unique and useful resource for cytogenetic characterization, sex determination, chromosomal mapping, cytotaxonomy, karyo-evolution and systematics of fishes. Database URL: http://mail.nbfgr.res.in/Fish_Karyome. © The Author(s) 2016. Published by Oxford University Press.

  7. DNAStat, version 2.1--a computer program for processing genetic profile databases and biostatistical calculations.

    Science.gov (United States)

    Berent, Jarosław

    2010-01-01

    This paper presents the new DNAStat version 2.1 for processing genetic profile databases and performing biostatistical calculations. The popularization of DNA studies in the judicial system has made it necessary to develop appropriate computer programs. Such programs must, above all, address two critical problems: broadly understood data processing and storage, and biostatistical calculations. Moreover, in cases of terrorist attacks and mass natural disasters, the ability to identify victims by searching for related individuals is very important, and DNAStat version 2.1 is well suited to such purposes. DNAStat version 1.0 was launched in 2005. In 2006, the program was updated to versions 1.1 and 1.2, which differed only slightly from the original. DNAStat version 2.0 was launched in 2007; its major improvement was the introduction of group calculation options with potential application to the personal identification of victims of mass disasters and terrorism. The latest version, 2.1, adds a choice of language (Polish or English), which will enhance the usage and application of the program in other countries as well.

  8. THE NASA AMES PAH IR SPECTROSCOPIC DATABASE VERSION 2.00: UPDATED CONTENT, WEB SITE, AND ON(OFF)LINE TOOLS

    Energy Technology Data Exchange (ETDEWEB)

    Boersma, C.; Mattioda, A. L.; Allamandola, L. J. [NASA Ames Research Center, MS 245-6, Moffett Field, CA 94035 (United States); Bauschlicher, C. W. Jr.; Ricca, A. [NASA Ames Research Center, MS 230-3, Moffett Field, CA 94035 (United States); Cami, J.; Peeters, E.; De Armas, F. Sánchez; Saborido, G. Puerta [SETI Institute, 189 Bernardo Avenue 100, Mountain View, CA 94043 (United States); Hudgins, D. M., E-mail: Christiaan.Boersma@nasa.gov [NASA Headquarters, MS 3Y28, 300 E St. SW, Washington, DC 20546 (United States)

    2014-03-01

    A significantly updated version of the NASA Ames PAH IR Spectroscopic Database, the first major revision since its release in 2010, is presented. The current version, version 2.00, contains 700 computational and 75 experimental spectra compared, respectively, with 583 and 60 in the initial release. The spectra span the 2.5-4000 μm (4000-2.5 cm⁻¹) range. New tools are available on the site that allow one to analyze spectra in the database and compare them with imported astronomical spectra as well as a suite of IDL object classes (a collection of programs utilizing IDL's object-oriented programming capabilities) that permit offline analysis called the AmesPAHdbIDLSuite. Most noteworthy among the additions are the extension of the computational spectroscopic database to include a number of significantly larger polycyclic aromatic hydrocarbons (PAHs), the ability to visualize the molecular atomic motions corresponding to each vibrational mode, and a new tool that allows one to perform a non-negative least-squares fit of an imported astronomical spectrum with PAH spectra in the computational database. Finally, a methodology is described in the Appendix, and implemented using the AmesPAHdbIDLSuite, that allows the user to enforce charge balance during the fitting procedure.
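    The non-negative least-squares fitting mentioned above amounts to expressing an observed spectrum as a non-negative combination of database template spectra. The sketch below uses synthetic templates, and a small projected-gradient solver stands in for a production NNLS routine such as scipy.optimize.nnls:

    ```python
    import numpy as np

    # Didactic sketch: fit an "observed" spectrum as a non-negative linear
    # combination of template spectra. Templates and weights are synthetic.

    def nnls_projected_gradient(A, y, steps=5000, lr=None):
        """Minimize ||A w - y||^2 subject to w >= 0 via projected gradient."""
        if lr is None:
            lr = 1.0 / np.linalg.norm(A.T @ A, 2)   # 1 / Lipschitz constant
        w = np.zeros(A.shape[1])
        for _ in range(steps):
            grad = A.T @ (A @ w - y)
            w = np.maximum(0.0, w - lr * grad)      # project onto w >= 0
        return w

    rng = np.random.default_rng(0)
    templates = rng.random((50, 3))         # 3 template spectra on a 50-point grid
    true_w = np.array([0.7, 0.0, 1.3])      # true non-negative mixing weights
    observed = templates @ true_w

    w = nnls_projected_gradient(templates, observed)
    print(np.round(w, 2))                   # recovers approximately the true weights
    ```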

  9. The NASA Ames PAH IR Spectroscopic Database: Computational Version 3.00 with Updated Content and the Introduction of Multiple Scaling Factors

    Science.gov (United States)

    Bauschlicher, Charles W., Jr.; Ricca, A.; Boersma, C.; Allamandola, L. J.

    2018-02-01

    Version 3.00 of the library of computed spectra in the NASA Ames PAH IR Spectroscopic Database (PAHdb) is described. Version 3.00 introduces the use of multiple scale factors, instead of the single scaling factor used previously, to align the theoretical harmonic frequencies with the experimental fundamentals. The use of multiple scale factors permits the use of a variety of basis sets; this allows new PAH species to be included in the database, such as those containing oxygen, and yields an improved treatment of strained species and those containing nitrogen. In addition, the computed spectra of 2439 new PAH species have been added. The impact of these changes on the analysis of an astronomical spectrum through database-fitting is considered and compared with a fit using Version 2.00 of the library of computed spectra. Finally, astronomical constraints are defined for the PAH spectral libraries in PAHdb.
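    The multiple-scale-factor scheme described above can be illustrated by applying a different factor to each frequency range when aligning harmonic frequencies with experimental fundamentals. The band boundaries and factor values below are invented for illustration and are not the PAHdb values:

    ```python
    # Illustration of per-range frequency scaling: each vibrational mode is
    # multiplied by the scale factor of the band it falls in. Band limits and
    # factors here are hypothetical, not PAHdb's.

    def scale_frequencies(freqs_cm1, bands):
        """bands: list of (low, high, factor); scale each mode by its band's factor."""
        scaled = []
        for f in freqs_cm1:
            for low, high, factor in bands:
                if low <= f < high:
                    scaled.append(f * factor)
                    break
            else:
                scaled.append(f)  # modes outside all bands are left unscaled
        return scaled

    bands = [(0.0, 2000.0, 0.986), (2000.0, 4000.0, 0.960)]  # hypothetical factors
    harmonic = [1603.0, 3068.0]  # e.g. a C-C stretch and a C-H stretch (cm^-1)
    print([round(x, 1) for x in scale_frequencies(harmonic, bands)])  # → [1580.6, 2945.3]
    ```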

  10. YMDB 2.0: a significantly expanded version of the yeast metabolome database.

    Science.gov (United States)

    Ramirez-Gaona, Miguel; Marcu, Ana; Pon, Allison; Guo, An Chi; Sajed, Tanvir; Wishart, Noah A; Karu, Naama; Djoumbou Feunang, Yannick; Arndt, David; Wishart, David S

    2017-01-04

    YMDB or the Yeast Metabolome Database (http://www.ymdb.ca/) is a comprehensive database containing extensive information on the genome and metabolome of Saccharomyces cerevisiae. Initially released in 2012, the YMDB has gone through a significant expansion and a number of improvements over the past 4 years. This manuscript describes the most recent version of YMDB (YMDB 2.0). More specifically, it provides an updated description of the database that was previously described in the 2012 NAR Database Issue and it details many of the additions and improvements made to the YMDB over that time. Some of the most important changes include a 7-fold increase in the number of compounds in the database (from 2007 to 16 042), a 430-fold increase in the number of metabolic and signaling pathway diagrams (from 66 to 28 734), a 16-fold increase in the number of compounds linked to pathways (from 742 to 12 733), a 17-fold increase in the number of compounds with nuclear magnetic resonance or MS spectra (from 783 to 13 173) and an increase in both the number of data fields and the number of links to external databases. In addition to these database expansions, a number of improvements to YMDB's web interface and its data visualization tools have been made. These additions and improvements should greatly improve the ease, the speed and the quantity of data that can be extracted, searched or viewed within YMDB. Overall, we believe these improvements should not only improve the understanding of the metabolism of S. cerevisiae, but also allow more in-depth exploration of its extensive metabolic networks, signaling pathways and biochemistry. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. The Mars Climate Database (MCD version 5.3)

    Science.gov (United States)

    Millour, Ehouarn; Forget, Francois; Spiga, Aymeric; Vals, Margaux; Zakharov, Vladimir; Navarro, Thomas; Montabone, Luca; Lefevre, Franck; Montmessin, Franck; Chaufray, Jean-Yves; Lopez-Valverde, Miguel; Gonzalez-Galindo, Francisco; Lewis, Stephen; Read, Peter; Desjean, Marie-Christine; MCD/GCM Development Team

    2017-04-01

    Our Global Circulation Model (GCM) simulates the atmospheric environment of Mars. It is developed at LMD (Laboratoire de Meteorologie Dynamique, Paris, France) in close collaboration with several teams in Europe (LATMOS, France, University of Oxford, The Open University, the Instituto de Astrofisica de Andalucia), and with the support of ESA (European Space Agency) and CNES (French Space Agency). GCM outputs are compiled to build a Mars Climate Database, a freely available tool useful for the scientific and engineering communities. The Mars Climate Database (MCD) has over the years been distributed to more than 300 teams around the world. The latest series of reference simulations have been compiled in a new version (v5.3) of the MCD, released in the first half of 2017. To summarize, MCD v5.3 provides: - Climatologies over a series of synthetic dust scenarios: standard (climatology) year, cold (i.e., low dust), warm (i.e., dusty atmosphere) and dust storm, all topped by various cases of Extreme UV solar inputs (low, mean or maximum). These scenarios have been derived from a home-made, instrument-derived (TES, THEMIS, MCS, MERs) dust climatology of the last 8 Martian years. The MCD also provides simulation outputs (MY24-31) representative of these actual years. - Mean values and statistics of main meteorological variables (atmospheric temperature, density, pressure and winds), as well as surface pressure and temperature, CO2 ice cover, thermal and solar radiative fluxes, dust column opacity and mixing ratio, [H2O] vapor and ice columns, concentrations of many species: [CO], [O2], [O], [N2], [H2], [O3], ... - A high resolution mode which combines high resolution (32 pixel/degree) MOLA topography records and Viking Lander 1 pressure records with raw lower resolution GCM results to yield, within the restriction of the procedure, high resolution values of atmospheric variables.
- The possibility to reconstruct realistic conditions by combining the provided climatology with

  12. ICCS 2009 User Guide for the International Database. Supplement 1: International Version of the ICCS 2009 Questionnaires

    Science.gov (United States)

    Brese, Falk; Jung, Michael; Mirazchiyski, Plamen; Schulz, Wolfram; Zuehlke, Olaf

    2011-01-01

    This document presents Supplement 1 of "The International Civic and Citizenship Education Study (ICCS) 2009 International Database," which includes data for all questionnaires administered as part of the ICCS 2009 assessment. This supplement contains the international version of the ICCS 2009 questionnaires in the following seven…

  13. Soil and Terrain Database for Cuba, primary data (version 1.0) - scale 1:1 million (SOTER_Cuba)

    NARCIS (Netherlands)

    Dijkshoorn, J.A.; Huting, J.R.M.

    2014-01-01

    The Soil and Terrain database for Cuba primary data (version 1.0), at scale 1:1 million (SOTER_Cuba), was compiled from enhanced soil information within the framework of the FAO's program Land Degradation Assessment in Drylands (LADA). Primary soil and terrain data for Cuba were obtained from the

  14. Brassica database (BRAD) version 2.0: integrating and mining Brassicaceae species genomic resources.

    Science.gov (United States)

    Wang, Xiaobo; Wu, Jian; Liang, Jianli; Cheng, Feng; Wang, Xiaowu

    2015-01-01

    The Brassica database (BRAD) was built initially to assist users in applying Brassica rapa and Arabidopsis thaliana genomic data efficiently in their research. However, many Brassicaceae genomes have been sequenced and released after its construction. These genomes are rich resources for comparative genomics, gene annotation and functional evolutionary studies of Brassica crops. Therefore, we have updated BRAD to version 2.0 (V2.0). In BRAD V2.0, 11 more Brassicaceae genomes have been integrated into the database, namely those of Arabidopsis lyrata, Aethionema arabicum, Brassica oleracea, Brassica napus, Camelina sativa, Capsella rubella, Leavenworthia alabamica, Sisymbrium irio and three extremophiles Schrenkiella parvula, Thellungiella halophila and Thellungiella salsuginea. BRAD V2.0 provides plots of syntenic genomic fragments between pairs of Brassicaceae species, from the level of chromosomes to genomic blocks. The Generic Synteny Browser (GBrowse_syn), a module of the Genome Browser (GBrowse), is used to show syntenic relationships between multiple genomes. Search functions for retrieving syntenic and non-syntenic orthologs, as well as their annotation and sequences, are also provided. Furthermore, genome and annotation information have been imported into GBrowse so that all functional elements can be visualized in one frame. We plan to continually update BRAD by integrating more Brassicaceae genomes into the database. Database URL: http://brassicadb.org/brad/. © The Author(s) 2015. Published by Oxford University Press.

  15. TEDS-M 2008 User Guide for the International Database. Supplement 1: International Version of the TEDS-M Questionnaires

    Science.gov (United States)

    Brese, Falk, Ed.

    2012-01-01

    The Teacher Education Study in Mathematics (TEDS-M) International Database includes data for all questionnaires administered as part of the TEDS-M study. These consisted of questionnaires administered to future teachers, educators, and institutions with teacher preparation programs. This supplement contains the international version of the TEDS-M…

  16. Soil and Terrain Database for Senegal and the Gambia (version 1.0) - scale 1:1 million (SOTER_Senegal_Gambia)

    NARCIS (Netherlands)

    Dijkshoorn, J.A.; Huting, J.R.M.

    2014-01-01

    The Soil and Terrain database for Senegal and The Gambia primary data (version 1.0), at scale 1:1 million (SOTER_Senegal_Gambia), was compiled from enhanced soil information within the framework of the FAO's program Land Degradation Assessment in Drylands (LADA). Primary soil and terrain data for

  17. TIMSS 2011 User Guide for the International Database. Supplement 1: International Version of the TIMSS 2011 Background and Curriculum Questionnaires

    Science.gov (United States)

    Foy, Pierre, Ed.; Arora, Alka, Ed.; Stanco, Gabrielle M., Ed.

    2013-01-01

    The TIMSS 2011 International Database includes data for all questionnaires administered as part of the TIMSS 2011 assessment. This supplement contains the international version of the TIMSS 2011 background questionnaires and curriculum questionnaires in the following 10 sections: (1) Fourth Grade Student Questionnaire; (2) Fourth Grade Home…

  18. PIRLS 2011 User Guide for the International Database. Supplement 1: International Version of the PIRLS 2011, Background Questionnaires and Curriculum Questionnaire

    Science.gov (United States)

    Foy, Pierre, Ed.; Drucker, Kathleen T., Ed.

    2013-01-01

    The PIRLS 2011 international database includes data for all questionnaires administered as part of the PIRLS 2011 assessment. This supplement contains the international version of the PIRLS 2011 background questionnaires and curriculum questionnaires in the following 5 sections: (1) Student Questionnaire; (2) Home Questionnaire (Learning to Read…

  19. The UMIST database for astrochemistry 2006

    Science.gov (United States)

    Woodall, J.; Agúndez, M.; Markwick-Kemper, A. J.; Millar, T. J.

    2007-05-01

    Aims: We present a new version of the UMIST Database for Astrochemistry, the fourth such version to be released to the public. The current version contains some 4573 binary gas-phase reactions, an increase of 10% from the previous (1999) version, among 420 species, of which 23 are new to the database. Methods: Major updates have been made to ion-neutral reactions, neutral-neutral reactions, particularly at low temperature, and dissociative recombination reactions. We have included for the first time the interstellar chemistry of fluorine. In addition to the usual database, we have also released a reaction set in which the effects of dipole-enhanced ion-neutral rate coefficients are included. Results: These two reaction sets have been used in a dark cloud model and the results of these models are presented and discussed briefly. The database and associated software are available on the World Wide Web at www.udfa.net. Tables 1, 2, 4 and 9 are only available in electronic form at http://www.aanda.org

  20. Soil and Terrain Database for Upper Tana River Catchment (version 1.1) - scale 1:250,000 (SOTER_UT_v1.1)

    NARCIS (Netherlands)

    Dijkshoorn, J.A.; Macharia, P.; Kempen, B.

    2014-01-01

    The Soil and Terrain database for the Upper Tana River Catchment (version 1.1) (SOTER_UT_v1.1) at scale 1:250,000 was compiled to support the Green Water Credits (GWC) programme by creating a primary SOTER dataset for a hydrology assessment of the basin. The Kenya Soil Survey of the Kenya

  1. Tank Characterization Database (TCD) Data Dictionary: Version 4.0

    International Nuclear Information System (INIS)

    1996-04-01

    This document is the data dictionary for the Tank Characterization Database (TCD) system and contains information on the data model and SYBASE® database structure. The first two parts of this document are subject areas based on the two different areas of the TCD database: sample analysis and waste inventory. Within each subject area is an alphabetical list of all the database tables contained in the subject area. Within each table definition is a brief description of the table and a list of field names and attributes. The third part, Field Descriptions, lists all field names in the database alphabetically.

  2. The Global Energy Balance Archive (GEBA) version 2017: a database for worldwide measured surface energy fluxes

    Science.gov (United States)

    Wild, Martin; Ohmura, Atsumu; Schär, Christoph; Müller, Guido; Folini, Doris; Schwarz, Matthias; Zyta Hakuba, Maria; Sanchez-Lorenzo, Arturo

    2017-08-01

    The Global Energy Balance Archive (GEBA) is a database for the central storage of the worldwide measured energy fluxes at the Earth's surface, maintained at ETH Zurich (Switzerland). This paper documents the status of the GEBA version 2017 dataset, presents the new web interface and user access, and reviews the scientific impact that GEBA data had in various applications. GEBA has continuously been expanded and updated and contains in its 2017 version around 500 000 monthly mean entries of various surface energy balance components measured at 2500 locations. The database contains observations from 15 surface energy flux components, with the most widely measured quantity available in GEBA being the shortwave radiation incident at the Earth's surface (global radiation). Many of the historic records extend over several decades. GEBA contains monthly data from a variety of sources, namely from the World Radiation Data Centre (WRDC) in St. Petersburg, from national weather services, from different research networks (BSRN, ARM, SURFRAD), from peer-reviewed publications, project and data reports, and from personal communications. Quality checks are applied to test for gross errors in the dataset. GEBA has played a key role in various research applications, such as in the quantification of the global energy balance, in the discussion of the anomalous atmospheric shortwave absorption, and in the detection of multi-decadal variations in global radiation, known as global dimming and brightening. GEBA is further extensively used for the evaluation of climate models and satellite-derived surface flux products. On a more applied level, GEBA provides the basis for engineering applications in the context of solar power generation, water management, agricultural production and tourism. GEBA is publicly accessible through the internet via http://www.geba.ethz.ch. Supplementary data are available at https://doi.org/10.1594/PANGAEA.873078.

  3. Automated Oracle database testing

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    Ensuring database stability and steady performance in the modern world of agile computing is a major challenge. Changes at any level of the computing infrastructure (OS parameters and packages, kernel versions, database parameters and patches, or even schema changes) can all potentially harm production services. This presentation shows how automatic and regular testing of Oracle databases can be achieved in such an agile environment.

  4. ICRAF Species Switchboard. Version 1.2

    DEFF Research Database (Denmark)

    Kindt, R.; Ordonez, J.; Smith, E.

    2015-01-01

    The current version of the Agroforestry Species Switchboard documents the presence of a total of 26,135 plant species (33,813 species including synonyms) across 19 web-based databases. When available, hyperlinks to information on the selected species in particular databases are provided. In total...

  5. The Human Communication Research Centre dialogue database.

    Science.gov (United States)

    Anderson, A H; Garrod, S C; Clark, A; Boyle, E; Mullin, J

    1992-10-01

    The HCRC dialogue database consists of over 700 transcribed and coded dialogues from pairs of speakers aged from seven to fourteen. The speakers are recorded while tackling co-operative problem-solving tasks and the same pairs of speakers are recorded over two years tackling 10 different versions of our two tasks. In addition there are over 200 dialogues recorded between pairs of undergraduate speakers engaged on versions of the same tasks. Access to the database, and to its accompanying custom-built search software, is available electronically over the JANET system by contacting liz@psy.glasgow.ac.uk, from whom further information about the database and a user's guide to the database can be obtained.

  6. DataBase on Demand

    International Nuclear Information System (INIS)

    Aparicio, R Gaspar; Gomez, D; Wojcik, D; Coz, I Coterillo

    2012-01-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle-based database services. The Database on Demand (DBoD) empowers the user to perform certain actions that had traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g. presently the open community version of MySQL and single-instance Oracle database servers. This article describes a technology approach to face this challenge, the service level agreement (SLA) that the project provides, and an evolution of possible scenarios.

  7. Fermilab Security Site Access Request Database

    Science.gov (United States)

    Fermilab Security Site Access Request Database Use of the online version of the Fermilab Security Site Access Request Database requires logging in to the ESH&Q Web Site. The page is generated from the ESH&Q Section's Oracle database (last generated on May 27, 2018).

  8. The BDNYC database of low-mass stars, brown dwarfs, and planetary mass companions

    Science.gov (United States)

    Cruz, Kelle; Rodriguez, David; Filippazzo, Joseph; Gonzales, Eileen; Faherty, Jacqueline K.; Rice, Emily; BDNYC

    2018-01-01

    We present a web-interface to a database of low-mass stars, brown dwarfs, and planetary mass companions. Users can send SELECT SQL queries to the database, perform searches by coordinates or name, check the database inventory on specified objects, and even plot spectra interactively. The initial version of this database contains information for 198 objects and version 2 will contain over 1000 objects. The database currently includes photometric data from 2MASS, WISE, and Spitzer and version 2 will include a significant portion of the publicly available optical and NIR spectra for brown dwarfs. The database is maintained and curated by the BDNYC research group and we welcome contributions from other researchers via GitHub.
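Since the record notes that users can send SELECT SQL queries to the database, here is the kind of query involved, run against a stand-in SQLite table. The schema (a `sources` table with `designation`, `ra`, `dec` columns) is hypothetical, for illustration only:

```python
# Sketch: a coordinate-box search expressed as a plain SELECT query,
# of the kind the BDNYC web interface accepts. Table and column names
# here are invented; the real schema may differ.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sources (id INTEGER, designation TEXT, ra REAL, dec REAL)")
conn.execute("INSERT INTO sources VALUES (1, '2MASS J00000000+0000000', 0.0, 0.0)")

# Parameterized query: all sources within a small box around (0, 0)
rows = conn.execute(
    "SELECT designation FROM sources WHERE ra BETWEEN ? AND ? AND dec BETWEEN ? AND ?",
    (-1.0, 1.0, -1.0, 1.0),
).fetchall()
print(rows)  # [('2MASS J00000000+0000000',)]
```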

  9. Mycobacteriophage genome database.

    Science.gov (United States)

    Joseph, Jerrine; Rajendran, Vasanthi; Hassan, Sameer; Kumar, Vanaja

    2011-01-01

    Mycobacteriophage genome database (MGDB) is an exclusive repository of the 64 completely sequenced mycobacteriophages with annotated information. It is a comprehensive compilation of the various gene parameters captured from several databases pooled together to empower mycobacteriophage researchers. The MGDB (Version No. 1.0) comprises 6086 genes from 64 mycobacteriophages classified into 72 families based on the ACLAME database. Manual curation was aided by information available from public databases, which was enriched further by analysis. Its web interface allows browsing as well as querying the classification. The main objective is to collect and organize the complexity inherent to mycobacteriophage protein classification in a rational way. The other objective is to browse the existing and new genomes and describe their functional annotation. The database is available for free at http://mpgdb.ibioinformatics.org/mpgdb.php.

  10. AFSC/ABL: Exxon Valdez Trustee Hydrocarbon Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This hydrocarbon database was initiated after the Exxon Valdez oil spill in 1989. The first version was as an RBase database, PWSOIL(Short, Heintz et al. 1996). It...

  11. Protocol - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Files for download: rpd_protocol_jp.zip (Japanese version, 535 KB; file URL: ftp://ftp.biosciencedbc.jp/archive/rpd/LATEST/rpd_protocol_jp.zip) and rpd_protocol_en.zip (English version). The database description, download license, and update history are available on the archive site.

  12. Core Data of Yeast Interacting Proteins Database (Original Version) - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available References cited: *1 A comprehensive two-hybrid analysis to explore the yeast protein interactome. *2 The yeast proteome database (YPD) and Caenorhabditis elegans proteome database (WormPD). Nucleic Acids Res. 2000 Jan 1;28(1):73-6. *3 A comprehensive analysis of protein-protein interactions in Saccharomyces cerevisiae.

  13. NIST/ASME Steam Properties Database

    Science.gov (United States)

    SRD 10 NIST/ASME Steam Properties Database (PC database for purchase)   Based upon the International Association for the Properties of Water and Steam (IAPWS) 1995 formulation for the thermodynamic properties of water and the most recent IAPWS formulations for transport and other properties, this updated version provides water properties over a wide range of conditions according to the accepted international standards.

  14. Deep Sea Coral National Observation Database, Northeast Region

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The national database of deep sea coral observations. Northeast version 1.0. * This database was developed by the NOAA NOS NCCOS CCMA Biogeography office as part of...

  15. Distributed Database Control and Allocation. Volume 3. Distributed Database System Designer’s Handbook.

    Science.gov (United States)

    1983-10-01

    From the table of contents: 2.7 Multiversion Data; 2.7.1 Multiversion Timestamping; 2.7.2 Multiversion Locking; 2.8 Combining the Techniques; 3. Database Recovery Algorithms. From the text: See [THEM79, GIFF79] for details. 2.7 Multiversion Data. Let us return to a database system model where each logical data item is stored at one DM. In a multiversion database each Write wi[x] produces a new copy (or version) of x, denoted xi. Thus, the value of x is a set of versions.
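The multiversion scheme excerpted in this record, where each write of x produces a new version xi, can be sketched as a timestamp-ordered read over a version list. A simplified Python illustration, with no concurrency control or abort handling:

```python
# Sketch of multiversion timestamp ordering: each write of an item
# appends a new version tagged with the writer's timestamp; a read
# with timestamp ts returns the version with the largest write
# timestamp <= ts. Deliberately simplified.

class MultiversionItem:
    def __init__(self):
        self.versions = []  # list of (write_ts, value), kept sorted by ts

    def write(self, ts, value):
        self.versions.append((ts, value))
        self.versions.sort(key=lambda p: p[0])

    def read(self, ts):
        visible = [v for (wts, v) in self.versions if wts <= ts]
        return visible[-1] if visible else None  # newest visible version

x = MultiversionItem()
x.write(10, "v1")
x.write(20, "v2")
print(x.read(15))  # v1: the newest version written at or before ts=15
```

Old versions are never overwritten, which is what lets reads at earlier timestamps proceed without blocking later writers.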

  16. Implementing version support for complex objects

    OpenAIRE

    Blanken, Henk

    1991-01-01

    New applications in the area of office information systems, computer aided design and manufacturing make new demands upon database management systems. Among others, highly structured objects and their history have to be represented and manipulated. The paper discusses some general problems concerning the access and storage of complex objects with their versions and the solutions developed within the AIM/II project. Queries related to versions are distinguished in ASOF queries (asking informati...

  17. Women who abuse prescription opioids: findings from the Addiction Severity Index-Multimedia Version Connect prescription opioid database.

    Science.gov (United States)

    Green, Traci C; Grimes Serrano, Jill M; Licari, Andrea; Budman, Simon H; Butler, Stephen F

    2009-07-01

    Evidence suggests gender differences in abuse of prescription opioids. This study aimed to describe characteristics of women who abuse prescription opioids in a treatment-seeking sample and to contrast gender differences among prescription opioid abusers. Data collected November 2005 to April 2008 derived from the Addiction Severity Index Multimedia Version Connect (ASI-MV Connect) database. Bivariate and multivariable logistic regression examined correlates of prescription opioid abuse stratified by gender. 29,906 assessments from 220 treatment centers were included, of which 12.8% (N=3821) reported past month prescription opioid abuse. Women were more likely than men to report use of any prescription opioid (29.8% females vs. 21.1% males, phistory of drug overdose. Men-specific correlates were age screen and identify those at highest risk of prescription opioid abuse. Prevention and intervention efforts with a gender-specific approach are warranted.

  18. Fire test database

    International Nuclear Information System (INIS)

    Lee, J.A.

    1989-01-01

    This paper describes a project recently completed for EPRI by Impell. The purpose of the project was to develop a reference database of fire tests performed on non-typical fire rated assemblies. The database is designed for use by utility fire protection engineers to locate test reports for power plant fire rated assemblies. As utilities prepare to respond to Information Notice 88-04, the database will identify utilities, vendors or manufacturers who have specific fire test data. The database contains fire test report summaries for 729 tested configurations. For each summary, a contact is identified from whom a copy of the complete fire test report can be obtained. Five types of configurations are included: doors, dampers, seals, wraps and walls. The database is computerized. One version for IBM; one for Mac. Each database is accessed through user-friendly software which allows adding, deleting, browsing, etc. through the database. There are five major database files. One each for the five types of tested configurations. The contents of each provides significant information regarding the test method and the physical attributes of the tested configuration. 3 figs

  19. LHCb Conditions Database Operation Assistance Systems

    CERN Multimedia

    Shapoval, Illya

    2012-01-01

    The Conditions Database of the LHCb experiment (CondDB) provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger, reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues: - an extension to the automatic content validation done by the “Oracle Streams” replication technology, to trap cases when the replication was unsuccessful; - an automated distribution process for the S...

  20. A new version of the RDP (Ribosomal Database Project)

    Science.gov (United States)

    Maidak, B. L.; Cole, J. R.; Parker, C. T. Jr; Garrity, G. M.; Larsen, N.; Li, B.; Lilburn, T. G.; McCaughey, M. J.; Olsen, G. J.; Overbeek, R.; hide

    1999-01-01

    The Ribosomal Database Project (RDP-II), previously described by Maidak et al. [Nucleic Acids Res. (1997), 25, 109-111], is now hosted by the Center for Microbial Ecology at Michigan State University. RDP-II is a curated database that offers ribosomal RNA (rRNA) nucleotide sequence data in aligned and unaligned forms, analysis services, and associated computer programs. During the past two years, data alignments have been updated and now include >9700 small subunit rRNA sequences. The recent development of an ObjectStore database will provide more rapid updating of data, better data accuracy and increased user access. RDP-II includes phylogenetically ordered alignments of rRNA sequences, derived phylogenetic trees, rRNA secondary structure diagrams, and various software programs for handling, analyzing and displaying alignments and trees. The data are available via anonymous ftp (ftp.cme.msu.edu) and WWW (http://www.cme.msu.edu/RDP). The WWW server provides ribosomal probe checking, approximate phylogenetic placement of user-submitted sequences, screening for possible chimeric rRNA sequences, automated alignment, and a suggested placement of an unknown sequence on an existing phylogenetic tree. Additional utilities also exist at RDP-II, including distance matrix, T-RFLP, and a Java-based viewer of the phylogenetic trees that can be used to create subtrees.

  1. Stratified B-trees and versioning dictionaries

    OpenAIRE

    Twigg, Andy; Byde, Andrew; Milos, Grzegorz; Moreton, Tim; Wilkes, John; Wilkie, Tom

    2011-01-01

    A classic versioned data structure in storage and computer science is the copy-on-write (CoW) B-tree -- it underlies many of today's file systems and databases, including WAFL, ZFS, Btrfs and more. Unfortunately, it doesn't inherit the B-tree's optimality properties; it has poor space utilization, cannot offer fast updates, and relies on random IO to scale. Yet, nothing better has been developed since. We describe the `stratified B-tree', which beats all known semi-external memory versioned B...
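To make the copy-on-write idea behind the CoW B-tree concrete: an update copies every node on the root-to-leaf path, so the old root still describes the previous version while all untouched subtrees are shared between versions. A toy sketch on a binary search tree rather than a real B-tree:

```python
# Sketch of path-copying (copy-on-write) versioning: cow_insert never
# mutates existing nodes; it returns a new root that shares untouched
# subtrees with the old version. Shown on a toy BST, not a real B-tree.

class Node:
    def __init__(self, key, value, left=None, right=None):
        self.key, self.value, self.left, self.right = key, value, left, right

def cow_insert(node, key, value):
    """Return a NEW root; the old root remains a valid earlier version."""
    if node is None:
        return Node(key, value)
    if key < node.key:
        return Node(node.key, node.value, cow_insert(node.left, key, value), node.right)
    if key > node.key:
        return Node(node.key, node.value, node.left, cow_insert(node.right, key, value))
    return Node(key, value, node.left, node.right)  # overwrite in the copy

def lookup(node, key):
    while node is not None:
        if key == node.key:
            return node.value
        node = node.left if key < node.key else node.right
    return None

v1 = cow_insert(None, 5, "a")
v2 = cow_insert(v1, 5, "b")   # new version; v1 is untouched
print(lookup(v1, 5), lookup(v2, 5))  # a b
```

The space and random-IO costs the abstract criticizes come precisely from this path copying, which the stratified B-tree is designed to avoid.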

  2. Functionally Graded Materials Database

    Science.gov (United States)

    Kisara, Katsuto; Konno, Tomomi; Niino, Masayuki

    2008-02-01

    The Functionally Graded Materials Database (hereinafter referred to as FGMs Database) was opened to the public via the Internet in October 2002, and since then it has been managed by the Japan Aerospace Exploration Agency (JAXA). As of October 2006, the database includes 1,703 research information entries with 2,429 researchers data, 509 institution data and so on. Reading materials such as "Applicability of FGMs Technology to Space Plane" and "FGMs Application to Space Solar Power System (SSPS)" were prepared in FY 2004 and 2005, respectively. The English version of "FGMs Application to Space Solar Power System (SSPS)" is now under preparation. This paper explains the FGMs Database, describing the research information data, the sitemap and how to use it. From the access analysis, user access results and users' interests are discussed.

  3. De-identifying an EHR Database

    DEFF Research Database (Denmark)

    Lauesen, Søren; Pantazos, Kostas; Lippert, Søren

    2011-01-01

    We de-identified a Danish EHR database with 437,164 patients. The goal was to generate a version with real medical records, but related to artificial persons. We developed a de-identification algorithm that uses lists of named entities, simple language analysis, and special rules. Our algorithm consists of 3 steps: collect lists of identifiers from the database and external resources, define a replacement for each identifier, and replace identifiers in structured data and free text. Some patient records could not be safely de-identified, so the de-identified database has 323,122 patient records with an acceptable degree of anonymity, readability and correctness (F-measure of 95%). The algorithm has to be adjusted for each culture, language and database.
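The three-step algorithm summarized in this record can be sketched as follows. The names, the pseudonym scheme, and the whole-word regex replacement are invented for illustration; the real algorithm additionally uses language analysis and special rules:

```python
# Sketch of the three de-identification steps described above:
# (1) collect identifier lists, (2) define a stable replacement per
# identifier, (3) replace occurrences in free text. Names and the
# pseudonym scheme are invented for illustration.
import re

identifiers = ["Søren", "Kostas"]          # step 1: collected name list
replacements = {name: f"Person{i + 1}"     # step 2: one stable pseudonym
                for i, name in enumerate(identifiers)}

def deidentify(text):
    # step 3: replace whole-word occurrences in free text
    for name, repl in replacements.items():
        text = re.sub(rf"\b{re.escape(name)}\b", repl, text)
    return text

print(deidentify("Søren examined Kostas."))  # Person1 examined Person2.
```

Stable pseudonyms (the same replacement for every occurrence of an identifier) are what keep the de-identified records internally readable and consistent.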

  4. O-GLYCBASE version 2.0: a revised database of O-glycosylated proteins

    DEFF Research Database (Denmark)

    Hansen, Jan; Lund, Ole; Rapacki, Kristoffer

    1997-01-01

    O-GLYCBASE is an updated database of information on glycoproteins and their O-linked glycosylation sites. Entries are compiled and revised from the literature, and from the SWISS-PROT database. Entries include information about species, sequence, glycosylation sites and glycan type. O-GLYCBASE is...... patterns for the GalNAc, mannose and GlcNAc transferases are shown. The O-GLYCBASE database is available through WWW or by anonymous FTP....

  5. U.S. Climate Divisional Dataset (Version Superseded)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data has been superseded by a newer version of the dataset. Please refer to NOAA's Climate Divisional Database for more information. The U.S. Climate Divisional...

  6. Professional iOS database application programming

    CERN Document Server

    Alessi, Patrick

    2013-01-01

    Updated and revised coverage that includes the latest versions of iOS and Xcode Whether you're a novice or experienced developer, you will want to dive into this updated resource on database application programming for the iPhone and iPad. Packed with more than 50 percent new and revised material - including completely rebuilt code, screenshots, and full coverage of new features pertaining to database programming and enterprise integration in iOS 6 - this must-have book intends to continue the precedent set by the previous edition by helping thousands of developers master database

  7. Near Real-Time Automatic Data Quality Controls for the AERONET Version 3 Database: An Introduction to the New Level 1.5V Aerosol Optical Depth Data Product

    Science.gov (United States)

    Giles, D. M.; Holben, B. N.; Smirnov, A.; Eck, T. F.; Slutsker, I.; Sorokin, M. G.; Espenak, F.; Schafer, J.; Sinyuk, A.

    2015-12-01

    The Aerosol Robotic Network (AERONET) has provided a database of aerosol optical depth (AOD) measured by surface-based Sun/sky radiometers for over 20 years. AERONET provides unscreened (Level 1.0) and automatically cloud-cleared (Level 1.5) AOD in near real-time (NRT), while manually inspected, quality-assured (Level 2.0) AOD becomes available after instrument field deployment (Smirnov et al., 2000). The need for quality-controlled NRT aerosol data has grown steadily. Applications of AERONET NRT data include satellite evaluation (e.g., MODIS, VIIRS, MISR, OMI), data synergy (e.g., MPLNET), verification of aerosol forecast models and reanalyses (e.g., GOCART, ICAP, NAAPS, MERRA), input to meteorological models (e.g., NCEP, ECMWF), and field campaign support (e.g., KORUS-AQ, ORACLES). In response to user needs for quality-controlled NRT data sets, the new Version 3 (V3) Level 1.5V product was developed with quality controls similar to those applied by hand to the Version 2 (V2) Level 2.0 data set. The AERONET cloud-screened (Level 1.5) NRT AOD database can be significantly affected by data anomalies. The most significant are an AOD diurnal dependence due to contamination or obstruction of the sensor head windows; anomalous AOD spectral dependence due to filter degradation, instrument gain problems, or non-linear changes in calibration; and abnormal changes in temperature-sensitive wavelengths (e.g., 1020 nm) in response to anomalous sensor head temperatures. Less common AOD anomalies result from loose filters, uncorrected clock shifts, connection and electronic issues, and solar eclipse episodes. Automatic quality control algorithms are applied to the new V3 Level 1.5 database to remove NRT AOD anomalies and produce the new AERONET V3 Level 1.5V AOD product. Results of the quality control algorithms are presented, and the V3 Level 1.5V AOD database is compared to the V2 Level 2.0 AOD database.
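As a hedged illustration of one spectral-dependence check (not the operational AERONET algorithm), an Ångström exponent computed between two wavelengths can flag physically implausible AOD spectra; the wavelengths and thresholds below are assumptions for the sketch.

```python
import math

def angstrom_exponent(aod_short, aod_long, wl_short=440.0, wl_long=870.0):
    """Angstrom exponent: alpha = -ln(AOD_s / AOD_l) / ln(wl_s / wl_l)."""
    return -math.log(aod_short / aod_long) / math.log(wl_short / wl_long)

def flag_spectral_anomaly(aod_440, aod_870, lo=-1.0, hi=4.0):
    """Return True when the spectral dependence is physically implausible.

    The [lo, hi] window is illustrative; the operational limits differ.
    """
    alpha = angstrom_exponent(aod_440, aod_870)
    return not (lo <= alpha <= hi)

print(flag_spectral_anomaly(0.40, 0.15))  # typical fine-mode case -> False
print(flag_spectral_anomaly(0.05, 0.60))  # inverted spectrum (e.g. degraded filter) -> True
```

The production quality controls combine many such tests (diurnal dependence, temperature response, clock checks), but each reduces to a screening rule of this shape.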

  8. ATLAS database application enhancements using Oracle 11g

    International Nuclear Information System (INIS)

    Dimitrov, G; Canali, L; Blaszczyk, M; Sorokoletov, R

    2012-01-01

    The ATLAS experiment at the LHC relies on databases for detector online data-taking; storage and retrieval of configurations, calibrations and alignments; post data-taking analysis; file management over the grid; job submission and management; and condition data replication to remote sites. The Oracle Relational Database Management System (RDBMS) has addressed the ATLAS database requirements to a great extent for many years. Ten database clusters are currently deployed for the needs of the different applications, divided into production, integration and standby databases. The data volume, complexity and demands from the users are increasing steadily with time. Nowadays more than 20 TB of data are stored in the ATLAS production Oracle databases at CERN (not including the index overhead), but the most impressive number is the 260 hosted database schemas (in the most common case, each schema is related to a dedicated client application with its own requirements). At the beginning of 2012, all ATLAS databases at CERN were upgraded to the newest Oracle version at the time: Oracle 11g Release 2. Oracle 11g comes with several key improvements compared to previous database engine versions. In this work we present our evaluation of the most relevant new features of Oracle 11g of interest for ATLAS applications and use cases. Notably, we report on the performance and scalability enhancements obtained in production since the Oracle 11g deployment during Q1 2012, and we outline plans for future work in this area.

  9. Accelerated Leach Testing of GLASS: ALTGLASS Version 3.0

    Energy Technology Data Exchange (ETDEWEB)

    Trivelpiece, Cory L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Jantzen, Carol M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Crawford, Charles L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-12-31

    The Accelerated Leach Testing of GLASS (ALTGLASS) database is a collection of data from short- and long-term product consistency tests (PCT, ASTM C1285 A and B) on high level waste (HLW) as well as low activity waste (LAW) glasses. The database provides both U.S. and international researchers with an archive of experimental data for the purpose of studying, modeling, or validating existing models of nuclear waste glass corrosion. The ALTGLASS database is maintained and updated by researchers at the Savannah River National Laboratory (SRNL). This newest version, ALTGLASS Version 3.0, has been updated with an additional 503 rows of data representing PCT results from corrosion experiments conducted in the United States by the Savannah River National Laboratory, Pacific Northwest National Laboratory, Argonne National Laboratory, and the Vitreous State Laboratory (SRNL, PNNL, ANL, VSL, respectively) as well as the National Nuclear Laboratory (NNL) in the United Kingdom.

  10. Accelerated Leach Testing of GLASS: ALTGLASS Version 3.0

    International Nuclear Information System (INIS)

    Trivelpiece, Cory L.; Jantzen, Carol M.; Crawford, Charles L.

    2016-01-01

    The Accelerated Leach Testing of GLASS (ALTGLASS) database is a collection of data from short- and long-term product consistency tests (PCT, ASTM C1285 A and B) on high level waste (HLW) as well as low activity waste (LAW) glasses. The database provides both U.S. and international researchers with an archive of experimental data for the purpose of studying, modeling, or validating existing models of nuclear waste glass corrosion. The ALTGLASS database is maintained and updated by researchers at the Savannah River National Laboratory (SRNL). This newest version, ALTGLASS Version 3.0, has been updated with an additional 503 rows of data representing PCT results from corrosion experiments conducted in the United States by the Savannah River National Laboratory, Pacific Northwest National Laboratory, Argonne National Laboratory, and the Vitreous State Laboratory (SRNL, PNNL, ANL, VSL, respectively) as well as the National Nuclear Laboratory (NNL) in the United Kingdom.

  11. Database System Design and Implementation for Marine Air-Traffic-Controller Training

    Science.gov (United States)

    2017-06-01

    units used larger applications such as Microsoft Access or MySQL. These systems have outdated platforms, and individuals currently maintaining these … Oracle Database 12c was version 12.2.0.20.96, IDE version 12.2.1.0.42.151001.0541. SQL Developer was version 4.1.3.20.96, which used the Java platform

  12. O-GLYCOBASE version 4.0: a revised database of O-glycosylated proteins

    DEFF Research Database (Denmark)

    Gupta, Ramneek; Birch, Hanne; Rapacki, Krzysztof

    1999-01-01

    O-GLYCBASE is a database of glycoproteins with O-linked glycosylation sites. Entries with at least one experimentally verified O-glycosylation site have been compiled from protein sequence databases and the literature. Each entry contains information about the glycan involved, the species, sequence, …

  13. Research Directions in Database Security IV

    Science.gov (United States)

    1993-07-01

    second algorithm, which is based on multiversion timestamp ordering, is that high-level transactions can be forced to read arbitrarily old data values … system. The first, the single-version model, stores only the latest version of each data item, while the second, the multiversion model, stores … Multiversion Database Model: In the standard database model, where there is only one version of each data item, all transactions compete for the most recent

  14. JASPAR 2010: the greatly expanded open-access database of transcription factor binding profiles

    DEFF Research Database (Denmark)

    Portales-Casamar, Elodie; Thongjuea, Supat; Kwon, Andrew T

    2009-01-01

    JASPAR (http://jaspar.genereg.net) is the leading open-access database of matrix profiles describing the DNA-binding patterns of transcription factors (TFs) and other proteins interacting with DNA in a sequence-specific manner. Its fourth major release is the largest expansion of the core database … to an active research community. As binding models are refined by newer data, the JASPAR database now uses versioning of matrices: in this release, 12% of the older models were updated to improved versions. Classification of TF families has been improved by adopting a new DNA-binding domain nomenclature …

  15. A Global Database of Soil Respiration Data, Version 2.0

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: This data set provides an updated soil respiration database (SRDB), a near-universal compendium of published soil respiration (RS) data. Soil respiration,...

  16. O-GLYCBASE version 3.0: a revised database of O-glycosylated proteins

    DEFF Research Database (Denmark)

    Hansen, Jan; Lund, Ole; Nilsson, Jette

    1998-01-01

    O-GLYCBASE is a revised database of information on glycoproteins and their O-linked glycosylation sites. Entries are compiled and revised from the literature and from the sequence databases. Entries include information about species, sequence, glycosylation sites and glycan type, and are fully cr…

  17. A Global Database of Soil Respiration Data, Version 1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: This data set provides a soil respiration data database (SRDB), a near-universal compendium of published soil respiration (RS) data. Soil respiration, the...

  18. New Access Points to ERIC--CD-ROM Versions. ERIC Digest.

    Science.gov (United States)

    McLaughlin, Pamela W.

    This digest reviews three CD-ROM (compact disc-read only memory) versions of the ERIC (Educational Resources Information Center) database currently being delivered or tested and provides information for comparison. However, no attempt is made to recommend any one product. The advantages and disadvantages of the acquisition of CD-ROM databases are…

  19. Solid Waste Projection Model: Database User's Guide

    International Nuclear Information System (INIS)

    Blackburn, C.L.

    1993-10-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for using Version 1.4 of the SWPM database: system requirements and preparation, entering and maintaining data, and performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions, and does not provide instruction in the use of Paradox, the database management system in which the SWPM database is established.

  20. A new relational database structure and online interface for the HITRAN database

    International Nuclear Information System (INIS)

    Hill, Christian; Gordon, Iouli E.; Rothman, Laurence S.; Tennyson, Jonathan

    2013-01-01

    A new format for the HITRAN database is proposed. By storing the line-transition data in a number of linked tables described by a relational database schema, it is possible to overcome the limitations of the existing format, which have become increasingly apparent over the last few years as new and more varied data are being used by radiative-transfer models. Although the database in the new format can be searched using the well-established Structured Query Language (SQL), a web service, HITRANonline, has been deployed to allow users to make most common queries of the database using a graphical user interface in a web page. The advantages of the relational form of the database to ensuring data integrity and consistency are explored, and the compatibility of the online interface with the emerging standards of the Virtual Atomic and Molecular Data Centre (VAMDC) project is discussed. In particular, the ability to access HITRAN data using a standard query language from other websites, command line tools and from within computer programs is described. -- Highlights: • A new, interactive version of the HITRAN database is presented. • The data is stored in a structured fashion in a relational database. • The new HITRANonline interface offers increased functionality and easier error correction
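The advantage of linked tables over a fixed-width line-by-line record can be shown with a minimal two-table sketch; the column names and values below are simplified assumptions, not the actual HITRANonline schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A molecule table and a transition table linked by a foreign key,
# instead of repeating molecule metadata on every fixed-width record.
cur.executescript("""
CREATE TABLE molecule (id INTEGER PRIMARY KEY, formula TEXT);
CREATE TABLE transition (
    id INTEGER PRIMARY KEY,
    molecule_id INTEGER REFERENCES molecule(id),
    wavenumber REAL,   -- cm^-1
    intensity REAL
);
""")
cur.execute("INSERT INTO molecule VALUES (1, 'H2O'), (2, 'CO2')")
cur.executemany("INSERT INTO transition VALUES (?, ?, ?, ?)",
                [(1, 1, 3657.05, 1.2e-20), (2, 2, 2349.14, 3.5e-18)])

# A typical query: all lines of one molecule within a wavenumber window.
cur.execute("""
    SELECT m.formula, t.wavenumber
    FROM transition t JOIN molecule m ON t.molecule_id = m.id
    WHERE m.formula = 'CO2' AND t.wavenumber BETWEEN 2000 AND 2500
""")
print(cur.fetchall())   # -> [('CO2', 2349.14)]
```

Because the schema is normalized, adding a new per-molecule attribute is a single-table change rather than a reformat of every stored line, which is the flexibility the abstract describes.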

  1. The MAR databases: development and implementation of databases specific for marine metagenomics.

    Science.gov (United States)

    Klemetsen, Terje; Raknes, Inge A; Fu, Juan; Agafonov, Alexander; Balasundaram, Sudhagar V; Tartari, Giacomo; Robertsen, Espen; Willassen, Nils P

    2018-01-04

    We introduce the marine databases MarRef, MarDB and MarCat (https://mmp.sfb.uit.no/databases/), which are publicly available resources that promote marine research and innovation. These data resources, which have been implemented in the Marine Metagenomics Portal (MMP) (https://mmp.sfb.uit.no/), are collections of richly annotated and manually curated contextual (metadata) and sequence databases representing three tiers of accuracy. While MarRef is a database of completely sequenced marine prokaryotic genomes, constituting a marine prokaryote reference genome database, MarDB includes all incompletely sequenced prokaryotic genomes regardless of their level of completeness. The last database, MarCat, represents a gene (protein) catalog of uncultivable (and cultivable) marine genes and proteins derived from marine metagenomics samples. The first versions of MarRef and MarDB contain 612 and 3726 records, respectively. Each record is built up of 106 metadata fields, including attributes for sampling, sequencing, assembly and annotation, in addition to the organism and taxonomic information. Currently, MarCat contains 1227 records with 55 metadata fields. Ontologies and controlled vocabularies are used in the contextual databases to enhance consistency. The user-friendly web interface lets visitors browse, filter and search the contextual databases and perform BLAST searches against the corresponding sequence databases. All contextual and sequence databases are freely accessible and downloadable from https://s1.sfb.uit.no/public/mar/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  2. Implementation of Collate at the database level for PostgreSQL

    OpenAIRE

    Strnad, Radek

    2009-01-01

    The current version of PostgreSQL supports only one collation per database cluster, which does not meet the requirements of users developing multilingual applications. The goal of this work is to implement collation at the database level and lay the foundations for further national language support development. Users will be able to set the collation when creating a database; in particular, the command CREATE DATABASE... COLLATE ... will be implemented following the ANSI standards. The work will also implement possi...
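The effect of a per-database collation can be imitated in a self-contained way with SQLite's user-defined collations; the `folded` collation below is a toy assumption for illustration, not the thesis' PostgreSQL implementation.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# A toy accent-folding, case-insensitive comparison; a real implementation
# would use locale or ICU collation tables, as proposed for PostgreSQL.
def fold(s: str) -> str:
    return s.lower().replace("é", "e").replace("á", "a")

# SQLite collations are comparator functions returning -1, 0 or 1.
conn.create_collation("folded",
                      lambda a, b: (fold(a) > fold(b)) - (fold(a) < fold(b)))

cur = conn.cursor()
cur.execute("CREATE TABLE t (name TEXT)")
cur.executemany("INSERT INTO t VALUES (?)", [("Émile",), ("adam",), ("Berta",)])

# Byte-wise ordering would sort 'Berta' before 'adam' and misplace 'Émile';
# the registered collation yields a language-aware order instead.
cur.execute("SELECT name FROM t ORDER BY name COLLATE folded")
print([row[0] for row in cur.fetchall()])   # -> ['adam', 'Berta', 'Émile']
```

Setting the collation once at database creation, as the thesis proposes, saves every query from having to name a collation explicitly as this sketch does.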

  3. Study of relational nuclear databases and online services

    International Nuclear Information System (INIS)

    Fan Tieshuan; Guo Zhiyu; Liu Wenlong; Ye Weiguo; Feng Yuqing; Song Xiangxiang; Huang Gang; Hong Yingjue; Liu Tinjin; Chen Jinxiang; Tang Guoyou; Shi Zhaoming; Liu Chi; Chen Jiaer; Huang Xiaolong

    2004-01-01

    A relational nuclear database management and web-based services software system has been developed. Its objective is to allow users to access numerical and graphical representations of nuclear data and to easily reconstruct nuclear data in the original standardized formats from the relational databases. It presents 9 relational nuclear libraries: 5 ENDF-format neutron reaction databases (BROND, CENDL, ENDF, JEF and JENDL), the ENSDF database, the EXFOR database, the IAEA Photonuclear Data Library and the charged particle reaction data from the FENDL database. The computer programs providing support for database management and data retrieval are based on the Linux implementation of PHP and the MySQL software, and are platform-independent. The first version of this software was officially released in September 2001

  4. Report on the database structuring project in fiscal 1996 related to the 'surveys on making databases for energy saving (2)'; 1996 nendo database kochiku jigyo hokokusho. Sho energy database system ka ni kansuru chosa 2

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

    With the objective of supporting the promotion of energy conservation in countries such as Japan, China, Indonesia, the Philippines, Thailand, Malaysia, Taiwan and Korea, primary information on energy conservation in each country was collected and a database was structured. This paper summarizes the achievements in fiscal 1996. Based on the results of the database project to date and on the various data collected, this fiscal year discussed structuring the database for its distribution and proliferation. In the discussion, the functions required of the database, the data items to be recorded, and the processing of the recorded data were put in order with reference to propositions on the database circumstances. Demonstrations of the proliferation version of the database were performed in the Philippines, Indonesia and China. Three hundred CDs were prepared for distribution in each country. The supplied computers were adjusted and their operation confirmed, and operation briefings were held in China and the Philippines. (NEDO)

  5. Spent fuel composition database system on WWW. SFCOMPO on WWW Ver.2

    International Nuclear Information System (INIS)

    Mochizuki, Hiroki; Suyama, Kenya; Nomura, Yasushi; Okuno, Hiroshi

    2001-08-01

    'SFCOMPO on WWW Ver.2' is an advanced version of 'SFCOMPO on WWW' (Spent Fuel Composition Database System on WWW), released in 1997. This new version adds database management through the relational database software 'PostgreSQL' and offers various search methods. All of the data required for the calculation of isotopic composition are available from the web site of this system. This report describes the outline of this system and the search methods using the Internet. In addition, the isotopic composition data and reactor data of the 14 LWRs (7 PWR and 7 BWR) registered in this system are described. (author)

  6. Diffusivity database (DDB) system for major rocks (Version of 2006/specification and CD-ROM)

    International Nuclear Information System (INIS)

    Tochigi, Yoshikatsu; Sasamoto, Hirosi; Shibata, Masahiro; Sato, Haruo; Yui, Mikazu

    2006-03-01

    Development of the database system was started to manage the generally used diffusivity data. The database system has been constructed based on datasheets of the effective diffusion coefficients of nuclides in the rock matrix, in order to be applied to the 'H12: Project to Establish the Scientific and Technical Basis for HLW Disposal in Japan'. This document describes the examination and expansion of the datasheet structure, the process of constructing the database system, and the conversion of all data existing on datasheets. As the first step of the development, the database system and its data will continue to be updated, and the interface will be revised to improve availability. The developed database system is attached on the CD-ROM in Microsoft Access file format. (author)

  7. Querying temporal databases via OWL 2 QL

    CSIR Research Space (South Africa)

    Klarman, S

    2014-06-01

    SQL:2011, the most recently adopted version of the SQL query language, has for the first time standardized the representation of temporal data in relational databases. Following the successful paradigm of ontology-based data access, we develop a...
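A SQL:2011-style point-in-time query can be approximated on any relational engine by storing the period bounds explicitly; the `employment` table below is an illustrative sketch, not the paper's OWL 2 QL rewriting.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Valid-time table: each fact carries the period during which it held.
cur.execute("""CREATE TABLE employment (
    person TEXT, employer TEXT, valid_from TEXT, valid_to TEXT)""")
cur.executemany("INSERT INTO employment VALUES (?, ?, ?, ?)", [
    ("Ann", "Acme", "2010-01-01", "2013-06-30"),
    ("Ann", "Globex", "2013-07-01", "9999-12-31"),  # open-ended period
])

# "Who employed Ann on 2012-03-15?" -- a SQL:2011 period CONTAINS predicate
# reduces to a range test on the stored bounds (ISO dates sort textually).
day = "2012-03-15"
cur.execute("""
    SELECT employer FROM employment
    WHERE person = 'Ann' AND valid_from <= ? AND ? <= valid_to
""", (day, day))
print(cur.fetchall())   # -> [('Acme',)]
```

SQL:2011 makes such periods first-class (`PERIOD FOR`, `CONTAINS`, `OVERLAPS`), but the relational encoding the query rewrites into is essentially the range test shown here.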

  8. Reproducibility of the Portuguese version of the PEDro Scale

    Directory of Open Access Journals (Sweden)

    Silvia Regina Shiwa

    2011-10-01

    The objective of this study was to test the inter-rater reproducibility of the Portuguese version of the PEDro Scale. Seven physiotherapists rated the methodological quality of 50 reports of randomized controlled trials written in Portuguese and indexed in the PEDro database. Each report was also rated using the English version of the PEDro Scale. Reproducibility was evaluated by comparing two separate ratings of the reports written in Portuguese, and by comparing the Portuguese PEDro scores with those from the English version of the scale. Kappa coefficients ranged from 0.53 to 1.00 for individual items, and an intraclass correlation coefficient (ICC) of 0.82 was observed for the total PEDro score. The standard error of measurement of the scale was 0.58. The Portuguese version of the scale was comparable with the English version, with an ICC of 0.78. The inter-rater reproducibility of the Brazilian Portuguese PEDro Scale is adequate and similar to that of the original English version.
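The per-item kappa statistics reported above can be reproduced from raw ratings; the two toy rating vectors below are assumptions for illustration, not the study's data.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters giving categorical ratings:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)

# Two raters scoring 10 trials on one yes/no PEDro item (toy data).
rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
print(round(cohens_kappa(rater_a, rater_b), 2))
```

The ICC used for the 0-to-10 total score is computed differently (from variance components), since the total is an interval-scaled quantity rather than a category.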

  9. Spent fuel composition database system on WWW. SFCOMPO on WWW Ver.2

    Energy Technology Data Exchange (ETDEWEB)

    Mochizuki, Hiroki [Japan Research Institute, Ltd., Tokyo (Japan); Suyama, Kenya; Nomura, Yasushi; Okuno, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-08-01

    'SFCOMPO on WWW Ver.2' is an advanced version of 'SFCOMPO on WWW' ('Spent Fuel Composition Database System on WWW') released in 1997. This new version has a function of database management by an introduced relational database software 'PostgreSQL' and has various searching methods. All of the data required for the calculation of isotopic composition is available from the web site of this system. This report describes the outline of this system and the searching method using Internet. In addition, the isotopic composition data and the reactor data of the 14 LWRs (7 PWR and 7 BWR) registered in this system are described. (author)

  10. The Androgen Receptor Gene Mutations Database.

    Science.gov (United States)

    Gottlieb, B; Lehvaslaiho, H; Beitel, L K; Lumbroso, R; Pinsky, L; Trifiro, M

    1998-01-01

    The current version of the androgen receptor (AR) gene mutations database is described. The total number of reported mutations has risen from 272 to 309 in the past year. We have expanded the database: (i) by giving each entry an accession number; (ii) by adding information on the length of polymorphic polyglutamine (polyGln) and polyglycine (polyGly) tracts in exon 1; (iii) by adding information on large gene deletions; (iv) by providing a direct link with a completely searchable database (courtesy EMBL-European Bioinformatics Institute). The addition of the exon 1 polymorphisms is discussed in light of their possible relevance as markers for predisposition to prostate or breast cancer. The database is also available on the internet (http://www.mcgill.ca/androgendb/), from the EMBL-European Bioinformatics Institute (ftp.ebi.ac.uk/pub/databases/androgen), or as a Macintosh FilemakerPro or Word file (MC33@musica.mcgill.ca).

  11. U.S. EPA River Reach File Version 1.0

    Data.gov (United States)

    Kansas Data Access and Support Center — Reach File Version 1.0 (RF1) is a vector database of approximately 700,000 miles of streams and open waters in the conterminous United States. It is used extensively...

  12. Product- and Process Units in the CRITT Translation Process Research Database

    DEFF Research Database (Denmark)

    Carl, Michael

    The first version of the "Translation Process Research Database" (TPR-DB v1.0) was released in August 2012, containing logging data of more than 400 translation and text production sessions. The current version of the TPR-DB (v1.4) contains data from more than 940 sessions, which represents more than 300 hours of text production. The database provides the raw logging data, as well as tables of pre-processed product and processing units. The TPR-DB includes various types of simple and composed product and process units that are intended to support the analysis and modelling of human text …

  13. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1992-11-09

    The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others, as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents on the compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. A computerized version is available that includes retrieval software.

  14. Hanford Site technical baseline database. Revision 1

    International Nuclear Information System (INIS)

    Porter, P.E.

    1995-01-01

    This report lists the Hanford specific files (Table 1) that make up the Hanford Site Technical Baseline Database. Table 2 includes the delta files that delineate the differences between this revision and revision 0 of the Hanford Site Technical Baseline Database. This information is being managed and maintained on the Hanford RDD-100 System, which uses the capabilities of RDD-100, a systems engineering software system of Ascent Logic Corporation (ALC). This revision of the Hanford Site Technical Baseline Database uses RDD-100 version 3.0.2.2 (see Table 3). Directories reflect those controlled by the Hanford RDD-100 System Administrator. Table 4 provides information regarding the platform. A cassette tape containing the Hanford Site Technical Baseline Database is available

  15. Verification of RESRAD-RDD. (Version 2.01)

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Jing-Jy [Argonne National Lab. (ANL), Argonne, IL (United States); Flood, Paul E. [Argonne National Lab. (ANL), Argonne, IL (United States); LePoire, David [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, Sunita [Argonne National Lab. (ANL), Argonne, IL (United States); Yu, Charley [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-09-01

    In this report, the results generated by RESRAD-RDD version 2.01 are compared with those produced by RESRAD-RDD version 1.7 for different scenarios with different sets of input parameters. RESRAD-RDD version 1.7 is spreadsheet-driven, performing calculations with Microsoft Excel spreadsheets. RESRAD-RDD version 2.01 revamps version 1.7, using command-driven programs written in Visual Basic.NET to direct calculations with data saved in a Microsoft Access database, and a redesigned graphical user interface (GUI) that provides more flexibility and choices in guideline derivation. Because version 1.7 and version 2.01 perform the same calculations, the comparison of their results serves as verification of both versions. The verification covered calculation results for 11 radionuclides included in both versions: Am-241, Cf-252, Cm-244, Co-60, Cs-137, Ir-192, Po-210, Pu-238, Pu-239, Ra-226, and Sr-90. First, all nuclide-specific data used in both versions were compared to ensure that they are identical. Then generic operational guidelines and measurement-based radiation doses or stay times associated with a specific operational guideline group were calculated with both versions using different sets of input parameters, and the results obtained with the same set of input parameters were compared. A total of 12 sets of input parameters were used for the verification, and the comparison was performed for each operational guideline group, from A to G, sequentially. The verification shows that RESRAD-RDD version 1.7 and RESRAD-RDD version 2.01 generate almost identical results; the slight differences can be attributed to differences in numerical precision between Microsoft Excel and Visual Basic.NET. RESRAD-RDD version 2.01 allows the selection of different units for reporting calculation results. Results in SI units were obtained and compared with the base results (in traditional units) used for comparison with version 1.7. The comparison shows that RESRAD

  16. NIMS structural materials databases and cross search engine - MatNavi

    Energy Technology Data Exchange (ETDEWEB)

    Yamazaki, M.; Xu, Y.; Murata, M.; Tanaka, H.; Kamihira, K.; Kimura, K. [National Institute for Materials Science, Tokyo (Japan)

    2007-06-15

    The Materials Database Station (MDBS) of the National Institute for Materials Science (NIMS) owns the world's largest Internet materials database for academic and industrial purposes, which is composed of twelve databases: five concerning structural materials, five concerning basic physical properties, one for superconducting materials and one for polymers. All of these databases are open to Internet access at the website http://mits.nims.go.jp/en. Online tools for predicting properties of polymers and composite materials are also available. The NIMS structural materials databases comprise the structural materials data sheet online version (creep, fatigue, corrosion and space-use materials strength), the microstructure database for crept materials, the pressure vessel materials database and CCT diagrams for welding. (orig.)

  17. The new NIST atomic spectra database

    International Nuclear Information System (INIS)

    Kelleher, D.E.; Martin, W.C.; Wiese, W.L.; Sugar, J.; Fuhr, J.R.; Olsen, K.; Musgrove, A.; Mohr, P.J.; Reader, J.; Dalton, G.R.

    1999-01-01

    The new Atomic Spectra Database (ASD), Version 2.0, of the National Institute of Standards and Technology (NIST) contains significantly more data and covers a wider range of atomic and ionic transitions and energy levels than earlier versions. All data are integrated, and the database has a new user interface and search engine. ASD contains spectral reference data which have been critically evaluated and compiled by NIST. Version 2.0 contains data on 900 spectra, with about 70,000 energy levels and 91,000 lines ranging from about 1 Å to 200 µm, roughly half of which have transition probabilities with estimated uncertainties. References to the NIST compilations and original data sources are listed in the ASD bibliography. A detailed 'Help' file serves as a user's manual, and full search and filter capabilities are provided. (orig.)

  18. TrSDB: a proteome database of transcription factors

    Science.gov (United States)

    Hermoso, Antoni; Aguilar, Daniel; Aviles, Francesc X.; Querol, Enrique

    2004-01-01

    TrSDB—TranScout Database—(http://ibb.uab.es/trsdb) is a proteome database of eukaryotic transcription factors based upon predicted motifs by TranScout and data sources such as InterPro and Gene Ontology Annotation. Nine eukaryotic proteomes are included in the current version. Extensive and diverse information for each database entry, different analyses considering TranScout classification and similarity relationships are offered for research on transcription factors or gene expression. PMID:14681387

  19. CyanoBase: the cyanobacteria genome database update 2010

    OpenAIRE

    Nakao, Mitsuteru; Okamoto, Shinobu; Kohara, Mitsuyo; Fujishiro, Tsunakazu; Fujisawa, Takatomo; Sato, Shusei; Tabata, Satoshi; Kaneko, Takakazu; Nakamura, Yasukazu

    2009-01-01

    CyanoBase (http://genome.kazusa.or.jp/cyanobase) is the genome database for cyanobacteria, which are model organisms for photosynthesis. The database houses cyanobacteria species information, complete genome sequences, genome-scale experiment data, gene information, gene annotations and mutant information. In this version, we updated these datasets and improved the navigation and the visual display of the data views. In addition, a web service API now enables users to retrieve the data in various formats with other tools, seamlessly.

  20. Global Ocean Surface Water Partial Pressure of CO2 Database: Measurements Performed During 1968-2007 (Version 2007)

    Energy Technology Data Exchange (ETDEWEB)

    Kozyr, Alex [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Carbon Dioxide Information Analysis Center

    2008-09-30

    More than 4.1 million measurements of surface water partial pressure of CO2 obtained over the global oceans during 1968-2007 are listed in the Lamont-Doherty Earth Observatory (LDEO) database, which includes open ocean and coastal water measurements. The data assembled include only those measured by equilibrator-CO2 analyzer systems and have been quality-controlled based on the stability of the system performance, the reliability of calibrations for CO2 analysis, and the internal consistency of data. To allow re-examination of the data in the future, a number of measured parameters relevant to pCO2 measurements are listed. The overall uncertainty for the pCO2 values listed is estimated to be ± 2.5 µatm on average. For simplicity and for ease of reference, this version is referred to as 2007, meaning that data collected through 31 December 2007 have been included. It is our intention to update this database annually. There are 37 new cruise/ship files in this update. In addition, some editing has been performed on existing files, so this should be considered a V2007 file. We have also added a column reporting the partial pressure of CO2 in seawater in units of Pascals. The data presented in this database include the analyses of partial pressure of CO2 (pCO2), sea surface temperature (SST), sea surface salinity (SSS), pressure at equilibration, and barometric pressure in the outside air from the ship's observation system. The global pCO2 data set is available free of charge as a numeric data package (NDP) from the Carbon Dioxide Information Analysis Center (CDIAC). The NDP consists of the oceanographic data files and this printed documentation, which describes the procedures and methods used to obtain the data.

  1. Description of geophysical data in the SKB database GEOTAB. Version 2

    International Nuclear Information System (INIS)

    Sehlstedt, S.

    1991-01-01

    For the storage of different types of data collected by SKB, a database called GEOTAB has been created. The following data are stored in the database: background data, geological data, geophysical data, hydrogeological and meteorological data, hydrochemical data, and tracer tests. This report describes the data flow for different types of geophysical measurement. The descriptions start with the measurement and end with the storage of data in GEOTAB. Each process and the resulting data volume are presented separately. The geophysical measurements have been divided into the following subjects: geophysical ground surface measurements, geophysical borehole logging, and petrophysical measurements. Each group of measurements is described in an individual chapter. In each chapter several measuring techniques are described, and each method has a data table and a flyleaf table in GEOTAB. (author)

  2. Insertion algorithms for network model database management systems

    Science.gov (United States)

    Mamadolimov, Abdurashid; Khikmat, Saburov

    2017-12-01

    The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and query comparisons are expensive, the efficiency of the managing algorithms hinges on minimizing the number of query comparisons. We consider the updating operation for network model database management systems and develop a new sequential algorithm for it. We also suggest a distributed version of the algorithm.
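    The partial-order property described above can be checked with a standard topological sort: the schema graph forms a partial order exactly when it is acyclic. The sketch below (an illustrative check using Kahn's algorithm, not the authors' insertion algorithm) makes this concrete.

    ```python
    from collections import defaultdict, deque

    def is_partial_order(nodes, arcs):
        """Check that a schema graph (object types = nodes, relationship
        types = arcs) is acyclic, so its reachability relation forms a
        partial order. Uses Kahn's topological-sort algorithm."""
        indeg = {n: 0 for n in nodes}
        succ = defaultdict(list)
        for u, v in arcs:
            succ[u].append(v)
            indeg[v] += 1
        queue = deque(n for n in nodes if indeg[n] == 0)
        seen = 0
        while queue:
            u = queue.popleft()
            seen += 1
            for v in succ[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    queue.append(v)
        return seen == len(nodes)  # every node processed => no cycle

    # A small hypothetical owner->member schema: acyclic, hence a partial order.
    print(is_partial_order(["Dept", "Emp", "Proj"],
                           [("Dept", "Emp"), ("Dept", "Proj"), ("Emp", "Proj")]))  # True
    ```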

  3. CyanoBase: the cyanobacteria genome database update 2010.

    Science.gov (United States)

    Nakao, Mitsuteru; Okamoto, Shinobu; Kohara, Mitsuyo; Fujishiro, Tsunakazu; Fujisawa, Takatomo; Sato, Shusei; Tabata, Satoshi; Kaneko, Takakazu; Nakamura, Yasukazu

    2010-01-01

    CyanoBase (http://genome.kazusa.or.jp/cyanobase) is the genome database for cyanobacteria, which are model organisms for photosynthesis. The database houses cyanobacteria species information, complete genome sequences, genome-scale experiment data, gene information, gene annotations and mutant information. In this version, we updated these datasets and improved the navigation and the visual display of the data views. In addition, a web service API now enables users to retrieve the data in various formats with other tools, seamlessly.

  4. PhosphoBase: a database of phosphorylation sites

    DEFF Research Database (Denmark)

    Blom, Nikolaj; Kreegipuu, Andres; Brunak, Søren

    1998-01-01

    PhosphoBase is a database of experimentally verified phosphorylation sites. Version 1.0 contains 156 entries and 398 experimentally determined phosphorylation sites. Entries are compiled and revised from the literature and from major protein sequence databases such as SwissProt and PIR. The entries provide information about the phosphoprotein and the exact position of its phosphorylation sites. Furthermore, part of the entries contain information about kinetic data obtained from enzyme assays on specific peptides. To illustrate the use of data extracted from PhosphoBase, we present a sequence logo displaying the overall conservation of positions around serines phosphorylated by protein kinase A (PKA). PhosphoBase is available on the WWW at http://www.cbs.dtu.dk/databases/PhosphoBase/.
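    A sequence logo of the kind mentioned in the abstract starts from per-position residue counts in windows aligned on the phosphorylated serine. A minimal sketch with made-up 5-mer windows (hypothetical data, not PhosphoBase entries):

    ```python
    from collections import Counter

    def position_counts(sites, window=2):
        """Count residue frequencies at each offset around a phosphorylated
        serine (offset 0); such counts are the raw input of a sequence logo."""
        counts = {off: Counter() for off in range(-window, window + 1)}
        for seq in sites:
            center = window  # sequences are pre-aligned on the phospho-Ser
            for off in range(-window, window + 1):
                counts[off][seq[center + off]] += 1
        return counts

    # Toy 5-mers aligned on the phosphorylated serine (hypothetical data).
    sites = ["RRASV", "RRFSL", "KRASI"]
    counts = position_counts(sites)
    print(counts[-2].most_common(1))  # [('R', 2)] -- arginine enriched at -2
    ```

    Converting these counts to letter heights (information content per position) is then a matter of applying the standard logo formula.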

  5. Development of the geometry database for the CBM experiment

    Science.gov (United States)

    Akishina, E. P.; Alexandrov, E. I.; Alexandrov, I. N.; Filozova, I. A.; Friese, V.; Ivanov, V. V.

    2018-01-01

    The paper describes the current state of the Geometry Database (Geometry DB) for the CBM experiment. The main purpose of this database is to provide convenient tools for: (1) managing the geometry modules; (2) assembling various versions of the CBM setup as a combination of geometry modules and additional files. The CBM users of the Geometry DB may use both GUI (Graphical User Interface) and API (Application Programming Interface) tools for working with it.

  6. CD-ROM for the PGAA-IAEA database

    International Nuclear Information System (INIS)

    Firestone, R.B.; Zerking, V.

    2007-01-01

    Both the database of prompt gamma rays from slow neutron capture for elemental analysis and the results of this CRP are available on the accompanying CD-ROM. The file index.html is the home page for the CD-ROM, and provides links to the following information: (a) The CRP - General information, papers and reports relevant to this CRP. (b) The PGAA-IAEA database viewer - An interactive program to display and search the PGAA database by isotope, energy or capture cross-section. (c) The Database of Prompt Gamma Rays from Slow Neutron Capture for Elemental Analysis - This report. (d) The PGAA database files - Adopted PGAA database and associated files in EXCEL, PDF and Text formats. The archival databases by Lone et al. and by Reedy and Frankle are also available. (e) The Evaluated Gamma-Ray Activation File (EGAF) - The adopted PGAA database in ENSDF format. Data can be viewed with the Isotope Explorer 2.2 ENSDF Viewer. (f) The PGAA database evaluation - ENSDF format versions of the adopted PGAA database, and the Budapest and ENSDF isotopic input files. Decay scheme balance and statistical analysis summaries are provided. (g) The Isotope Explorer 2.2 ENSDF viewer - Windows software for viewing the level scheme drawings and tables provided in ENSDF format. The complete ENSDF database is included, as of December 2002. The databases and viewers are discussed in greater detail in the following sections.

  7. PC version of PRIS (Power Reactor Information System)

    International Nuclear Information System (INIS)

    Fukala, J.; Stanik, Z.; White, D.

    1990-05-01

    The IAEA has been collecting operating experience data on nuclear power plants in the Member States since 1970. In 1980 a computerized database was established, the IAEA Power Reactor Information System (PRIS). To make PRIS data available to the Member States in a more convenient format, the development of a PC version of PRIS started in 1989.

  8. The Danish (Q)SAR Database Update Project

    DEFF Research Database (Denmark)

    Nikolov, Nikolai Georgiev; Dybdahl, Marianne; Abildgaard Rosenberg, Sine

    2013-01-01

    The Danish (Q)SAR Database is a collection of predictions from quantitative structure-activity relationship ((Q)SAR) models for over 70 environmental and human health-related endpoints (covering biodegradation, metabolism, allergy, irritation, endocrine disruption, teratogenicity, mutagenicity, carcinogenicity and others), each of them available for 185,000 organic substances. The database has been available online since 2005 (http://qsar.food.dtu.dk). A major update project for the Danish (Q)SAR database is under way, with a new online release planned in the beginning of 2015. The updated version will contain more than 600,000 discrete organic structures and new, more precise predictions for all endpoints, derived by consensus algorithms from a number of state-of-the-art individual predictions. Copyright © 2013 Published by Elsevier Ireland Ltd.

  9. Super Natural II--a database of natural products.

    Science.gov (United States)

    Banerjee, Priyanka; Erehman, Jevgeni; Gohlke, Björn-Oliver; Wilhelm, Thomas; Preissner, Robert; Dunkel, Mathias

    2015-01-01

    Natural products play a significant role in drug discovery and development. Many topological pharmacophore patterns are common between natural products and commercial drugs. A better understanding of the specific physicochemical and structural features of natural products is important for corresponding drug development. Several encyclopedias of natural compounds have been composed, but the information remains scattered or not freely available. The first version of the Supernatural database, containing ∼50,000 compounds, was published in 2006 to address these challenges. Here we present a new, updated and expanded version of the natural product database, Super Natural II (http://bioinformatics.charite.de/supernatural), comprising ∼326,000 molecules. It provides all corresponding 2D structures, the most important structural and physicochemical properties, the predicted toxicity class for ∼170,000 compounds and the vendor information for the vast majority of compounds. The new version allows a template-based search for similar compounds as well as a search for compound names, vendors, specific physical properties or any substructures. Super Natural II also provides information about the pathways associated with synthesis and degradation of the natural products, as well as their mechanism of action with respect to structurally similar drugs and their target proteins. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
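    Template-based searches for similar compounds are commonly scored with the Tanimoto coefficient over structural fingerprints. The sketch below illustrates that general scoring idea with hypothetical fingerprint bit sets; it is not Super Natural II's actual implementation.

    ```python
    def tanimoto(fp_a, fp_b):
        """Tanimoto coefficient of two fingerprint bit sets: |A ∩ B| / |A ∪ B|."""
        union = len(fp_a | fp_b)
        return len(fp_a & fp_b) / union if union else 0.0

    # Hypothetical on-bit sets for a query template and two candidates.
    query = {1, 4, 7, 9}
    candidates = {"cmpd_a": {1, 4, 7, 9, 12}, "cmpd_b": {2, 5, 9}}

    # Rank candidates by decreasing similarity to the query template.
    ranked = sorted(candidates, key=lambda c: tanimoto(query, candidates[c]),
                    reverse=True)
    print(ranked)  # ['cmpd_a', 'cmpd_b']
    ```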

  10. Migration check tool: automatic plan verification following treatment management systems upgrade and database migration.

    Science.gov (United States)

    Hadley, Scott W; White, Dale; Chen, Xiaoping; Moran, Jean M; Keranen, Wayne M

    2013-11-04

    Software upgrades of the treatment management system (TMS) sometimes require that all data be migrated from one version of the database to another. It is necessary to verify that the data are correctly migrated to assure patient safety. It is impossible to verify by hand the thousands of parameters that go into each patient's radiation therapy treatment plan. Repeating pretreatment QA is costly, time-consuming, and may be inadequate in detecting errors that are introduced during the migration. In this work we investigate the use of an automatic Plan Comparison Tool to verify that plan data have been correctly migrated to a new version of a TMS database from an older version. We developed software to query and compare treatment plans between different versions of the TMS. The same plan in the two TMS versions is translated into an XML schema. A plan comparison module takes the two XML schemas as input and reports any differences in parameters between the two versions of the same plan by applying a schema mapping. A console application is used to query the database to obtain a list of active or in-preparation plans to be tested. It then runs in batch mode to compare all the plans, and a report of success or failure of the comparison is saved for review. This software tool was used as part of a software upgrade and database migration from Varian's Aria 8.9 to Aria 11 TMS. Parameters were compared for 358 treatment plans in 89 minutes. This direct comparison of all plan parameters in the migrated TMS against the previous TMS surpasses current QA methods that rely on repeating pretreatment QA measurements or labor-intensive and fallible hand comparisons.
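    The comparison step described above — flattening each plan's XML representation into parameter paths and reporting mismatches — can be sketched as follows. The element names are hypothetical placeholders, not Aria's actual schema.

    ```python
    import xml.etree.ElementTree as ET

    def plan_params(xml_text):
        """Flatten a plan's XML into a {element-path: value} dictionary."""
        params = {}
        def walk(node, path):
            p = f"{path}/{node.tag}"
            if node.text and node.text.strip():
                params[p] = node.text.strip()
            for child in node:
                walk(child, p)
        walk(ET.fromstring(xml_text), "")
        return params

    def diff_plans(old_xml, new_xml):
        """Report every parameter that differs between two plan versions."""
        old, new = plan_params(old_xml), plan_params(new_xml)
        return {k: (old.get(k), new.get(k))
                for k in set(old) | set(new) if old.get(k) != new.get(k)}

    # Hypothetical pre- and post-migration serializations of the same plan.
    old = "<plan><beam><mu>120.5</mu><gantry>180</gantry></beam></plan>"
    new = "<plan><beam><mu>120.5</mu><gantry>182</gantry></beam></plan>"
    print(diff_plans(old, new))  # {'/plan/beam/gantry': ('180', '182')}
    ```

    Run over a batch of plans, an empty diff means the migration preserved that plan; any non-empty diff is flagged for review.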

  11. Cyclebase 3.0: a multi-organism database on cell-cycle regulation and phenotypes

    DEFF Research Database (Denmark)

    Santos Delgado, Alberto; Wernersson, Rasmus; Jensen, Lars Juhl

    2015-01-01

    In version 3.0, we have updated the content of the database to reflect changes to genome annotation, added new mRNA and protein expression data, and integrated cell-cycle phenotype information from high-content screens and model-organism databases. The new version of Cyclebase also features a new web interface.

  12. Meteonorm. Global meteorological database for solar energy and applied climatology. Version 4.0: edition 2000. Software and data on CD-ROM

    International Nuclear Information System (INIS)

    1999-01-01

    This is a comprehensive meteorological planning tool for system design, targeted at engineers, architects, teachers, planners and anyone interested in solar energy and climatology. METEONORM includes data from 2400 meteorological stations worldwide. Version 4.0 is based on over 15 years of development of meteorological databases for energy applications. It may be used for solar applications at any desired location in the world, as it includes an interpolation model of solar radiation and additional parameters for any site in the world. Also, with up-to-date algorithms, solar radiation incident on surfaces of arbitrary orientation may be calculated at the touch of a button. The local skyline profile may be specified. Five languages are supported: English, French, German, Italian, Spanish. Sites may be selected on a map by means of a graphical interface. User data may be imported. 16 different output formats are available. Data, programme, manual, maps and illustrations are incorporated on the CD-ROM, which is available for sale.

  13. LHCb Conditions database operation assistance systems

    International Nuclear Information System (INIS)

    Clemencic, M; Shapoval, I; Cattaneo, M; Degaudenzi, H; Santinelli, R

    2012-01-01

    The Conditions Database (CondDB) of the LHCb experiment provides versioned, time-dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger (HLT), reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues. The first system is a CondDB state tracking extension to the Oracle 3D Streams replication technology, to trap cases when the CondDB replication was corrupted. Second, an automated distribution system for the SQLite-based CondDB, providing also smart backup and checkout mechanisms for the CondDB managers and LHCb users respectively. And, finally, a system to verify and monitor the internal (CondDB self-consistency) and external (LHCb physics software vs. CondDB) compatibility. The former two systems are used in production in the LHCb experiment and have achieved the desired goal of higher flexibility and robustness for the management and operation of the CondDB. The latter one has been fully designed and is currently moving to the implementation stage.

  14. A Global Database of Gas Fluxes from Soils after Rewetting or Thawing, Version 1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — This database contains information compiled from published studies on gas flux from soil following rewetting or thawing. The resulting database includes 222 field...

  15. Neuraxial blockade for external cephalic version: Cost analysis.

    Science.gov (United States)

    Yamasato, Kelly; Kaneshiro, Bliss; Salcedo, Jennifer

    2015-07-01

    Neuraxial blockade (epidural or spinal anesthesia/analgesia) with external cephalic version increases the external cephalic version success rate. Hospitals and insurers may affect access to neuraxial blockade for external cephalic version, but the costs to these institutions remain largely unstudied. The objective of this study was to perform a cost analysis of neuraxial blockade use during external cephalic version from hospital and insurance payer perspectives. Secondarily, we estimated the effect of neuraxial blockade on cesarean delivery rates. A decision-analysis model was developed using costs and probabilities occurring prenatally through the delivery hospital admission. Model inputs were derived from the literature, national databases, and local supply costs. Univariate and bivariate sensitivity analyses and Monte Carlo simulations were performed to assess model robustness. Neuraxial blockade was cost saving to both hospitals ($30 per delivery) and insurers ($539 per delivery) using baseline estimates. From both perspectives, however, the model was sensitive to multiple variables. Monte Carlo simulation indicated neuraxial blockade to be more costly in approximately 50% of scenarios. The model demonstrated that routine use of neuraxial blockade during external cephalic version, compared to no neuraxial blockade, prevented 17 cesarean deliveries for every 100 external cephalic versions attempted. Neuraxial blockade is associated with minimal hospital and insurer cost changes in the setting of external cephalic version, while reducing the cesarean delivery rate. © 2015 The Authors. Journal of Obstetrics and Gynaecology Research © 2015 Japan Society of Obstetrics and Gynecology.
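    A decision-analysis model of this kind can be illustrated with a small Monte Carlo sketch. All probabilities and costs below are hypothetical placeholders, not the study's actual inputs.

    ```python
    import random

    def expected_cost(p_success, cost_block, cost_cesarean, cost_vaginal,
                      n=100_000, seed=1):
        """Monte Carlo estimate of mean per-delivery cost for an external
        cephalic version attempt: a successful version leads to vaginal
        delivery, a failed one to cesarean. Inputs are illustrative only."""
        random.seed(seed)
        total = 0.0
        for _ in range(n):
            success = random.random() < p_success
            total += cost_block + (cost_vaginal if success else cost_cesarean)
        return total / n

    # Hypothetical inputs: neuraxial blockade raises success from 0.40 to 0.57
    # at an added $350, with cesarean at $12,000 and vaginal delivery at $8,000.
    without = expected_cost(0.40, 0, 12_000, 8_000)
    with_nb = expected_cost(0.57, 350, 12_000, 8_000)
    print(round(without - with_nb))  # positive => blockade is cost saving
    ```

    Re-running the simulation while drawing the inputs themselves from distributions is the essence of the probabilistic sensitivity analysis the abstract describes.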

  16. Recent Developments in the NIST Atomic Databases

    Science.gov (United States)

    Kramida, Alexander

    2011-05-01

    New versions of the NIST Atomic Spectra Database (ASD, v. 4.0) and three bibliographic databases (Atomic Energy Levels and Spectra, v. 2.0, Atomic Transition Probabilities, v. 9.0, and Atomic Line Broadening and Shapes, v. 3.0) have recently been released. In this contribution I will describe the main changes in the way users get the data through the Web. The contents of ASD have been significantly extended. In particular, the data on highly ionized tungsten (W III-LXXIV) have been added from a recently published NIST compilation. The tables for Fe I and Fe II have been replaced with newer, much more extensive lists (10000 lines for Fe I). The other updated or new spectra include H, D, T, He I-II, Li I-III, Be I-IV, B I-V, C I-II, N I-II, O I-II, Na I-X, K I-XIX, and Hg I. The new version of ASD now incorporates data on isotopes of several elements. I will describe some of the issues the NIST ASD Team faces when updating the data.

  17. Recent Developments in the NIST Atomic Databases

    International Nuclear Information System (INIS)

    Kramida, Alexander

    2011-01-01

    New versions of the NIST Atomic Spectra Database (ASD, v. 4.0) and three bibliographic databases (Atomic Energy Levels and Spectra, v. 2.0, Atomic Transition Probabilities, v. 9.0, and Atomic Line Broadening and Shapes, v. 3.0) have recently been released. In this contribution I will describe the main changes in the way users get the data through the Web. The contents of ASD have been significantly extended. In particular, the data on highly ionized tungsten (W III-LXXIV) have been added from a recently published NIST compilation. The tables for Fe I and Fe II have been replaced with newer, much more extensive lists (10000 lines for Fe I). The other updated or new spectra include H, D, T, He I-II, Li I-III, Be I-IV, B I-V, C I-II, N I-II, O I-II, Na I-X, K I-XIX, and Hg I. The new version of ASD now incorporates data on isotopes of several elements. I will describe some of the issues the NIST ASD Team faces when updating the data.

  18. REDIdb: the RNA editing database.

    Science.gov (United States)

    Picardi, Ernesto; Regina, Teresa Maria Rosaria; Brennicke, Axel; Quagliariello, Carla

    2007-01-01

    The RNA Editing Database (REDIdb) is an interactive, web-based database created and designed with the aim to allocate RNA editing events such as substitutions, insertions and deletions occurring in a wide range of organisms. The database contains both fully and partially sequenced DNA molecules for which editing information is available either by experimental inspection (in vitro) or by computational detection (in silico). Each record of REDIdb is organized in a specific flat-file containing a description of the main characteristics of the entry, a feature table with the editing events and related details and a sequence zone with both the genomic sequence and the corresponding edited transcript. REDIdb is a relational database in which the browsing and identification of editing sites has been simplified by means of two facilities to either graphically display genomic or cDNA sequences or to show the corresponding alignment. In both cases, all editing sites are highlighted in colour and their relative positions are detailed by mousing over. New editing positions can be directly submitted to REDIdb after a user-specific registration to obtain authorized secure access. This first version of REDIdb database stores 9964 editing events and can be freely queried at http://biologia.unical.it/py_script/search.html.

  19. Molecular signatures database (MSigDB) 3.0.

    Science.gov (United States)

    Liberzon, Arthur; Subramanian, Aravind; Pinchback, Reid; Thorvaldsdóttir, Helga; Tamayo, Pablo; Mesirov, Jill P

    2011-06-15

    Well-annotated gene sets representing the universe of the biological processes are critical for meaningful and insightful interpretation of large-scale genomic data. The Molecular Signatures Database (MSigDB) is one of the most widely used repositories of such sets. We report the availability of a new version of the database, MSigDB 3.0, with over 6700 gene sets, a complete revision of the collection of canonical pathways and experimental signatures from publications, enhanced annotations and upgrades to the web site. MSigDB is freely available for non-commercial use at http://www.broadinstitute.org/msigdb.

  20. User's and reference guide to the INEL RML/analytical radiochemistry sample tracking database version 1.00

    International Nuclear Information System (INIS)

    Femec, D.A.

    1995-09-01

    This report discusses the sample tracking database in use at the Idaho National Engineering Laboratory (INEL) by the Radiation Measurements Laboratory (RML) and Analytical Radiochemistry. The database was designed in-house to meet the specific needs of the RML and Analytical Radiochemistry. The report consists of two parts, a user's guide and a reference guide. The user's guide presents some of the fundamentals needed by anyone who will be using the database via its user interface. The reference guide describes the design of both the database and the user interface. Briefly mentioned in the reference guide are the code-generating tools, CREATE-SCHEMA and BUILD-SCREEN, written to automatically generate code for the database and its user interface. The appendices contain the input files used by these tools to create code for the sample tracking database. The output files generated by these tools are also included in the appendices.

  1. Version VI of the ESTree db: an improved tool for peach transcriptome analysis

    Science.gov (United States)

    Lazzari, Barbara; Caprera, Andrea; Vecchietti, Alberto; Merelli, Ivan; Barale, Francesca; Milanesi, Luciano; Stella, Alessandra; Pozzi, Carlo

    2008-01-01

    Background The ESTree database (db) is a collection of Prunus persica and Prunus dulcis EST sequences that in its current version encompasses 75,404 sequences from 3 almond and 19 peach libraries. Nine peach genotypes and four peach tissues are represented, from four fruit developmental stages. The aim of this work was to extend the already existing ESTree db by adding new sequences and analysis programs. Particular care was given to the implementation of the web interface, which allows querying each of the database features. Results A Perl modular pipeline is the backbone of sequence analysis in the ESTree db project. Outputs obtained during the pipeline steps are automatically arrayed into the fields of a MySQL database. Apart from standard clustering and annotation analyses, version VI of the ESTree db encompasses new tools for tandem repeat identification, annotation against genomic Rosaceae sequences, and positioning on the database of oligomer sequences that were used in a peach microarray study. Furthermore, known protein patterns and motifs were identified by comparison to PROSITE. Based on data retrieved from sequence annotation against the UniProtKB database, a script was prepared to track positions of homologous hits on the GO tree and build statistics on the ontology distribution in GO functional categories. EST mapping data were also integrated in the database. The PHP-based web interface was upgraded and extended. The aim of the authors was to enable querying the database according to all the biological aspects that can be investigated from the analysis of data available in the ESTree db. This is achieved by allowing multiple searches on logical subsets of sequences that represent different biological situations or features. Conclusions The version VI of ESTree db offers a broad overview on peach gene expression. Sequence analyses results contained in the database, extensively linked to external related resources, represent a large amount of

  2. Causal Analysis of Databases Concerning Electromagnetism and Health

    Directory of Open Access Journals (Sweden)

    Kristian Alonso-Stenberg

    2016-12-01

    In this article, we conducted a causal analysis of a system extracted from a database of current data in the telecommunications domain, namely the Eurobarometer 73.3 database, which arose from a survey of 26,602 EU citizens on the potential health effects that electromagnetic fields can produce. To determine the cause-effect relationships between variables, we represented these data by a directed graph to which a qualitative version of the theory of discrete chaos can be applied to highlight causal circuits and attractors, as these are basic elements of system behavior.
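    Causal circuits of the kind mentioned above are simple cycles in the directed cause-effect graph. A minimal DFS-based enumeration, shown on a made-up cause-effect graph (not the paper's data):

    ```python
    def causal_circuits(graph):
        """Enumerate simple cycles ("causal circuits") in a directed graph
        given as {node: [successors]}; DFS-based, adequate for small systems.
        Each cycle is reported once, starting at its smallest node."""
        circuits, path, on_path = [], [], set()

        def dfs(v, start):
            path.append(v)
            on_path.add(v)
            for w in graph.get(v, []):
                if w == start:
                    circuits.append(path + [start])       # closed a circuit
                elif w not in on_path and w > start:      # canonical ordering
                    dfs(w, start)
            path.pop()
            on_path.discard(v)

        for s in sorted(graph):
            dfs(s, s)
        return circuits

    # Toy cause-effect graph: exposure -> worry -> symptoms -> worry (feedback).
    g = {"exposure": ["worry"], "worry": ["symptoms"], "symptoms": ["worry"]}
    print(causal_circuits(g))  # [['symptoms', 'worry', 'symptoms']]
    ```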

  3. Development of a national, dynamic reservoir-sedimentation database

    Science.gov (United States)

    Gray, J.R.; Bernard, J.M.; Stewart, D.W.; McFaul, E.J.; Laurent, K.W.; Schwarz, G.E.; Stinson, J.T.; Jonas, M.M.; Randle, T.J.; Webb, J.W.

    2010-01-01

    The importance of dependable, long-term water supplies, coupled with the need to quantify rates of capacity loss of the Nation's reservoirs due to sediment deposition, were the most compelling reasons for developing the REServoir-SEDimentation survey information (RESSED) database and website. Created under the auspices of the Advisory Committee on Water Information's Subcommittee on Sedimentation by the U.S. Geological Survey and the Natural Resources Conservation Service, the RESSED database is the most comprehensive compilation of data from reservoir bathymetric and dry-basin surveys in the United States. As of March 2010, the database, which contains data compiled on the 1950s vintage Soil Conservation Service's Form SCS-34 data sheets, contained results from 6,616 surveys on 1,823 reservoirs in the United States and two surveys on one reservoir in Puerto Rico. The data span the period 1755–1997, with 95 percent of the surveys performed from 1930–1990. The reservoir surface areas range from sub-hectare-scale farm ponds to 658 km² Lake Powell. The data in the RESSED database can be useful for a number of purposes, including calculating changes in reservoir-storage characteristics, quantifying sediment budgets, and estimating erosion rates in a reservoir's watershed. The March 2010 version of the RESSED database has a number of deficiencies, including a cryptic and out-of-date database architecture; some geospatial inaccuracies (although most have been corrected); other data errors; an inability to store all data in a readily retrievable manner; and an inability to store all data types that currently exist. Perhaps most importantly, the March 2010 version of the RESSED database provides no publicly available means to submit new data and corrections to existing data. To address these and other deficiencies, the Subcommittee on Sedimentation, through the U.S. Geological Survey and the U.S. Army Corps of Engineers, began a collaborative project in
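    One use named above, calculating changes in reservoir-storage characteristics, reduces to simple arithmetic on the capacities reported by successive surveys. The figures below are hypothetical, not RESSED data.

    ```python
    def capacity_loss_rate(cap1_m3, year1, cap2_m3, year2):
        """Average annual storage-capacity loss between two reservoir surveys,
        returned as (m^3 per year, percent of initial capacity per year)."""
        years = year2 - year1
        loss = cap1_m3 - cap2_m3
        return loss / years, 100.0 * loss / (cap1_m3 * years)

    # Hypothetical surveys: 5.0e6 m^3 capacity in 1950, 4.2e6 m^3 in 1990.
    rate, pct = capacity_loss_rate(5.0e6, 1950, 4.2e6, 1990)
    print(rate, round(pct, 2))  # 20000.0 0.4
    ```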

  4. Update of the androgen receptor gene mutations database.

    Science.gov (United States)

    Gottlieb, B; Beitel, L K; Lumbroso, R; Pinsky, L; Trifiro, M

    1999-01-01

    The current version of the androgen receptor (AR) gene mutations database is described. The total number of reported mutations has risen from 309 to 374 during the past year. We have expanded the database by adding information on AR-interacting proteins; and we have improved the database by identifying those mutation entries that have been updated. Mutations of unknown significance have now been reported in both the 5' and 3' untranslated regions of the AR gene, and in individuals who are somatic mosaics constitutionally. In addition, single nucleotide polymorphisms, including silent mutations, have been discovered in normal individuals and in individuals with male infertility. A mutation hotspot associated with prostatic cancer has been identified in exon 5. The database is available on the internet (http://www.mcgill.ca/androgendb/), from EMBL-European Bioinformatics Institute (ftp.ebi.ac.uk/pub/databases/androgen), or as a Macintosh FilemakerPro or Word file (MC33@musica.mcgill.ca). Copyright 1999 Wiley-Liss, Inc.

  5. A curated database of cyanobacterial strains relevant for modern taxonomy and phylogenetic studies

    OpenAIRE

    Ramos, Vitor; Morais, João; Vasconcelos, Vitor M.

    2017-01-01

    The dataset herein described lays the groundwork for an online database of relevant cyanobacterial strains, named CyanoType (http://lege.ciimar.up.pt/cyanotype). It is a database that includes categorized cyanobacterial strains useful for taxonomic, phylogenetic or genomic purposes, with associated information obtained by means of a literature-based curation. The dataset lists 371 strains and represents the first version of the database (CyanoType v.1). Information for each strain includes st...

  6. Database for the OECD-IAEA Paks Fuel Project

    International Nuclear Information System (INIS)

    Szabo, Emese; Hozer, Zoltan; Gyori, Csaba; Hegyi, Gyoergy

    2010-01-01

    really necessary for the analytical work. The present version of the database can be extended according to the requests of project participants and considering the availability of requested data. The present version of the database was collected by the experts of AEKI in close cooperation with Paks NPP. The work was supported by the International Atomic Energy Agency (IAEA) and the Hungarian Atomic Energy Authority (HAEA). The database has been prepared to support three types of calculations: - Thermal-hydraulic calculations to describe how the inadequate cooling conditions could have become established during the incident. - Fuel behaviour simulation to describe the oxidation and degradation mechanisms of fuel assemblies. - Activity release and transport calculations to simulate the release of fission products from the failed fuel rods. The database includes the following main parts: - Design characteristics of VVER-440 fuel assemblies (main geometrical data, some mechanical properties, oxidation kinetics of Zr1%Nb cladding, and integral data of assemblies). - Operational data of damaged fuel assemblies (power histories of fuel assemblies, burnup, fuel rod internal pressure, isotope inventories, decay heat and axial power distribution). - Design characteristics of the cleaning tank (main geometrical data). - Measured data during the incident (temperature and water-level measurements, cleaning tank outlet flowrates). - Activity measurements (measured coolant activity concentrations, activity release through the chimney, flowrate of water make-up system, released activities). - Reports (describing results of investigations, chronology). - Status of fuel (results of visual observations). The database items were collected from different sources. Some of them were calculated by Paks NPP and AEKI using VVER-440 and Paks-specific input data.
The details of the present version of the database including the main information on calculational work will be described in the following

  7. 2MASS Catalog Server Kit Version 2.1

    Science.gov (United States)

    Yamauchi, C.

    2013-10-01

    The 2MASS Catalog Server Kit is open source software for easily constructing a high-performance search server for important astronomical catalogs. This software utilizes the open source RDBMS PostgreSQL; therefore, any user can set up the database on a local computer by following the step-by-step installation guide. The kit provides highly optimized stored functions for positional searches similar to those of SDSS SkyServer. Together with these, the powerful SQL environment of PostgreSQL will meet various users' demands. We released 2MASS Catalog Server Kit version 2.1 in 2012 May, which supports the latest WISE All-Sky catalog (563,921,584 rows) and 9 major all-sky catalogs. Local databases are often indispensable for observatories with unstable or narrow-band networks, or for heavy use such as retrieving large numbers of records within a short period of time. This software is well suited to such purposes, and the expanded catalog support and other improvements in version 2.1 cover a wider range of applications, including advanced calibration systems, scientific studies using complicated SQL queries, etc. Official page: http://www.ir.isas.jaxa.jp/~cyamauch/2masskit/
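
    The stored functions for positional search implement cone searches over catalog positions. A minimal Python sketch of the underlying great-circle math is shown below; the function names and the toy catalog are illustrative and are not part of the kit itself.

```python
import math

def angular_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between two sky positions
    (RA/Dec given in degrees), using the haversine formula, which is
    numerically stable at small separations."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a)))

def cone_search(catalog, ra0, dec0, radius_deg):
    """Return catalog rows (dicts with 'ra'/'dec' in degrees)
    within radius_deg of the search center (ra0, dec0)."""
    return [row for row in catalog
            if angular_sep_deg(ra0, dec0, row["ra"], row["dec"]) <= radius_deg]

# Hypothetical three-row catalog
catalog = [
    {"id": "a", "ra": 10.0, "dec": 20.0},
    {"id": "b", "ra": 10.1, "dec": 20.05},
    {"id": "c", "ra": 50.0, "dec": -5.0},
]
hits = cone_search(catalog, 10.0, 20.0, 0.2)  # matches "a" and "b"
```

    In a real server this filtering runs inside PostgreSQL stored functions backed by spatial indexing, rather than as a linear scan in the client.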

  8. PathwayAccess: CellDesigner plugins for pathway databases.

    Science.gov (United States)

    Van Hemert, John L; Dickerson, Julie A

    2010-09-15

    CellDesigner provides a user-friendly interface for graphical biochemical pathway description. Many pathway databases are not directly exportable to CellDesigner models. PathwayAccess is an extensible suite of CellDesigner plugins, which connect CellDesigner directly to pathway databases using respective Java application programming interfaces. The process is streamlined for creating new PathwayAccess plugins for specific pathway databases. Three PathwayAccess plugins, MetNetAccess, BioCycAccess and ReactomeAccess, directly connect CellDesigner to the pathway databases MetNetDB, BioCyc and Reactome. PathwayAccess plugins enable CellDesigner users to expose pathway data to analytical CellDesigner functions, curate their pathway databases and visually integrate pathway data from different databases using standard Systems Biology Markup Language and Systems Biology Graphical Notation. Implemented in Java, PathwayAccess plugins run with CellDesigner version 4.0.1 and were tested on Ubuntu Linux, Windows XP and 7, and MacOSX. Source code, binaries, documentation and video walkthroughs are freely available at http://vrac.iastate.edu/~jlv.

  9. EMU Lessons Learned Database

    Science.gov (United States)

    Matthews, Kevin M., Jr.; Crocker, Lori; Cupples, J. Scott

    2011-01-01

    As manned space exploration takes on the task of traveling beyond low Earth orbit, many problems arise that must be solved in order to make the journey possible. One major task is protecting humans from the harsh space environment. The current method of protecting astronauts during Extravehicular Activity (EVA) is through use of the specially designed Extravehicular Mobility Unit (EMU). As more rigorous EVA conditions need to be endured at new destinations, the suit will need to be tailored and improved in order to accommodate the astronaut. The objective of the EMU Lessons Learned Database (LLD) is to create a tool that assists in the development of next-generation EMUs, along with maintenance and improvement of the current EMU, by compiling data from Failure Investigation and Analysis Reports (FIARs), which contain information on past suit failures. FIARs use a system of codes that give more information on the aspects of the failure, but anyone unfamiliar with the EMU will be unable to decipher the information. A goal of the EMU LLD is not only to compile the information, but to present it in a user-friendly, organized, searchable database accessible to users at every level of familiarity with the EMU, newcomers and veterans alike. The EMU LLD originally started as an Excel database, which allowed easy navigation and analysis of the data through pivot charts. Creating an entry requires access to the Problem Reporting And Corrective Action database (PRACA), which contains the original FIAR data for all hardware. FIAR data are then transferred to, defined, and formatted in the LLD. Work is being done to create a web-based version of the LLD in order to increase accessibility to all of Johnson Space Center (JSC), which includes converting entries from Excel to the HTML format. 
FIARs related to the EMU have been completed in the Excel version, and now focus has shifted to expanding FIAR data in the LLD to include EVA tools and support hardware such as

  10. IAEA international database on irradiated nuclear graphite properties

    International Nuclear Information System (INIS)

    Burchell, T.D.; Clark, R.E.H.; Stephens, J.A.; Eto, M.; Haag, G.; Hacker, P.; Neighbour, G.B.; Janev, R.K.; Wickham, A.J.

    2000-02-01

    This report describes an IAEA database containing data on the properties of irradiated nuclear graphites. Development and implementation of the graphite database followed initial discussions at an IAEA Specialists' Meeting held in September 1995. The design of the database is based upon developments at the University of Bath (United Kingdom), work which the UK Health and Safety Executive initially supported. The database content and data management policies were determined during two IAEA Consultants' Meetings of nuclear reactor graphite specialists held in 1998 and 1999. The graphite data are relevant to the construction and safety case developments required for new and existing HTR nuclear power plants, and to the development of safety cases for continued operation of existing plants. The database design provides a flexible structure for data archiving and retrieval and employs Microsoft Access 97. An instruction manual is provided within this document for new users, including installation instructions for the database on personal computers running Windows 95/NT 4.0 or higher versions. The data management policies and associated responsibilities are contained in the database Working Arrangement which is included as an Appendix to this report. (author)

  11. Description of background data in the SKB database GEOTAB. Version 2

    International Nuclear Information System (INIS)

    Eriksson, E.; Sehlstedt, S.

    1991-03-01

    During the research and development program performed by SKB for the final disposal of spent nuclear fuel, a large quantity of geoscientific data was collected. Most of this data was stored in a database called Geotab. The data is organized into eight groups as follows: Background information; Geological data; Borehole geophysical measurements; Ground surface geophysical measurements; Hydrogeological and meteorological data; Hydrochemical data; Petrophysical measurements and Tracer tests. Except for borehole geophysical data, ground surface geophysical data and petrophysical data, which are described together in one report, the data in each group is described in a separate SKB report. The present report describes data within the Background data group. This data provides information on the location of areas studied, borehole positions and also some drilling information. Data is normally collected on forms or as notes and is then stored in the database. The background data group, called BACKGROUND, is divided into several subgroups: BGAREA (area background data); BGDRILL (drilling information); BGDRILLP (drill penetration data); BGHOLE (borehole information); BGTABLES (number of rows in a table); and BGTOLR (data table tolerance). A method consists of one or several data tables. In each chapter a method and its data tables are described. (authors)

  12. The STRING database in 2011

    DEFF Research Database (Denmark)

    Szklarczyk, Damian; Franceschini, Andrea; Kuhn, Michael

    2011-01-01

    present an update on the online database resource Search Tool for the Retrieval of Interacting Genes (STRING); it provides uniquely comprehensive coverage and ease of access to both experimental as well as predicted interaction information. Interactions in STRING are provided with a confidence score...... models, extensive data updates and strongly improved connectivity and integration with third-party resources. Version 9.0 of STRING covers more than 1100 completely sequenced organisms; the resource can be reached at http://string-db.org....

  13. Spatial Digital Database for the Geologic Map of Oregon

    Science.gov (United States)

    Walker, George W.; MacLeod, Norman S.; Miller, Robert J.; Raines, Gary L.; Connors, Katherine A.

    2003-01-01

    Introduction This report describes and makes available a geologic digital spatial database (orgeo) representing the geologic map of Oregon (Walker and MacLeod, 1991). The original paper publication was printed as a single map sheet at a scale of 1:500,000, accompanied by a second sheet containing map unit descriptions and ancillary data. A digital version of the Walker and MacLeod (1991) map was included in Raines and others (1996). The dataset provided by this open-file report supersedes the earlier published digital version (Raines and others, 1996). This digital spatial database is one of many being created by the U.S. Geological Survey as an ongoing effort to provide geologic information for use in spatial analysis in a geographic information system (GIS). This database can be queried in many ways to produce a variety of geologic maps. This database is not meant to be used or displayed at any scale larger than 1:500,000 (for example, 1:100,000). This report describes the methods used to convert the geologic map data into a digital format, describes the ArcInfo GIS file structures and relationships, and explains how to download the digital files from the U.S. Geological Survey public access World Wide Web site on the Internet. Scanned images of the printed map (Walker and MacLeod, 1991), their correlation of map units, and their explanation of map symbols are also available for download.

  14. 5S ribosomal RNA database Y2K.

    Science.gov (United States)

    Szymanski, M; Barciszewska, M Z; Barciszewski, J; Erdmann, V A

    2000-01-01

    This paper presents the updated version (Y2K) of the database of ribosomal 5S ribonucleic acids (5S rRNA) and their genes (5S rDNA), http://rose.man.poznan.pl/5SData/index.html. This edition of the database contains 1985 primary structures of 5S rRNA and 5S rDNA. They include 60 archaebacterial, 470 eubacterial, 63 plastid, nine mitochondrial and 1383 eukaryotic sequences. The nucleotide sequences of the 5S rRNAs or 5S rDNAs are divided according to the taxonomic position of the source organisms.

  15. A relational database for physical data from TJ-II discharges

    International Nuclear Information System (INIS)

    Sanchez, E.; Portas, A.B.; Vega, J.

    2002-01-01

    A relational database (RDB) has been developed for classifying TJ-II experimental data according to physical criteria. Two objectives have been achieved: the design and implementation of the database, and of software tools for data access, both relying on a single software driver. TJ-II data were arranged in several tables with a flexible design, speedy performance, efficient search capacity and adaptability to meet present and future requirements. The software has been developed to allow access to the TJ-II RDB from a variety of computer platforms (ALPHA AXP/True64 UNIX, CRAY/UNICOS, Intel Linux, Sparc/Solaris and Intel/Windows 95/98/NT) and programming languages (FORTRAN and C/C++). The database resides on a Windows NT Server computer and is managed by Microsoft SQL Server. The access software is based on open network computing remote procedure call and follows the client/server model. A server program running on the Windows NT computer controls data access. Operations on the database (through a local ODBC connection) are performed according to predefined permission protocols. A client library providing a set of basic functions for data integration and retrieval has been built in both static and dynamic link versions. The dynamic version is essential in accessing RDB data from 4GL environments (IDL and PV-WAVE among others)

  16. Analysis of Global Horizontal Irradiance in Version 3 of the National Solar Radiation Database.

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford; Martin, Curtis E.; Guay, Nathan Gene

    2015-09-01

    We report an analysis that compares global horizontal irradiance (GHI) estimates from version 3 of the National Solar Radiation Database (NSRDB v3) with surface measurements of GHI at a wide variety of locations over the period spanning from 2005 to 2012. The NSRDB v3 estimates of GHI are derived from the Physical Solar Model (PSM), which employs physics-based models to estimate GHI from measurements of reflected visible and infrared irradiance collected by Geostationary Operational Environmental Satellites (GOES) and several other data sources. Because the ground measurements themselves are uncertain, our analysis does not establish the absolute accuracy of PSM GHI. However, by examining the comparison for trends and for consistency across a large number of sites, we may establish a level of confidence in PSM GHI and identify conditions which indicate opportunities to improve PSM. We focus our evaluation on annual and monthly insolation because these quantities directly relate to prediction of energy production from solar power systems. We find that, generally, PSM GHI exhibits a bias towards overestimating insolation, on the order of 5% when all sky conditions are considered, and somewhat less (~3%) when only clear sky conditions are considered. The biases persist across multiple years and are evident at many locations. In our opinion the bias originates with PSM, and we consider it less credible that the bias stems from calibration drift or soiling of ground instruments. We observe that PSM GHI may significantly underestimate monthly insolation in locations subject to broad snow cover. We found examples of days where PSM GHI apparently misidentified snow cover as clouds, resulting in significant underestimates of GHI during these days and hence leading to substantial understatement of monthly insolation. Analysis of PSM GHI in adjacent pixels shows that the level of agreement between PSM GHI and ground data can vary substantially over distances on the order of 2 km. We
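
    The reported overestimate is a relative bias between modeled and measured insolation. A minimal Python sketch of that comparison follows; the function name and the monthly values are hypothetical, not NSRDB or ground-station data.

```python
def relative_bias(modeled, measured):
    """Relative bias of a modeled insolation series against ground
    measurements: (sum(modeled) - sum(measured)) / sum(measured).
    Positive values mean the model overestimates."""
    if len(modeled) != len(measured) or not measured:
        raise ValueError("series must have the same nonzero length")
    return (sum(modeled) - sum(measured)) / sum(measured)

# Hypothetical monthly insolation totals (kWh/m^2); the model here
# overestimates each month by 5%
measured = [120.0, 140.0, 180.0, 200.0]
modeled = [126.0, 147.0, 189.0, 210.0]
bias = relative_bias(modeled, measured)  # 0.05, i.e. a 5% overestimate
```

    Summing before dividing weights the bias by insolation, which matches the focus on annual and monthly totals rather than per-sample errors.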

  17. JASPAR 2010: the greatly expanded open-access database of transcription factor binding profiles

    Science.gov (United States)

    Portales-Casamar, Elodie; Thongjuea, Supat; Kwon, Andrew T.; Arenillas, David; Zhao, Xiaobei; Valen, Eivind; Yusuf, Dimas; Lenhard, Boris; Wasserman, Wyeth W.; Sandelin, Albin

    2010-01-01

    JASPAR (http://jaspar.genereg.net) is the leading open-access database of matrix profiles describing the DNA-binding patterns of transcription factors (TFs) and other proteins interacting with DNA in a sequence-specific manner. Its fourth major release is the largest expansion of the core database to date: the database now holds 457 non-redundant, curated profiles. The new entries include the first batch of profiles derived from ChIP-seq and ChIP-chip whole-genome binding experiments, and 177 yeast TF binding profiles. The introduction of a yeast division brings the convenience of JASPAR to an active research community. As binding models are refined by newer data, the JASPAR database now uses versioning of matrices: in this release, 12% of the older models were updated to improved versions. Classification of TF families has been improved by adopting a new DNA-binding domain nomenclature. A curated catalog of mammalian TFs is provided, extending the use of the JASPAR profiles to additional TFs belonging to the same structural family. The changes in the database set the system ready for more rapid acquisition of new high-throughput data sources. Additionally, three new special collections provide matrix profile data produced by recent alternative high-throughput approaches. PMID:19906716

  18. Version pressure feedback mechanisms for speculative versioning caches

    Science.gov (United States)

    Eichenberger, Alexandre E.; Gara, Alan; O'Brien, Kathryn M.; Ohmacht, Martin; Zhuang, Xiaotong

    2013-03-12

    Mechanisms are provided for controlling version pressure on a speculative versioning cache. Raw version pressure data is collected based on one or more threads accessing cache lines of the speculative versioning cache. One or more statistical measures of version pressure are generated based on the collected raw version pressure data. A determination is made as to whether one or more modifications to an operation of a data processing system are to be performed based on the one or more statistical measures of version pressure, the one or more modifications affecting version pressure exerted on the speculative versioning cache. An operation of the data processing system is modified based on the one or more determined modifications, in response to a determination that one or more modifications to the operation of the data processing system are to be performed, to affect the version pressure exerted on the speculative versioning cache.

  19. A kinetics database and scripts for PHREEQC

    Science.gov (United States)

    Hu, B.; Zhang, Y.; Teng, Y.; Zhu, C.

    2017-12-01

    Kinetics of geochemical reactions has been increasingly used in numerical models to simulate coupled flow, mass transport, and chemical reactions. However, the kinetic data are scattered in the literature, and assembling a kinetic dataset for a modeling project is an intimidating task for most. In order to facilitate the application of kinetics in geochemical modeling, we assembled kinetics parameters into a database for the geochemical simulation program PHREEQC (version 3.0). Kinetic data were collected from the literature. Our database includes kinetic data for over 70 minerals. The rate equations are also programmed into scripts in the Basic language. Using the new kinetic database, we simulated the reaction path of albite dissolution using various rate equations from the literature. The simulations with three different rate equations gave different reaction paths at different time scales. Another application involves a coupled reactive-transport model simulating the advancement of an acid plume at an acid mine drainage site associated with the Bear Creek Uranium tailings pond. Geochemical reactions including calcite, gypsum, and illite were simulated with PHREEQC using the new kinetic database. The simulation results successfully demonstrated the utility of the new kinetic database.
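
    As an illustration of the kind of rate-law calculation such a database supports, here is a hedged Python sketch that integrates a generic transition-state-theory-style rate law, dC/dt = k(1 - C/C_eq), with forward Euler. The constants are illustrative, and this is not PHREEQC's actual RATES/Basic syntax.

```python
def dissolve(c0, c_eq, k, dt, steps):
    """Integrate dC/dt = k * (1 - C/C_eq) with forward Euler.
    C (mol/L) rises from c0 toward the equilibrium value c_eq;
    the rate shuts off as saturation is approached."""
    c = c0
    history = [c]
    for _ in range(steps):
        c += k * (1.0 - c / c_eq) * dt
        history.append(c)
    return history

# Hypothetical values: far-from-equilibrium start, c_eq = 1e-3 mol/L
path = dissolve(c0=0.0, c_eq=1.0e-3, k=1.0e-6, dt=100.0, steps=50)
```

    Real mineral rate laws add reactive surface area, pH and catalytic/inhibitory terms, which is exactly the per-mineral detail the database collects from the literature.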

  20. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1996-04-15

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants to assist manufacturers and those using alternative refrigerants in making comparisons and determining differences. The underlying purpose is to accelerate the phase-out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates. Citations in this report are divided into the following topics: thermophysical properties; materials compatibility; lubricants and tribology; application data; safety; test and analysis methods; impacts; regulatory actions; substitute refrigerants; identification; absorption and adsorption; research programs; and miscellaneous documents. Information is also presented on ordering instructions for the computerized version.

  1. The Design of Lexical Database for Indonesian Language

    Science.gov (United States)

    Gunawan, D.; Amalia, A.

    2017-03-01

    Kamus Besar Bahasa Indonesia (KBBI), an official dictionary for the Indonesian language, provides lists of words with their meanings. The online version can be accessed via the Internet. Another online dictionary is Kateglo. KBBI online and Kateglo only provide an interface for humans; a machine cannot easily retrieve data from the dictionary without advanced techniques. However, lexical information about words is required in research and application development related to natural language processing, text mining, information retrieval or sentiment analysis. To address this requirement, we need to build a lexical database which provides well-defined, structured information about words. A well-known lexical database is WordNet, which provides the relations among words in English. This paper proposes the design of a lexical database for the Indonesian language based on the combination of the KBBI 4th edition, Kateglo and the WordNet structure. Knowledge representation utilizing semantic networks depicts the relations among words and provides the new structure of the lexical database for the Indonesian language. The result of this design can be used as the foundation to build the lexical database for the Indonesian language.
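
    A semantic network of typed word relations, as proposed above, can be sketched as a labeled-edge graph. The following minimal Python sketch is hypothetical: the class name and the sample Indonesian entries are illustrative, not drawn from KBBI or Kateglo.

```python
from collections import defaultdict

class LexicalDB:
    """Minimal semantic-network sketch: words are nodes, typed relations
    (e.g. 'synonym', 'hypernym') are labeled directed edges."""

    def __init__(self):
        # (word, relation) -> set of related words
        self.relations = defaultdict(set)

    def add(self, word, relation, other):
        self.relations[(word, relation)].add(other)
        if relation == "synonym":  # synonymy is symmetric
            self.relations[(other, relation)].add(word)

    def related(self, word, relation):
        return sorted(self.relations[(word, relation)])

db = LexicalDB()
db.add("kucing", "hypernym", "hewan")  # "cat" is-a "animal"
db.add("anjing", "hypernym", "hewan")  # "dog" is-a "animal"
db.add("indah", "synonym", "cantik")   # "beautiful" <-> "pretty"
```

    A WordNet-style database adds synsets, glosses and more relation types, but the queryable graph of typed edges is the core structure a machine-readable dictionary needs.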

  2. ExplorEnz: a MySQL database of the IUBMB enzyme nomenclature.

    Science.gov (United States)

    McDonald, Andrew G; Boyce, Sinéad; Moss, Gerard P; Dixon, Henry B F; Tipton, Keith F

    2007-07-27

    We describe the database ExplorEnz, which is the primary repository for EC numbers and enzyme data that are being curated on behalf of the IUBMB. The enzyme nomenclature is incorporated into many other resources, including the ExPASy-ENZYME, BRENDA and KEGG bioinformatics databases. The data, which are stored in a MySQL database, preserve the formatting of chemical and enzyme names. A simple, easy to use, web-based query interface is provided, along with an advanced search engine for more complex queries. The database is publicly available at http://www.enzyme-database.org. The data are available for download as SQL and XML files via FTP. ExplorEnz has powerful and flexible search capabilities and provides the scientific community with the most up-to-date version of the IUBMB Enzyme List.
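
    The kind of EC-number query such a relational schema supports can be sketched with SQLite standing in for MySQL, so the example is self-contained. The table layout and rows here are hypothetical, not ExplorEnz's actual schema.

```python
import sqlite3

# In-memory database with a minimal enzyme table
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE enzyme (
        ec_number     TEXT PRIMARY KEY,
        accepted_name TEXT NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO enzyme (ec_number, accepted_name) VALUES (?, ?)",
    [("1.1.1.1", "alcohol dehydrogenase"),
     ("2.7.1.1", "hexokinase"),
     ("3.2.1.1", "alpha-amylase")],
)

# Retrieve all hydrolases (EC class 3) by prefix match on the EC number
rows = conn.execute(
    "SELECT ec_number, accepted_name FROM enzyme "
    "WHERE ec_number LIKE '3.%' ORDER BY ec_number"
).fetchall()
```

    Parameterized inserts and a text primary key keyed on the EC number mirror how a nomenclature repository can be queried both by exact identifier and by class prefix.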

  3. Archive and Database as Metaphor: Theorizing the Historical Record

    Science.gov (United States)

    Manoff, Marlene

    2010-01-01

    Digital media increase the visibility and presence of the past while also reshaping our sense of history. We have extraordinary access to digital versions of books, journals, film, television, music, art and popular culture from earlier eras. New theoretical formulations of database and archive provide ways to think creatively about these changes…

  4. Neuraxial blockade for external cephalic version: a systematic review.

    Science.gov (United States)

    Sultan, P; Carvalho, B

    2011-10-01

    The desire to decrease the number of cesarean deliveries has renewed interest in external cephalic version. The rationale for using neuraxial blockade to facilitate external cephalic version is to provide abdominal muscular relaxation and reduce patient discomfort during the procedure, so permitting successful repositioning of the fetus to a cephalic presentation. This review systematically examined the current evidence to determine the safety and efficacy of neuraxial anesthesia or analgesia when used for external cephalic version. A systematic literature review of studies that examined success rates of external cephalic version with neuraxial anesthesia was performed. Published articles written in English between 1945 and 2010 were identified using the Medline, Cochrane, EMBASE and Web of Science databases. Six randomized controlled studies were identified. Neuraxial blockade significantly improved the success rate in four of these six studies. A further six non-randomized studies were identified, of which four studies with control groups found that neuraxial blockade increased the success rate of external cephalic version. Despite over 850 patients being included in the 12 studies reviewed, placental abruption was reported in only one patient with a neuraxial block, compared with two in the control groups. The incidence of non-reassuring fetal heart rate requiring cesarean delivery in the anesthesia groups was 0.44% (95% CI 0.15-1.32). Neuraxial blockade improved the likelihood of success during external cephalic version, although the dosing regimen that provides optimal conditions for successful version is unclear. Anesthetic rather than analgesic doses of local anesthetics may improve success. The findings suggest that neuraxial blockade does not compromise maternal or fetal safety during external cephalic version. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.

  5. Geometric database maintenance using CCTV cameras and overlay graphics

    Science.gov (United States)

    Oxenberg, Sheldon C.; Landell, B. Patrick; Kan, Edwin

    1988-01-01

    An interactive graphics system using closed circuit television (CCTV) cameras for remote verification and maintenance of a geometric world model database has been demonstrated in GE's telerobotics testbed. The database provides geometric models and locations of objects viewed by CCTV cameras and manipulated by telerobots. To update the database, an operator uses the interactive graphics system to superimpose a wireframe line drawing of an object with known dimensions on a live video scene containing that object. The methodology used is multipoint positioning to easily superimpose a wireframe graphic on the CCTV image of an object in the work scene. An enhanced version of GE's interactive graphics system will provide the object designation function for the operator control station of the Jet Propulsion Laboratory's telerobot demonstration system.

  6. Accelerating Smith-Waterman Algorithm for Biological Database Search on CUDA-Compatible GPUs

    Science.gov (United States)

    Munekawa, Yuma; Ino, Fumihiko; Hagihara, Kenichi

    This paper presents a fast method capable of accelerating the Smith-Waterman algorithm for biological database search on a cluster of graphics processing units (GPUs). Our method is implemented using compute unified device architecture (CUDA), which is available on the nVIDIA GPU. As compared with previous methods, our method has four major contributions. (1) The method efficiently uses on-chip shared memory to reduce the data amount being transferred between off-chip video memory and processing elements in the GPU. (2) It also reduces the number of data fetches by applying a data reuse technique to query and database sequences. (3) A pipelined method is also implemented to overlap GPU execution with database access. (4) Finally, a master/worker paradigm is employed to accelerate hundreds of database searches on a cluster system. In experiments, the peak performance on a GeForce GTX 280 card reaches 8.32 giga cell updates per second (GCUPS). We also find that our method reduces the amount of data fetches to 1/140, achieving approximately three times higher performance than a previous CUDA-based method. Our 32-node cluster version is approximately 28 times faster than a single GPU version. Furthermore, the effective performance reaches 75.6 giga instructions per second (GIPS) using 32 GeForce 8800 GTX cards.
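
    For reference, the dynamic-programming recurrence that the GPU kernels accelerate can be sketched as a plain CPU implementation in Python. The scoring parameters are illustrative; this is a generic Smith-Waterman score computation, not the paper's CUDA code.

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
    """Best local-alignment score between sequences a and b with a
    linear gap penalty. Keeps only two DP rows, so memory is O(len(b))."""
    cols = len(b) + 1
    prev = [0] * cols
    best = 0
    for i in range(1, len(a) + 1):
        curr = [0] * cols
        for j in range(1, cols):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            curr[j] = max(0,                   # local alignment: restart at 0
                          prev[j - 1] + sub,   # diagonal: match/mismatch
                          prev[j] + gap,       # up: gap in b
                          curr[j - 1] + gap)   # left: gap in a
            best = max(best, curr[j])
        prev = curr
    return best

score = smith_waterman_score("GATTACA", "TTAC")  # 8: "TTAC" matches exactly
```

    The GPU method in the paper parallelizes the anti-diagonals of this matrix and tiles the sequences through shared memory; the recurrence itself is unchanged.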

  7. VizieR Online Data Catalog: SKY2000 Master Catalog, Version 5 (Myers+ 2006)

    Science.gov (United States)

    Myers, J. R.; Sande, C. B.; Miller, A. C.; Warren, W. H., Jr.; Tracewell, D. A.

    2015-02-01

    The SKYMAP Star Catalog System consists of a Master Catalog stellar database and a collection of utility software designed to create and maintain the database and to generate derivative mission star catalogs (run catalogs). It contains an extensive compilation of information on almost 300,000 stars brighter than 8.0 mag. The original SKYMAP Master Catalog was generated in the early 1970s. Incremental updates and corrections were made over the following years, but the first complete revision of the source data occurred with Version 4.0. This revision also produced a unique, consolidated source of astrometric information which can be used by the astronomical community. The derived quantities were removed, and wideband and photometric data in the R (red) and I (infrared) systems were added. Version 4 of the SKY2000 Master Catalog was completed in April 2002; it marks the global replacement of the variability identifier and variability data fields. More details can be found in the description file sky2kv4.pdf. The SKY2000 Version 5 Revision 4 Master Catalog differs from Revision 3 in that MK and HD spectral types have been added from the Catalogue of Stellar Spectral Classifications (B. A. Skiff of Lowell Observatory, 2005), which has been assigned source code 50 in this process. 9622 entries now have MK types from this source, while 3976 entries have HD types from this source. SKY2000 V5 R4 also differs globally from preceding MC versions in that the Galactic coordinate computations performed by UPDATE have been increased in accuracy, so that differences from the same quantities from other sources are now typically in the last decimal places carried in the MC. This version supersedes the previous versions 1(V/95), 2(V/102), 3(V/105) and 4(V/109). (6 data files).

  8. 3MdB: the Mexican Million Models database

    Science.gov (United States)

    Morisset, C.; Delgado-Inglada, G.

    2014-10-01

The 3MdB is an original effort to construct a large multipurpose database of photoionization models. It is a more modern version of a previous attempt based on Cloudy3D and IDL tools, and is accessed through MySQL requests. The models are obtained using the well-known and widely used Cloudy photoionization code (Ferland et al., 2013). The database is designed to host grids of models, with different references to identify each project and to facilitate the extraction of the desired data. We present here a description of the way the database is managed and some of the projects that use 3MdB. Anybody can request that a grid be run and stored in 3MdB, increasing the visibility of the grid and its potential side applications.
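A sketch of the kind of SQL request used to extract one project's grid from such a database. The table and column names (`models`, `ref`, `logU`, `O3_5007`) are hypothetical, not 3MdB's real schema, and sqlite3 stands in here for a MySQL connection:

```python
import sqlite3

# In-memory stand-in for the MySQL server hosting the model grids.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE models (ref TEXT, logU REAL, O3_5007 REAL)")
conn.executemany("INSERT INTO models VALUES (?, ?, ?)",
                 [("HII_grid", -3.0, 1.2), ("HII_grid", -2.0, 4.5),
                  ("PNe_grid", -2.5, 7.1)])

# Extract one project's models, filtered on a predicted line intensity;
# the `ref` column identifies the project, as described in the abstract.
rows = conn.execute(
    "SELECT logU, O3_5007 FROM models "
    "WHERE ref = ? AND O3_5007 > ? ORDER BY logU",
    ("HII_grid", 2.0)).fetchall()
print(rows)  # [(-2.0, 4.5)]
```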

  9. The IVTANTHERMO-Online database for thermodynamic properties of individual substances with web interface

    Science.gov (United States)

    Belov, G. V.; Dyachkov, S. A.; Levashov, P. R.; Lomonosov, I. V.; Minakov, D. V.; Morozov, I. V.; Sineva, M. A.; Smirnov, V. N.

    2018-01-01

The database structure, main features and user interface of the IVTANTHERMO-Online system are reviewed. This system continues the series of IVTANTHERMO packages developed at JIHT RAS. It includes the database of thermodynamic properties of individual substances and related software for the analysis of experimental results, data fitting, and the calculation and estimation of thermodynamic functions and thermochemical quantities. In contrast to the previous IVTANTHERMO versions, it has a new extensible database design, a client-server architecture, and a user-friendly web interface with a number of new features for online and offline data processing.

  10. YPED: an integrated bioinformatics suite and database for mass spectrometry-based proteomics research.

    Science.gov (United States)

    Colangelo, Christopher M; Shifman, Mark; Cheung, Kei-Hoi; Stone, Kathryn L; Carriero, Nicholas J; Gulcicek, Erol E; Lam, TuKiet T; Wu, Terence; Bjornson, Robert D; Bruce, Can; Nairn, Angus C; Rinehart, Jesse; Miller, Perry L; Williams, Kenneth R

    2015-02-01

We report a significantly enhanced bioinformatics suite and database for proteomics research called the Yale Protein Expression Database (YPED) that is used by investigators at more than 300 institutions worldwide. YPED meets the data management, archival, and analysis needs of high-throughput mass spectrometry-based proteomics research, ranging from a single laboratory or a group of laboratories within and beyond an institution to the entire proteomics community. The current version is a significant improvement over the first version in that it contains new modules for liquid chromatography-tandem mass spectrometry (LC-MS/MS) database search results, label and label-free quantitative proteomic analysis, and several scoring outputs for phosphopeptide site localization. In addition, we have added both peptide and protein comparative analysis tools to enable pairwise analysis of distinct peptides/proteins in each sample and of overlapping peptides/proteins between all samples in multiple datasets. We have also implemented a targeted proteomics module for automated multiple reaction monitoring (MRM)/selected reaction monitoring (SRM) assay development. We have linked YPED's database search results and both label-based and label-free fold-change analysis to the Skyline Panorama repository for online spectra visualization. In addition, we have built enhanced functionality to curate peptide identifications into an MS/MS peptide spectral library for all of our protein database search identification results. Copyright © 2015 The Authors. Production and hosting by Elsevier Ltd. All rights reserved.

  11. HitPredict version 4: comprehensive reliability scoring of physical protein-protein interactions from more than 100 species.

    Science.gov (United States)

    López, Yosvany; Nakai, Kenta; Patil, Ashwini

    2015-01-01

    HitPredict is a consolidated resource of experimentally identified, physical protein-protein interactions with confidence scores to indicate their reliability. The study of genes and their inter-relationships using methods such as network and pathway analysis requires high quality protein-protein interaction information. Extracting reliable interactions from most of the existing databases is challenging because they either contain only a subset of the available interactions, or a mixture of physical, genetic and predicted interactions. Automated integration of interactions is further complicated by varying levels of accuracy of database content and lack of adherence to standard formats. To address these issues, the latest version of HitPredict provides a manually curated dataset of 398 696 physical associations between 70 808 proteins from 105 species. Manual confirmation was used to resolve all issues encountered during data integration. For improved reliability assessment, this version combines a new score derived from the experimental information of the interactions with the original score based on the features of the interacting proteins. The combined interaction score performs better than either of the individual scores in HitPredict as well as the reliability score of another similar database. HitPredict provides a web interface to search proteins and visualize their interactions, and the data can be downloaded for offline analysis. Data usability has been enhanced by mapping protein identifiers across multiple reference databases. Thus, the latest version of HitPredict provides a significantly larger, more reliable and usable dataset of protein-protein interactions from several species for the study of gene groups. Database URL: http://hintdb.hgc.jp/htp. © The Author(s) 2015. Published by Oxford University Press.
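The abstract does not give HitPredict's actual combination formula. As an illustration only, one simple way to combine an annotation-based score with an experimental-evidence score (both in [0, 1]) is a geometric mean, which requires support from both sources before an interaction scores highly:

```python
import math

def combined_score(annotation_score: float, experimental_score: float) -> float:
    """Illustrative combination of two per-interaction reliability scores
    in [0, 1]. This is NOT HitPredict's published formula; a geometric
    mean is just one simple scheme that penalizes an interaction when
    either line of evidence is weak."""
    return math.sqrt(annotation_score * experimental_score)

# Strong annotation evidence but weak experimental support yields a
# middling combined score.
print(round(combined_score(0.9, 0.4), 3))  # 0.6
```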

  12. The MPI facial expression database--a validated database of emotional and conversational facial expressions.

    Directory of Open Access Journals (Sweden)

    Kathrin Kaulard

Full Text Available The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on every-day scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions.

  13. User's Manual for LEWICE Version 3.2

    Science.gov (United States)

    Wright, William

    2008-01-01

    A research project is underway at NASA Glenn to produce a computer code which can accurately predict ice growth under a wide range of meteorological conditions for any aircraft surface. This report will present a description of the code inputs and outputs from version 3.2 of this software, which is called LEWICE. This version differs from release 2.0 due to the addition of advanced thermal analysis capabilities for de-icing and anti-icing applications using electrothermal heaters or bleed air applications, the addition of automated Navier-Stokes analysis, an empirical model for supercooled large droplets (SLD) and a pneumatic boot option. An extensive effort was also undertaken to compare the results against the database of electrothermal results which have been generated in the NASA Glenn Icing Research Tunnel (IRT) as was performed for the validation effort for version 2.0. This report will primarily describe the features of the software related to the use of the program. Appendix A has been included to list some of the inner workings of the software or the physical models used. This information is also available in the form of several unpublished documents internal to NASA. This report is intended as a replacement for all previous user manuals of LEWICE. In addition to describing the changes and improvements made for this version, information from previous manuals may be duplicated so that the user will not need to consult previous manuals to use this software.

  14. TOPDOM: database of conservatively located domains and motifs in proteins.

    Science.gov (United States)

    Varga, Julia; Dobson, László; Tusnády, Gábor E

    2016-09-01

The TOPDOM database, originally created as a collection of domains and motifs located consistently on the same side of the membranes in α-helical transmembrane proteins, has been updated and extended by taking into consideration consistently localized domains and motifs in globular proteins, too. By taking advantage of the recently developed CCTOP algorithm to determine the type of a protein and predict topology in the case of transmembrane proteins, and by applying a thorough search for domains and motifs as well as utilizing the most up-to-date versions of all source databases, we managed to achieve a 6-fold increase in the size of the whole database and a 2-fold increase in the number of transmembrane proteins. The TOPDOM database is available at http://topdom.enzim.hu. The webpage utilizes the common Apache, PHP5 and MySQL software to provide the user interface for accessing and searching the database. The database itself is generated on a high-performance computer. Contact: tusnady.gabor@ttk.mta.hu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  15. Human health risk assessment database, "the NHSRC toxicity value database": supporting the risk assessment process at US EPA's National Homeland Security Research Center.

    Science.gov (United States)

    Moudgal, Chandrika J; Garrahan, Kevin; Brady-Roberts, Eletha; Gavrelis, Naida; Arbogast, Michelle; Dun, Sarah

    2008-11-15

The toxicity value database of the United States Environmental Protection Agency's (EPA) National Homeland Security Research Center has been in development since 2004. The toxicity value database includes a compilation of agent property, toxicity, dose-response, and health effects data for 96 agents: 84 chemical and radiological agents and 12 biotoxins. The database is populated with multiple toxicity benchmark values and agent property information from secondary sources, with web links to the secondary sources, where available. A selected set of primary literature citations and associated dose-response data are also included. The toxicity value database offers a powerful means to quickly and efficiently gather pertinent toxicity and dose-response data for a number of agents that are of concern to the nation's security. This database, in conjunction with other tools, will play an important role in understanding human health risks, and will provide a means for risk assessors and managers to make quick and informed decisions on the potential health risks and determine appropriate responses (e.g., cleanup) to agent release. A final, stand-alone MS Access working version of the toxicity value database was completed in November 2007.

  16. Human health risk assessment database, 'the NHSRC toxicity value database': Supporting the risk assessment process at US EPA's National Homeland Security Research Center

    International Nuclear Information System (INIS)

    Moudgal, Chandrika J.; Garrahan, Kevin; Brady-Roberts, Eletha; Gavrelis, Naida; Arbogast, Michelle; Dun, Sarah

    2008-01-01

The toxicity value database of the United States Environmental Protection Agency's (EPA) National Homeland Security Research Center has been in development since 2004. The toxicity value database includes a compilation of agent property, toxicity, dose-response, and health effects data for 96 agents: 84 chemical and radiological agents and 12 biotoxins. The database is populated with multiple toxicity benchmark values and agent property information from secondary sources, with web links to the secondary sources, where available. A selected set of primary literature citations and associated dose-response data are also included. The toxicity value database offers a powerful means to quickly and efficiently gather pertinent toxicity and dose-response data for a number of agents that are of concern to the nation's security. This database, in conjunction with other tools, will play an important role in understanding human health risks, and will provide a means for risk assessors and managers to make quick and informed decisions on the potential health risks and determine appropriate responses (e.g., cleanup) to agent release. A final, stand-alone MS Access working version of the toxicity value database was completed in November 2007.

  17. Adding glycaemic index and glycaemic load functionality to DietPLUS, a Malaysian food composition database and diet intake calculator.

    Science.gov (United States)

    Shyam, Sangeetha; Wai, Tony Ng Kock; Arshad, Fatimah

    2012-01-01

This paper outlines the methodology used to add glycaemic index (GI) and glycaemic load (GL) functionality to DietPLUS, a Microsoft Excel-based Malaysian food composition database and diet intake calculator. Locally determined GI values and published international GI databases were used as the sources of GI values. Previously published methodology for GI value assignment was modified to add GI and GL calculators to the database. Two popular local low-GI foods were added to the DietPLUS database, bringing the total number of foods in the database to 838. Overall, in relation to the 539 major carbohydrate foods in the Malaysian Food Composition Database, 243 (45%) food items had local Malaysian values or were directly matched to the international GI database, and another 180 (33%) of the foods were linked to closely related foods in the GI databases used. The mean ± SD dietary GI and GL of the dietary intake of 63 women with previous gestational diabetes mellitus, calculated using DietPLUS version 3, were 62 ± 6 and 142 ± 45, respectively. These values were comparable to those reported in other local studies. DietPLUS version 3, a simple Microsoft Excel-based programme, aids calculation of dietary GI and GL for Malaysian diets based on food records.
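The underlying arithmetic is standard: a serving's GL is GI × available carbohydrate (g) / 100, and a diet's overall GI is the carbohydrate-weighted mean of its foods' GI values. A minimal sketch (the food values below are illustrative, not taken from DietPLUS):

```python
def glycaemic_load(gi: float, carb_g: float) -> float:
    """GL of one food serving: GI x available carbohydrate (g) / 100."""
    return gi * carb_g / 100.0

def dietary_gi(items) -> float:
    """Carbohydrate-weighted mean GI over a day's food records.
    `items` is a list of (GI, available carbohydrate in g) pairs."""
    total_carb = sum(c for _, c in items)
    return sum(g * c for g, c in items) / total_carb

def dietary_gl(items) -> float:
    """Total GL over a day's food records."""
    return sum(glycaemic_load(g, c) for g, c in items)

# Two illustrative servings: (GI, available carbohydrate in g).
day = [(55, 50), (70, 30)]
print(round(dietary_gi(day)), round(dietary_gl(day), 1))  # 61 48.5
```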

  18. NIST/Sandia/ICDD Electron Diffraction Database: A Database for Phase Identification by Electron Diffraction.

    Science.gov (United States)

    Carr, M J; Chambers, W F; Melgaard, D; Himes, V L; Stalick, J K; Mighell, A D

    1989-01-01

A new database containing crystallographic and chemical information designed especially for application to electron diffraction search/match and related problems has been developed. The new database was derived from two well-established x-ray diffraction databases, the JCPDS Powder Diffraction File and NBS CRYSTAL DATA, and incorporates 2 years of experience with an earlier version. It contains 71,142 entries, with space group and unit cell data for 59,612 of those. Unit cell and space group information were used, where available, to calculate patterns consisting of all allowed reflections with d-spacings greater than 0.8 A for ~59,000 of the entries. Calculated patterns are used in the database in preference to experimental x-ray data when both are available, since experimental x-ray data sometimes omits high d-spacing data which falls at low diffraction angles. Intensity data are not given when calculated spacings are used. A search scheme using chemistry and r-spacing (reciprocal d-spacing) has been developed. Other potentially searchable data in this new database include space group, Pearson symbol, unit cell edge lengths, reduced cell edge length, and reduced cell volume. Compound and/or mineral names, formulas, and journal references are included in the output, as well as pointers to corresponding entries in NBS CRYSTAL DATA and the Powder Diffraction File where more complete information may be obtained. Atom positions are not given. Rudimentary search software has been written to implement a chemistry and r-spacing bit map search. With typical data, a full search through ~71,000 compounds takes 10-20 seconds on a PDP 11/23-RL02 system.
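A chemistry-and-r-spacing bit-map search of this kind can be sketched as follows. The element table, bin count, and sample d-spacings below are illustrative, not the actual NIST/Sandia/ICDD implementation: each entry gets one bit per chemical element present and one bit per binned r-spacing, and a query matches when every query bit is set in the entry's map.

```python
# Illustrative element index and r-spacing binning (not the real scheme).
ELEMENTS = {"Na": 0, "Cl": 1, "Si": 2, "O": 3}
R_BINS = 64     # bins covering r = 1/d up to 1.25 (i.e. d >= 0.8 A)
R_MAX = 1.25

def bitmap(elements, d_spacings):
    """Pack an entry's chemistry and binned r-spacings into one integer."""
    bits = 0
    for el in elements:
        bits |= 1 << ELEMENTS[el]
    for d in d_spacings:
        r = 1.0 / d  # r-spacing is the reciprocal d-spacing
        bits |= 1 << (len(ELEMENTS) + min(int(r / R_MAX * R_BINS), R_BINS - 1))
    return bits

# Toy database of two entries with illustrative strong lines (in Angstrom).
database = {
    "halite": bitmap({"Na", "Cl"}, [3.26, 2.82, 1.99]),
    "quartz": bitmap({"Si", "O"}, [4.26, 3.34, 1.82]),
}

# A query matches an entry when all query bits are set in the entry's map.
query = bitmap({"Na", "Cl"}, [2.82])
hits = [name for name, b in database.items() if query & b == query]
print(hits)  # ['halite']
```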

  19. GSIMF: a web service based software and database management system for the next generation grids

    International Nuclear Information System (INIS)

    Wang, N; Ananthan, B; Gieraltowski, G; May, E; Vaniachine, A

    2008-01-01

To process the vast amount of data from high energy physics experiments, physicists rely on Computational and Data Grids; yet, the distribution, installation, and updating of a myriad of different versions of different programs over the Grid environment is complicated, time-consuming, and error-prone. Our Grid Software Installation Management Framework (GSIMF) is a set of Grid Services that has been developed for managing versioned and interdependent software applications and file-based databases over the Grid infrastructure. This set of Grid services provides a mechanism to install software packages on distributed Grid computing elements, thus automating the software and database installation management process on behalf of the users. This enables users to remotely install programs and tap into the computing power provided by Grids.

  20. The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions

    Science.gov (United States)

    Kaulard, Kathrin; Cunningham, Douglas W.; Bülthoff, Heinrich H.; Wallraven, Christian

    2012-01-01

The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on every-day scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions.

  1. Large Science Databases – Are Cloud Services Ready for Them?

    Directory of Open Access Journals (Sweden)

    Ani Thakar

    2011-01-01

Full Text Available We report on attempts to put an astronomical database – the Sloan Digital Sky Survey science archive – in the cloud. We find that it is currently very frustrating, if not impossible, to migrate a complex SQL Server database into current cloud service offerings such as Amazon (EC2) and Microsoft (SQL Azure). It is certainly impossible to migrate a large database in excess of a TB, but even with (much) smaller databases, the limitations of cloud services make it very difficult to migrate the data to the cloud without making changes to the schema and settings that would degrade performance and/or make the data unusable. Preliminary performance comparisons show a large performance discrepancy with the Amazon cloud version of the SDSS database. These difficulties suggest that much work and coordination needs to occur between cloud service providers and their potential clients before science databases – not just large ones but even smaller databases that make extensive use of advanced database features for performance and usability – can successfully and effectively be deployed in the cloud. We describe a powerful new computational instrument that we are developing in the interim – the Data-Scope – that will enable fast and efficient analysis of the largest (petabyte-scale) scientific datasets.

  2. Prediction of Success in External Cephalic Version under Tocolysis: Still a Challenge.

    Science.gov (United States)

    Vaz de Macedo, Carolina; Clode, Nuno; Mendes da Graça, Luís

    2015-01-01

External cephalic version is a procedure of fetal rotation to a cephalic presentation through manoeuvres applied to the maternal abdomen. There are several prognostic factors described in the literature for external cephalic version success, and prediction scores have been proposed, but their true implication in clinical practice is controversial. We aim to identify possible factors that could contribute to the success of an external cephalic version attempt in our population. We retrospectively examined 207 consecutive external cephalic version attempts under tocolysis conducted between January 1997 and July 2012. We consulted the department's database for the following variables: race, age, parity, maternal body mass index, gestational age, estimated fetal weight, breech category, placental location and amniotic fluid index. We performed descriptive and analytical statistics for each variable and binary logistic regression. External cephalic version was successful in 46.9% of cases (97/207). None of the included variables was associated with the outcome of external cephalic version attempts after adjustment for confounding factors. We present a success rate similar to what has been previously described in the literature. However, in contrast to previous authors, we could not associate any of the analysed variables with the success of the external cephalic version attempt. We believe this discrepancy is partly related to the type of statistical analysis performed. Even though numerous prognostic factors have been identified for success in external cephalic version, care must be taken when counselling and selecting patients for this procedure. The data obtained suggest that external cephalic version should continue to be offered to all eligible patients regardless of prognostic factors for success.

  3. Preliminary study for unified management of CANDU safety codes and construction of database system

    International Nuclear Information System (INIS)

    Min, Byung Joo; Kim, Hyoung Tae

    2003-03-01

There is a need to develop a Graphical User Interface (GUI) for the unified management of CANDU safety codes and to construct a database system for the validation of the safety codes; a preliminary study for both is done in the first stage of the present work. The input and output structures and data flow of CATHENA and PRESCON2 are investigated, and the interactions of the variables between CATHENA and PRESCON2 are identified. Furthermore, PC versions of the CATHENA and PRESCON2 codes are developed for the interaction of these codes with the GUI. The PC versions are assessed by comparing their calculation results with those obtained on an HP workstation or from the FSAR (Final Safety Analysis Report). A preliminary study on the GUI for the safety codes in the unified management system is done, and a sample of GUI programming is demonstrated. Visual C++ is selected as the programming language for the development of the GUI system. Data for the Wolsong plants, the reactor core, and thermal-hydraulic experiments conducted both inside and outside the country are collected and classified following the structure of the database system, of which two types are considered for the final web-based database system. The preliminary GUI programming for the database system is demonstrated, which will be updated in future work.

  4. MBA acceptance test procedures, software Version 1.4

    International Nuclear Information System (INIS)

    Mullaney, J.E.; Russell, V.K.

    1994-01-01

The Mass Balance Program (MBA) is an adjunct to the Materials Accounting database system, Version 3.4. MBA was written to provide the personnel performing K-Basin encapsulation tasks with a conservative estimate of accumulated sludge during the processing of canisters into and out of the chute. The K Basins Materials Balance program received some minor improvements to give the operator better feedback on chute processing status. This ATP describes how the code was to be tested to verify its correctness.

  5. Soil and Terrain Database for Malawi (ver. 1.0) (SOTER_Malawi)

    NARCIS (Netherlands)

    Kempen, B.

    2014-01-01

    The Soil and Terrain database for Malawi (version 1.0), at scale 1:1 million, was compiled based on the soil map of Malawi at scale 1:250,000 (compiled by the Land Resources Evaluation Project) that was complemented with soil boundary information from the provisional soil map at scale 1:1 million.

  6. Web Exploration Tools for a Fast Federated Optical Survey Database

    Science.gov (United States)

    Humphreys, Roberta M.

    2000-01-01

especially for galaxies, using a new median centroider and integrated magnitudes for galaxies with an improved density-to-intensity calibration with "sky" background subtraction. In the original version of StarBase, the object classification fainter than 19.5-20.0 mag was an extrapolation of the networks trained on brighter objects. We have used a new catalog of galaxies at the NGP to train a neural network on objects fainter than 20th mag. This improved classification is used in the new version of StarBase. We have also added a FITS table option for the data returned from queries on the object catalog. The APS image database includes images in both colors, so we have added a tool for querying the image database in both colors simultaneously. The images can be displayed in parallel or blinked for comparison.

  7. Relational Database for the Geology of the Northern Rocky Mountains - Idaho, Montana, and Washington

    Science.gov (United States)

    Causey, J. Douglas; Zientek, Michael L.; Bookstrom, Arthur A.; Frost, Thomas P.; Evans, Karl V.; Wilson, Anna B.; Van Gosen, Bradley S.; Boleneus, David E.; Pitts, Rebecca A.

    2008-01-01

    A relational database was created to prepare and organize geologic map-unit and lithologic descriptions for input into a spatial database for the geology of the northern Rocky Mountains, a compilation of forty-three geologic maps for parts of Idaho, Montana, and Washington in U.S. Geological Survey Open File Report 2005-1235. Not all of the information was transferred to and incorporated in the spatial database due to physical file limitations. This report releases that part of the relational database that was completed for that earlier product. In addition to descriptive geologic information for the northern Rocky Mountains region, the relational database contains a substantial bibliography of geologic literature for the area. The relational database nrgeo.mdb (linked below) is available in Microsoft Access version 2000, a proprietary database program. The relational database contains data tables and other tables used to define terms, relationships between the data tables, and hierarchical relationships in the data; forms used to enter data; and queries used to extract data.

  8. The DFBS Spectroscopic Database and the Armenian Virtual Observatory

    Directory of Open Access Journals (Sweden)

    Areg M Mickaelian

    2009-05-01

Full Text Available The Digitized First Byurakan Survey (DFBS) is the digitized version of the famous Markarian Survey. It is the largest low-dispersion spectroscopic survey of the sky, covering 17,000 square degrees at galactic latitudes |b| > 15. DFBS provides images and extracted spectra for all objects present in the FBS plates. Programs were developed to compute the astrometric solution, extract spectra, and apply wavelength and photometric calibration to the objects. A DFBS database and catalog have been assembled containing data for nearly 20,000,000 objects. A classification scheme for the DFBS spectra is being developed. The Armenian Virtual Observatory is based on the DFBS database and other large-area surveys and catalogue data.

  9. Measurement properties of translated versions of neck-specific questionnaires: a systematic review.

    Science.gov (United States)

    Schellingerhout, Jasper M; Heymans, Martijn W; Verhagen, Arianne P; de Vet, Henrica C; Koes, Bart W; Terwee, Caroline B

    2011-06-06

Several disease-specific questionnaires to measure pain and disability in patients with neck pain have been translated. However, a simple translation of the original version does not guarantee similar measurement properties. The objective of this study is to critically appraise the quality of the translation process, cross-cultural validation and the measurement properties of translated versions of neck-specific questionnaires. Bibliographic databases were searched for articles concerning the translation or evaluation of the measurement properties of a translated version of a neck-specific questionnaire. The methodological quality of the selected studies and the results of the measurement properties were critically appraised and rated using the COSMIN checklist and criteria for measurement properties. The search strategy resulted in a total of 3641 unique hits, of which 27 articles, evaluating 6 different questionnaires in 15 different languages, were included in this study. Generally, the methodological quality of the translation process is poor, and none of the included studies performed a cross-cultural adaptation. A substantial amount of information regarding the measurement properties of translated versions of the different neck-specific questionnaires is lacking. Moreover, the evidence for the quality of measurement properties of the translated versions is mostly limited or assessed in studies of poor methodological quality. Until results from high-quality studies are available, we advise using the Catalan, Dutch, English, Iranian, Korean, Spanish and Turkish versions of the NDI, the Chinese version of the NPQ, and the Finnish, German and Italian versions of the NPDS. The Greek NDI needs cross-cultural validation, and there is no methodologically sound information for the Swedish NDI. For all other languages we advise translating the original version of the NDI.

  10. Validation of an electronic version of the Mini Asthma Quality of Life Questionnaire.

    Science.gov (United States)

    Olajos-Clow, J; Minard, J; Szpiro, K; Juniper, E F; Turcotte, S; Jiang, X; Jenkins, B; Lougheed, M D

    2010-05-01

The Mini Asthma Quality of Life Questionnaire (MiniAQLQ) is a validated disease-specific quality of life (QOL) paper (p) questionnaire. Electronic (e) versions enable inclusion of asthma QOL in electronic medical records and research databases. The objectives were to validate an e-version of the MiniAQLQ, compare the time required for completion of the e- and p-versions, and determine which version participants prefer. Adults with stable asthma were randomized to complete either the e- or p-MiniAQLQ, followed by a 2-h rest period before completing the other version. Agreement between versions was measured using the intraclass correlation coefficient (ICC) and Bland-Altman analysis. Two participants with incomplete p-MiniAQLQ responses were excluded. Forty participants (85% female; age 47.7 +/- 14.9 years; asthma duration 22.6 +/- 16.1 years; FEV1 87.1 +/- 21.6% predicted) completed both versions. ICCs for the symptom, activity limitation, emotional function and environmental stimuli domains were 0.94, 0.89, 0.90 and 0.91, respectively. A small but significant bias (Delta = 0.3; P = 0.004) was noted in the activity limitation domain. Completion time was significantly longer for the e-version (3.8 +/- 1.9 min versus 2.7 +/- 1.1 min). Most participants preferred the e-MiniAQLQ; 35% had no preference. This e-version of the MiniAQLQ is valid and was preferred by most participants despite taking slightly longer to complete. Generalizability may be limited in younger (12-17) and older (>65) adults.
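The agreement statistics reported in this record can be illustrated with a short sketch. The following computes a Bland-Altman bias and 95% limits of agreement for paired electronic/paper questionnaire scores; the function and all data values are hypothetical, not taken from the study.

```python
from statistics import mean, stdev

def bland_altman(e_scores, p_scores):
    """Bland-Altman agreement statistics for paired measurements.

    Returns the bias (mean of the e - p differences) and the 95%
    limits of agreement (bias +/- 1.96 * SD of the differences).
    """
    diffs = [e - p for e, p in zip(e_scores, p_scores)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired domain scores on a 7-point scale
e = [5.2, 6.1, 4.8, 5.9, 6.4]
p = [5.0, 6.0, 4.9, 5.6, 6.2]
bias, (lo, hi) = bland_altman(e, p)
print(round(bias, 2))  # → 0.14
```

An ICC near 1 combined with a bias close to zero (limits of agreement straddling zero) is the pattern of good agreement this kind of validation study looks for.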

  11. FY 1998 survey report. Examinational research on the construction of body function database; 1998 nendo chosa hokokusho. Shintai kino database no kochiku ni kansuru chosa kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

The body function database is intended to support companies in planning, designing and producing products and environments that are friendly to aged people, by supplying data on the body functions of the aged. As the survey method, group measurement was conducted for visual characteristics. For action characteristics, moving actions including posture changes were studied, the experimental plan was carried out, and items and methods for group measurement were finally proposed. The database structure was made public at the end of this fiscal year, after a pre-publication evaluation that followed a trial evaluation using a pilot database. In the study of action characteristics, a verification test was conducted on a small group, on the basis of which the measurement of action characteristics was finally proposed. For the body function database system, operational issues were identified and resolved through trial evaluation of the pilot database, rights issues relating to publication were settled, and management methods were prepared. An evaluation version was produced in anticipation of publication. (NEDO)

  12. HMDB 3.0--The Human Metabolome Database in 2013.

    Science.gov (United States)

    Wishart, David S; Jewison, Timothy; Guo, An Chi; Wilson, Michael; Knox, Craig; Liu, Yifeng; Djoumbou, Yannick; Mandal, Rupasri; Aziat, Farid; Dong, Edison; Bouatra, Souhaila; Sinelnikov, Igor; Arndt, David; Xia, Jianguo; Liu, Philip; Yallou, Faizath; Bjorndahl, Trent; Perez-Pineiro, Rolando; Eisner, Roman; Allen, Felicity; Neveu, Vanessa; Greiner, Russ; Scalbert, Augustin

    2013-01-01

The Human Metabolome Database (HMDB) (www.hmdb.ca) is a resource dedicated to providing scientists with the most current and comprehensive coverage of the human metabolome. Since its first release in 2007, the HMDB has been used to facilitate research for nearly 1000 published studies in metabolomics, clinical biochemistry and systems biology. The most recent release of HMDB (version 3.0) has been significantly expanded and enhanced over the 2009 release (version 2.0). In particular, the number of annotated metabolite entries has grown from 6500 to more than 40,000 (a 600% increase). This enormous expansion is a result of the inclusion of both 'detected' metabolites (those with measured concentrations or experimental confirmation of their existence) and 'expected' metabolites (those for which biochemical pathways are known or human intake/exposure is frequent but the compound has yet to be detected in the body). The latest release also has greatly increased the number of metabolites with biofluid or tissue concentration data, the number of compounds with reference spectra and the number of data fields per entry. In addition to this expansion in data quantity, new database visualization tools and new data content have been added or enhanced. These include better spectral viewing tools, more powerful chemical substructure searches, an improved chemical taxonomy and better, more interactive pathway maps. This article describes these enhancements to the HMDB, which was previously featured in the 2009 NAR Database Issue.

  13. JANIS 4: An Improved Version of the NEA Java-based Nuclear Data Information System

    International Nuclear Information System (INIS)

    Soppera, N.; Bossant, M.; Dupont, E.

    2014-01-01

JANIS is software developed to facilitate the visualization and manipulation of nuclear data, giving access to evaluated data libraries and to the EXFOR and CINDA databases. It is stand-alone Java software, downloadable from the web and distributed on DVD. Although it can be used offline, the system also makes use of an internet connection to access the NEA Data Bank database. It is now also offered as a full web application, requiring only a browser. The features added in the latest version of the software and this new web interface are described.

  14. Systems analysis programs for hands-on integrated reliability evaluations (SAPHIRE) version 5.0

    International Nuclear Information System (INIS)

    Russell, K.D.; Kvarfordt, K.J.; Skinner, N.L.; Wood, S.T.

    1994-07-01

The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. This volume is the reference manual for the Systems Analysis and Risk Assessment (SARA) System Version 5.0, a microcomputer-based system used to analyze the safety issues of a "family" [i.e., a power plant, a manufacturing facility, any facility on which a probabilistic risk assessment (PRA) might be performed]. The SARA database contains PRA data primarily for the dominant accident sequences of a family and descriptive information about the family, including event trees, fault trees, and system model diagrams. The number of facility databases that can be accessed is limited only by the amount of disk storage available. To simulate changes to family systems, SARA users change the failure rates of initiating and basic events and/or modify the structure of the cut sets that make up the event trees, fault trees, and systems. The user then evaluates the effects of these changes through the recalculation of the resultant accident sequence probabilities and importance measures. The results are displayed in tables and graphs that may be printed for reports. A preliminary version of the SARA program was completed in August 1985 and has undergone several updates in response to user suggestions and to maintain compatibility with the other SAPHIRE programs. Version 5.0 of SARA provides the same capability as earlier versions and adds the ability to process unlimited cut sets; display fire, flood, and seismic data; and perform more powerful cut set editing.
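The recalculation step described above, re-evaluating an accident sequence's point estimate after the user edits basic-event failure probabilities or cut sets, can be sketched as follows. This is not SAPHIRE/SARA code: the basic events and probabilities are invented, and the min-cut upper bound shown is simply one standard point-estimate formula.

```python
def cut_set_probability(cut_set, prob):
    """Probability of one minimal cut set: product of its basic-event probabilities."""
    p = 1.0
    for event in cut_set:
        p *= prob[event]
    return p

def sequence_frequency(cut_sets, prob):
    """Min-cut upper bound: 1 - prod(1 - P(cut_set_i)).

    Close to the rare-event sum of cut-set probabilities when the
    individual probabilities are small."""
    q = 1.0
    for cs in cut_sets:
        q *= 1.0 - cut_set_probability(cs, prob)
    return 1.0 - q

# Hypothetical basic-event probabilities and minimal cut sets
prob = {"PUMP-A": 1e-3, "PUMP-B": 1e-3, "DG-1": 5e-2}
cut_sets = [("PUMP-A", "PUMP-B"), ("DG-1",)]
base = sequence_frequency(cut_sets, prob)

# Sensitivity study: double the diesel-generator failure probability
prob["DG-1"] = 1e-1
print(sequence_frequency(cut_sets, prob) > base)  # prints True: frequency increases
```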

  15. JEnsembl: a version-aware Java API to Ensembl data systems.

    Science.gov (United States)

    Paterson, Trevor; Law, Andy

    2012-11-01

The Ensembl Project provides release-specific Perl APIs for efficient high-level programmatic access to data stored in various Ensembl database schema. Although Perl scripts are perfectly suited for processing large volumes of text-based data, Perl is not ideal for developing large-scale software applications nor for embedding in graphical interfaces. The provision of a novel Java API would facilitate type-safe, modular, object-orientated development of new Bioinformatics tools with which to access, analyse and visualize Ensembl data. The JEnsembl API implementation provides basic data retrieval and manipulation functionality from the Core, Compara and Variation databases for all species in Ensembl and EnsemblGenomes and is a platform for the development of a richer API to Ensembl datasources. The JEnsembl architecture uses a text-based configuration module to provide evolving, versioned mappings from database schema to code objects. A single installation of the JEnsembl API can therefore simultaneously and transparently connect to current and previous database instances (such as those in the public archive) thus facilitating better analysis repeatability and allowing 'through time' comparative analyses to be performed. Project development, released code libraries, Maven repository and documentation are hosted at SourceForge (http://jensembl.sourceforge.net).
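The versioned schema-to-code mapping idea can be sketched in miniature: a registry maps schema releases to accessor objects, and a client asks for the newest accessor compatible with the database instance it is talking to. The classes, column names and release numbers below are hypothetical illustrations of the pattern, not JEnsembl's actual API (which is Java and far richer).

```python
# Hypothetical adapters for two schema generations
class GeneAdapterV1:
    def gene_name_column(self):
        return "display_label"   # invented older schema column

class GeneAdapterV2:
    def gene_name_column(self):
        return "display_name"    # invented newer schema column

# Versioned configuration: schema release -> adapter class
REGISTRY = {54: GeneAdapterV1, 67: GeneAdapterV2}

def adapter_for(release):
    """Pick the newest adapter whose supported release is <= the requested one."""
    eligible = [r for r in REGISTRY if r <= release]
    if not eligible:
        raise ValueError(f"no adapter for release {release}")
    return REGISTRY[max(eligible)]()

print(adapter_for(60).gene_name_column())  # falls back to the release-54 adapter
```

Because the mapping is data rather than code, one installation can serve archived and current database instances side by side, which is what enables the "through time" analyses mentioned above.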

  16. A database for TMT interface control documents

    Science.gov (United States)

    Gillies, Kim; Roberts, Scott; Brighton, Allan; Rogers, John

    2016-08-01

    The TMT Software System consists of software components that interact with one another through a software infrastructure called TMT Common Software (CSW). CSW consists of software services and library code that is used by developers to create the subsystems and components that participate in the software system. CSW also defines the types of components that can be constructed and their roles. The use of common component types and shared middleware services allows standardized software interfaces for the components. A software system called the TMT Interface Database System was constructed to support the documentation of the interfaces for components based on CSW. The programmer describes a subsystem and each of its components using JSON-style text files. A command interface file describes each command a component can receive and any commands a component sends. The event interface files describe status, alarms, and events a component publishes and status and events subscribed to by a component. A web application was created to provide a user interface for the required features. Files are ingested into the software system's database. The user interface allows browsing subsystem interfaces, publishing versions of subsystem interfaces, and constructing and publishing interface control documents that consist of the intersection of two subsystem interfaces. All published subsystem interfaces and interface control documents are versioned for configuration control and follow the standard TMT change control processes. Subsystem interfaces and interface control documents can be visualized in the browser or exported as PDF files.
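The "intersection of two subsystem interfaces" step can be sketched concisely: an interface control document keeps only the commands one component sends that the other receives, and the events one publishes that the other subscribes to. The JSON field names and component names below are invented for illustration; the real TMT interface files use a richer schema.

```python
import json

# Hypothetical JSON-style interface descriptions for two components
tcs = json.loads("""{
  "subsystem": "TCS",
  "sends": ["setElevation", "setAzimuth"],
  "publishes": ["mountPosition"]
}""")
m1cs = json.loads("""{
  "subsystem": "M1CS",
  "receives": ["setElevation"],
  "subscribes": ["mountPosition", "domeState"]
}""")

def icd(a, b):
    """Interface control document as the intersection of two subsystem interfaces."""
    return {
        "commands": sorted(set(a.get("sends", [])) & set(b.get("receives", []))),
        "events": sorted(set(a.get("publishes", [])) & set(b.get("subscribes", []))),
    }

print(icd(tcs, m1cs))
```

Versioning the ingested interface files, as the entry describes, then makes each published ICD a reproducible snapshot of two subsystem interfaces at known versions.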

  17. ARACHNID: A prototype object-oriented database tool for distributed systems

    Science.gov (United States)

    Younger, Herbert; Oreilly, John; Frogner, Bjorn

    1994-01-01

    This paper discusses the results of a Phase 2 SBIR project sponsored by NASA and performed by MIMD Systems, Inc. A major objective of this project was to develop specific concepts for improved performance in accessing large databases. An object-oriented and distributed approach was used for the general design, while a geographical decomposition was used as a specific solution. The resulting software framework is called ARACHNID. The Faint Source Catalog developed by NASA was the initial database testbed. This is a database of many giga-bytes, where an order of magnitude improvement in query speed is being sought. This database contains faint infrared point sources obtained from telescope measurements of the sky. A geographical decomposition of this database is an attractive approach to dividing it into pieces. Each piece can then be searched on individual processors with only a weak data linkage between the processors being required. As a further demonstration of the concepts implemented in ARACHNID, a tourist information system is discussed. This version of ARACHNID is the commercial result of the project. It is a distributed, networked, database application where speed, maintenance, and reliability are important considerations. This paper focuses on the design concepts and technologies that form the basis for ARACHNID.
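A geographical decomposition like the one ARACHNID applies to the Faint Source Catalog can be sketched with a simple tile index: sources are bucketed by sky tile, and a box search scans only the tiles overlapping the query region, so each partition can be searched independently (e.g. on its own processor). The tile size, source names and coordinates below are illustrative only.

```python
from collections import defaultdict

TILE = 10.0  # tile size in degrees (illustrative choice)

def tile_of(ra, dec):
    """Map a sky position (degrees) to its tile coordinates."""
    return (int(ra // TILE), int(dec // TILE))

def build_index(sources):
    """Bucket (name, ra, dec) records by sky tile."""
    index = defaultdict(list)
    for name, ra, dec in sources:
        index[tile_of(ra, dec)].append((name, ra, dec))
    return index

def box_search(index, ra_min, ra_max, dec_min, dec_max):
    """Scan only the tiles that overlap the search box."""
    hits = []
    for tx in range(int(ra_min // TILE), int(ra_max // TILE) + 1):
        for ty in range(int(dec_min // TILE), int(dec_max // TILE) + 1):
            for name, ra, dec in index.get((tx, ty), []):
                if ra_min <= ra <= ra_max and dec_min <= dec <= dec_max:
                    hits.append(name)
    return hits

sources = [("src1", 12.3, 45.6), ("src2", 250.0, -30.0), ("src3", 14.0, 41.0)]
index = build_index(sources)
print(box_search(index, 10.0, 20.0, 40.0, 50.0))  # → ['src1', 'src3']
```

Only the tiles touched by the box are examined, which is the weak data linkage between partitions that the entry describes.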

  18. CUDASW++: optimizing Smith-Waterman sequence database searches for CUDA-enabled graphics processing units

    Directory of Open Access Journals (Sweden)

    Maskell Douglas L

    2009-05-01

Full Text Available Abstract Background The Smith-Waterman algorithm is one of the most widely used tools for searching biological sequence databases due to its high sensitivity. Unfortunately, the Smith-Waterman algorithm is computationally demanding, which is further compounded by the exponential growth of sequence databases. The recent emergence of many-core architectures, and their associated programming interfaces, provides an opportunity to accelerate sequence database searches using commonly available and inexpensive hardware. Findings Our CUDASW++ implementation (benchmarked on a single-GPU NVIDIA GeForce GTX 280 graphics card and a dual-GPU GeForce GTX 295 graphics card) provides a significant performance improvement compared to other publicly available implementations, such as SWPS3, CBESW, SW-CUDA, and NCBI-BLAST. CUDASW++ supports query sequences of length up to 59K. For query sequences ranging in length from 144 to 5,478 in Swiss-Prot release 56.6, the single-GPU version achieves an average performance of 9.509 GCUPS with a lowest performance of 9.039 GCUPS and a highest performance of 9.660 GCUPS, and the dual-GPU version achieves an average performance of 14.484 GCUPS with a lowest performance of 10.660 GCUPS and a highest performance of 16.087 GCUPS. Conclusion CUDASW++ is publicly available open-source software. It provides a significant performance improvement for Smith-Waterman-based protein sequence database searches by fully exploiting the compute capability of commonly used CUDA-enabled low-cost GPUs.
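For readers unfamiliar with the metric, a GCUPS figure counts billions of dynamic-programming cell updates per second. The sketch below shows a minimal CPU-side Smith-Waterman scoring recurrence with a linear gap penalty, plus the GCUPS formula; it illustrates the algorithm CUDASW++ accelerates, not its CUDA implementation, and the scoring parameters are arbitrary.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Minimal Smith-Waterman local-alignment score (linear gap penalty).

    Fills a (len(a)+1) x (len(b)+1) matrix; each entry is one of the
    'cell updates' counted by the GCUPS metric."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, H[i - 1][j - 1] + s,
                          H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

def gcups(query_len, db_residues, seconds):
    """Billions of cell updates per second for a full database scan."""
    return query_len * db_residues / seconds / 1e9

print(smith_waterman("ACACACTA", "AGCACACA"))  # → 12
```

The quadratic cost per query, multiplied by every residue in the database, is exactly why the abstract calls the algorithm computationally demanding and why GPU parallelization pays off.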

  19. Measurement properties of translated versions of neck-specific questionnaires: a systematic review

    Directory of Open Access Journals (Sweden)

    de Vet Henrica C

    2011-06-01

Full Text Available Abstract Background Several disease-specific questionnaires to measure pain and disability in patients with neck pain have been translated. However, a simple translation of the original version does not guarantee similar measurement properties. The objective of this study is to critically appraise the quality of the translation process, cross-cultural validation and the measurement properties of translated versions of neck-specific questionnaires. Methods Bibliographic databases were searched for articles concerning the translation or evaluation of the measurement properties of a translated version of a neck-specific questionnaire. The methodological quality of the selected studies and the results of the measurement properties were critically appraised and rated using the COSMIN checklist and criteria for measurement properties. Results The search strategy resulted in a total of 3641 unique hits, of which 27 articles, evaluating 6 different questionnaires in 15 different languages, were included in this study. Generally, the methodological quality of the translation process is poor, and none of the included studies performed a cross-cultural adaptation. A substantial amount of information regarding the measurement properties of translated versions of the different neck-specific questionnaires is lacking. Moreover, the evidence for the quality of measurement properties of the translated versions is mostly limited or assessed in studies of poor methodological quality. Conclusions Until results from high-quality studies are available, we advise using the Catalan, Dutch, English, Iranian, Korean, Spanish and Turkish versions of the NDI, the Chinese version of the NPQ, and the Finnish, German and Italian versions of the NPDS. The Greek NDI needs cross-cultural validation, and there is no methodologically sound information for the Swedish NDI. For all other languages we advise translating the original version of the NDI.

  20. Measurement properties of translated versions of neck-specific questionnaires: a systematic review

    Science.gov (United States)

    2011-01-01

Background Several disease-specific questionnaires to measure pain and disability in patients with neck pain have been translated. However, a simple translation of the original version does not guarantee similar measurement properties. The objective of this study is to critically appraise the quality of the translation process, cross-cultural validation and the measurement properties of translated versions of neck-specific questionnaires. Methods Bibliographic databases were searched for articles concerning the translation or evaluation of the measurement properties of a translated version of a neck-specific questionnaire. The methodological quality of the selected studies and the results of the measurement properties were critically appraised and rated using the COSMIN checklist and criteria for measurement properties. Results The search strategy resulted in a total of 3641 unique hits, of which 27 articles, evaluating 6 different questionnaires in 15 different languages, were included in this study. Generally, the methodological quality of the translation process is poor, and none of the included studies performed a cross-cultural adaptation. A substantial amount of information regarding the measurement properties of translated versions of the different neck-specific questionnaires is lacking. Moreover, the evidence for the quality of measurement properties of the translated versions is mostly limited or assessed in studies of poor methodological quality. Conclusions Until results from high-quality studies are available, we advise using the Catalan, Dutch, English, Iranian, Korean, Spanish and Turkish versions of the NDI, the Chinese version of the NPQ, and the Finnish, German and Italian versions of the NPDS. The Greek NDI needs cross-cultural validation, and there is no methodologically sound information for the Swedish NDI. For all other languages we advise translating the original version of the NDI. PMID:21645355

  1. System requirements and design description for the document basis database interface (DocBasis)

    International Nuclear Information System (INIS)

    Lehman, W.J.

    1997-01-01

    This document describes system requirements and the design description for the Document Basis Database Interface (DocBasis). The DocBasis application is used to manage procedures used within the tank farms. The application maintains information in a small database to track the document basis for a procedure, as well as the current version/modification level and the basis for the procedure. The basis for each procedure is substantiated by Administrative, Technical, Procedural, and Regulatory requirements. The DocBasis user interface was developed by Science Applications International Corporation (SAIC)

  2. Organizing, exploring, and analyzing antibody sequence data: the case for relational-database managers.

    Science.gov (United States)

    Owens, John

    2009-01-01

    Technological advances in the acquisition of DNA and protein sequence information and the resulting onrush of data can quickly overwhelm the scientist unprepared for the volume of information that must be evaluated and carefully dissected to discover its significance. Few laboratories have the luxury of dedicated personnel to organize, analyze, or consistently record a mix of arriving sequence data. A methodology based on a modern relational-database manager is presented that is both a natural storage vessel for antibody sequence information and a conduit for organizing and exploring sequence data and accompanying annotation text. The expertise necessary to implement such a plan is equal to that required by electronic word processors or spreadsheet applications. Antibody sequence projects maintained as independent databases are selectively unified by the relational-database manager into larger database families that contribute to local analyses, reports, interactive HTML pages, or exported to facilities dedicated to sophisticated sequence analysis techniques. Database files are transposable among current versions of Microsoft, Macintosh, and UNIX operating systems.
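The workflow the entry describes, independent sequence projects unified by a relational manager and explored with selective queries, can be sketched with Python's built-in sqlite3. The table, columns and records below are hypothetical, not the schema of any published antibody database.

```python
import sqlite3

# Minimal relational sketch of antibody sequence storage (invented schema)
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE antibody (
    id INTEGER PRIMARY KEY,
    clone TEXT NOT NULL,
    chain TEXT CHECK (chain IN ('heavy', 'light')),
    cdr3 TEXT,
    annotation TEXT)""")
rows = [
    ("mAb-17", "heavy", "ARDYYGSSYFDY", "binds epitope A"),
    ("mAb-17", "light", "QQYNSYPLT", None),
    ("mAb-42", "heavy", "ARDYYGSSYFDY", "shared CDR3 with mAb-17"),
]
con.executemany(
    "INSERT INTO antibody (clone, chain, cdr3, annotation) VALUES (?, ?, ?, ?)",
    rows)

# Relational exploration: clones sharing an identical heavy-chain CDR3
shared = con.execute("""
    SELECT cdr3, COUNT(DISTINCT clone) AS n
    FROM antibody WHERE chain = 'heavy'
    GROUP BY cdr3 HAVING n > 1""").fetchall()
print(shared)
```

Queries like this, joining sequence fields with annotation text, are the kind of organization and exploration the entry argues a relational manager makes routine; the results can then be exported to dedicated sequence-analysis tools.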

3. Motivational Objects in Natural Scenes (MONS): A Database of >800 Objects

    Directory of Open Access Journals (Sweden)

    Judith Schomaker

    2017-09-01

Full Text Available In daily life, we are surrounded by objects with pre-existing motivational associations. However, these are rarely controlled for in experiments with natural stimuli. Research on natural stimuli would therefore benefit from stimuli with well-defined motivational properties; in turn, such stimuli also open new paths in research on motivation. Here we introduce a database of Motivational Objects in Natural Scenes (MONS). The database consists of 107 scenes. Each scene contains 2 to 7 objects placed at approximately equal distance from the scene center. Each scene was photographed creating 3 versions, with one object (“critical object”) being replaced to vary the overall motivational value of the scene (appetitive, aversive, and neutral), while maintaining high visual similarity between the three versions. Ratings on motivation, valence, arousal and recognizability were obtained using internet-based questionnaires. Since the main objective was to provide stimuli of well-defined motivational value, three motivation scales were used: (1) Desire to own the object; (2) Approach/Avoid; (3) Desire to interact with the object. Three sets of ratings were obtained in independent sets of observers: for all 805 objects presented on a neutral background, for 321 critical objects presented in their scene context, and for the entire scenes. On the basis of the motivational ratings, objects were subdivided into aversive, neutral, and appetitive categories. The MONS database will provide a standardized basis for future studies on motivational value under realistic conditions.

  4. Motivational Objects in Natural Scenes (MONS): A Database of >800 Objects.

    Science.gov (United States)

    Schomaker, Judith; Rau, Elias M; Einhäuser, Wolfgang; Wittmann, Bianca C

    2017-01-01

    In daily life, we are surrounded by objects with pre-existing motivational associations. However, these are rarely controlled for in experiments with natural stimuli. Research on natural stimuli would therefore benefit from stimuli with well-defined motivational properties; in turn, such stimuli also open new paths in research on motivation. Here we introduce a database of Motivational Objects in Natural Scenes (MONS). The database consists of 107 scenes. Each scene contains 2 to 7 objects placed at approximately equal distance from the scene center. Each scene was photographed creating 3 versions, with one object ("critical object") being replaced to vary the overall motivational value of the scene (appetitive, aversive, and neutral), while maintaining high visual similarity between the three versions. Ratings on motivation, valence, arousal and recognizability were obtained using internet-based questionnaires. Since the main objective was to provide stimuli of well-defined motivational value, three motivation scales were used: (1) Desire to own the object; (2) Approach/Avoid; (3) Desire to interact with the object. Three sets of ratings were obtained in independent sets of observers: for all 805 objects presented on a neutral background, for 321 critical objects presented in their scene context, and for the entire scenes. On the basis of the motivational ratings, objects were subdivided into aversive, neutral, and appetitive categories. The MONS database will provide a standardized basis for future studies on motivational value under realistic conditions.

  5. CHEMVAL project. Critical evaluation of the CHEMVAL thermodynamic database with respect to its contents and relevance to radioactive waste disposal at Sellafield and Dounreay

    International Nuclear Information System (INIS)

    Falck, W.E.

    1992-01-01

    This report is concerned with assessing the applicability of the CHEMVAL Thermodynamic Database (Version 3.0) to studies of radioactive waste disposal at Sellafield and Dounreay. Comparisons are drawn with similar listings produced elsewhere and suggestions made for database enhancement. The feasibility of extending the database to take into account simulations at elevated temperatures is also addressed. (author)

  6. Development of a California commercial building benchmarking database

    International Nuclear Information System (INIS)

    Kinney, Satkartar; Piette, Mary Ann

    2002-01-01

Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently, Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.
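Benchmarking against regional survey data typically reduces to a percentile rank of a building's energy use intensity (EUI) among comparable buildings. A minimal sketch, with invented EUI values and no claim to Cal-Arch's actual method:

```python
def benchmark_percentile(eui, peer_euis):
    """Percentage of peer buildings whose energy use intensity (EUI)
    is at or above the given building's; lower EUI ranks better."""
    worse_or_equal = sum(1 for p in peer_euis if p >= eui)
    return 100.0 * worse_or_equal / len(peer_euis)

# Hypothetical survey EUIs (kBtu/sqft/yr) for offices in one climate zone
peers = [45, 60, 72, 80, 95, 110, 130, 150]
print(benchmark_percentile(72, peers))  # → 75.0
```

The value of a regional database is that the peer list can be filtered to the same climate zone and building type, which national datasets support only coarsely.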

  7. NODC Standard Product: World Ocean Database 1998 version 1 (5 disc set) (NODC Accession 0095340)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The World Ocean Database 1998 (WOD98) is comprised of five CD-ROMs containing profile and plankton/biomass data in compressed format. WOD98-01 through WOD98-04...

  8. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Trypanosomes Database, maintained by the National Institute of Genetics (Research Organization of Information and Systems), Yata 1111, Mishima, Shizuoka 411-8540, Japan. Organisms covered: Trypanosoma (Taxonomy ID: 5690) and Homo sapiens (Taxonomy ID: 9606). The database cross-links to external resources including PDB (Protein Data Bank), the KEGG PATHWAY Database and DrugPort, and offers an entry list, query search and web services.

  9. Version 2 of the Protuberance Correlations for the Shuttle-Orbiter Boundary Layer Transition Tool

    Science.gov (United States)

    King, Rudolph A.; Kegerise, Michael A.; Berry, Scott A.

    2009-01-01

    Orbiter-specific transition data, acquired in four ground-based facilities (LaRC 20-Inch Mach 6 Air Tunnel, LaRC 31-Inch Mach 10 Air Tunnel, LaRC 20-Inch Mach 6 CF4 Tunnel, and CUBRC LENS-I Shock Tunnel) with three wind tunnel model scales (0.75, 0.90, and 1.8%) and from Orbiter historical flight data, have been analyzed to improve a pre-existing engineering tool for reentry transition prediction on the windward side of the Orbiter. Boundary layer transition (BLT) engineering correlations for transition induced by isolated protuberances are presented using a laminar Navier-Stokes (N-S) database to provide the relevant boundary-layer properties. It is demonstrated that the earlier version of the BLT correlation that had been developed using parameters derived from an engineering boundary-layer code has improved data collapse when developed with the N-S database. Of the new correlations examined, the proposed correlation 5, based on boundary-layer edge and wall properties, was found to provide the best overall correlation metrics when the entire database is employed. The second independent correlation (proposed correlation 7) selected is based on properties within the boundary layer at the protuberance height. The Aeroheating Panel selected a process to derive the recommended coefficients for Version 2 of the BLT Tool. The assumptions and limitations of the recommended protuberance BLT Tool V.2 are presented.

  10. Modeling CANDU type fuel behaviour during extended burnup irradiations using a revised version of the ELESIM code

    International Nuclear Information System (INIS)

    Arimescu, V.I.; Richmond, W.R.

    1992-05-01

The high-burnup database for CANDU fuel, with a variety of cases, offers a good opportunity to check models of fuel behaviour and to identify areas for improvement. Good agreement of calculated values of fission-gas release and sheath hoop strain with experimental data indicates that the global behaviour of the fuel element is adequately simulated by a computer code. Using the ELESIM computer code, the fission-gas release, swelling, and fuel pellet expansion models were analysed, and changes were made to the models for gaseous swelling and for diffusional release of fission-gas atoms to the grain boundaries. Using this revised version of ELESIM, satisfactory agreement between measured and calculated values of fission-gas release was found for most of the high-burnup database cases. It is concluded that the revised version of the ELESIM code is able to simulate high-burnup as well as low-burnup CANDU fuel with reasonable accuracy.

  11. PROXiMATE: a database of mutant protein-protein complex thermodynamics and kinetics.

    Science.gov (United States)

    Jemimah, Sherlyn; Yugandhar, K; Michael Gromiha, M

    2017-09-01

We have developed PROXiMATE, a database of thermodynamic data for more than 6000 missense mutations in 174 heterodimeric protein-protein complexes, supplemented with interaction network data from the STRING database, solvent accessibility, sequence, structural and functional information, experimental conditions and literature information. Additional features include complex structure visualization, search and display options, download options and a provision for users to upload their data. The database is freely available at http://www.iitm.ac.in/bioinfo/PROXiMATE/ . The website is implemented in Python, and supports recent versions of major browsers such as IE10, Firefox, Chrome and Opera. gromiha@iitm.ac.in. Supplementary data are available at Bioinformatics online.

  12. Database Description - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

SKIP Stemcell Database, maintained by the Center for Medical Genetics, School of Medicine, Keio University. Database classification: stem cells; human genes and diseases. Organism: Homo sapiens (Taxonomy ID: 9606). Contact address: http://www.skip.med.keio.ac.jp/en/contact/. Web services: not available. Need for user registration: not available.

  13. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available Arabidopsis Phenome Database: Database Description. Database name: Arabidopsis Phenome Database. Database maintenance site: ... BioResource Center, Hiroshi Masuya. Database classification: Plant databases - Arabidopsis thaliana. Organism: Taxonomy Name: Arabidopsis thaliana, Taxonomy ID: 3702. Database description: The Arabidopsis thaliana phenome ... their effective application. We developed the new Arabidopsis Phenome Database integrating two novel databases ... useful materials for their experimental research. The other, the "Database of Curated Plant Phenome", focusing ...

  14. MINIZOO in de Benelux : Structure and use of a database of skin irritating organisms

    NARCIS (Netherlands)

    Bronswijk, van J.E.M.H.; Reichl, E.R.

    1986-01-01

    The MINIZOO database is structured within the standard software package SIRv2 (= Scientific Information Retrieval version 2). This flexible program is installed on the university mainframe (a CYBER 180). The program dBASE II, employed on a microcomputer (MICROSOL), can be used for part of the data entry and

  15. Development of the severe accident risk information database management system SARD

    International Nuclear Information System (INIS)

    Ahn, Kwang Il; Kim, Dong Ha

    2003-01-01

    The main purpose of this report is to introduce the essential features and functions of SARD (Severe Accident Risk Database Management System) version 1.0, a severe accident risk information management system developed at the Korea Atomic Energy Research Institute, and the database management and data retrieval procedures of the system. The database management system can automatically store and systematically manage plant-specific severe accident analysis results for core damage sequences leading to severe accidents, and intelligently search the related severe accident risk information. For that purpose, the database system mainly takes into account the plant-specific severe accident sequences obtained from Level 2 Probabilistic Safety Assessments (PSAs), base-case analysis results for various severe accident sequences (such as code responses and summaries of key event timings), and related sensitivity analysis results for key input parameters and models employed in the severe accident codes. Accordingly, the database system can be effectively applied in supporting the Level 2 PSA of similar plants, in fast prediction and intelligent retrieval of the required severe accident risk information for a specific plant whose information was previously stored in the database system, and in the development of plant-specific severe accident management strategies.

  16. Development of the severe accident risk information database management system SARD

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Kwang Il; Kim, Dong Ha

    2003-01-01

    The main purpose of this report is to introduce the essential features and functions of SARD (Severe Accident Risk Database Management System) version 1.0, a severe accident risk information management system developed at the Korea Atomic Energy Research Institute, and the database management and data retrieval procedures of the system. The database management system can automatically store and systematically manage plant-specific severe accident analysis results for core damage sequences leading to severe accidents, and intelligently search the related severe accident risk information. For that purpose, the database system mainly takes into account the plant-specific severe accident sequences obtained from Level 2 Probabilistic Safety Assessments (PSAs), base-case analysis results for various severe accident sequences (such as code responses and summaries of key event timings), and related sensitivity analysis results for key input parameters and models employed in the severe accident codes. Accordingly, the database system can be effectively applied in supporting the Level 2 PSA of similar plants, in fast prediction and intelligent retrieval of the required severe accident risk information for a specific plant whose information was previously stored in the database system, and in the development of plant-specific severe accident management strategies.

  17. INE: a rice genome database with an integrated map view.

    Science.gov (United States)

    Sakata, K; Antonio, B A; Mukai, Y; Nagasaki, H; Sakai, Y; Makino, K; Sasaki, T

    2000-01-01

    The Rice Genome Research Program (RGP) launched a large-scale rice genome sequencing project in 1998 aimed at decoding all genetic information in rice. A new genome database called INE (INtegrated rice genome Explorer) has been developed in order to integrate all the genomic information that has been accumulated so far and to correlate these data with the genome sequence. A web interface based on a Java applet provides a rapid viewing capability in the database. The first operational version of the database has been completed, which includes a genetic map and a physical map using YAC (Yeast Artificial Chromosome) clones and PAC (P1-derived Artificial Chromosome) contigs. These maps are displayed graphically so that the positional relationships among the mapped markers on each chromosome can be easily resolved. INE incorporates the sequences and annotations of the PAC contigs. A site on low-quality information ensures that all submitted sequence data comply with the standard for accuracy. As a repository of rice genome sequence, INE will also serve as a common database of all sequence data obtained by collaborating members of the International Rice Genome Sequencing Project (IRGSP). The database can be accessed at http://www.dna.affrc.go.jp:82/giot/INE.html or its mirror site at http://www.staff.or.jp/giot/INE.html.

  18. [External cephalic version].

    Science.gov (United States)

    Navarro-Santana, B; Duarez-Coronado, M; Plaza-Arranz, J

    2016-08-01

    To analyze the rate of successful external cephalic versions in our center and the caesarean sections that would be avoided with the use of external cephalic version. From January 2012 to March 2016, a total of 52 external cephalic versions were carried out at our center. We collected data on maternal age, gestational age at the time of the external cephalic version, maternal body mass index (BMI), fetal position and presentation, fetal weight, parity, location of the placenta, amniotic fluid index (AFI), tocolysis, analgesia, newborn weight at birth, minor adverse effects (dizziness, hypotension and maternal pain) and major adverse effects (tachycardia, bradycardia, decelerations and emergency caesarean section). 45% of the versions were unsuccessful and 55% were successful. Among successful versions, the rate of vaginal delivery was 84% (4% instrumental) and the rate of caesarean section was 15%. With respect to the variables studied, significant differences were found only in birth weight, suggesting that birth weight is related to the outcome of external cephalic version. The absence of other significant differences is probably due to the number of patients studied. For women with breech presentation, we recommend external cephalic version before expectant management or caesarean section. External cephalic version increases the proportion of fetuses in cephalic presentation and decreases the rate of caesarean sections.

  19. Hmrbase: a database of hormones and their receptors

    Science.gov (United States)

    Rashid, Mamoon; Singla, Deepak; Sharma, Arun; Kumar, Manish; Raghava, Gajendra PS

    2009-01-01

    Background Hormones are signaling molecules that play vital roles in various life processes, like growth and differentiation, physiology, and reproduction. These molecules are mostly secreted by endocrine glands and transported to target organs through the bloodstream. Deficient or excessive levels of hormones are associated with several diseases such as cancer, osteoporosis, and diabetes. Thus, it is important to collect and compile information about hormones and their receptors. Description This manuscript describes a database called Hmrbase, which has been developed for managing information about hormones and their receptors. It is a highly curated database for which information has been collected from the literature and public databases. The current version of Hmrbase contains comprehensive information about ~2000 hormones, e.g. their function, source organism, receptors, mature sequences, and structures. Hmrbase also contains information about ~3000 hormone receptors, in terms of amino acid sequences, subcellular localizations, ligands, and post-translational modifications. One of the major features of this database is that it provides data about ~4100 hormone-receptor pairs. A number of online tools have been integrated into the database to provide facilities such as keyword search, structure-based search, mapping of given peptide(s) onto hormone/receptor sequences, and sequence similarity search. This database also provides a number of external links to other resources/databases to help in retrieving further related information. Conclusion Owing to the high impact of endocrine research in the biomedical sciences, Hmrbase could become a leading data portal for researchers. The salient features of Hmrbase are hormone-receptor pair-related information, mapping of peptide stretches on the protein sequences of hormones and receptors, Pfam domain annotations, categorical browsing options, online data submission, Drug

  20. DB2 9 for Linux, Unix, and Windows database administration upgrade certification study guide

    CERN Document Server

    Sanders, Roger E

    2007-01-01

    Written by one of the world's leading DB2 authors, who is an active participant in the development of the DB2 certification exams, this resource covers everything a database administrator needs to know to pass the DB2 9 for Linux, UNIX, and Windows Database Administration Certification Upgrade exam (Exam 736). This comprehensive study guide discusses all exam topics: server management, data placement, XML concepts, analyzing activity, high availability, database security, and much more. Each chapter contains an extensive set of practice questions along with carefully explained answers. Both information-technology professionals who have experience as database administrators and hold a current DBA certification on version 8 of DB2, and individuals who would like to learn the new features of DB2 9, will benefit from the information in this reference guide.

  1. Space Images for NASA JPL Android Version

    Science.gov (United States)

    Nelson, Jon D.; Gutheinz, Sandy C.; Strom, Joshua R.; Arca, Jeremy M.; Perez, Martin; Boggs, Karen; Stanboli, Alice

    2013-01-01

    This software addresses the demand for easily accessible NASA JPL images and videos by providing a user-friendly and simple graphical user interface that can be run via the Android platform from any location where an Internet connection is available. This app is complementary to the iPhone version of the application. A backend infrastructure stores, tracks, and retrieves space images from the JPL Photojournal and Institutional Communications Web server, and catalogs the information into a streamlined rating infrastructure. The system consists of four distinguishing components: an image repository, a database, server-side logic, and an Android mobile application. The image repository contains images from various JPL flight projects. The database stores the image information as well as the user ratings. The server-side logic retrieves the image information from the database and categorizes each image for display. The Android mobile application is an interfacing delivery system that retrieves the image information from the server for each Android mobile device user. Also created is a reporting and tracking system for charting and monitoring usage. Unlike other Android mobile image applications, this system uses the latest emerging technologies to produce image listings based directly on user input. This allows for countless combinations of images returned. The backend infrastructure uses industry-standard coding and database methods, enabling future software improvement and technology updates. The flexibility of the system design framework permits multiple levels of display possibilities and provides integration capabilities. Unique features of the software include image/video retrieval from a selected set of categories, image Web links that can be shared among e-mail users, sharing to Facebook/Twitter, marking as user favorites, and image metadata searchable for instant results.

  2. Cyclebase 3.0: a multi-organism database on cell-cycle regulation and phenotypes.

    Science.gov (United States)

    Santos, Alberto; Wernersson, Rasmus; Jensen, Lars Juhl

    2015-01-01

    The eukaryotic cell division cycle is a highly regulated process that consists of a complex series of events and involves thousands of proteins. Researchers have studied the regulation of the cell cycle in several organisms, employing a wide range of high-throughput technologies, such as microarray-based mRNA expression profiling and quantitative proteomics. Due to its complexity, the cell cycle can also fail or otherwise change in many different ways if important genes are knocked out, which has been studied in several microscopy-based knockdown screens. The data from these many large-scale efforts are not easily accessed, analyzed and combined due to their inherent heterogeneity. To address this, we have created Cyclebase--available at http://www.cyclebase.org--an online database that allows users to easily visualize and download results from genome-wide cell-cycle-related experiments. In Cyclebase version 3.0, we have updated the content of the database to reflect changes to genome annotation, added new mRNA and protein expression data, and integrated cell-cycle phenotype information from high-content screens and model-organism databases. The new version of Cyclebase also features a new web interface, designed around an overview figure that summarizes all the cell-cycle-related data for a gene. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. An update of the DEF database of protein fold class predictions

    DEFF Research Database (Denmark)

    Reczko, Martin; Karras, Dimitris; Bohr, Henrik

    1997-01-01

    An update is given on the Database of Expected Fold classes (DEF), which contains a collection of fold-class predictions made from protein sequences, and a mail server that provides new predictions for new sequences. For any given sequence, one of 49 fold classes is chosen to classify, with high accuracy, the structure related to the sequence. The updated prediction system is developed using data from the new version of the 3D-ALI database of aligned protein structures and thus gives more reliable and more detailed predictions than the previous DEF system.

  4. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.
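Regional benchmarking of the kind Cal-Arch performs amounts to placing one building's energy use intensity (EUI) within a peer distribution. The sketch below is illustrative only; the EUI figures and peer group are invented, and this is not Cal-Arch code:

```python
from bisect import bisect_left

def percentile_rank(value, peers):
    """Fraction of peer buildings whose EUI falls below `value`."""
    ordered = sorted(peers)
    return bisect_left(ordered, value) / len(ordered)

# Hypothetical annual EUIs (kWh/m2) for a peer group of office buildings
peer_euis = [95, 110, 120, 130, 140, 150, 165, 180, 200, 240]
rank = percentile_rank(135, peer_euis)
print(f"Building uses more energy than {rank:.0%} of its peers")  # 40%
```

A real benchmarking tool would additionally normalize for floor area, climate zone, and operating hours before computing the rank.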

  5. Development of a database for prompt γ-ray neutron activation analysis: Summary report of the third research coordination meeting

    International Nuclear Information System (INIS)

    Lindstrom, Richard M.; Firestone, Richard B.; Paviotti-Corcuera, R.

    2003-01-01

    The main discussions and conclusions from the Third Co-ordination Meeting on the Development of a Database for Prompt Gamma-ray Neutron Activation Analysis are summarized in this report. All results were reviewed in detail, and the final version of the TECDOC and the corresponding software were agreed upon and approved for preparation. Actions were formulated with the aim of completing the final version of the TECDOC and associated software by May 2003.

  6. Development of a database for prompt γ-ray neutron activation analysis. Summary report of the third research coordination meeting

    International Nuclear Information System (INIS)

    Lindstrom, Richard M.; Firestone, Richard B.; Paviotti-Corcuera, R.

    2003-04-01

    The main discussions and conclusions from the Third Co-ordination Meeting on the Development of a Database for Prompt γ-ray Neutron Activation Analysis are summarised in this report. All results were reviewed in detail, and the final version of the TECDOC and the corresponding software were agreed upon and approved for preparation. Actions were formulated with the aim of completing the final version of the TECDOC and associated software by May 2003. (author)

  7. Description of hydrogeological data in SKB's database GEOTAB. Version 2

    International Nuclear Information System (INIS)

    Gerlach, M.

    1991-12-01

    During the research and development program performed by SKB for the final disposal of spent nuclear fuel, a large quantity of geoscientific data was collected. Most of this data was stored in a database called GEOTAB. The data is organized into eight groups (subjects) as follows: - Background information. - Geological data. - Borehole geophysical measurements. - Ground surface geophysical measurements. - Hydrogeological and meteorological data. - Hydrochemical data. - Petrophysical measurements. - Tracer tests. Except for borehole geophysical data, ground surface geophysical data and petrophysical data, which are described together in one report, the data in each group is described in a separate SKB report. The present report describes data within the hydrogeological data group. The hydrogeological data group (subject), called HYDRO, is divided into several subgroups (methods): BHEQUIPE: equipment in boreholes. CONDINT: electrical conductivity in pumped water. FLOWMETE: flowmeter tests. GRWB: groundwater level registrations in boreholes. HUFZ: hydraulic unit fracture zones. HURM: hydraulic unit rock mass. HYCHEM: hydraulic tests during chemical sampling. INTER: interference tests. METEOR: meteorological and hydrological measurements. PIEZO: piezometric measurements at depth in boreholes. RECTES: recovery tests. ROCKRM: hydraulic unit rock types in the rock mass. SFHEAD: single hole falling head tests. SHBUP: single hole build-up tests. SHSINJ: single hole steady state tests. SHTINJ: single hole transient injection tests. SHTOLD: single hole transient injection tests - old data. A method consists of one or several data tables. In each chapter a method and its data tables are described. (au)

  8. Bibliographical database of radiation biological dosimetry and risk assessment: Part 2

    International Nuclear Information System (INIS)

    Straume, T.; Ricker, Y.; Thut, M.

    1990-09-01

    This is Part 2 of a database constructed to support research in radiation biological dosimetry and risk assessment. Relevant publications were identified through detailed searches of national and international electronic databases and through our personal knowledge of the subject. Publications were numbered and keyworded, and referenced in an electronic data-retrieval system that permits quick access through computerized searches on authors, key words, title, year, journal name, or publication number. Photocopies of the publications contained in the database are maintained in a file that is numerically arranged by our publication acquisition numbers. This volume contains 1048 additional entries, which are listed in alphabetical order by author. The computer software used for the database is a simple but sophisticated relational database program that permits quick information access, high flexibility, and the creation of customized reports. This program is inexpensive and is commercially available for the Macintosh and the IBM PC. Although the database entries were made using a Macintosh computer, we have the capability to convert the files into the IBM PC version. As of this date, the database cites 2260 publications. Citations in the database are from 200 different scientific journals. There are also references to 80 books and published symposia, and 158 reports. Information relevant to radiation biological dosimetry and risk assessment is widely distributed within the scientific literature, although a few journals clearly predominate. The journals publishing the largest number of relevant papers are Health Physics, with a total of 242 citations in the database, and Mutation Research, with 185 citations. Other journals with over 100 citations in the database are Radiation Research, with 136, and International Journal of Radiation Biology, with 132.

  9. Conceptual Model of an Application for Automated Generation of Webpage Mobile Versions

    Directory of Open Access Journals (Sweden)

    Todor Rachovski

    2017-11-01

    Full Text Available Accessing webpages through various types of mobile devices with different screen sizes and using different browsers has put new demands on web developers. The main challenge is the development of websites with responsive design that adapts to the mobile device used. The article presents a conceptual model of an app for automated generation of mobile pages. It has a five-layer architecture: database, database management layer, business logic layer, web services layer and presentation layer. The database stores all the data needed to run the application. The database management layer uses an ORM model to convert relational data into an object-oriented format and to control access to them. The business logic layer contains components that perform the actual work of building a mobile version of the page, including parsing, building a hierarchical model of the page and a number of transformations. The web services layer provides external applications with access to lower-level functionalities, and the presentation layer is responsible for choosing and applying the appropriate CSS. A web application that uses the proposed model was developed and experiments were conducted.
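The parse/build/transform pipeline described for the business logic layer can be sketched with Python's standard-library HTML parser. The two transformations shown (stripping fixed pixel dimensions and injecting a viewport meta tag) are hypothetical stand-ins for whatever transformations the article's application actually performs:

```python
from html.parser import HTMLParser

class MobileTransformer(HTMLParser):
    """Rebuilds a page while applying simple mobile-oriented transformations:
    drops fixed width/height attributes and injects a viewport meta tag."""

    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        # Drop fixed pixel dimensions so elements can reflow on small screens
        kept = [(k, v) for k, v in attrs if k not in ("width", "height")]
        rendered = "".join(f' {k}="{v}"' for k, v in kept)
        self.out.append(f"<{tag}{rendered}>")
        if tag == "head":
            self.out.append('<meta name="viewport" '
                            'content="width=device-width, initial-scale=1">')

    def handle_endtag(self, tag):
        self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(data)

def to_mobile(page: str) -> str:
    t = MobileTransformer()
    t.feed(page)
    return "".join(t.out)

desktop = ('<html><head><title>Demo</title></head>'
           '<body><img src="a.png" width="800"></body></html>')
mobile = to_mobile(desktop)
print(mobile)
```

A production system would of course also rewrite CSS and restructure layout, as the presentation layer in the article's model suggests.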

  10. A curated database of cyanobacterial strains relevant for modern taxonomy and phylogenetic studies.

    Science.gov (United States)

    Ramos, Vitor; Morais, João; Vasconcelos, Vitor M

    2017-04-25

    The dataset herein described lays the groundwork for an online database of relevant cyanobacterial strains, named CyanoType (http://lege.ciimar.up.pt/cyanotype). It is a database that includes categorized cyanobacterial strains useful for taxonomic, phylogenetic or genomic purposes, with associated information obtained by means of a literature-based curation. The dataset lists 371 strains and represents the first version of the database (CyanoType v.1). Information for each strain includes strain synonymy and/or co-identity, strain categorization, habitat, accession numbers for molecular data, taxonomy and nomenclature notes according to three different classification schemes, hierarchical automatic classification, phylogenetic placement according to a selection of relevant studies (including this), and important bibliographic references. The database will be updated periodically, namely by adding new strains meeting the criteria for inclusion and by revising and adding up-to-date metadata for strains already listed. A global 16S rDNA-based phylogeny is provided in order to assist users when choosing the appropriate strains for their studies.

  11. Global Precipitation Climatology Project (GPCP) - Monthly, Version 2.2 (Version Superseded)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Version 2.2 of the dataset has been superseded by a newer version. Users should not use version 2.2 except in rare cases (e.g., when reproducing previous studies...

  12. Development of database management system for monitoring of radiation workers for actinides

    International Nuclear Information System (INIS)

    Kalyane, G.N.; Mishra, L.; Nadar, M.Y.; Singh, I.S.; Rao, D.D.

    2012-01-01

    Annually, around 500 radiation workers are monitored for estimation of lung activities and internal dose due to Pu/Am and U from various divisions of Bhabha Atomic Research Centre (Trombay) and from the PREFRE and A3F facilities (Tarapur), in the lung counting laboratory located at the Bhabha Atomic Research Centre hospital, under routine and special monitoring programs. A 20 cm diameter phoswich and an array of HPGe detectors were used for this purpose. In case of positive contamination, workers are followed up and monitored using both detection systems in different geometries. Management of this huge volume of data becomes difficult, and therefore an easily retrievable database system containing all the relevant data of the monitored radiation workers was developed. Materials and methods: The database management system comprises three main modules integrated together: 1) an Apache server installed on a Windows (XP) platform (Apache version 2.2.17); 2) the MySQL database management system (MySQL version 5.5.8); 3) the PHP (Hypertext Preprocessor) programming language (PHP version 5.3.5). All three modules work together seamlessly as a single software program. Front-end user interaction is through a user-friendly and interactive local web page for which an internet connection is not required. This front page has hyperlinks to many other pages, which have different utilities for the user. The user has to log in using a username and password. Results and Conclusions: The database management system is used for entering, updating and managing lung monitoring data of radiation workers. The program has the following utilities: bio-data entry of new subjects, editing of bio-data of old subjects (only one subject at a time), entry of counting data of that day's lung monitoring, retrieval of old records based on a number of parameters and filters like date of counting, employee number, division, counts fulfilling a given criterion, etc., and calculation of MEQ CWT (Muscle Equivalent Chest Wall Thickness), energy
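The filtered record retrieval the abstract describes (by date of counting, division, and a counts criterion) maps naturally onto SQL queries. A minimal in-memory sketch; the table layout, column names and sample rows are invented for illustration and are not the actual system's schema:

```python
import sqlite3

# Hypothetical schema: one row per lung-counting measurement of a worker
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE lung_counts (
        employee_no TEXT,
        division    TEXT,
        count_date  TEXT,   -- ISO yyyy-mm-dd
        counts      REAL    -- net counts in the region of interest
    )""")
rows = [
    ("E001", "RadChem", "2012-01-10", 120.5),
    ("E002", "Fuels",   "2012-02-15", 310.0),
    ("E001", "RadChem", "2012-06-03",  95.2),
]
conn.executemany("INSERT INTO lung_counts VALUES (?, ?, ?, ?)", rows)

# Retrieval with the filters mentioned in the abstract:
# division, date range, and counts fulfilling a given criterion
hits = conn.execute(
    """SELECT employee_no, counts FROM lung_counts
       WHERE division = ? AND count_date >= ? AND counts > ?
       ORDER BY counts DESC""",
    ("RadChem", "2012-01-01", 100.0),
).fetchall()
print(hits)  # [('E001', 120.5)]
```

The described system performs such queries through PHP against MySQL; the logic is the same.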

  13. Simpevarp - site descriptive model version 0

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-11-01

    During 2002, SKB is starting detailed investigations at two potential sites for a deep repository in the Precambrian rocks of the Fennoscandian Shield. The present report concerns one of those sites, Simpevarp, which lies in the municipality of Oskarshamn, on the southeast coast of Sweden, about 250 kilometres south of Stockholm. The site description will have two main components: a written synthesis of the site, summarising the current state of knowledge as documented in the databases containing the primary data from the site investigations, and one or several site descriptive models, in which the collected information is interpreted and presented in a form which can be used in numerical models for rock engineering, environmental impact and long-term safety assessments. SKB maintains two main databases at the present time: a site characterisation database called SICADA and a geographic information system called SKB GIS. The site descriptive model will be developed and presented with the aid of the SKB GIS capabilities and with SKB's Rock Visualisation System (RVS), which is also linked to SICADA. The version 0 model forms an important framework for subsequent model versions, which are developed successively as new information from the site investigations becomes available. Version 0 is developed from the information available at the start of the site investigation. In the case of Simpevarp, this is essentially the information which was compiled for the Oskarshamn feasibility study, which led to the choice of that area as a favourable object for further study, together with information collected since its completion. This information, with the exception of the extensive database from the nearby Aespoe Hard Rock Laboratory, is mainly 2D in nature (surface data), and is general and regional, rather than site-specific, in content. Against this background, the present report consists of the following components: an overview of the present content of the databases

  14. The MANAGE database: nutrient load and site characteristic updates and runoff concentration data.

    Science.gov (United States)

    Harmel, Daren; Qian, Song; Reckhow, Ken; Casebolt, Pamela

    2008-01-01

    The "Measured Annual Nutrient loads from AGricultural Environments" (MANAGE) database was developed to be a readily accessible, easily queried database of site characteristic and field-scale nutrient export data. The original version of MANAGE, which drew heavily from an early 1980s compilation of nutrient export data, created an electronic database with nutrient load data and corresponding site characteristics from 40 studies on agricultural (cultivated and pasture/range) land uses. In the current update, N and P load data from 15 additional studies of agricultural runoff were included along with N and P concentration data for all 55 studies. The database now contains 1677 watershed years of data for various agricultural land uses (703 for pasture/rangeland; 333 for corn; 291 for various crop rotations; 177 for wheat/oats; and 4-33 yr for barley, citrus, vegetables, sorghum, soybeans, cotton, fallow, and peanuts). Across all land uses, annual runoff loads averaged 14.2 kg ha(-1) for total N and 2.2 kg ha(-1) for total P. On average, these losses represented 10 to 25% of applied fertilizer N and 4 to 9% of applied fertilizer P. Although such statistics produce interesting generalities across a wide range of land use, management, and climatic conditions, regional crop-specific analyses should be conducted to guide regulatory and programmatic decisions. With this update, MANAGE contains data from a vast majority of published peer-reviewed N and P export studies on homogeneous agricultural land uses in the USA under natural rainfall-runoff conditions and thus provides necessary data for modeling and decision-making related to agricultural runoff. The current version can be downloaded at http://www.ars.usda.gov/spa/manage-nutrient.
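The abstract's loss percentages permit a back-of-envelope check: if the 14.2 kg/ha average N runoff load represents 10 to 25% of applied fertilizer N, the implied application rate follows by division. The helper function below is ours, not part of MANAGE:

```python
def implied_application(load_kg_ha, loss_fraction):
    """Back-calculate applied fertilizer from runoff load and loss fraction."""
    return load_kg_ha / loss_fraction

avg_n_load = 14.2  # kg/ha total N, the MANAGE average across land uses
low = implied_application(avg_n_load, 0.25)   # if 25% of applied N ran off
high = implied_application(avg_n_load, 0.10)  # if only 10% ran off
print(f"Implied N application: {low:.0f}-{high:.0f} kg/ha")  # 57-142 kg/ha
```

As the abstract cautions, such aggregate figures hide large regional and crop-specific variation and should not drive regulatory decisions on their own.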

  15. A new version of the ERICA tool to facilitate impact assessments of radioactivity on wild plants and animals

    International Nuclear Information System (INIS)

    Brown, J.E.; Alfonso, B.; Avila, R.; Beresford, N.A.; Copplestone, D.; Hosseini, A.

    2016-01-01

    A new version of the ERICA Tool (version 1.2) was released in November 2014; this constitutes the first major update of the Tool since release in 2007. The key features of the update are presented in this article. Of particular note are new transfer databases extracted from an international compilation of concentration ratios (CR_wo-media) and the modification of ‘extrapolation’ approaches used to select transfer data in cases where information is not available. Bayesian updating approaches have been used in some cases to draw on relevant information that would otherwise have been excluded in the process of deriving CR_wo-media statistics. All of these efforts have in turn led to the requirement to update Environmental Media Concentration Limits (EMCLs) used in Tier 1 assessments. Some of the significant changes with regard to EMCLs are highlighted. - Highlights: • The ERICA Tool for performing environmental risk assessment has been updated. • The new version is underpinned by an internationally supported transfer database. • The Tool provides coverage for many organism groups and radioisotopes. • New calculations were required to derive environmental media concentration limits. • Modified approaches to deriving missing transfer parameters are elaborated.

  16. Extending the Intermediate Data Structure (IDS) for longitudinal historical databases to include geographic data

    Directory of Open Access Journals (Sweden)

    Finn Hedefalk

    2014-09-01

    Full Text Available The Intermediate Data Structure (IDS) is a standardised database structure for longitudinal historical databases. Such a common structure facilitates data sharing and comparative research. In this study, we propose an extended version of IDS, named IDS-Geo, that also includes geographic data. The geographic data that will be stored in IDS-Geo are primarily buildings and/or property units, and the purpose of these geographic data is mainly to link individuals to places in space. When we want to assign such detailed spatial locations to individuals (in times before there were any detailed house addresses available), we often have to create tailored geographic datasets. In those cases, there are benefits to storing geographic data in the same structure as the demographic data. Moreover, we propose the export of data from IDS-Geo using an eXtensible Markup Language (XML) Schema. IDS-Geo is implemented in a case study using historical property units, for the period 1804 to 1913, stored in a geographically extended version of the Scanian Economic Demographic Database (SEDD). To fit into the IDS-Geo data structure, we included an object lifeline representation of all of the property units (based on the snapshot time representation of single historical maps and poll-tax registers). The case study verifies that the IDS-Geo model is capable of handling geographic data that can be linked to demographic data.
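The core idea above, linking an individual to a property unit over a time span and exporting the link as XML, can be sketched as follows. The element and attribute names here are hypothetical stand-ins, not the actual IDS-Geo XML Schema.

```python
# Minimal sketch of exporting an individual-to-property-unit link as XML,
# loosely in the spirit of IDS-Geo's XML export. Element and attribute
# names are invented for illustration, not the real IDS-Geo schema.
import xml.etree.ElementTree as ET

def link_to_xml(individual_id, property_unit_id, start_year, end_year):
    link = ET.Element("ResidenceLink")
    ET.SubElement(link, "Individual", id=str(individual_id))
    ET.SubElement(link, "PropertyUnit", id=str(property_unit_id))
    span = ET.SubElement(link, "ValidSpan")
    span.set("start", str(start_year))
    span.set("end", str(end_year))
    return ET.tostring(link, encoding="unicode")

xml_text = link_to_xml(101, "PU-55", 1804, 1913)
```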

  17. International Shock-Wave Database: Current Status

    Science.gov (United States)

    Levashov, Pavel

    2013-06-01

    Shock-wave and related dynamic material response data serve for calibrating, validating, and improving material models over very broad regions of the pressure-temperature-density phase space. Since the middle of the 20th century a vast amount of shock-wave experimental information has been obtained. To systematize it, a number of compendiums of shock-wave data have been issued by LLNL and LANL (USA), CEA (France), and IPCP and VNIIEF (Russia). In the mid-1990s the drawbacks of the paper handbooks became obvious, so the first version of the online shock-wave database appeared in 1997 (http://www.ficp.ac.ru/rusbank). It includes approximately 20000 experimental points on shock compression, adiabatic expansion, measurements of sound velocity behind the shock front, and free-surface velocity for more than 650 substances. This is still a useful tool for the shock-wave community, but it has a number of serious disadvantages which cannot be easily eliminated: (i) a very simple data format for points and references; (ii) a minimalistic user interface for data addition; (iii) absence of a history of changes; (iv) poor feedback from users. The new International Shock-Wave database (ISWdb) is intended to solve these and some other problems. The ISWdb project objectives are: (i) to develop a database on thermodynamic and mechanical properties of materials under conditions of shock-wave and other dynamic loadings, selected related quantities of interest, and the meta-data that describes the provenance of the measurements and material models; and (ii) to make this database available internationally through the Internet, in an interactive form. The development and operation of the ISWdb is guided by an advisory committee. The database will be installed on two mirrored web-servers, one in Russia and the other in the USA (currently only one server is available). The database provides access to original experimental data on shock compression, non-shock dynamic loadings, isentropic expansion, and measurements of sound velocity.

  18. Global Historical Climatology Network (GHCN), Version 1 (Version Superseded)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Please note, this dataset has been superseded by a newer version (see below). Users should not use this version except in rare cases (e.g., when reproducing previous...

  19. The Service Status and Development Strategy of the Mobile Application Service of Ancient Books Database

    Directory of Open Access Journals (Sweden)

    Yang Siluo

    2017-12-01

    Full Text Available [Purpose/significance] The mobile application of the ancient books database marks the shift of ancient books databases from online versions to mobile ones. At present, mobile applications of ancient books databases are in the initial stage of development, so it is necessary to investigate the current situation and provide suggestions for their development. [Method/process] This paper selected two kinds of ancient books database mobile applications, namely WeChat platforms and mobile phone clients, and analyzed their operation modes and main functions. [Result/conclusion] We conclude that ancient books database mobile applications have some defects: resources are small in scale, content and data forms are limited, the functions of single-platform constructions are imperfect, and users pay inadequate attention to such services. We put forward corresponding suggestions and point out that, to construct ancient books database mobile applications, it is necessary to improve platform construction, enrich the data forms and quantity, optimize functions, and emphasize communication and interaction with users.

  20. Cluster Analysis of the International Stellarator Confinement Database

    International Nuclear Information System (INIS)

    Kus, A.; Dinklage, A.; Preuss, R.; Ascasibar, E.; Harris, J. H.; Okamura, S.; Yamada, H.; Sano, F.; Stroth, U.; Talmadge, J.

    2008-01-01

    The heterogeneous structure of collected data is one of the problems that occur during the derivation of scalings for energy confinement time, and its analysis turns out to be a wide and complicated matter. The International Stellarator Confinement Database [1], ISCDB for short, comprises in its latest version 21 a total of 3647 observations from 8 experimental devices, 2067 of which have so far been completed for upcoming analyses. For confinement scaling studies, 1933 observations were chosen as the standard dataset. Here we describe a statistical method of cluster analysis for the identification of possible cohesive substructures in ISCDB and present some preliminary results
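The record does not name the specific clustering method used on ISCDB, so as an illustrative stand-in only, here is a tiny pure-Python k-means on 1-D points showing how cohesive substructures in a dataset can be separated.

```python
# Illustrative stand-in only: a tiny k-means clustering on 1-D points.
# The ISCDB study's actual clustering method may differ; this just shows
# the general idea of finding cohesive substructures in data.
def kmeans_1d(points, k, iters=50):
    # Seed centers with evenly spaced sorted points.
    centers = sorted(points)[:: max(1, len(points) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

centers, clusters = kmeans_1d([1.0, 1.2, 0.9, 10.0, 10.5, 9.8], k=2)
```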

  1. Database Description - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: Yeast Interacting Proteins Database. Alternative name: -. DOI: 10.18908/lsdba.nbdc00742-000. Contact: Chiba-ken 277-8561; Tel: +81-4-7136-3989; FAX: +81-4-7136-3979. Organism: Saccharomyces cerevisiae (Taxonomy ID: 4932). Database description: information on interactions and related information. Reference: Proc Natl Acad Sci U S A. 2001 Apr 10;98(8):4569-74. Epub 2001 Mar 13.

  2. MyMolDB: a micromolecular database solution with open source and free components.

    Science.gov (United States)

    Xia, Bing; Tai, Zheng-Fu; Gu, Yu-Cheng; Li, Bang-Jing; Ding, Li-Sheng; Zhou, Yan

    2011-10-01

    Managing chemical structures is one of the important daily tasks in small laboratories. Few solutions are available on the internet, and most of them are closed-source applications; the open-source applications typically have limited capability and only basic cheminformatics functionality. In this article, we describe an open-source solution for managing chemicals in research groups, based on open source and free components. It has a user-friendly interface with functions for chemical handling and intensive searching. MyMolDB is a micromolecular database solution that supports exact, substructure, similarity, and combined searching. This solution is mainly implemented in the scripting language Python, with a web-based interface for compound management and searching. Almost all the searches are in essence done with pure SQL on the database, exploiting the high performance of the database engine. Thus, impressive searching speed has been achieved on large data sets, because no external CPU-consuming languages are involved in the key steps of the search procedure. MyMolDB is open-source software and can be modified and/or redistributed under the GNU General Public License version 3 published by the Free Software Foundation (Free Software Foundation Inc. The GNU General Public License, Version 3, 2007. Available at: http://www.gnu.org/licenses/gpl.html). The software itself can be found at http://code.google.com/p/mymoldb/. Copyright © 2011 Wiley Periodicals, Inc.
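The "pure SQL" search idea can be sketched with the simplest case, an exact-match lookup by a canonical identifier, done entirely by the database engine. The table layout and the use of SMILES strings here are illustrative assumptions, not MyMolDB's actual schema.

```python
# Hedged sketch of exact-match compound search in pure SQL (SQLite).
# Table and column names are invented, not MyMolDB's real schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE compounds (id INTEGER PRIMARY KEY, "
             "name TEXT, smiles TEXT)")
conn.executemany("INSERT INTO compounds (name, smiles) VALUES (?, ?)",
                 [("benzene", "c1ccccc1"), ("ethanol", "CCO")])

def exact_search(smiles):
    # The database engine does the work; no per-row Python loop is needed.
    cur = conn.execute("SELECT name FROM compounds WHERE smiles = ?",
                       (smiles,))
    return [row[0] for row in cur]

hits = exact_search("CCO")
```

Substructure and similarity searches need chemistry-aware fingerprints on top of this, but the division of labor is the same: precompute searchable columns, then let SQL do the filtering.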

  3. Fuzzy Versions of Epistemic and Deontic Logic

    Science.gov (United States)

    Gounder, Ramasamy S.; Esterline, Albert C.

    1998-01-01

    Epistemic and deontic logics are modal logics, respectively, of knowledge and of the normative concepts of obligation, permission, and prohibition. Epistemic logic is useful in formalizing systems of communicating processes and knowledge and belief in AI (Artificial Intelligence). Deontic logic is useful in computer science wherever we must distinguish between actual and ideal behavior, as in fault tolerance and database integrity constraints. We here discuss fuzzy versions of these logics. In the crisp versions, various axioms correspond to various properties of the structures used in defining the semantics of the logics. Thus, any axiomatic theory will be characterized not only by its axioms but also by the set of properties holding of the corresponding semantic structures. Fuzzy logic does not proceed with axiomatic systems, but fuzzy versions of the semantic properties exist and can be shown to correspond to some of the axioms for the crisp systems in special ways that support dependency networks among assertions in a modal domain. This in turn allows one to implement truth maintenance systems. To our knowledge, we are the first to address fuzzy epistemic and fuzzy deontic logic explicitly and to consider the different systems and semantic properties available. We give the syntax and semantics of epistemic logic and discuss the correspondence between axioms of epistemic logic and properties of semantic structures; the same topics are covered for deontic logic. We then discuss the relationship between axioms and semantic properties for the fuzzy versions of these logics. Our results can be exploited in truth maintenance systems.
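One way to make the fuzzy semantics concrete is to grade both the accessibility relation and the truth values in a Kripke model. The evaluation rule below follows one common min-based convention and is an illustrative choice, not the specific system developed in the paper.

```python
# Sketch of a fuzzy semantics for the epistemic operator K, under one
# common min-based convention: the degree of "the agent knows p at w" is
# the minimum over worlds v of max(1 - R(w, v), p(v)), so the fuzzy
# accessibility degree R gates how much each world's truth value counts.
# This is an assumed convention, not the paper's specific system.
def fuzzy_K(world, R, truth):
    """R: dict (w, v) -> accessibility degree; truth: dict v -> degree of p."""
    return min(max(1.0 - R.get((world, v), 0.0), tv)
               for v, tv in truth.items())

R = {("w0", "w0"): 1.0, ("w0", "w1"): 0.4}
truth_p = {"w0": 0.9, "w1": 0.2}
degree = fuzzy_K("w0", R, truth_p)
```

With R crisp (0/1) this collapses to the usual "p holds in every accessible world" clause, which is the correspondence the abstract alludes to.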

  4. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Trypanosomes Database: Update History of This Database. 2014/05/07 The contact information is corrected; the features and manner of utilization of the database are corrected. 2014/02/04 Trypanosomes Database English archive site is opened. 2011/04/04 Trypanosomes Database (http://www.tanpaku.org/tdb/) is opened.

  5. Improvements to SFCOMPO - a database on isotopic composition of spent nuclear fuel

    International Nuclear Information System (INIS)

    Suyama, Kenya; Nouri, Ali; Mochizuki, Hiroki; Nomura, Yasushi

    2003-01-01

    Isotopic composition is one of the most relevant data to be used in the calculation of the burnup of irradiated nuclear fuel. Since autumn 2002, the Organisation for Economic Co-operation and Development/Nuclear Energy Agency (OECD/NEA) has operated a database of isotopic compositions, SFCOMPO, initially developed at the Japan Atomic Energy Research Institute. This paper describes the latest version of SFCOMPO and the future development plan in the OECD/NEA. (author)

  6. Concierge: Personal database software for managing digital research resources

    Directory of Open Access Journals (Sweden)

    Hiroyuki Sakai

    2007-11-01

    Full Text Available This article introduces a desktop application, named Concierge, for managing personal digital research resources. Using simple operations, it enables storage of various types of files and indexes them based on content descriptions. A key feature of the software is a high level of extensibility. By installing optional plug-ins, users can customize and extend the usability of the software based on their needs. In this paper, we also introduce a few optional plug-ins: literature management, electronic laboratory notebook, and XooNIps client plug-ins. XooNIps is a content management system developed to share digital research resources among neuroscience communities. It has been adopted as the standard database system in Japanese neuroinformatics projects. Concierge, therefore, offers comprehensive support, from the management of personal digital research resources to their sharing in open-access neuroinformatics databases such as XooNIps. This interaction between personal and open-access neuroinformatics databases is expected to enhance the dissemination of digital research resources. Concierge is developed as an open source project; Mac OS X and Windows XP versions have been released at the official site (http://concierge.sourceforge.jp).
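The plug-in extensibility described above usually rests on a registry that the host application consults at runtime, so installing a plug-in extends behavior without touching the core. The sketch below shows that pattern in miniature; the names are hypothetical and not Concierge's actual plug-in API.

```python
# Illustrative plug-in registry pattern: the host looks up behavior by
# name, so new plug-ins extend the application without changing its core.
# Names are invented for illustration, not Concierge's real API.
PLUGINS = {}

def register(name):
    """Class decorator that records a plug-in under a lookup name."""
    def wrap(cls):
        PLUGINS[name] = cls
        return cls
    return wrap

@register("literature")
class LiteraturePlugin:
    def describe(self, resource):
        return f"literature entry: {resource}"

def describe_with(plugin_name, resource):
    # The host dispatches to whichever plug-in is installed.
    return PLUGINS[plugin_name]().describe(resource)

msg = describe_with("literature", "paper.pdf")
```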

  7. FEPs and scenarios auditing of TVO-92 and TILA-96 against international FEP database

    Energy Technology Data Exchange (ETDEWEB)

    Vieno, T.; Nordman, H. [VTT Energy, Espoo (Finland)

    1997-12-01

    The NEA International Database of Features, Events and Processes (FEPs) relevant to the assessment of post-closure safety of radioactive waste repositories has been compiled by a working group within the Nuclear Energy Agency (NEA) of the OECD. The main parts of the database are a master list of 150 generalized FEPs and the original project-specific databases containing descriptions, comments and references on the FEPs. The first version of the database includes in total 1261 FEPs from seven national or international performance assessment projects. All project FEPs are mapped to one or more of the FEPs in the master list. The aim of the auditing was to discuss how the FEPs in the international database have been treated in the TVO-92 and TILA-96 safety assessments on spent fuel disposal (in Finland), where no formal methods were applied to develop scenarios. The auditing was made against all the 1261 project-specific FEPs in the international database. The FEPs were discussed one by one and classified into categories according to their treatment in TVO-92 and TILA-96 or in the technical design of the disposal system. 37 refs.

  8. FEPs and scenarios auditing of TVO-92 and TILA-96 against international FEP database

    International Nuclear Information System (INIS)

    Vieno, T.; Nordman, H.

    1997-12-01

    The NEA International Database of Features, Events and Processes (FEPs) relevant to the assessment of post-closure safety of radioactive waste repositories has been compiled by a working group within the Nuclear Energy Agency (NEA) of the OECD. The main parts of the database are a master list of 150 generalized FEPs and the original project-specific databases containing descriptions, comments and references on the FEPs. The first version of the database includes in total 1261 FEPs from seven national or international performance assessment projects. All project FEPs are mapped to one or more of the FEPs in the master list. The aim of the auditing was to discuss how the FEPs in the international database have been treated in the TVO-92 and TILA-96 safety assessments on spent fuel disposal (in Finland), where no formal methods were applied to develop scenarios. The auditing was made against all the 1261 project-specific FEPs in the international database. The FEPs were discussed one by one and classified into categories according to their treatment in TVO-92 and TILA-96 or in the technical design of the disposal system

  9. Version 1.00 programmer's tools used in constructing the INEL RML/analytical radiochemistry sample tracking database and its user interface

    International Nuclear Information System (INIS)

    Femec, D.A.

    1995-09-01

    This report describes two code-generating tools used to speed design and implementation of relational databases and user interfaces: CREATE-SCHEMA and BUILD-SCREEN. CREATE-SCHEMA produces the SQL commands that actually create and define the database. BUILD-SCREEN takes templates for data entry screens and generates the screen management system routine calls to display the desired screen. Both tools also generate the related FORTRAN declaration statements and precompiled SQL calls. Included with this report is the source code for a number of FORTRAN routines and functions used by the user interface. This code is broadly applicable to a number of different databases
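The essence of a CREATE-SCHEMA-style generator is turning a declarative table description into the SQL that creates it. A minimal sketch is below; the input format is invented for illustration, and the report's actual tools additionally emitted FORTRAN declarations and precompiled SQL calls, which are omitted here.

```python
# Hedged sketch of a schema-to-DDL generator: a declarative column list
# becomes a CREATE TABLE statement. Input format is invented; the real
# CREATE-SCHEMA tool also generated FORTRAN declarations and SQL calls.
def create_table_sql(table, columns):
    """columns: list of (name, sql_type) pairs."""
    cols = ", ".join(f"{name} {sqltype}" for name, sqltype in columns)
    return f"CREATE TABLE {table} ({cols});"

ddl = create_table_sql(
    "samples",
    [("sample_id", "INTEGER PRIMARY KEY"),
     ("analyte", "VARCHAR(32)"),
     ("received", "DATE")],
)
```

Generating DDL (and matching host-language declarations) from a single description keeps the database and the application code from drifting apart, which is the point of the report's tooling.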

  10. The Ruby UCSC API: accessing the UCSC genome database using Ruby.

    Science.gov (United States)

    Mishima, Hiroyuki; Aerts, Jan; Katayama, Toshiaki; Bonnal, Raoul J P; Yoshiura, Koh-ichiro

    2012-09-21

    The University of California, Santa Cruz (UCSC) genome database is among the most used sources of genomic annotation in human and other organisms. The database offers an excellent web-based graphical user interface (the UCSC genome browser) and several means for programmatic queries. A simple application programming interface (API) in a scripting language aimed at the biologist was however not yet available. Here, we present the Ruby UCSC API, a library to access the UCSC genome database using Ruby. The API is designed as a BioRuby plug-in and built on the ActiveRecord 3 framework for the object-relational mapping, making writing SQL statements unnecessary. The current version of the API supports databases of all organisms in the UCSC genome database including human, mammals, vertebrates, deuterostomes, insects, nematodes, and yeast. The API uses the bin index (if available) when querying for genomic intervals. The API also supports genomic sequence queries using locally downloaded *.2bit files that are not stored in the official MySQL database. The API is implemented in pure Ruby and is therefore available in different environments and with different Ruby interpreters (including JRuby). Assisted by the straightforward object-oriented design of Ruby and ActiveRecord, the Ruby UCSC API will help biologists query the UCSC genome database programmatically. The API is available through the RubyGem system. Source code and documentation are available at https://github.com/misshie/bioruby-ucsc-api/ under the Ruby license. Feedback and help is provided via the website at http://rubyucscapi.userecho.com/.
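The "bin index" mentioned in this record is UCSC's hierarchical binning scheme for interval queries: a 0-based half-open range maps to the smallest bin that fully contains it, so queries can filter on the `bin` column before comparing coordinates. Below is a Python transcription of the standard calculation as commonly documented (the API itself is Ruby).

```python
# UCSC standard binning scheme (Python transcription for illustration):
# five levels, smallest bins 128 kb wide, each level 8x coarser.
def bin_from_range(start, end):
    """Smallest standard bin fully containing [start, end), 0-based."""
    bin_offsets = [512 + 64 + 8 + 1, 64 + 8 + 1, 8 + 1, 1, 0]
    start_bin = start >> 17          # smallest bins span 2**17 = 128 kb
    end_bin = (end - 1) >> 17
    for offset in bin_offsets:
        if start_bin == end_bin:
            return offset + start_bin
        start_bin >>= 3              # next level is 8x coarser
        end_bin >>= 3
    raise ValueError("range exceeds standard 512 Mb binning")

small = bin_from_range(0, 100_000)   # fits inside the first 128 kb bin
```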

  11. The Ruby UCSC API: accessing the UCSC genome database using Ruby

    Science.gov (United States)

    2012-01-01

    Background The University of California, Santa Cruz (UCSC) genome database is among the most used sources of genomic annotation in human and other organisms. The database offers an excellent web-based graphical user interface (the UCSC genome browser) and several means for programmatic queries. A simple application programming interface (API) in a scripting language aimed at the biologist was however not yet available. Here, we present the Ruby UCSC API, a library to access the UCSC genome database using Ruby. Results The API is designed as a BioRuby plug-in and built on the ActiveRecord 3 framework for the object-relational mapping, making writing SQL statements unnecessary. The current version of the API supports databases of all organisms in the UCSC genome database including human, mammals, vertebrates, deuterostomes, insects, nematodes, and yeast. The API uses the bin index—if available—when querying for genomic intervals. The API also supports genomic sequence queries using locally downloaded *.2bit files that are not stored in the official MySQL database. The API is implemented in pure Ruby and is therefore available in different environments and with different Ruby interpreters (including JRuby). Conclusions Assisted by the straightforward object-oriented design of Ruby and ActiveRecord, the Ruby UCSC API will facilitate biologists to query the UCSC genome database programmatically. The API is available through the RubyGem system. Source code and documentation are available at https://github.com/misshie/bioruby-ucsc-api/ under the Ruby license. Feedback and help is provided via the website at http://rubyucscapi.userecho.com/. PMID:22994508

  12. The Ruby UCSC API: accessing the UCSC genome database using Ruby

    Directory of Open Access Journals (Sweden)

    Mishima Hiroyuki

    2012-09-01

    Full Text Available Abstract Background The University of California, Santa Cruz (UCSC) genome database is among the most used sources of genomic annotation in human and other organisms. The database offers an excellent web-based graphical user interface (the UCSC genome browser) and several means for programmatic queries. A simple application programming interface (API) in a scripting language aimed at the biologist was however not yet available. Here, we present the Ruby UCSC API, a library to access the UCSC genome database using Ruby. Results The API is designed as a BioRuby plug-in and built on the ActiveRecord 3 framework for the object-relational mapping, making writing SQL statements unnecessary. The current version of the API supports databases of all organisms in the UCSC genome database including human, mammals, vertebrates, deuterostomes, insects, nematodes, and yeast. The API uses the bin index (if available) when querying for genomic intervals. The API also supports genomic sequence queries using locally downloaded *.2bit files that are not stored in the official MySQL database. The API is implemented in pure Ruby and is therefore available in different environments and with different Ruby interpreters (including JRuby). Conclusions Assisted by the straightforward object-oriented design of Ruby and ActiveRecord, the Ruby UCSC API will facilitate biologists to query the UCSC genome database programmatically. The API is available through the RubyGem system. Source code and documentation are available at https://github.com/misshie/bioruby-ucsc-api/ under the Ruby license. Feedback and help is provided via the website at http://rubyucscapi.userecho.com/.

  13. Stellar Abundances for Galactic Archaeology Database. IV. Compilation of stars in dwarf galaxies

    Science.gov (United States)

    Suda, Takuma; Hidaka, Jun; Aoki, Wako; Katsuta, Yutaka; Yamada, Shimako; Fujimoto, Masayuki Y.; Ohtani, Yukari; Masuyama, Miyu; Noda, Kazuhiro; Wada, Kentaro

    2017-10-01

    We have constructed a database of stars in Local Group galaxies using the extended version of the SAGA (Stellar Abundances for Galactic Archaeology) database that contains stars in 24 dwarf spheroidal galaxies and ultra-faint dwarfs. The new version of the database includes more than 4500 stars in the Milky Way, by removing the previous metallicity criterion of [Fe/H] ≤ -2.5, and more than 6000 stars in the Local Group galaxies. We examined the validity of using a combined data set for elemental abundances. We also checked the consistency between the derived distances to individual stars and those to galaxies as given in the literature. Using the updated database, the characteristics of stars in dwarf galaxies are discussed. Our statistical analyses of α-element abundances show that the change of the slope of the [α/Fe] relative to [Fe/H] (so-called "knee") occurs at [Fe/H] = -1.0 ± 0.1 for the Milky Way. The knee positions for selected galaxies are derived by applying the same method. The star formation history of individual galaxies is explored using the slope of the cumulative metallicity distribution function. Radial gradients along the four directions are inspected in six galaxies where we find no direction-dependence of metallicity gradients along the major and minor axes. The compilation of all the available data shows a lack of CEMP-s population in dwarf galaxies, while there may be some CEMP-no stars at [Fe/H] ≲ -3 even in the very small sample. The inspection of the relationship between Eu and Ba abundances confirms an anomalously Ba-rich population in Fornax, which indicates a pre-enrichment of interstellar gas with r-process elements. We do not find any evidence of anti-correlations in O-Na and Mg-Al abundances, which characterizes the abundance trends in the Galactic globular clusters.
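Locating the "knee" in the [α/Fe] versus [Fe/H] relation amounts to finding where the slope changes. As an illustrative stand-in (the SAGA analysis's actual statistical method may differ), the sketch below grid-searches the breakpoint of a two-segment least-squares fit and returns the first point of the second segment.

```python
# Illustrative stand-in: locate a slope change ("knee") in y-vs-x data by
# grid-searching the breakpoint of a two-segment least-squares fit.
# Returns the x of the first point assigned to the second segment.
def sse_line(xs, ys):
    """Sum of squared residuals of the best-fit line through (xs, ys)."""
    n = len(xs)
    if n < 2:
        return 0.0
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    if sxx == 0:
        return sum((y - my) ** 2 for y in ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def find_knee(xs, ys):
    best = None
    for i in range(2, len(xs) - 1):   # each segment keeps >= 2 points
        sse = sse_line(xs[:i], ys[:i]) + sse_line(xs[i:], ys[i:])
        if best is None or sse < best[0]:
            best = (sse, xs[i])
    return best[1]

# Invented data: flat [alpha/Fe] until x = -1.0, then declining.
xs = [-3.0, -2.5, -2.0, -1.5, -1.0, -0.5, 0.0]
ys = [0.4, 0.4, 0.4, 0.4, 0.4, 0.1, -0.4]
knee = find_knee(xs, ys)
```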

  14. A computer adaptive testing version of the Addiction Severity Index-Multimedia Version (ASI-MV): The Addiction Severity CAT.

    Science.gov (United States)

    Butler, Stephen F; Black, Ryan A; McCaffrey, Stacey A; Ainscough, Jessica; Doucette, Ann M

    2017-05-01

    The purpose of this study was to develop and validate a computer adaptive testing (CAT) version of the Addiction Severity Index-Multimedia Version (ASI-MV), the Addiction Severity CAT. This goal was accomplished in 4 steps. First, new candidate items for Addiction Severity CAT domains were evaluated after brainstorming sessions with experts in substance abuse treatment. Next, this new item bank was psychometrically evaluated on a large nonclinical (n = 4,419) and substance abuse treatment (n = 845) sample. Based on these results, final items were selected and calibrated for the creation of the Addiction Severity CAT algorithms. Once the algorithms were developed for the entire assessment, a fully functioning prototype of an Addiction Severity CAT was created. CAT simulations were conducted, and optimal termination criteria were selected for the Addiction Severity CAT algorithms. Finally, construct validity of the CAT algorithms was evaluated by examining convergent and discriminant validity and sensitivity to change. The Addiction Severity CAT was determined to be valid, sensitive to change, and reliable. Further, the Addiction Severity CAT's time of completion was found to be significantly less than the average time of completion for the ASI-MV composite scores. This study represents the initial validation of an Addiction Severity CAT based on item response theory, and further exploration of the Addiction Severity CAT is needed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
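The central ingredient of any IRT-based CAT is choosing the next item by maximum Fisher information at the current ability estimate. The sketch below shows the textbook recipe under a 2-parameter logistic model; it is a generic stand-in, since the Addiction Severity CAT's actual algorithms are not detailed in this abstract.

```python
# Textbook CAT item selection under a 2PL IRT model (generic stand-in,
# not the Addiction Severity CAT's actual algorithm): pick the item with
# maximum Fisher information at the current ability estimate theta.
import math

def p_correct(theta, a, b):
    """2PL response probability; a = discrimination, b = difficulty."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def information(theta, a, b):
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, items):
    """items: dict item_id -> (a, b); returns the most informative item."""
    return max(items, key=lambda i: information(theta, *items[i]))

items = {"q1": (1.0, -2.0), "q2": (1.0, 0.1), "q3": (1.0, 2.5)}
choice = next_item(0.0, items)   # with equal a, b nearest theta wins
```

Administering only the most informative items at each step is what lets a CAT reach a stable estimate in fewer items, consistent with the shorter completion times reported above.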

  15. Update History of This Database - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Arabidopsis Phenome Database Update History of This Database Date Update contents 2017/02/27 Arabidopsis Phenome Data...base English archive site is opened. - Arabidopsis Phenome Database (http://jphenom...e.info/?page_id=95) is opened. About This Database Database Description Download License Update History of This Database... Site Policy | Contact Us Update History of This Database - Arabidopsis Phenome Database | LSDB Archive ...

  16. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  17. Update History of This Database - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us SKIP Stemcell Database Update History of This Database Date Update contents 2017/03/13 SKIP Stemcell Database... English archive site is opened. 2013/03/29 SKIP Stemcell Database ( https://www.skip.med.k...eio.ac.jp/SKIPSearch/top?lang=en ) is opened. About This Database Database Description Download License Update History of This Databa...se Site Policy | Contact Us Update History of This Database - SKIP Stemcell Database | LSDB Archive ...

  18. UNITE: a database providing web-based methods for the molecular identification of ectomycorrhizal fungi

    DEFF Research Database (Denmark)

    Köljalg, U.; Larsson, K.H.; Abarenkov, K.

    2005-01-01

    Identification of ectomycorrhizal (ECM) fungi is often achieved through comparisons of ribosomal DNA internal transcribed spacer (ITS) sequences with accessioned sequences deposited in public databases. A major problem encountered is that annotation of the sequences in these databases is not always....... At present UNITE contains 758 ITS sequences from 455 species and 67 genera of ECM fungi. •  UNITE can be searched by taxon name, via sequence similarity using blastn, and via phylogenetic sequence identification using galaxie. Following implementation, galaxie performs a phylogenetic analysis of the query...... sequence after alignment either to pre-existing generic alignments, or to matches retrieved from a blast search on the UNITE data. It should be noted that the current version of UNITE is dedicated to the reliable identification of ECM fungi. •  The UNITE database is accessible through the URL http://unite.zbi.ee...
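As a toy illustration of similarity-based lookup, the sketch below ranks reference ITS sequences by simple percent identity against a query. Real UNITE searches use blastn and phylogenetic placement via galaxie, which are far more sophisticated; the taxa and sequences here are invented.

```python
# Illustrative stand-in for similarity search: rank reference sequences by
# naive percent identity over aligned positions. UNITE's real pipeline
# (blastn + phylogenetic placement) is much more sophisticated.
def percent_identity(seq_a, seq_b):
    n = min(len(seq_a), len(seq_b))
    if n == 0:
        return 0.0
    matches = sum(1 for x, y in zip(seq_a, seq_b) if x == y)
    return 100.0 * matches / n

def best_match(query, references):
    """references: dict taxon_name -> sequence; returns closest taxon."""
    return max(references,
               key=lambda t: percent_identity(query, references[t]))

refs = {"Amanita muscaria": "ACGTACGTAA", "Boletus edulis": "ACGTTTTTAA"}
hit = best_match("ACGTACGTAT", refs)
```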

  19. CaveMan Enterprise version 1.0 Software Validation and Verification.

    Energy Technology Data Exchange (ETDEWEB)

    Hart, David

    2014-10-01

    The U.S. Department of Energy Strategic Petroleum Reserve stores crude oil in caverns solution-mined in salt domes along the Gulf Coast of Louisiana and Texas. The CaveMan software program has been used since the late 1990s as one tool to analyze pressure measurements monitored at each cavern. The purpose of this monitoring is to catch potential cavern integrity issues as soon as possible. The CaveMan software was written in Microsoft Visual Basic, and embedded in a Microsoft Excel workbook; this method of running the CaveMan software is no longer sustainable. As such, a new version called CaveMan Enterprise has been developed. CaveMan Enterprise version 1.0 does not have any changes to the CaveMan numerical models. CaveMan Enterprise represents, instead, a change from desktop-managed workbooks to an enterprise framework, moving data management into coordinated databases and porting the numerical modeling codes into the Python programming language. This document provides a report of the code validation and verification testing.
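Since the numerical models were unchanged, one natural V&V tactic for such a port is regression testing: run the ported routine on reference inputs and compare against outputs recorded from the legacy code. The sketch below shows that pattern; the stand-in "model" and tolerance are invented for illustration and are not CaveMan's actual tests.

```python
# Hedged sketch of regression-style V&V for a ported routine: compare the
# port's outputs to recorded legacy outputs within a relative tolerance.
# The stand-in model and tolerance are invented for illustration.
def relative_error(new, ref):
    return abs(new - ref) / max(abs(ref), 1e-12)

def check_port(func, cases, tol=1e-9):
    """cases: list of (input, legacy_output); returns failing cases."""
    return [(x, ref) for x, ref in cases
            if relative_error(func(x), ref) > tol]

# Stand-in "model": pretend both codes compute x**2 + 1; the third
# reference value is deliberately wrong, so it should be flagged.
ported = lambda x: x * x + 1.0
failures = check_port(ported, [(0.0, 1.0), (2.0, 5.0), (3.0, 9.0)])
```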

  20. The version control service for ATLAS data acquisition configuration files

    CERN Document Server

    Soloviev, Igor; The ATLAS collaboration

    2012-01-01

    To configure a data-taking session, the ATLAS systems and detectors store more than 160 MB of data acquisition related configuration information in OKS XML files [1]. The total number of files exceeds 1300, and they are updated by many system experts. In the past, such updates occasionally caused problems due to XML syntax errors or files left in a state inconsistent with the overall ATLAS configuration. It was not always possible to know who made the modification that caused a problem, or how to roll back to a previous version of the modified file. A few years ago a special service addressing these issues was implemented and deployed on ATLAS Point-1. It excludes direct write access to XML files stored in a central database repository. Instead, for an update the files are copied into a user repository, validated after modifications and committed using a version control system. The system's callback updates the central repository. Also, it keeps track of all modifications pro...
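
    The syntax-checking step of such a gatekeeping service can be caricatured as follows. This is a sketch of the well-formedness check only; the real ATLAS service also validates consistency of the overall configuration and commits through a version control system whose callback updates the central repository:

```python
import xml.etree.ElementTree as ET

def validate_config(xml_text):
    """Accept a configuration file for commit only if it is well-formed XML.
    (Hypothetical helper illustrating the validation gate, not ATLAS code.)"""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False
```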

  1. Construction of a bibliographic information database and development of retrieval system for research reports in nuclear science and technology (II)

    International Nuclear Information System (INIS)

    Han, Duk Haeng; Kim, Tae Whan; Choi, Kwang; Yoo, An Na; Keum, Jong Yong; Kim, In Kwon

    1996-05-01

    The major goal of this project is to construct a bibliographic information database in nuclear engineering and to develop a prototype retrieval system. To give easy access to microfiche research reports, this project has accomplished the construction of a microfiche research reports database and the development of a retrieval system. The results of the project are as follows: 1. The microfiche research reports database was constructed by downloading from DOE Energy, NTIS, and INIS. 2. The retrieval system was developed in host and web versions using access points such as title, abstract, keyword, and report number. 6 tabs., 8 figs., 11 refs. (Author)

  2. Construction of a bibliographic information database and development of retrieval system for research reports in nuclear science and technology (II)

    Energy Technology Data Exchange (ETDEWEB)

    Han, Duk Haeng; Kim, Tae Whan; Choi, Kwang; Yoo, An Na; Keum, Jong Yong; Kim, In Kwon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1996-05-01

    The major goal of this project is to construct a bibliographic information database in nuclear engineering and to develop a prototype retrieval system. To give easy access to microfiche research reports, this project has accomplished the construction of a microfiche research reports database and the development of a retrieval system. The results of the project are as follows: 1. The microfiche research reports database was constructed by downloading from DOE Energy, NTIS, and INIS. 2. The retrieval system was developed in host and web versions using access points such as title, abstract, keyword, and report number. 6 tabs., 8 figs., 11 refs. (Author)

  3. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with the creation of a database design for a standard kindergarten, the installation of the designed database into the database system Oracle Database 10g Express Edition, and a demonstration of administration tasks in this database system. The design was verified by means of an access application developed for that purpose.

  4. Verification of RESRAD-build computer code, version 3.1

    International Nuclear Information System (INIS)

    2003-01-01

    for the review and any actions that were taken when these items were missing are documented in Section 5 of this report. The availability and use of user experience were limited to extensive experience in performing RESRAD-BUILD calculations by the verification project manager and by participation in the RESRAD-BUILD workshop offered by the code developers on May 11, 2001. The level of a posteriori verification that was implemented is defined in Sections 2 through 4 of this report. In general, a rigorous verification review plan addresses program requirements, design, coding, documentation, test coverage, and evaluation of test results. The scope of the RESRAD-BUILD verification is to focus primarily on program requirements, documentation, testing and evaluation. Detailed program design and source code review would be warranted only in those cases when the evaluation of test results and user experience revealed possible problems in these areas. The verification tasks were conducted in three parts and were applied to version 3.1 of the RESRAD-BUILD code and the final version of the user's manual, issued in November 2001 (Yu and others 2001). These parts include the verification of the deterministic models used in RESRAD-BUILD (Section 2), the verification of the uncertainty analysis model included in RESRAD-BUILD (Section 3), and recommendations for improvement of the RESRAD-BUILD user interface, including evaluations of the user's manual, code design, and calculation methodology (Section 4). Any verification issues that were identified were promptly communicated to the RESRAD-BUILD development team, in particular those that arose from the database and parameter verification tasks. This allowed the developers to start implementing necessary database or coding changes well before this final report was issued.

  5. PC/FRAM, Version 3.2 User Manual

    International Nuclear Information System (INIS)

    Kelley, T.A.; Sampson, T.E.

    1999-01-01

    This manual describes the use of version 3.2 of the PC/FRAM plutonium isotopic analysis software developed in the Safeguards Science and Technology Group, NE-5, Nonproliferation and International Security Division, Los Alamos National Laboratory. The software analyzes the gamma ray spectrum from plutonium-bearing items and determines the isotopic distribution of the plutonium, the 241Am content, and the concentration of other isotopes in the item. The software can also determine the isotopic distribution of uranium isotopes in items containing only uranium. The body of this manual describes the generic version of the code. Special facility-specific enhancements, if they apply, will be described in the appendices. The information in this manual applies equally well to version 3.3, which has been licensed to ORTEC. The software can analyze data that is stored in a file on disk. It understands several storage formats including Canberra's S100 format, ORTEC's 'chn' and 'SPC' formats, and several ASCII text formats. The software can also control data acquisition using an MCA and then store the results in a file on disk for later analysis, or analyze the spectrum directly after the acquisition. The software currently only supports the control of ORTEC MCBs. Support for Canberra's Genie-2000 Spectroscopy Systems will be added in the future. Support for reading and writing CAM files will also be forthcoming. A versatile parameter file database structure governs all facets of the data analysis. User editing of the parameter sets allows great flexibility in handling data with different isotopic distributions, interfering isotopes, and different acquisition parameters such as energy calibration and detector type. This manual is intended for the system supervisor or the local user who is to be the resident expert. Excerpts from this manual may also be appropriate for the system operator who will routinely use the instrument.

  6. NOAA Climate Data Record (CDR) of Zonal Mean Ozone Binary Database of Profiles (BDBP), version 1.0

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This NOAA Climate Data Record (CDR) of Zonal Mean Ozone Binary Database of Profiles (BDBP) dataset is a vertically resolved, global, gap-free and zonal mean dataset...

  7. Database Description - Open TG-GATEs Pathological Image Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Open TG-GATEs Pathological Image Database. General information: Database name: Open TG-GATEs Pathological Image Database. DOI: 10.18908/lsdba.nbdc00954-0. Contact: National Institutes of Biomedical Innovation, 7-6-8 Saito-asagi, Ibaraki-city, Osaka 567-0085, Japan; TEL: 81-72-641-9826. Database classification: Toxicogenomics Database. Organism: Rattus norvegicus.

  8. Reconciling leaf physiological traits and canopy flux data: Use of the TRY and FLUXNET databases in the Community Land Model version 4

    Science.gov (United States)

    Bonan, Gordon B.; Oleson, Keith W.; Fisher, Rosie A.; Lasslop, Gitta; Reichstein, Markus

    2012-06-01

    The Community Land Model version 4 overestimates gross primary production (GPP) compared with estimates from FLUXNET eddy covariance towers. The revised model of Bonan et al. (2011) is consistent with FLUXNET, but values for the leaf-level photosynthetic parameter Vcmax that yield realistic GPP at the canopy scale are lower than observed in the global synthesis of Kattge et al. (2009), except for tropical broadleaf evergreen trees. We investigate this discrepancy between Vcmax and canopy fluxes. A multilayer model with explicit calculation of light absorption and photosynthesis for sunlit and shaded leaves at depths in the canopy gives insight to the scale mismatch between leaf and canopy. We evaluate the model with light-response curves at individual FLUXNET towers and with empirically upscaled annual GPP. Biases in the multilayer canopy with observed Vcmax are similar, or improved, compared with the standard two-leaf canopy and its low Vcmax, though the Amazon is an exception. The difference relates to light absorption by shaded leaves in the two-leaf canopy, and resulting higher photosynthesis when the canopy scaling parameter Kn is low, but observationally constrained. Larger Kn decreases shaded leaf photosynthesis and reduces the difference between the two-leaf and multilayer canopies. The low model Vcmax is diagnosed from nitrogen reduction of GPP in simulations with carbon-nitrogen biogeochemistry. Our results show that the imposed nitrogen reduction compensates for deficiency in the two-leaf canopy that produces high GPP. Leaf trait databases (Vcmax), within-canopy profiles of photosynthetic capacity (Kn), tower fluxes, and empirically upscaled fields provide important complementary information for model evaluation.

  9. iPfam: a database of protein family and domain interactions found in the Protein Data Bank.

    Science.gov (United States)

    Finn, Robert D; Miller, Benjamin L; Clements, Jody; Bateman, Alex

    2014-01-01

    The database iPfam, available at http://ipfam.org, catalogues Pfam domain interactions based on known 3D structures that are found in the Protein Data Bank, providing interaction data at the molecular level. Previously, the iPfam domain-domain interaction data was integrated within the Pfam database and website, but it has now been migrated to a separate database. This allows for independent development, improving data access and giving clearer separation between the protein family and interactions datasets. In addition to domain-domain interactions, iPfam has been expanded to include interaction data for domain bound small molecule ligands. Functional annotations are provided from source databases, supplemented by the incorporation of Wikipedia articles where available. iPfam (version 1.0) contains >9500 domain-domain and 15 500 domain-ligand interactions. The new website provides access to this data in a variety of ways, including interactive visualizations of the interaction data.

  10. TRANSNET -- access to radioactive and hazardous materials transportation codes and databases

    International Nuclear Information System (INIS)

    Cashwell, J.W.

    1992-01-01

    TRANSNET has been developed and maintained by Sandia National Laboratories under the sponsorship of the United States Department of Energy (DOE) Office of Environmental Restoration and Waste Management to permit outside access to computerized routing, risk and systems analysis models, and associated databases. The goal of the TRANSNET system is to enable transfer of transportation analytical methods and data to qualified users by permitting direct, timely access to the up-to-date versions of the codes and data. The TRANSNET facility comprises a dedicated computer with telephone ports on which these codes and databases are adapted, modified, and maintained. To permit the widest spectrum of outside users, TRANSNET is designed to minimize hardware and documentation requirements. The user is thus required to have only an IBM-compatible personal computer, a Hayes-compatible modem with communications software, and a telephone. Maintenance and operation of the TRANSNET facility are underwritten by the program sponsor(s), as are updates to the respective models and data; thus the only charges to the user of the system are telephone hookup charges. TRANSNET provides access to the most recent versions of the models and data developed by or for Sandia National Laboratories. Code modifications that have been made since the last published documentation are noted to the user on the introductory screens. User-friendly interfaces have been developed for each of the codes and databases on TRANSNET. In addition, users are provided with default input data sets for typical problems, which can either be used directly or edited. Direct transfers of analytical or data files between codes are provided to permit the user to perform complex analyses with a minimum of input. Recent developments to the TRANSNET system include use of the system to directly pass data files between both national and international users, as well as development and integration of graphical depiction techniques.

  11. IAEA/NDS requirements related to database software

    International Nuclear Information System (INIS)

    Pronyaev, V.; Zerkin, V.

    2001-01-01

    and standard Java used for database description, management and access should be reported. Location and structure of external files should be given. 3. If the authors use any other software for creating database and access to data through Web and/or on CD-ROM they should submit all source files and run-time or self-expanding executable for CD-ROM version which should contain all necessary components to be installed at the user's computer. 4. All implementations should satisfy the license agreements for software used in development and for data dissemination. 5. All databases and developed software should be documented. (author)

  12. CBD: a biomarker database for colorectal cancer.

    Science.gov (United States)

    Zhang, Xueli; Sun, Xiao-Feng; Cao, Yang; Ye, Benchen; Peng, Qiliang; Liu, Xingyun; Shen, Bairong; Zhang, Hong

    2018-01-01

    Colorectal cancer (CRC) biomarker database (CBD) was established based on 870 identified CRC biomarkers and their relevant information from 1115 original articles in PubMed published from 1986 to 2017. In this version of the CBD, CRC biomarker data were collected, sorted, displayed and analysed. With its credible contents, the CBD is a powerful and time-saving tool that provides more comprehensive and accurate information for further CRC biomarker research. The CBD was constructed under MySQL server. HTML, PHP and JavaScript languages have been used to implement the web interface, with Apache as the HTTP server. All of these web operations were implemented under the Windows system. The CBD provides users with information on individual biomarkers, categorized by the biological category, source and application of the biomarkers; the experimental methods, results, authors and publication resources; and the research region, the average age of the cohort, gender, race, the number of tumours, tumour location and stage. We only collect data from articles with clear and credible results proving that the biomarkers are useful in the diagnosis, treatment or prognosis of CRC. The CBD can also provide a professional platform for researchers interested in CRC research to communicate, exchange research ideas and design further high-quality research in CRC. They can submit new findings to our database via the submission page and communicate with us in the CBD. Database URL: http://sysbio.suda.edu.cn/CBD/.

  13. Specialized microbial databases for inductive exploration of microbial genome sequences

    Directory of Open Access Journals (Sweden)

    Cabau Cédric

    2005-02-01

    Full Text Available Abstract Background The enormous amount of genome sequence data calls for user-oriented databases to manage sequences and annotations. Queries must include search tools permitting function identification through exploration of related objects. Methods The GenoList package for collecting and mining microbial genome databases has been rewritten using MySQL as the database management system. Functions that were not available in MySQL, such as nested subqueries, have been implemented. Results Inductive reasoning in the study of genomes starts from "islands of knowledge", centered around genes with some known background. With this concept of "neighborhood" in mind, a modified version of the GenoList structure has been used for organizing sequence data from prokaryotic genomes of particular interest in China. GenoChore (http://bioinfo.hku.hk/genochore.html), a set of 17 specialized end-user-oriented microbial databases (including one instance of Microsporidia, Encephalitozoon cuniculi, a member of Eukarya), has been made publicly available. These databases allow the user to browse genome sequence and annotation data using standard queries. In addition they provide a weekly update of searches against the world-wide protein sequence data libraries, allowing one to monitor annotation updates on genes of interest. Finally, they allow users to search for patterns in DNA or protein sequences, taking into account a clustering of genes into formal operons, as well as providing extra facilities to query sequences using predefined sequence patterns. Conclusion This growing set of specialized microbial databases organizes data created by the first Chinese bacterial genome programs (ThermaList, Thermoanaerobacter tengcongensis; LeptoList, with two different genomes of Leptospira interrogans; and SepiList, Staphylococcus epidermidis), associated with related organisms for comparison.
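
    The "nested subquery" limitation the authors worked around can be illustrated with SQLite, which supports subqueries natively; the table names and rows below are invented toy data. A nested subquery and its join-based rewrite (usable on engines lacking subquery support) return the same result:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE gene (name TEXT, genome TEXT)")
cur.executemany("INSERT INTO gene VALUES (?, ?)",
                [("dnaA", "T. tengcongensis"), ("recA", "L. interrogans"),
                 ("gyrB", "T. tengcongensis")])
cur.execute("CREATE TABLE annotation (gene TEXT, product TEXT)")
cur.executemany("INSERT INTO annotation VALUES (?, ?)",
                [("dnaA", "replication initiator"),
                 ("gyrB", "DNA gyrase subunit B")])

# nested subquery: genes of one genome that carry an annotation
nested = cur.execute(
    "SELECT name FROM gene WHERE genome = ? AND name IN "
    "(SELECT gene FROM annotation)", ("T. tengcongensis",)).fetchall()

# equivalent join-based query, avoiding the subquery entirely
joined = cur.execute(
    "SELECT DISTINCT g.name FROM gene g "
    "JOIN annotation a ON a.gene = g.name WHERE g.genome = ?",
    ("T. tengcongensis",)).fetchall()
```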

  14. Data-Based Predictive Control with Multirate Prediction Step

    Science.gov (United States)

    Barlow, Jonathan S.

    2010-01-01

    Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happens to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is computational requirements increasing with prediction horizon length. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with multirate prediction step. One result is a reduced influence of prediction horizon and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the prediction window that are sampled more frequently. A third result is the ability to include more outputs in the feedback path than in the cost function.
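
    The deriving-a-predictor-from-input-output-data idea can be sketched for the simplest case, a first-order single-input single-output model fitted by least squares and iterated over the prediction horizon. The model structure and names are illustrative assumptions, not the controller formulation of this paper:

```python
import numpy as np

def identify(u, y):
    """Fit a one-step predictor y[k] = a*y[k-1] + b*u[k-1] by least squares
    from recorded input u and output y."""
    phi = np.column_stack([y[:-1], u[:-1]])   # regressors
    theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
    return theta  # [a, b]

def predict_horizon(theta, y0, u_future):
    """Iterate the identified model over a multi-step prediction horizon."""
    a, b = theta
    preds, y = [], y0
    for uk in u_future:
        y = a * y + b * uk
        preds.append(y)
    return preds
```

    A predictive controller would then choose the future inputs that minimize a receding-horizon cost built from such predictions.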

  15. A hierarchical spatial framework and database for the national river fish habitat condition assessment

    Science.gov (United States)

    Wang, L.; Infante, D.; Esselman, P.; Cooper, A.; Wu, D.; Taylor, W.; Beard, D.; Whelan, G.; Ostroff, A.

    2011-01-01

    Fisheries management programs, such as the National Fish Habitat Action Plan (NFHAP), urgently need a nationwide spatial framework and database for health assessment and policy development to protect and improve riverine systems. To meet this need, we developed a spatial framework and database using the National Hydrography Dataset Plus (1:100,000-scale; http://www.horizon-systems.com/nhdplus). This framework uses interconfluence river reaches and their local and network catchments as fundamental spatial river units and a series of ecological and political spatial descriptors as hierarchy structures to allow users to extract or analyze information at spatial scales that they define. This database consists of variables describing channel characteristics, network position/connectivity, climate, elevation, gradient, and size. It contains a series of natural and human-induced catchment factors that are known to influence river characteristics. Our framework and database assembles all river reaches and their descriptors in one place for the first time for the conterminous United States. This framework and database provides users with the capability of adding data, conducting analyses, developing management scenarios and regulations, and tracking management progress at a variety of spatial scales. This database provides the essential data needed for achieving the objectives of the NFHAP and other management programs. The downloadable beta version of the database is available at http://ec2-184-73-40-15.compute-1.amazonaws.com/nfhap/main/.
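
    The local-versus-network catchment distinction can be made concrete with a toy upstream graph (reach names and areas below are invented): a reach's network catchment aggregates its own local catchment with those of everything draining into it.

```python
def network_catchment_area(reach, upstream, local_area):
    """Area draining through a reach: its own local catchment plus the
    network catchments of every reach flowing directly into it.
    upstream maps each reach to the reaches immediately above it."""
    return local_area[reach] + sum(
        network_catchment_area(up, upstream, local_area)
        for up in upstream.get(reach, []))
```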

  16. Update History of This Database - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Yeast Interacting Proteins Database: Update History of This Database. 2010/03/29: Yeast Interacting Proteins Database English archive site is opened. 2000/12/4: Yeast Interacting Proteins Database (http://itolab.cb.k.u-tokyo.ac.jp/Y2H/) is released.

  17. System administrator's manual (SAM) for the enhanced logistics intratheater support tool (ELIST) database segment version 8.1.0.0 for solaris 7.; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.

    2002-01-01

    This document is the System Administrator's Manual (SAM) for the Enhanced Logistics Intratheater Support Tool (ELIST) Database Segment. It covers errors that can arise during the segment's installation and deinstallation, and it outlines appropriate recovery actions. It also tells how to extend the database storage available to Oracle if a datastore becomes filled during the use of ELIST. The latter subject builds on some of the actions that must be performed when installing this segment, as documented in the Installation Procedures (IP) for the Enhanced Logistics Intratheater Support Tool (ELIST) Global Data Segment, Database Instance Segment, Database Fill Segment, Database Segment, Database Utility Segment, Software Segment, and Reference Data Segment (referred to in portions of this document as the ELIST IP). The information in this document is expected to be of use only rarely. Other than errors arising from the failure to follow instructions, difficulties are not expected to be encountered during the installation or deinstallation of the segment. The need to extend database storage likewise typically arises infrequently. Most administrators will only need to be aware of the help that is provided in this document and will probably not actually need to read and make use of it

  18. The Hispanic Stress Inventory Version 2: Improving the assessment of acculturation stress.

    Science.gov (United States)

    Cervantes, Richard C; Fisher, Dennis G; Padilla, Amado M; Napper, Lucy E

    2016-05-01

    This article reports on a 2-phase study to revise the Hispanic Stress Inventory (HSI; Cervantes, Padilla, & Salgado de Snyder, 1991). The necessity for a revised stress-assessment instrument was determined by demographic and political shifts affecting Latin American immigrants and later-generation Hispanics in the United States in the 2 decades since the development of the HSI. The data for the revision of the HSI (termed the HSI2) was collected at 4 sites: Los Angeles, El Paso, Miami, and Boston, and included 941 immigrants and 575 U.S.-born Hispanics and a diverse population of Hispanic subgroups. The immigrant version of the HSI2 includes 10 stress subscales, whereas the U.S.-born version includes 6 stress subscales. Both versions of the HSI2 are shown to possess satisfactory Cronbach's alpha reliabilities and demonstrate expert-based content validity, as well as concurrent validity when correlated with subscales of the Brief Symptom Inventory (Derogatis, 1993) and the Patient Health Questionnaire-9 (Kroenke, Spitzer, & Williams, 2001). The new HSI2 instruments are recommended for use by clinicians and researchers interested in assessing psychosocial stress among diverse Hispanic populations of various ethnic subgroups, age groups, and geographic location. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
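
    The Cronbach's alpha reliabilities reported for the HSI2 subscales follow the standard formula, which can be computed directly from an items-by-respondents score matrix (the helper below is a generic sketch, not the authors' analysis code):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

    Perfectly correlated items yield alpha = 1; weakly related items pull it toward (or below) zero.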

  19. National Geochronological Database

    Science.gov (United States)

    Revised by Sloan, Jan; Henry, Christopher D.; Hopkins, Melanie; Ludington, Steve; Original database by Zartman, Robert E.; Bush, Charles A.; Abston, Carl

    2003-01-01

    The National Geochronological Data Base (NGDB) was established by the United States Geological Survey (USGS) to collect and organize published isotopic (also known as radiometric) ages of rocks in the United States. The NGDB (originally known as the Radioactive Age Data Base, RADB) was started in 1974. A committee appointed by the Director of the USGS was given the mission to investigate the feasibility of compiling the published radiometric ages for the United States into a computerized data bank for ready access by the user community. A successful pilot program, which was conducted in 1975 and 1976 for the State of Wyoming, led to a decision to proceed with the compilation of the entire United States. For each dated rock sample reported in published literature, a record containing information on sample location, rock description, analytical data, age, interpretation, and literature citation was constructed and included in the NGDB. The NGDB was originally constructed and maintained on a mainframe computer, and later converted to a Helix Express relational database maintained on an Apple Macintosh desktop computer. The NGDB and a program to search the data files were published and distributed on Compact Disc-Read Only Memory (CD-ROM) in standard ISO 9660 format as USGS Digital Data Series DDS-14 (Zartman and others, 1995). As of May 1994, the NGDB consisted of more than 18,000 records containing over 30,000 individual ages, which is believed to represent approximately one-half the number of ages published for the United States through 1991. Because the organizational unit responsible for maintaining the database was abolished in 1996, and because we wanted to provide the data in more usable formats, we have reformatted the data, checked and edited the information in some records, and provided this online version of the NGDB. This report describes the changes made to the data and formats, and provides instructions for the use of the database in geographic

  20. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information: Database name: RMOS. Contact: Shoshi Kikuchi (Research Unit). Database classification: Plant databases - Rice Microarray Data and other Gene Expression Databases. Organism: Oryza sativa (Taxonomy ID: 4530). Referenced databases: Rice Expression Database (RED), Rice full-length cDNA Database (KOME), Rice Genome Integrated Map Database (INE), Rice Mutant Panel Database (Tos17), Rice Genome Annotation Database.

  1. Italian Present-day Stress Indicators: IPSI Database

    Science.gov (United States)

    Mariucci, M. T.; Montone, P.

    2017-12-01

    In Italy, since the 1990s, research concerning the contemporary stress field has been carried out at Istituto Nazionale di Geofisica e Vulcanologia (INGV) through local- and regional-scale studies. Throughout the years many data have been analysed and collected: they are now organized and available for easy end-use online. The IPSI (Italian Present-day Stress Indicators) database is the first geo-referenced repository of information on the crustal present-day stress field maintained at INGV, with the web application database and website developed by Gabriele Tarabusi. Data consist of horizontal stress orientations analysed and compiled in a standardized format and quality-ranked for reliability and comparability on a global scale with other databases. Our first database release includes 855 data records updated to December 2015. Here we present an updated version that will be released in 2018, after entry of new earthquake data up to December 2017. The IPSI web site (http://ipsi.rm.ingv.it/) allows users to access data on a standard map viewer and to choose easily which data (category and/or quality) to plot. The main information on each element (type, quality, orientation) can be viewed simply by hovering over the related symbol; full information appears by clicking the element. At the same time, basic information on the different data types, tectonic regime assignment and the quality ranking method is available in pop-up windows. Data records can be downloaded in some common formats; moreover, it is possible to download a file directly usable with SHINE, a web-based application to interpolate stress orientations (http://shine.rm.ingv.it). IPSI is mainly conceived for those interested in studying the characteristics of the Italian peninsula and its surroundings, although the Italian data are part of the World Stress Map (http://www.world-stress-map.org/), as evidenced by many links that redirect to this database for more details on standard practices in this field.
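
    Averaging horizontal stress orientations, as an interpolation service like SHINE must do, requires treating them as axial data (0° and 180° are the same orientation). A common sketch of that step, under the assumption of equal data weights, doubles the angles, averages the unit vectors, and halves the result:

```python
import math

def mean_orientation(angles_deg):
    """Mean of axial orientation data in [0, 180) degrees, e.g. SHmax
    azimuths: double the angles so 0 and 180 coincide, average the
    resulting unit vectors, then halve the mean direction."""
    sx = sum(math.cos(math.radians(2 * a)) for a in angles_deg)
    sy = sum(math.sin(math.radians(2 * a)) for a in angles_deg)
    return (math.degrees(math.atan2(sy, sx)) / 2.0) % 180.0
```

    A naive arithmetic mean of [80°, 100°] and of [170°, 10°] would give 90° and 90°; the axial mean correctly gives 90° for the first pair but 0° (equivalently 180°) for the second.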

  2. UNITE: a database providing web-based methods for the molecular identification of ectomycorrhizal fungi.

    Science.gov (United States)

    Kõljalg, Urmas; Larsson, Karl-Henrik; Abarenkov, Kessy; Nilsson, R Henrik; Alexander, Ian J; Eberhardt, Ursula; Erland, Susanne; Høiland, Klaus; Kjøller, Rasmus; Larsson, Ellen; Pennanen, Taina; Sen, Robin; Taylor, Andy F S; Tedersoo, Leho; Vrålstad, Trude; Ursing, Björn M

    2005-06-01

    Identification of ectomycorrhizal (ECM) fungi is often achieved through comparisons of ribosomal DNA internal transcribed spacer (ITS) sequences with accessioned sequences deposited in public databases. A major problem encountered is that annotation of the sequences in these databases is not always complete or trustworthy. In order to overcome this deficiency, we report on UNITE, an open-access database. UNITE comprises well annotated fungal ITS sequences from well defined herbarium specimens that include full herbarium reference identification data, collector/source and ecological data. At present UNITE contains 758 ITS sequences from 455 species and 67 genera of ECM fungi. UNITE can be searched by taxon name, via sequence similarity using blastn, and via phylogenetic sequence identification using galaxie. Following implementation, galaxie performs a phylogenetic analysis of the query sequence after alignment either to pre-existing generic alignments, or to matches retrieved from a blast search on the UNITE data. It should be noted that the current version of UNITE is dedicated to the reliable identification of ECM fungi. The UNITE database is accessible through the URL http://unite.zbi.ee.
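
    The similarity-search idea behind the blastn-based lookup can be caricatured with a naive percent-identity ranking. The sequences and species names below are fabricated toy data, and real ITS identification relies on proper alignment rather than this position-by-position comparison:

```python
def percent_identity(query, reference):
    """Naive ungapped identity of two sequences over their shared length."""
    n = min(len(query), len(reference))
    if n == 0:
        return 0.0
    matches = sum(q == r for q, r in zip(query, reference))
    return 100.0 * matches / n

def best_match(query, references):
    """Return the name of the reference sequence most similar to the query."""
    return max(references,
               key=lambda name: percent_identity(query, references[name]))
```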

  3. MPID-T2: a database for sequence-structure-function analyses of pMHC and TR/pMHC structures.

    Science.gov (United States)

    Khan, Javed Mohammed; Cheruku, Harish Reddy; Tong, Joo Chuan; Ranganathan, Shoba

    2011-04-15

    Sequence-structure-function information is critical in understanding the mechanism of pMHC and TR/pMHC binding and recognition. A database for sequence-structure-function information on pMHC and TR/pMHC interactions, MHC-Peptide Interaction Database-TR version 2 (MPID-T2), is now available augmented with the latest PDB and IMGT/3Dstructure-DB data, advanced features and new parameters for the analysis of pMHC and TR/pMHC structures. http://biolinfo.org/mpid-t2. shoba.ranganathan@mq.edu.au Supplementary data are available at Bioinformatics online.

  4. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Kwon, Young Min; Lee, Young Bum; Chang, Won Pyo; Hahn, Do Hee

    2001-10-01

    The KALIMER Database is an advanced database for the integrated management of Liquid Metal Reactor Design Technology Development using web applications. The KALIMER design database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation system, and Reserved Documents. The Results Database holds the research results of phase II of the Liquid Metal Reactor Design Technology Development of the mid- and long-term nuclear R and D programme. IOC is a linkage control system between sub-projects to share and integrate the research results for KALIMER. The 3D CAD database is a schematic design overview for KALIMER. The Team Cooperation system informs team members of research cooperation and meetings. Finally, the KALIMER Reserved Documents component was developed to manage the data and documents collected since the project's accomplishment. This report describes the hardware and software features and the database design methodology for KALIMER

  5. A Spatio-Temporal Building Exposure Database and Information Life-Cycle Management Solution

    Directory of Open Access Journals (Sweden)

    Marc Wieland

    2017-04-01

    Full Text Available With an ever-increasing volume and complexity of data collected from a variety of sources, the efficient management of geospatial information becomes a key topic in disaster risk management. For example, the representation of assets exposed to natural disasters is subjected to changes throughout the different phases of risk management reaching from pre-disaster mitigation to the response after an event and the long-term recovery of affected assets. Spatio-temporal changes need to be integrated into a sound conceptual and technological framework able to deal with data coming from different sources, at varying scales, and changing in space and time. Especially managing the information life-cycle, the integration of heterogeneous information and the distributed versioning and release of geospatial information are important topics that need to become essential parts of modern exposure modelling solutions. The main purpose of this study is to provide a conceptual and technological framework to tackle the requirements implied by disaster risk management for describing exposed assets in space and time. An information life-cycle management solution is proposed, based on a relational spatio-temporal database model coupled with Git and GeoGig repositories for distributed versioning. Two application scenarios focusing on the modelling of residential building stocks are presented to show the capabilities of the implemented solution. A prototype database model is shared on GitHub along with the necessary scenario data.
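The valid-time versioning at the core of the exposure model described above can be illustrated with a minimal Python sketch (the class and field names are hypothetical, not taken from the paper's schema): each update closes the current version of a record, so past states of a building stock remain reconstructable.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class BuildingVersion:
    """One temporal version of an exposed asset (hypothetical schema)."""
    building_id: str
    occupancy: str            # e.g. residential, commercial
    valid_from: date          # start of validity interval
    valid_to: Optional[date]  # None = current version

class ExposureStore:
    """Minimal valid-time versioning: an update closes the open version."""
    def __init__(self):
        self.rows: list[BuildingVersion] = []

    def update(self, building_id: str, occupancy: str, when: date):
        for row in self.rows:
            if row.building_id == building_id and row.valid_to is None:
                row.valid_to = when  # close the previous version
        self.rows.append(BuildingVersion(building_id, occupancy, when, None))

    def as_of(self, building_id: str, when: date) -> Optional[BuildingVersion]:
        """Reconstruct the state of a building at a past date."""
        for row in self.rows:
            if (row.building_id == building_id and row.valid_from <= when
                    and (row.valid_to is None or when < row.valid_to)):
                return row
        return None

store = ExposureStore()
store.update("b1", "residential", date(2010, 1, 1))
store.update("b1", "commercial", date(2015, 6, 1))    # change of use
print(store.as_of("b1", date(2012, 3, 1)).occupancy)  # -> residential
```

In the paper's solution this history lives in the relational database, with Git/GeoGig handling the distributed versioning across repositories; the sketch only shows the temporal-row idea.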

  6. High-Performance Secure Database Access Technologies for HEP Grids

    Energy Technology Data Exchange (ETDEWEB)

    Matthew Vranicar; John Weicher

    2006-04-17

    secure authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving a weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems’ security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for the tight integration of the secure authorization layer with the database server engine resulting in both improved performance and improved security. Phase I has focused on the development of a proof-of-concept prototype using Argonne National Laboratory’s (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project’s current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.

  7. High-Performance Secure Database Access Technologies for HEP Grids

    International Nuclear Information System (INIS)

    Vranicar, Matthew; Weicher, John

    2006-01-01

    authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving a weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems' security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for the tight integration of the secure authorization layer with the database server engine resulting in both improved performance and improved security. Phase I has focused on the development of a proof-of-concept prototype using Argonne National Laboratory's (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project's current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access

  8. The IAEA Member States' database of discharges of radionuclides to the atmosphere and the aquatic environment (DIRATA)

    International Nuclear Information System (INIS)

    Berkovskyy, Volodymyr; Hood, Graeme

    2008-01-01

    Full text: DIRATA is the IAEA Member States' database on discharges of radionuclides to the atmosphere and the aquatic environment (http://dirata.iaea.org/). It is a worldwide centralized repository of data submitted by IAEA Member States on a voluntary basis; each site dataset includes annual discharges and detection limits. Regulatory limits are given by Member States whenever available, and a limited amount of information on the location of the site (country, geographical coordinates, water body into which radioactivity is released, number, names and types of installations) is also included. One of the important purposes of DIRATA is to assist UNSCEAR in the preparation of its regular reports to the UN General Assembly and to serve Member States as a technical means for reporting and reviewing within the framework of the Joint Convention on the Safety of Spent Fuel Management and on the Safety of Radioactive Waste Management. The on-line version of the DIRATA database was deployed for pilot application by Member States and the general public in 2006 and provides tools for: (1) input of the primary information by IAEA Member States and international organizations in batch or interactive (record-by-record) modes, with a Microsoft Excel template provided on the DIRATA website for batch input; (2) on-line access by Member States and the public to the dataset, the information contained in DIRATA being available for download (in CSV format) and interactive review. The new web-based version of DIRATA has inherited all of the important features of the previous CD-ROM versions and has been extended with a number of entirely new functionalities. The paper describes the structure, functionalities and content of the DIRATA database. (author)

  9. Global Historical Climatology Network - Daily (GHCN-Daily), Version 2 (Version Superseded)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Please note, this dataset has been superseded by a newer version (see below). Users should not use this version except in rare cases (e.g., when reproducing previous...

  10. Cross-cultural adaptation of an environmental health measurement instrument: Brazilian version of the health-care waste management • rapid assessment tool.

    Science.gov (United States)

    Cozendey-Silva, Eliana Napoleão; da Silva, Cintia Ribeiro; Larentis, Ariane Leites; Wasserman, Julio Cesar; Rozemberg, Brani; Teixeira, Liliane Reis

    2016-09-05

    Periodic assessment is one of the recommendations for improving health-care waste management worldwide. This study aimed at translating and adapting the Health-Care Waste Management - Rapid Assessment Tool (HCWM-RAT), proposed by the World Health Organization, to a Brazilian Portuguese version, and at resolving its cultural and legal issues. The work focused on the evaluation of concept, item and semantic equivalence between the original tool and the Brazilian Portuguese version. A cross-cultural adaptation methodology was used, including: initial translation to Brazilian Portuguese; back-translation to English; synthesis of these translated versions; formation of an expert committee to achieve consensus on the preliminary version; and evaluation of the target audience's comprehension. The concept, item and semantic equivalence of the translated and original versions are presented. The constructs in the original instrument were considered relevant and applicable to the Brazilian context. The Brazilian version of the tool has the potential to generate indicators, develop an official database, provide feedback and support policy decisions at many geographical and organizational levels, strengthening the monitoring and evaluation (M&E) mechanism. Moreover, the cross-cultural translation extends the usefulness of the instrument to Portuguese-speaking countries in developing regions. The translated and original versions presented concept, item and semantic equivalence, and the tool can be applied in Brazil.

  11. User Manual for the NASA Glenn Ice Accretion Code LEWICE. Version 2.2.2

    Science.gov (United States)

    Wright, William B.

    2002-01-01

    A research project is underway at NASA Glenn to produce a computer code which can accurately predict ice growth under a wide range of meteorological conditions for any aircraft surface. This report will present a description of the code inputs and outputs from version 2.2.2 of this code, which is called LEWICE. This version differs from release 2.0 due to the addition of advanced thermal analysis capabilities for de-icing and anti-icing applications using electrothermal heaters or bleed air applications. An extensive effort was also undertaken to compare the results against the database of electrothermal results which have been generated in the NASA Glenn Icing Research Tunnel (IRT) as was performed for the validation effort for version 2.0. This report will primarily describe the features of the software related to the use of the program. Appendix A of this report has been included to list some of the inner workings of the software or the physical models used. This information is also available in the form of several unpublished documents internal to NASA. This report is intended as a replacement for all previous user manuals of LEWICE. In addition to describing the changes and improvements made for this version, information from previous manuals may be duplicated so that the user will not need to consult previous manuals to use this code.

  12. Database Description - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name SAHG Alternative nam...h: Contact address Chie Motono Tel : +81-3-3599-8067 E-mail : Database classification Structure Databases - ...e databases - Protein properties Organism Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database description... Links: Original website information Database maintenance site The Molecular Profiling Research Center for D...stration Not available About This Database Database Description Download License Update History of This Database Site Policy | Contact Us Database Description - SAHG | LSDB Archive ...

  13. ZZ HATCHES-18, Database for radiochemical modelling

    International Nuclear Information System (INIS)

    Heath, T.G.

    2008-01-01

    1 - Description of program or function: HATCHES is a referenced, quality assured, thermodynamic database, developed by Serco Assurance for Nirex. Although originally compiled for use in radiochemical modelling work, HATCHES also includes data suitable for many other applications e.g. toxic waste disposal, effluent treatment and chemical processing. It is used in conjunction with chemical and geochemical computer programs, to simulate a wide variety of reactions in aqueous environments. The database includes thermodynamic data (the log formation constant and the enthalpy of formation for the chemical species) for the actinides, fission products and decay products. The datasets for Ni, Tc, U, Np, Pu and Am are based on the NEA reviews of the chemical thermodynamics of these elements. The data sets for these elements with oxalate, citrate and EDTA are based on the NEA-selected values. For iso-saccharinic acid, additional data (non-selected values) have been included from the NEA review as well as data derived from other sources. HATCHES also includes data for many toxic metals and for elements commonly found in groundwaters or geological materials. HARPHRQ operates by reference to the PHREEQE master species list. Thus the thermodynamic information supplied is: a) the log equilibrium constant for the formation reaction of the requested species from the PHREEQE master species for the corresponding elements; b) the enthalpy of reaction for the formation reaction of the requested species from the PHREEQE master species for the corresponding elements. 
This version of HATCHES has been updated since the previous release to provide consistency with the selected data from two recent publications in the OECD Nuclear Energy Agency series on chemical thermodynamics: Chemical Thermodynamics Series Volume 7 (2005): Chemical Thermodynamics of Selenium by Aeke Olin (Chairman), Bengt Nolaeng, Lars-Olof Oehman, Evgeniy Osadchii and Erik Rosen and Chemical Thermodynamics Series Volume 8
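HATCHES supplies, per species, a log formation constant and an enthalpy of reaction; geochemical codes commonly use the van 't Hoff relation to shift log K from the 25 °C reference to another temperature. A minimal sketch, assuming a temperature-independent enthalpy of reaction and using illustrative values rather than actual HATCHES data:

```python
import math

R = 8.314  # molar gas constant, J/(mol·K)

def log_k_at_t(log_k_25: float, delta_h: float, t_kelvin: float) -> float:
    """Van 't Hoff adjustment of a log10 equilibrium constant from 25 °C,
    assuming the enthalpy of reaction (J/mol) is constant over the interval:
    ln K(T) = ln K(Tref) - (ΔH/R)·(1/T - 1/Tref)."""
    t_ref = 298.15  # 25 °C in kelvin
    ln_k = math.log(10) * log_k_25 - (delta_h / R) * (1.0 / t_kelvin - 1.0 / t_ref)
    return ln_k / math.log(10)

# Illustrative (not HATCHES) values: log K = 10.0, ΔH_r = -50 kJ/mol
print(round(log_k_at_t(10.0, -50_000.0, 323.15), 3))  # exothermic: K falls at 50 °C
```

The sign behaviour matches intuition: for an exothermic reaction the equilibrium constant decreases as temperature rises.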

  14. MiCroKit 3.0: an integrated database of midbody, centrosome and kinetochore.

    Science.gov (United States)

    Ren, Jian; Liu, Zexian; Gao, Xinjiao; Jin, Changjiang; Ye, Mingliang; Zou, Hanfa; Wen, Longping; Zhang, Zhaolei; Xue, Yu; Yao, Xuebiao

    2010-01-01

    During cell division/mitosis, a specific subset of proteins is spatially and temporally assembled into protein super complexes in three distinct regions, i.e. centrosome/spindle pole, kinetochore/centromere and midbody/cleavage furrow/phragmoplast/bud neck, and modulates the cell division process faithfully. Although many experimental efforts have been carried out to investigate the characteristics of these proteins, no integrated database was available. Here, we present the MiCroKit database (http://microkit.biocuckoo.org) of proteins that localize in the midbody, centrosome and/or kinetochore. We collected into the MiCroKit database experimentally verified microkit proteins from the scientific literature that have unambiguous supportive evidence for subcellular localization under the fluorescent microscope. The current version of MiCroKit 3.0 provides detailed information for 1489 microkit proteins from seven model organisms, including Saccharomyces cerevisiae, Schizosaccharomyces pombe, Caenorhabditis elegans, Drosophila melanogaster, Xenopus laevis, Mus musculus and Homo sapiens. Moreover, orthologous information is provided for these microkit proteins, which could be a useful resource for further experimental identification. The online service of the MiCroKit database was implemented in PHP + MySQL + JavaScript, while the local packages were developed in JAVA 1.5 (J2SE 5.0).

  15. Database Description - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name PSCDB Alternative n...rial Science and Technology (AIST) Takayuki Amemiya E-mail: Database classification Structure Databases - Protein structure Database...554-D558. External Links: Original website information Database maintenance site Graduate School of Informat...available URL of Web services - Need for user registration Not available About This Database Database Descri...ption Download License Update History of This Database Site Policy | Contact Us Database Description - PSCDB | LSDB Archive ...

  16. Database Description - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name ASTRA Alternative n...tics Journal Search: Contact address Database classification Nucleotide Sequence Databases - Gene structure,...3702 Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description The database represents classified p...(10):1211-6. External Links: Original website information Database maintenance site National Institute of Ad... for user registration Not available About This Database Database Description Dow

  17. TEDS-M 2008 User Guide for the International Database. Supplement 2: National Adaptations of the TEDS-M Questionnaires

    Science.gov (United States)

    Brese, Falk, Ed.

    2012-01-01

    This supplement contains all adaptations made by countries to the international version of the TEDS-M questionnaires under careful supervision of and approval by the TEDS-M International Study Center at Michigan State University. This information provides users of the TEDS-M International Database with a guide to evaluate the availability of…

  18. Database Description - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ase Description General information of database Database name RPD Alternative name Rice Proteome Database...titute of Crop Science, National Agriculture and Food Research Organization Setsuko Komatsu E-mail: Database... classification Proteomics Resources Plant databases - Rice Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database... description Rice Proteome Database contains information on protei...and entered in the Rice Proteome Database. The database is searchable by keyword,

  19. Database Description - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name PLACE Alternative name A Database...Kannondai, Tsukuba, Ibaraki 305-8602, Japan National Institute of Agrobiological Sciences E-mail : Databas...e classification Plant databases Organism Taxonomy Name: Tracheophyta Taxonomy ID: 58023 Database...99, Vol.27, No.1 :297-300 External Links: Original website information Database maintenance site National In...- Need for user registration Not available About This Database Database Descripti

  20. National Library of Norway's new database of 22 manuscript maps concerning the Swedish King Charles XII's campaign in Norway in 1716 and 1718

    Directory of Open Access Journals (Sweden)

    Benedicte Gamborg Brisa

    2003-03-01

    Full Text Available The National Library of Norway is planning to digitise approximately 1,500 manuscript maps. Two years ago we started a pilot project, for which we chose 22 maps small enough to be photographed in one piece. We made 6 x 7 cm slides, converted them into PhotoCDs, and produced JPEG files at four different resolutions. To avoid large file sizes, we had to divide the highest-resolution version into four pieces. The preliminary work was done in Photoshop; the web database is built on Oracle. You can click on a map to zoom. The 22 maps were drawn by Norwegians, and probably also Swedes, during the Great Northern War, when the Swedish King Charles XII unsuccessfully attempted to conquer Norway in 1716 and 1718. The database is now accessible on the National Library of Norway's web site. The database is in Norwegian, but we are working on an English version as well. The maps are searchable by topic, country, county, geographical name, shelfmark, or a combination of these. We are planning to expand the database to other manuscript maps later; this is why it is possible to search for broader subjects such as Charles XII and the Great Northern War.

  1. Database Description - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name JSNP Alternative nam...n Science and Technology Agency Creator Affiliation: Contact address E-mail : Database...sapiens Taxonomy ID: 9606 Database description A database of about 197,000 polymorphisms in Japanese populat...1):605-610 External Links: Original website information Database maintenance site Institute of Medical Scien...er registration Not available About This Database Database Description Download License Update History of This Database

  2. Database Description - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ase Description General information of database Database name RED Alternative name Rice Expression Database...enome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice Database classifi...cation Microarray, Gene Expression Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database descripti... Article title: Rice Expression Database: the gateway to rice functional genomics...nt Science (2002) Dec 7 (12):563-564 External Links: Original website information Database maintenance site

  3. Database Description - ConfC | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name ConfC Alternative name Database...amotsu Noguchi Tel: 042-495-8736 E-mail: Database classification Structure Database...s - Protein structure Structure Databases - Small molecules Structure Databases - Nucleic acid structure Database... services - Need for user registration - About This Database Database Description Download License Update History of This Database... Site Policy | Contact Us Database Description - ConfC | LSDB Archive ...

  4. Notre Dame Nuclear Database: A New Chart of Nuclides

    Science.gov (United States)

    Lee, Kevin; Khouw, Timothy; Fasano, Patrick; Mumpower, Matthew; Aprahamian, Ani

    2014-09-01

    Nuclear data are critical to research fields from medicine to astrophysics. We are creating a database, the Notre Dame Nuclear Database, which can store theoretical and experimental datasets, with emphasis on metadata and user interaction. In addition to the specific nuclear datum, users are able to search by author(s), the facility where the measurements were made, the institution of the facility, and the device or method/technique used. Users interact with the database through online search, an interactive nuclide chart, and a command-line interface. The nuclide chart is a more descriptive version of the periodic table that can be used to visualize nuclear properties such as half-life and mass. We achieve this by using D3 (Data-Driven Documents), HTML, and CSS3 to plot the nuclides and color them accordingly. Search criteria can be applied dynamically to the chart, with Python communicating with MySQL, allowing for customization; users can save the customized chart they create in any image format. These features provide a unique approach for researchers to interface with nuclear data. We report on the current progress of this project and will present a working demo that highlights each of the aforementioned features. This effort brings these technologies together to make nuclear data more accessible and more fully detailed than before, and we will release it as open-source software.
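Because half-lives span tens of orders of magnitude, coloring a nuclide chart by half-life naturally works on a log scale. A hypothetical sketch of such a binning (the thresholds and palette here are invented, not those of the Notre Dame chart):

```python
import math

# Hypothetical color buckets keyed on log10(half-life in seconds);
# anything beyond the last threshold is treated as effectively stable.
BUCKETS = [(-3, "red"), (0, "orange"), (3, "yellow"), (9, "green")]

def half_life_color(half_life_s: float) -> str:
    """Map a half-life to a color bucket on a log scale."""
    exp = math.log10(half_life_s)
    for threshold, color in BUCKETS:
        if exp < threshold:
            return color
    return "blue"

print(half_life_color(1e-4))    # -> red (well under a millisecond)
print(half_life_color(6e2))     # -> yellow (about ten minutes)
print(half_life_color(4.5e17))  # -> blue (order of uranium-238's half-life)
```

In a D3 rendering, a function of this shape would feed the `fill` attribute of each nuclide's cell.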

  5. Version of ORIGEN2 with automated sensitivity-calculation capability

    International Nuclear Information System (INIS)

    Worley, B.A.; Wright, R.Q.; Pin, F.G.

    1986-01-01

    ORIGEN2 is a widely used point-depletion and radioactive-decay computer code for use in simulating nuclear fuel cycles and/or spent fuel characteristics. The code calculates the amount of each nuclide being considered in the problem at a specified number of times, and upon request, a database of conversion factors relating mass compositions to specific material characteristics is used to calculate and print the total nuclide-dependent radioactivity, thermal power, and toxicity, as well as absorption, fission, neutron emission, and photon emission rates. The purpose of this paper is to report on the availability of a version of ORIGEN2 that will calculate, on option, the derivative of all responses with respect to any variable used in the code
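For the simplest possible case, a single decaying nuclide, the kind of response derivative described above can be written analytically and checked against a finite difference. This toy model (not ORIGEN2's coupled depletion chains) takes the activity as the response and the decay constant as the variable:

```python
import math

def activity(n0: float, lam: float, t: float) -> float:
    """Activity A = λ·N0·exp(-λt) of a single decaying nuclide."""
    return lam * n0 * math.exp(-lam * t)

def d_activity_d_lambda(n0: float, lam: float, t: float) -> float:
    """Analytic sensitivity dA/dλ = N0·exp(-λt)·(1 - λt)."""
    return n0 * math.exp(-lam * t) * (1.0 - lam * t)

n0, lam, t = 1.0e6, 1.0e-3, 500.0
analytic = d_activity_d_lambda(n0, lam, t)

# Finite-difference check: the brute-force alternative to a built-in
# sensitivity capability requires two extra runs per variable.
h = 1.0e-9
numeric = (activity(n0, lam + h, t) - activity(n0, lam - h, t)) / (2 * h)
print(abs(analytic - numeric) / abs(analytic) < 1e-5)  # -> True
```

The finite-difference comparison also shows why an automated capability matters: re-running the code for every variable of interest scales poorly, whereas derivatives computed alongside the solution come almost for free.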

  6. Evaluación de un secadero solar tendalero túnel: estudio de secado de manzanas.

    OpenAIRE

    Iriarte, Adolfo Antonio; Bistoni, Silvia Noemi; Garcia, Victor Orlando; Luque, V.

    2015-01-01

    In selecting a solar dryer, the characteristics of the product to be dried, the geographical location of the operation, and economic factors must be taken into account. At the national level there is no standard for evaluating the performance of a drying system. This paper evaluates a forced-convection tunnel-type solar dryer, consisting of two parts: a solar collector followed by the drying chamber. The product used was commercial-grade apple (61.7 kg), cut...

  7. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

    Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs). The book first takes a look at ANSI database standards and DBMS applications and components. Discussions focus on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, the nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders on logical database, interrogation, and phy

  8. Target simulations with SCROLL non-LTE opacity/emissivity databases.

    Science.gov (United States)

    Klapisch, M.; Colombant, D.; Bar-Shalom, A.

    2001-10-01

    SCROLL[1], a collisional radiative model and code based on superconfigurations, is able to compute high-Z non-LTE opacities and emissivities accurately and efficiently. It was used to create opacity/emissivity databases for Pd, Lu and Au on a grid of 50 temperatures by 80 densities. The incident radiation field was shown to have no effect on the opacities in the cases of interest and was not taken into account. These databases were introduced into the hydrocode FAST1D[2]. SCROLL also gives an ionization temperature Tz, which is used in FAST1D to obtain non-LTE corrections to the equation of state. Results will be compared to those of a previous version using Busquet's algorithm[3]. Work supported by USDOE under a contract with NRL. [1] A. Bar-Shalom, J. Oreg and M. Klapisch, J. Quant. Spectrosc. Radiat. Transfer, 65, 43 (2000). [2] J. H. Gardner, A. J. Schmitt, J. P. Dahlburg, C. J. Pawley, S. E. Bodner, S. P. Obenschain, V. Serlin and Y. Aglitskiy, Phys. Plasmas, 5, 1935 (1998). [3] M. Busquet, Phys. Fluids B, 5, 4191 (1993).

  9. Quality assurance for the IAEA International Database on Irradiated Nuclear Graphite Properties

    International Nuclear Information System (INIS)

    Wickham, A.J.; Humbert, D.

    2006-06-01

    Consideration has been given to the process of Quality Assurance applied to data entered into current versions of the IAEA International Database on Irradiated Nuclear Graphite Properties. Originally conceived simply as a means of collecting and preserving data on irradiation experiments and reactor operation, the data are increasingly being utilised for the preparation of safety arguments and in the design of new graphites for forthcoming generations of graphite-moderated plant. Under these circumstances, regulatory agencies require assurances that the data are of appropriate accuracy and correctly transcribed, that obvious errors in the original documentation are either highlighted or corrected, etc., before they are prepared to accept analyses built upon these data. The processes employed in the data transcription are described in this document, and proposals are made for the categorisation of data and for error reporting by Database users. (author)

  10. Validation of the Data Consolidation in Layout Database for the LHC Tunnel Cryogenics Controls Upgrade

    CERN Document Server

    Tovar-Gonzalez, A; Blanco, E; Fortescue-Beck, E; Fluder, C; Inglese, V; Pezzetti, M; Gomes, P; Wolak, T; Dudek, M; Frassinelli, F; Drozd, A; Zapolski, M

    2014-01-01

    The control system of the Large Hadron Collider cryogenics manages over 34’000 instrumentation and actuator channels. The complete information on their characteristics and parameters can be extracted from a set of views on the Layout database, to generate the specifications of the control system; from these, the code to populate PLCs (Programmable Logic Controller) and SCADA (Supervisory Control & Data Acquisition) is automatically produced, within the UNICOS framework (Unified Industrial Control System). The Layout database is, since 2003, progressively integrating and centralizing information on the whole CERN Accelerator complex. It models topographical organization (layouts) as functional positions and relationships. After three years of machine operation, many parameters have been manually adjusted in SCADA and PLCs; they now differ from their original values in the Layout database. Furthermore, to accommodate the upgrade of the UNICOS Continuous Process Control package to version 6, some data stru...
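Validating the consolidation amounts to diffing the live PLC/SCADA parameter values against the reference values held in the Layout database. A minimal sketch of such a comparison (the channel names are invented, not real LHC instrumentation tags):

```python
def diff_parameters(layout: dict[str, float], live: dict[str, float],
                    tol: float = 1e-6) -> dict[str, tuple]:
    """Compare reference values from a layout database against live
    controller values; returns channel -> (reference, live) for every
    mismatch, including channels present on only one side."""
    report = {}
    for channel in layout.keys() | live.keys():
        ref, cur = layout.get(channel), live.get(channel)
        if ref is None or cur is None or abs(ref - cur) > tol:
            report[channel] = (ref, cur)
    return report

# Invented example channels: one manually adjusted alarm threshold.
layout_db = {"TT821.alarm_high": 80.0, "PT970.range_max": 25.0}
plc_live = {"TT821.alarm_high": 85.0, "PT970.range_max": 25.0}
print(diff_parameters(layout_db, plc_live))
# -> {'TT821.alarm_high': (80.0, 85.0)}
```

Each entry in such a report then needs a human decision: push the operationally tuned value back into the Layout database, or restore the reference value in the controller.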

  11. Nencki Genomics Database--Ensembl funcgen enhanced with intersections, user data and genome-wide TFBS motifs.

    Science.gov (United States)

    Krystkowiak, Izabella; Lenart, Jakub; Debski, Konrad; Kuterba, Piotr; Petas, Michal; Kaminska, Bozena; Dabrowski, Michal

    2013-01-01

    We present the Nencki Genomics Database, which extends the functionality of Ensembl Regulatory Build (funcgen) for the three species: human, mouse and rat. The key enhancements over Ensembl funcgen include the following: (i) a user can add private data, analyze them alongside the public data and manage access rights; (ii) inside the database, we provide efficient procedures for computing intersections between regulatory features and for mapping them to the genes. To Ensembl funcgen-derived data, which include data from ENCODE, we add information on conserved non-coding (putative regulatory) sequences, and on genome-wide occurrence of transcription factor binding site motifs from the current versions of two major motif libraries, namely, Jaspar and Transfac. The intersections and mapping to the genes are pre-computed for the public data, and the result of any procedure run on the data added by the users is stored back into the database, thus incrementally increasing the body of pre-computed data. As the Ensembl funcgen schema for the rat is currently not populated, our database is the first database of regulatory features for this frequently used laboratory animal. The database is accessible without registration using the mysql client: mysql -h database.nencki-genomics.org -u public. Registration is required only to add or access private data. A WSDL webservice provides access to the database from any SOAP client, including the Taverna Workbench with a graphical user interface.
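The pre-computed intersections described above come down to interval overlap on genomic coordinates. A minimal sketch of a sorted two-pointer overlap scan (illustrative only, not the database's stored procedures), using half-open [start, end) coordinates:

```python
from typing import NamedTuple

class Feature(NamedTuple):
    chrom: str
    start: int  # 0-based, half-open [start, end)
    end: int
    name: str

def intersect(a: list[Feature], b: list[Feature]) -> list[tuple[str, str]]:
    """Report pairs of overlapping features. Sorting plus a resumable
    pointer into b avoids the quadratic all-pairs check."""
    a, b = sorted(a), sorted(b)
    out, j0 = [], 0
    for fa in a:
        j = j0
        # b-features ending at or before fa.start can never match later queries
        while j < len(b) and (b[j].chrom, b[j].end) <= (fa.chrom, fa.start):
            j += 1
        j0 = j
        while j < len(b) and b[j].chrom == fa.chrom and b[j].start < fa.end:
            if b[j].end > fa.start:  # overlap: start_b < end_a and end_b > start_a
                out.append((fa.name, b[j].name))
            j += 1
    return out

peaks = [Feature("chr1", 100, 200, "peak1"), Feature("chr1", 500, 600, "peak2")]
genes = [Feature("chr1", 150, 400, "geneA"), Feature("chr1", 550, 900, "geneB")]
print(intersect(peaks, genes))  # -> [('peak1', 'geneA'), ('peak2', 'geneB')]
```

Pre-computing results like these for the public data, as the database does, turns a repeated genome-wide scan into a simple lookup.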

  12. External cephalic version among women with a previous cesarean delivery: report on 36 cases and review of the literature.

    Science.gov (United States)

    Abenhaim, Haim A; Varin, Jocelyne; Boucher, Marc

    2009-01-01

    Whether or not women with a previous cesarean section should be considered for an external cephalic version remains unclear. In our study, we sought to examine the relationship between a history of previous cesarean section and outcomes of external cephalic version for pregnancies at 36 completed weeks of gestation or more. Data on obstetrical history and on external cephalic version outcomes were obtained from the C.H.U. Sainte-Justine External Cephalic Version Database. Baseline clinical characteristics were compared among women with and without a history of previous cesarean section. We used logistic regression analysis to evaluate the effect of previous cesarean section on success of external cephalic version while adjusting for parity, maternal body mass index, gestational age, estimated fetal weight, and amniotic fluid index. Over a 15-year period, 1425 external cephalic versions were attempted, of which 36 (2.5%) were performed on women with a previous cesarean section. Although women with a history of previous cesarean section were more likely to be older and para >2 (38.93% vs. 15.0%), there was no difference in gestational age, estimated fetal weight, or amniotic fluid index. Women with a prior cesarean section had a success rate similar to that of women without [50.0% vs. 51.6%, adjusted OR: 1.31 (0.48-3.59)]. Women with a previous cesarean section who undergo an external cephalic version thus have success rates similar to those of women without. Concern about procedural success in women with a previous cesarean section is unwarranted and should not deter attempting an external cephalic version.

  13. Base Carbone. Documentation about the emission factors of the Base CarboneR database

    International Nuclear Information System (INIS)

    2014-01-01

    The Base Carbone® is a public database of emission factors as required for carrying out carbon accounting exercises. It is administered by ADEME, but its governance involves many stakeholders and it can be added to freely. The articulation and convergence of environmental regulations requires data homogenization, and the Base Carbone® aims to be this centralized data source. Today, it is the reference database for article 75 of the Grenelle II Act. It is also entirely consistent with article L1341-3 of the French Transport Code and the default values of the European emission quotas exchange system. The data of the Base Carbone® can be freely consulted by all. Furthermore, the originality of this tool is that it enables third parties to propose their own data (feature scheduled for February 2015). These data are then assessed for their quality and transparency, then validated or refused for incorporation in the Base Carbone®. Lastly, a forum (planned for February 2015) will enable users to ask questions about the data, or to contest the data. The administration of the Base Carbone® is handled by ADEME. However, its orientation and the data that it contains are validated by a governance committee incorporating various public and private stakeholders. Lastly, transparency is one of the keystones of the Base Carbone®. Documentation details the hypotheses underlying the construction of all the data in the base, and refers to the studies that have enabled their construction. This document brings together the different versions of the Base Carbone® documentation: the most recent version (v11.5) and the previous version (v11.0), which is split into two parts, dealing with the general case and with the specific case of overseas territories.

  14. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ase Description General information of database Database name RMG Alternative name ...raki 305-8602, Japan National Institute of Agrobiological Sciences E-mail : Database... classification Nucleotide Sequence Databases Organism Taxonomy Name: Oryza sativa Japonica Group Taxonomy ID: 39947 Database...rnal: Mol Genet Genomics (2002) 268: 434–445 External Links: Original website information Database...available URL of Web services - Need for user registration Not available About This Database Database Descri

  15. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  16. The sunspot databases of the Debrecen Observatory

    Science.gov (United States)

    Baranyi, Tünde; Gyori, Lajos; Ludmány, András

    2015-08-01

    We present the sunspot databases and online tools available at the Debrecen Heliophysical Observatory: the DPD (Debrecen Photoheliographic Data, 1974-), the SDD (SOHO/MDI-Debrecen Data, 1996-2010), the HMIDD (SDO/HMI-Debrecen Data, 2010-), and the revised version of the Greenwich Photoheliographic Data (GPR, 1874-1976), presented together with the Hungarian Historical Solar Drawings (HHSD, 1872-1919). These are the most detailed and reliable documentation of sunspot activity in the relevant time intervals. They are very useful for studying sunspot group evolution on various time scales from hours to weeks. Time-dependent differences between the available long-term sunspot databases are investigated and cross-calibration factors are determined between them. This work has received funding from the European Community's Seventh Framework Programme (FP7/2012-2015) under grant agreement No. 284461 (eHEROES).

  17. Database Description - DGBY | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name DGBY Alternative name Database...EL: +81-29-838-8066 E-mail: Database classification Microarray Data and other Gene Expression Databases Orga...nism Taxonomy Name: Saccharomyces cerevisiae Taxonomy ID: 4932 Database descripti...-called phenomics). We uploaded these data on this website which is designated DGBY(Database for Gene expres...ma J, Ando A, Takagi H. Journal: Yeast. 2008 Mar;25(3):179-90. External Links: Original website information Database

  18. Database Description - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name KOME Alternative nam... Sciences Plant Genome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice ...Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description Information about approximately ...Hayashizaki Y, Kikuchi S. Journal: PLoS One. 2007 Nov 28; 2(11):e1235. External Links: Original website information Database...OS) Rice mutant panel database (Tos17) A Database of Plant Cis-acting Regulatory

  19. Ginseng Genome Database: an open-access platform for genomics of Panax ginseng.

    Science.gov (United States)

    Jayakodi, Murukarthick; Choi, Beom-Soon; Lee, Sang-Choon; Kim, Nam-Hoon; Park, Jee Young; Jang, Woojong; Lakshmanan, Meiyappan; Mohan, Shobhana V G; Lee, Dong-Yup; Yang, Tae-Jin

    2018-04-12

    Ginseng (Panax ginseng C.A. Meyer) is a perennial herbaceous plant that has been used in traditional oriental medicine for thousands of years. Ginsenosides, which have significant pharmacological effects on human health, are the foremost bioactive constituents in this plant. Having realized the importance of this plant to humans, an integrated omics resource becomes indispensable to facilitate genomic research, molecular breeding and pharmacological study of this herb. The first draft genome sequences of P. ginseng cultivar "Chunpoong" were reported recently. Here, using the draft genome, transcriptome, and functional annotation datasets of P. ginseng, we have constructed the Ginseng Genome Database (http://ginsengdb.snu.ac.kr/), the first open-access platform to provide comprehensive genomic resources of P. ginseng. The current version of this database provides the most up-to-date draft genome sequence (approximately 3000 Mbp of scaffold sequences) along with structural and functional annotations for 59,352 genes and digital expression of genes based on transcriptome data from different tissues, growth stages and treatments. In addition, tools for visualization and the genomic data from various analyses are provided. All data in the database were manually curated and integrated within a user-friendly query page. This database provides valuable resources for a range of research fields related to P. ginseng and other species belonging to the Apiales order, as well as for plant research communities in general. The Ginseng Genome Database can be accessed at http://ginsengdb.snu.ac.kr/.

  20. System administrator's manual (SAM) for the enhanced logistics intratheater support tool (ELIST) database instance segment version 8.1.0.0 for solaris 7.; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.

    2002-01-01

    This document is the System Administrator's Manual (SAM) for the Enhanced Logistics Intratheater Support Tool (ELIST) Database Instance Segment. It covers errors that can arise during the segment's installation and deinstallation, and it outlines appropriate recovery actions. It also tells how to change the password for the SYSTEM account of the database instance after the instance is created, and it discusses the creation of a suitable database instance for ELIST by means other than the installation of the segment. The latter subject is covered in more depth than its introductory discussion in the Installation Procedures (IP) for the Enhanced Logistics Intratheater Support Tool (ELIST) Global Data Segment, Database Instance Segment, Database Fill Segment, Database Segment, Database Utility Segment, Software Segment, and Reference Data Segment (referred to in portions of this document as the ELIST IP). The information in this document is expected to be of use only rarely. Other than errors arising from the failure to follow instructions, difficulties are not expected to be encountered during the installation or deinstallation of the segment. By the same token, the need to create a database instance for ELIST by means other than the installation of the segment is expected to be the exception rather than the rule. Most administrators will only need to be aware of the help that is provided in this document and will probably not actually need to read and make use of it.

  1. NCDC International Best Track Archive for Climate Stewardship (IBTrACS) Project, Version 2 (Version Superseded)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Version 2 of the dataset has been superseded by a newer version. Users should not use version 2 except in rare cases (e.g., when reproducing previous studies that...

  2. NCDC International Best Track Archive for Climate Stewardship (IBTrACS) Project, Version 1 (Version Superseded)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Version 1 of the dataset has been superseded by a newer version. Users should not use version 1 except in rare cases (e.g., when reproducing previous studies that...

  3. Versioning Complex Data

    Energy Technology Data Exchange (ETDEWEB)

    Macduff, Matt C.; Lee, Benno; Beus, Sherman J.

    2014-06-29

    Using the history of ARM data files, we designed and demonstrated the feasibility of a data versioning paradigm. Assigning versions to sets of files modified under some special assumptions and domain-specific rules proved effective in the case of ARM data, which comprises more than 5000 datastreams and 500 TB of data.
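
    The core idea of assigning a version to a set of files (rather than to individual files) can be illustrated with a content-hash sketch in Python; the grouping rule below is a deliberate simplification, not ARM's actual domain-specific rules:

```python
import hashlib

def version_id(file_contents):
    """Derive a deterministic version identifier for a set of files
    by hashing the sorted (name, content-hash) pairs.  Changing any
    member file, or adding/removing one, yields a new version id."""
    h = hashlib.sha256()
    for name in sorted(file_contents):
        h.update(name.encode())
        h.update(hashlib.sha256(file_contents[name]).digest())
    return h.hexdigest()[:12]

v1 = version_id({"a.nc": b"data1", "b.nc": b"data2"})
v2 = version_id({"a.nc": b"data1", "b.nc": b"data2-modified"})
assert v1 != v2  # modifying one file changes the whole set's version
```

    The benefit of set-level versioning is that a consumer can cite one identifier and know exactly which state of every file in the datastream it refers to.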

  4. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  5. Database structures and interfaces for W7-X

    International Nuclear Information System (INIS)

    Heimann, P.; Bluhm, T.; Hennig, Ch.; Kroiss, H.; Kuehner, G.; Maier, J.; Riemann, H.; Zilker, M.

    2008-01-01

    The W7-X experiment of the IPP, under construction in Greifswald, Germany, is designed to operate in a quasi-steady-state scenario. The database structures and interfaces used for discharge description and execution have to reflect this continuous mode of operation. In close collaboration between the W7-X control group and the data acquisition group, a combined design of the data structures used for describing the configuration and the operation of the experiment was developed. To guarantee access to this information from all participating stations, a TCP/IP portal and a proxy server were developed. This portal enables, in particular, the VxWorks real-time operating systems of the control stations to access the information in the object-oriented database. The database schema now includes a more functional description of the experiment and gives physicists a simplified view of the necessary definitions of operational parameters. The scheduling of the long discharges of W7-X will be done by predefining operational parameters in segments and scenarios, where a scenario is a fixed sequence of segments with a common physical background. To hide the specialized information contained in the basic parameters from the experiment leader or physicist, an abstraction layer was introduced that shows only physically interesting information. An executable segment will be generated after verifying the consistency of the high-level parameters, using a transformation function for every basic parameter needed. Since the database contains all configurations and discharge definitions necessary to operate the experiment, it is very important to give the user a tool to manipulate this information in an intuitive way. A special editor (ConfiX) was designed and implemented for this task. At the moment, the basic functionality for dealing with all kinds of objects in the database is available. Future releases will extend the functionality to defining and editing configurations, segments
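
    The segment/scenario scheme described above (a scenario being a fixed sequence of segments) can be sketched with illustrative Python data structures; the class and parameter names here are hypothetical, not the actual W7-X schema:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One predefined block of operational parameters."""
    name: str
    params: dict

@dataclass
class Scenario:
    """A fixed sequence of segments with a common physical purpose."""
    name: str
    segments: list = field(default_factory=list)

    def schedule(self):
        # Flatten to the execution order the control system would run.
        return [s.name for s in self.segments]

ramp = Segment("ramp_up", {"heating_kW": 500})
flat = Segment("flat_top", {"heating_kW": 2000})
sc = Scenario("standard_discharge", [ramp, flat])
print(sc.schedule())  # ['ramp_up', 'flat_top']
```

    The abstraction layer mentioned in the abstract would sit on top of such structures, exposing only the physics-level parameters and deriving the many basic parameters via transformation functions.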

  6. Description of surface systems. Preliminary site description Simpevarp sub area - Version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Lindborg, Tobias [ed.

    2005-03-01

    Swedish Nuclear Fuel and Waste Management Co is currently conducting site characterisation in the Simpevarp area. The area is divided into two subareas, the Simpevarp and the Laxemar subarea. The two subareas are surrounded by a common regional model area, the Simpevarp area. This report describes both the regional area and the subareas. This report is an interim version (model version 1.2) of the description of the surface systems at the Simpevarp area, and should be seen as a background report to the site description of the Simpevarp area, version 1.2, SKB-R--05-08. The basis for this description is quality-assured field data available in the SKB SICADA and GIS databases, together with generic data from the literature. The Surface system, here defined as everything above the bedrock, comprises a number of separate disciplines (e.g. hydrology, geology, topography, oceanography and ecology). Each discipline has developed descriptions and models for a number of properties that together represent the site description. The current methodology for developing the surface system description and the integration to ecosystem models is documented in a methodology strategy report SKB-R--03-06. The procedures and guidelines given in that report were followed in this report. Compared with version 1.1 of the surface system description SKB-R--04-25, this report presents considerable additional features, especially in the ecosystem description (Chapter 4) and in the description of the surface hydrology (Section 3.4). A first attempt has also been made to connect the flow of matter (carbon) between the different ecosystems into an overall ecosystem model at a landscape level. A summarised version of this report is also presented in SKB-R--05-08 together with geological-, hydrogeological-, transport properties-, thermal properties-, rock mechanics- and hydrogeochemical descriptions.

  7. Description of surface systems. Preliminary site description Simpevarp sub area - Version 1.2

    International Nuclear Information System (INIS)

    Lindborg, Tobias

    2005-03-01

    Swedish Nuclear Fuel and Waste Management Co is currently conducting site characterisation in the Simpevarp area. The area is divided into two subareas, the Simpevarp and the Laxemar subarea. The two subareas are surrounded by a common regional model area, the Simpevarp area. This report describes both the regional area and the subareas. This report is an interim version (model version 1.2) of the description of the surface systems at the Simpevarp area, and should be seen as a background report to the site description of the Simpevarp area, version 1.2, SKB-R--05-08. The basis for this description is quality-assured field data available in the SKB SICADA and GIS databases, together with generic data from the literature. The Surface system, here defined as everything above the bedrock, comprises a number of separate disciplines (e.g. hydrology, geology, topography, oceanography and ecology). Each discipline has developed descriptions and models for a number of properties that together represent the site description. The current methodology for developing the surface system description and the integration to ecosystem models is documented in a methodology strategy report SKB-R--03-06. The procedures and guidelines given in that report were followed in this report. Compared with version 1.1 of the surface system description SKB-R--04-25, this report presents considerable additional features, especially in the ecosystem description (Chapter 4) and in the description of the surface hydrology (Section 3.4). A first attempt has also been made to connect the flow of matter (carbon) between the different ecosystems into an overall ecosystem model at a landscape level. A summarised version of this report is also presented in SKB-R--05-08 together with geological-, hydrogeological-, transport properties-, thermal properties-, rock mechanics- and hydrogeochemical descriptions

  8. User's manual (UM) for the enhanced logistics intratheater support tool (ELIST) database utility segment version 8.1.0.0 for solaris 7.; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.

    2002-01-01

    This document is the User's Manual (UM) for the Enhanced Logistics Intratheater Support Tool (ELIST) Database Utility Segment. It tells how to use its features to administer ELIST database user accounts.

  9. Aerosol Robotic Network (AERONET) Version 3 Aerosol Optical Depth and Inversion Products

    Science.gov (United States)

    Giles, D. M.; Holben, B. N.; Eck, T. F.; Smirnov, A.; Sinyuk, A.; Schafer, J.; Sorokin, M. G.; Slutsker, I.

    2017-12-01

    The Aerosol Robotic Network (AERONET) surface-based aerosol optical depth (AOD) database has been a principal component of many Earth science remote sensing applications and modelling efforts for more than two decades. During this time, the AERONET AOD database had utilized a semiautomatic quality assurance approach (Smirnov et al., 2000). Data quality automation developed for AERONET Version 3 (V3) was achieved by augmenting and improving upon the combination of Version 2 (V2) automatic and manual procedures to provide a more refined near real time (NRT) and historical worldwide database of AOD. The combined effect of these new changes provides a historical V3 AOD Level 2.0 data set comparable to V2 Level 2.0 AOD. The recently released V3 Level 2.0 AOD product uses Level 1.5 data with automated cloud screening and quality controls and applies pre-field and post-field calibrations and wavelength-dependent temperature characterizations. For V3, the AERONET aerosol retrieval code inverts AOD and almucantar sky radiances using a full vector radiative transfer code called Successive ORDers of scattering (SORD; Korkin et al., 2017). The full vector code allows for potentially improving the real part of the complex index of refraction and the sphericity parameter, and for computing the radiation field in the UV (e.g., 380 nm) and the degree of linear depolarization. Effective lidar ratio and depolarization ratio products are also available with the V3 inversion release. Inputs to the inversion code were updated to accommodate H2O, O3 and NO2 absorption, consistent with the computation of V3 AOD. All of the inversion products are associated with estimated uncertainties that include the random error plus biases due to the uncertainty in measured AOD, absolute sky radiance calibration, and retrieved MODIS BRDF for snow-free and snow-covered surfaces. The V3 inversion products use the same data quality assurance criteria as V2 inversions (Holben et al., 2006). The entire AERONET V3
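
    Multi-wavelength AOD of the kind AERONET provides is commonly summarized by the Ångström exponent. A standard two-wavelength calculation (illustrative only, not AERONET's processing code; the example values are made up) is:

```python
import math

def angstrom_exponent(aod1, lam1, aod2, lam2):
    """Angstrom exponent from AOD at two wavelengths (same units):
    alpha = -ln(aod1/aod2) / ln(lam1/lam2)."""
    return -math.log(aod1 / aod2) / math.log(lam1 / lam2)

# Hypothetical measurements: AOD 0.30 at 440 nm, 0.15 at 870 nm
alpha = angstrom_exponent(0.30, 440.0, 0.15, 870.0)
print(round(alpha, 3))  # 1.017
```

    Larger exponents indicate stronger spectral dependence of AOD, typically associated with smaller particles.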

  10. Database Description - SSBD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name SSBD Alternative nam...ss 2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe 650-0047, Japan, RIKEN Quantitative Biology Center Shuichi Onami E-mail: Database... classification Other Molecular Biology Databases Database classification Dynamic databa...elegans Taxonomy ID: 6239 Taxonomy Name: Escherichia coli Taxonomy ID: 562 Database description Systems Scie...i Onami Journal: Bioinformatics/April, 2015/Volume 31, Issue 7 External Links: Original website information Database

  11. Database Description - GETDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name GETDB Alternative n...ame Gal4 Enhancer Trap Insertion Database DOI 10.18908/lsdba.nbdc00236-000 Creator Creator Name: Shigeo Haya... Chuo-ku, Kobe 650-0047 Tel: +81-78-306-3185 FAX: +81-78-306-3183 E-mail: Database classification Expression... Invertebrate genome database Organism Taxonomy Name: Drosophila melanogaster Taxonomy ID: 7227 Database des...riginal website information Database maintenance site Drosophila Genetic Resource

  12. JICST Factual Database: JICST Chemical Substance Safety Regulation Database

    Science.gov (United States)

    Abe, Atsushi; Sohma, Tohru

    JICST Chemical Substance Safety Regulation Database is based on the Database of Safety Laws for Chemical Compounds constructed by the Japan Chemical Industry Ecology-Toxicology & Information Center (JETOC), sponsored by the Science and Technology Agency, in 1987. JICST modified the JETOC database system, added data, and started the online service through JOIS-F (JICST Online Information Service-Factual database) in January 1990. The JICST database comprises eighty-three laws and fourteen hundred compounds. The authors outline the database, data items, files, and search commands. An example of an online session is presented.

  13. The Human Gene Mutation Database: building a comprehensive mutation repository for clinical and molecular genetics, diagnostic testing and personalized genomic medicine.

    Science.gov (United States)

    Stenson, Peter D; Mort, Matthew; Ball, Edward V; Shaw, Katy; Phillips, Andrew; Cooper, David N

    2014-01-01

    The Human Gene Mutation Database (HGMD®) is a comprehensive collection of germline mutations in nuclear genes that underlie, or are associated with, human inherited disease. By June 2013, the database contained over 141,000 different lesions detected in over 5,700 different genes, with new mutation entries currently accumulating at a rate exceeding 10,000 per annum. HGMD was originally established in 1996 for the scientific study of mutational mechanisms in human genes. However, it has since acquired a much broader utility as a central unified disease-oriented mutation repository utilized by human molecular geneticists, genome scientists, molecular biologists, clinicians and genetic counsellors as well as by those specializing in biopharmaceuticals, bioinformatics and personalized genomics. The public version of HGMD (http://www.hgmd.org) is freely available to registered users from academic institutions/non-profit organizations whilst the subscription version (HGMD Professional) is available to academic, clinical and commercial users under license via BIOBASE GmbH.

  14. The Unified Extensional Versioning Model

    DEFF Research Database (Denmark)

    Asklund, U.; Bendix, Lars Gotfred; Christensen, H. B.

    1999-01-01

    Versioning of components in a system is a well-researched field where various adequate techniques have already been established. In this paper, we look at how versioning can be extended to cover also the structural aspects of a system. There exist two basic techniques for versioning - intentional...

  15. PAGES-Powell North America 2k database

    Science.gov (United States)

    McKay, N.

    2014-12-01

    Syntheses of paleoclimate data in North America are essential for understanding long-term spatiotemporal variability in climate and for properly assessing risk on decadal and longer timescales. Existing reconstructions of the past 2,000 years rely almost exclusively on tree-ring records, which can underestimate low-frequency variability and rarely extend beyond the last millennium. Meanwhile, many records from the full spectrum of paleoclimate archives are available and hold the potential of enhancing our understanding of past climate across North America over the past 2000 years. The second phase of the Past Global Changes (PAGES) North America 2k project began in 2014, with a primary goal of assembling these disparate paleoclimate records into a unified database. This effort is currently supported by the USGS Powell Center together with PAGES. Its success requires grassroots support from the community of researchers developing and interpreting paleoclimatic evidence relevant to the past 2000 years. Most likely, fewer than half of the published records appropriate for this database are publicly archived, and far fewer include the data needed to quantify geochronologic uncertainty, or to concisely describe how best to interpret the data in context of a large-scale paleoclimatic synthesis. The current version of the database includes records that (1) have been published in a peer-reviewed journal (including evidence of the record's relationship to climate), (2) cover a substantial portion of the past 2000 yr (>300 yr for annual records, >500 yr for lower frequency records) at relatively high resolution (<50 yr/observation), and (3) have reasonably small and quantifiable age uncertainty. Presently, the database includes records from boreholes, ice cores, lake and marine sediments, speleothems, and tree rings. This poster presentation will display the site locations and basic metadata of the records currently in the database. We invite anyone with interest in
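
    The inclusion criteria listed above can be codified as a simple record filter; the field names below are hypothetical, chosen only to mirror the stated thresholds:

```python
def qualifies(record):
    """Apply the stated PAGES North America 2k inclusion criteria:
    peer-reviewed, resolution < 50 yr/observation, coverage > 300 yr
    for annual records or > 500 yr for lower-frequency records, and
    quantifiable age uncertainty."""
    coverage_min = 300 if record["resolution_yr"] <= 1 else 500
    return (record["peer_reviewed"]
            and record["coverage_yr"] > coverage_min
            and record["resolution_yr"] < 50
            and record["age_uncertainty_quantified"])

rec = {"peer_reviewed": True, "coverage_yr": 800,
       "resolution_yr": 5, "age_uncertainty_quantified": True}
print(qualifies(rec))  # True
```

    Making the criteria executable like this is one way a synthesis database can screen candidate records consistently as new submissions arrive.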

  16. Database Description - KAIKOcDNA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us KAIKOcDNA Database Description General information of database Database name KAIKOcDNA Alter...National Institute of Agrobiological Sciences Akiya Jouraku E-mail : Database cla...ssification Nucleotide Sequence Databases Organism Taxonomy Name: Bombyx mori Taxonomy ID: 7091 Database des...rnal: G3 (Bethesda) / 2013, Sep / vol.9 External Links: Original website information Database maintenance si...available URL of Web services - Need for user registration Not available About This Database Database

  17. Download - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Trypanosomes Database Download First of all, please read the license of this database. Data ...1.4 KB) Simple search and download Download via FTP FTP server is sometimes jammed. If it is, access [here]. About This Database Data...base Description Download License Update History of This Database Site Policy | Contact Us Download - Trypanosomes Database | LSDB Archive ...

  18. The Development of a Graphical User Interface Engine for the Convenient Use of the HL7 Version 2.x Interface Engine.

    Science.gov (United States)

    Kim, Hwa Sun; Cho, Hune; Lee, In Keun

    2011-12-01

    The Health Level Seven Interface Engine (HL7 IE), developed by Kyungpook National University, has been employed in health information systems; however, users without a background in programming have reported difficulties in using it. Therefore, we developed a graphical user interface (GUI) engine to make the use of the HL7 IE more convenient. The GUI engine was directly connected with the HL7 IE to handle HL7 version 2.x messages. Furthermore, the information exchange rules (called the mapping data), represented by a conceptual graph in the GUI engine, were transformed into program objects that were made available to the HL7 IE; the mapping data were stored as binary files for reuse. The usefulness of the GUI engine was examined through information exchange tests between an HL7 version 2.x message and a health information database system. Users could easily create HL7 version 2.x messages by creating a conceptual graph through the GUI engine, without requiring assistance from programmers. In addition, time could be saved when creating new information exchange rules by reusing the stored mapping data. The GUI engine was not able to incorporate information types (e.g., extensible markup language, XML) other than HL7 version 2.x messages and the database, because it was designed exclusively for the HL7 IE protocol. However, in future work, by including additional parsers to manage XML-based information such as Continuity of Care Documents (CCD) and Continuity of Care Records (CCR), we plan to ensure that the GUI engine will be more widely accessible to the health field.
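
    For readers unfamiliar with the format, HL7 version 2.x messages are carriage-return-separated segments with pipe-delimited fields. A minimal, simplified Python parser (independent of the HL7 IE itself, and ignoring MSH's special delimiter-field numbering) illustrates the structure such an engine manipulates:

```python
def parse_hl7(message):
    """Split an HL7 v2.x message into {segment_id: [field_lists]}.
    Segments are separated by carriage returns, fields by '|';
    repeated segments accumulate under the same segment id."""
    segments = {}
    for line in message.strip().split("\r"):
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields[1:])
    return segments

msg = ("MSH|^~\\&|SENDER|FAC|RECEIVER|FAC|202401011200||ADT^A01|123|P|2.5\r"
       "PID|1||PATID1234||DOE^JOHN")
parsed = parse_hl7(msg)
print(parsed["PID"][0][4])  # DOE^JOHN
```

    A real engine additionally handles component (`^`), repetition (`~`), and escape delimiters, which is part of what makes a GUI mapping layer valuable to non-programmers.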

  19. License - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Arabidopsis Phenome Database License License to Use This Database Last updated : 2017/02/27 You may use this database...cense specifies the license terms regarding the use of this database and the requirements you must follow in using this database.... The license for this database is specified in the Creative ...Commons Attribution-Share Alike 4.0 International . If you use data from this database, please be sure attribute this database...ative Commons Attribution-Share Alike 4.0 International is found here . With regard to this database, you ar

  20. MWIR-1995 DOE national mixed and TRU waste database users guide

    International Nuclear Information System (INIS)

    1995-11-01

    The Department of Energy (DOE) National 1995 Mixed Waste Inventory Report (MWIR-1995) Database Users Guide provides information on computer system requirements and describes installation, operation, and navigation through the database. The MWIR-1995 database contains a detailed, nationwide compilation of information on DOE mixed waste streams and treatment systems. In addition, the 1995 version includes data on non-mixed, transuranic (TRU) waste streams. These were added to the data set as a result of coordination of the 1995 update with the National Transuranic Program Office's (NTPO's) data needs to support the Waste Isolation Pilot Plant (WIPP) TRU Waste Baseline Inventory Report (WTWBIR). However, the information on the TRU waste streams is limited to that associated with the core mixed waste data requirements. The additional, non-core data on TRU streams collected specifically to support the WTWBIR is not included in the MWIR-1995 database. With respect to both the mixed and TRU waste stream data, the data set addresses "stored" streams. In this instance, "stored" streams are defined as (a) streams currently in storage at both EM-30 and EM-40 sites and (b) streams that have yet to be generated but are anticipated within the next five years from sources other than environmental restoration and decontamination and decommissioning (ER/D&D) activities. Information on future ER/D&D streams is maintained in the EM-40 core database. The MWIR-1995 database also contains limited information for both waste streams and treatment systems that have been removed or deleted since the 1994 MWIR. Data on these is maintained only through Section 2, Waste Stream Identification/Tracking/Source, to document the reason for removal from the data set.

  1. EVLncRNAs: a manually curated database for long non-coding RNAs validated by low-throughput experiments

    Science.gov (United States)

    Zhao, Huiying; Yu, Jiafeng; Guo, Chengang; Dou, Xianghua; Song, Feng; Hu, Guodong; Cao, Zanxia; Qu, Yuanxu

    2018-01-01

    Abstract Long non-coding RNAs (lncRNAs) play important functional roles in various biological processes. Early databases were utilized to deposit all lncRNA candidates produced by high-throughput experimental and/or computational techniques to facilitate classification, assessment and validation. As more lncRNAs were validated by low-throughput experiments, several databases were established for experimentally validated lncRNAs. However, these databases are small in scale (with only a few hundred lncRNAs each) and specific in their focuses (plants, diseases or interactions). Thus, it is highly desirable to have a comprehensive dataset for experimentally validated lncRNAs as a central repository for all of their structures, functions and phenotypes. Here, we established EVLncRNAs by curating lncRNAs validated by low-throughput experiments (up to 1 May 2016) and integrating specific databases (lncRNAdb, LncRNADisease, Lnc2Cancer and PLNlncRbase) with additional functional and disease-specific information not covered previously. The current version of EVLncRNAs contains 1543 lncRNAs from 77 species, 2.9 times more than the current largest database of experimentally validated lncRNAs. Seventy-four percent of the lncRNA entries are partially or completely new compared with all existing experimentally validated databases. The established database allows users to browse, search and download as well as submit experimentally validated lncRNAs. The database is available at http://biophy.dzu.edu.cn/EVLncRNAs. PMID:28985416

  2. HEROD: a human ethnic and regional specific omics database.

    Science.gov (United States)

    Zeng, Xian; Tao, Lin; Zhang, Peng; Qin, Chu; Chen, Shangying; He, Weidong; Tan, Ying; Xia Liu, Hong; Yang, Sheng Yong; Chen, Zhe; Jiang, Yu Yang; Chen, Yu Zong

    2017-10-15

    Genetic and gene expression variations within and between populations and across geographical regions have substantial effects on biological phenotypes, diseases, and therapeutic response. The development of precision medicines can be facilitated by OMICS studies of patients of specific ethnicity and geographic region. However, facilities for broadly and conveniently accessing ethnic- and region-specific OMICS data are inadequate. Here, we introduce a new free database, HEROD, a human ethnic and regional specific OMICS database. Its first version contains the gene expression data of 53 070 patients of 169 diseases in seven ethnic populations from 193 cities/regions in 49 nations, curated from the Gene Expression Omnibus (GEO), the ArrayExpress Archive of Functional Genomics Data (ArrayExpress), The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC). Geographic region information for the curated patients was mainly extracted manually from the referenced publications of each original study. These data can be accessed and downloaded via keyword search, world-map search, and menu-bar search of disease name, International Classification of Diseases code, geographical region, location of sample collection, ethnic population, gender, age, sample source organ, patient type (patient or healthy), sample type (disease or normal tissue) and assay type on the web interface. The HEROD database is freely accessible at http://bidd2.nus.edu.sg/herod/index.php. The database and web interface are implemented in MySQL, PHP and HTML with all major browsers supported. Contact: phacyz@nus.edu.sg. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
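The multi-facet search the record describes maps naturally onto relational queries over the MySQL backend it mentions. A sketch with an assumed, hypothetical miniature schema (SQLite stands in for MySQL; the table layout and rows are invented for illustration, not HEROD's actual schema):

```python
import sqlite3

# Hypothetical miniature of a faceted-search table; columns mirror the search
# facets listed in the abstract (disease, ICD code, region, ethnicity, ...).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE samples (
    disease TEXT, icd_code TEXT, region TEXT, ethnicity TEXT,
    gender TEXT, age INTEGER, organ TEXT, sample_type TEXT)""")
conn.executemany(
    "INSERT INTO samples VALUES (?,?,?,?,?,?,?,?)",
    [("gastric cancer", "C16", "East Asia", "Han", "M", 61, "stomach", "disease"),
     ("gastric cancer", "C16", "East Asia", "Han", "F", 58, "stomach", "normal"),
     ("breast cancer", "C50", "Europe", "Caucasian", "F", 49, "breast", "disease")])

# Facet query combining disease name, region and sample type,
# the kind of filter the web interface's menu-bar search would issue.
rows = conn.execute(
    "SELECT gender, age FROM samples "
    "WHERE disease = ? AND region = ? AND sample_type = ?",
    ("gastric cancer", "East Asia", "disease")).fetchall()
print(rows)  # [('M', 61)]
```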

  3. CO2 line-mixing database and software update and its tests in the 2.1 μm and 4.3 μm regions

    International Nuclear Information System (INIS)

    Lamouroux, J.; Régalia, L.; Thomas, X.; Vander Auwera, J.; Gamache, R.R.; Hartmann, J.-M.

    2015-01-01

    An update of the former version of the database and software for the calculation of CO2–air absorption coefficients taking line-mixing into account [Lamouroux et al. J Quant Spectrosc Radiat Transf 2010;111:2321] is described. In this new edition, the data sets were constructed using parameters from the 2012 version of the HITRAN database and recent measurements of line-shape parameters. Among other improvements, speed-dependent profiles can now be used if line-mixing is treated within the first-order approximation. This new package is tested using laboratory spectra measured in the 2.1 μm and 4.3 μm spectral regions for various pressures, temperatures and CO2 concentration conditions. Despite improvements at 4.3 μm at room temperature, the conclusions on the quality of this update are more ambiguous at low temperature and in the 2.1 μm region. Further tests using laboratory and atmospheric spectra are thus required for the evaluation of the performance of this updated package. - Highlights: • High resolution infrared spectroscopy. • CO2 in air. • Updated tools. • Line-mixing database and software
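In the first-order (Rosenkranz) approximation mentioned above, each line k contributes a Lorentzian plus an asymmetric coupling term weighted by a mixing coefficient Y_k. A minimal numerical sketch of that profile (the line positions, intensities, widths and Y values below are invented for illustration, not actual CO2 parameters):

```python
import math

def absorption_first_order(sigma, lines):
    """First-order (Rosenkranz) line-mixing absorption profile.

    lines: iterable of (sigma_k, S_k, gamma_k, Y_k) giving line position
    (cm-1), intensity, Lorentz half-width (cm-1) and first-order mixing
    coefficient. Setting Y_k = 0 recovers a pure sum of Lorentzians.
    """
    total = 0.0
    for s_k, S_k, g_k, Y_k in lines:
        d = sigma - s_k
        total += (S_k / math.pi) * (g_k + Y_k * d) / (d * d + g_k * g_k)
    return total

# Two closely spaced, illustrative lines with opposite mixing coefficients.
lines = [(2300.0, 1.0, 0.07, 0.02), (2300.5, 0.8, 0.07, -0.02)]
a_mix = absorption_first_order(2300.2, lines)
a_lor = absorption_first_order(2300.2, [(s, S, g, 0.0) for s, S, g, _ in lines])
print(a_mix, a_lor)  # mixing shifts intensity relative to the pure Lorentzian sum
```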

  4. Database Description - AcEST | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information of database: Database name: AcEST. Contact: Tokyo-to 192-0397, Tel: +81-42-677-1111 (ext. 3654). Organism Taxonomy Name: Adiantum capillus-veneris, Taxonomy ID: 13818. Database description: This is a database of EST sequences of Adiantum capillus-veneris. Database maintenance site: Plant Environmental Res...

  5. License - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available SKIP Stemcell Database License: License to Use This Database. Last updated: 2017/03/13. This license specifies the terms regarding the use of this database and the requirements you must follow in using it. The license for this database is Creative Commons Attribution-Share Alike 4.0 International. If you use data from this database, please be sure to attribute this database. The summary of the Creative Commons Attribution-Share Alike 4.0 International license is found here.

  6. KALIMER database development

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    The KALIMER database is an advanced, Web-based database for the integrated management of liquid metal reactor design technology development. It is composed of a results database, an Inter-Office Communication (IOC) system, a 3D CAD database, and a reserved documents database. The results database stores research results from all phases of liquid metal reactor design technology development under the mid- and long-term nuclear R&D program. IOC is a linkage system between sub-projects for sharing and integrating the research results for KALIMER. The 3D CAD database gives a schematic overview of the KALIMER design structure. The reserved documents database was developed to manage the documents and reports produced over the course of the project.

  7. KALIMER database development

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    The KALIMER database is an advanced, Web-based database for the integrated management of liquid metal reactor design technology development. It is composed of a results database, an Inter-Office Communication (IOC) system, a 3D CAD database, and a reserved documents database. The results database stores research results from all phases of liquid metal reactor design technology development under the mid- and long-term nuclear R&D program. IOC is a linkage system between sub-projects for sharing and integrating the research results for KALIMER. The 3D CAD database gives a schematic overview of the KALIMER design structure. The reserved documents database was developed to manage the documents and reports produced over the course of the project.

  8. Database Description - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information of database: Database name: RPSD. Alternative name: Rice Protein Structure Database. DOI: 10.18908/lsdba.nbdc00749-000. Creator: Toshimasa Yamazaki, National Institute of Agrobiological Sciences, Ibaraki 305-8602, Japan. Database classification: Structure Databases - Protein structure. Organism Taxonomy Name: Or... Database maintenance site: National Institu...

  9. Database Description - FANTOM5 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available FANTOM5 Database Description. General information of database: Database name: FANTOM5. Organism: Taxonomy Name: Rattus norvegicus, Taxonomy ID: 10116; Taxonomy Name: Macaca mulatta, Taxonomy ID: 9544. Database maintenance site: RIKEN Center for Life Science Technologies. Web services: not available. Need for user registration: not required.

  10. NoSQL databases

    OpenAIRE

    Mrozek, Jakub

    2012-01-01

    This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...
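The contrast the thesis draws between the relational model and NoSQL data models can be made concrete: the same record stored relationally (fixed columns, SQL) and as a schemaless document, the model used by many NoSQL document stores (the example data is invented):

```python
import json
import sqlite3

# Relational model: columns declared up front, queried with SQL.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
db.execute("INSERT INTO users VALUES (1, 'Alice', 'alice@example.org')")
row = db.execute("SELECT name FROM users WHERE id = 1").fetchone()

# Document model (as in many NoSQL stores): schemaless and nested,
# so each record can carry fields the others lack.
doc = {"_id": 1, "name": "Alice", "email": "alice@example.org",
       "logins": [{"when": "2012-01-03", "ip": "10.0.0.7"}]}

# A document store would persist the structure as-is, e.g. serialized as JSON.
print(row[0], json.loads(json.dumps(doc))["logins"][0]["ip"])
```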

  11. DAMT - DISTRIBUTED APPLICATION MONITOR TOOL (HP9000 VERSION)

    Science.gov (United States)

    Keith, B.

    1994-01-01

    Typical network monitors measure the status of host computers and data traffic among hosts. A monitor that collects statistics about individual processes must be unobtrusive and able to locate and monitor processes, locate and monitor circuits between processes, and report traffic back to the user through a single application program interface (API). DAMT, the Distributed Application Monitor Tool, is a distributed application program that collects network statistics and makes them available to the user. This distributed application has one component (i.e., process) on each host the user wishes to monitor, as well as a set of components at a centralized location. DAMT provides the first known implementation of a network monitor at the application layer of abstraction. Users need only know the process names of the distributed application they wish to monitor. The tool locates the processes and the circuit between them, and reports any traffic between them at a user-defined rate. The tool operates without the cooperation of the processes it monitors: application processes require no changes to be monitored, nor does DAMT require the UNIX kernel to be recompiled. The tool obtains process and circuit information by accessing the operating system's existing process database, which contains all information available about currently executing processes. The information monitored by the tool can be expanded by utilizing more information from the process database. Traffic on a circuit between processes is monitored by a low-level LAN analyzer that has access to the raw network data. The tool also provides features such as dynamic event reporting and virtual path routing. A reusable object approach was used in the design of DAMT. The tool has four main components: the Virtual Path Switcher, the Central Monitor Complex, the Remote Monitor, and the LAN Analyzer.
All of DAMT's components are independent, asynchronously executing
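On a modern UNIX-like system, the "operating system's existing process database" that DAMT reads corresponds to interfaces such as Linux's /proc filesystem. A rough present-day analogue of DAMT's process lookup (DAMT itself ran on HP9000 hardware against the HP-UX process table, not /proc):

```python
import os

def find_processes(name_fragment):
    """Scan /proc (Linux) for processes whose command name contains the fragment.

    This is only an analogue of DAMT's lookup: it reads the kernel's process
    database without any cooperation from the monitored processes.
    """
    matches = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():          # only numeric entries are PIDs
            continue
        try:
            with open(f"/proc/{entry}/comm") as f:
                comm = f.read().strip()
        except OSError:                  # process may have exited mid-scan
            continue
        if name_fragment in comm:
            matches.append((int(entry), comm))
    return matches

# The current interpreter should appear in its own scan.
print(find_processes("")[:3])
```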

  12. Pea Marker Database (PMD) - A new online database combining known pea (Pisum sativum L.) gene-based markers.

    Science.gov (United States)

    Kulaeva, Olga A; Zhernakov, Aleksandr I; Afonin, Alexey M; Boikov, Sergei S; Sulima, Anton S; Tikhonovich, Igor A; Zhukov, Vladimir A

    2017-01-01

    Pea (Pisum sativum L.) is the oldest model object of plant genetics and one of the most agriculturally important legumes in the world. Since the pea genome has not yet been sequenced, identification of genes responsible for mutant phenotypes or desirable agricultural traits is usually performed via genetic mapping followed by a candidate gene search. Such mapping is best carried out using gene-based molecular markers, as this opens the possibility of exploiting genome synteny between pea and its close relative Medicago truncatula Gaertn., which possesses a sequenced and annotated genome. In the last 5 years, a large number of pea gene-based molecular markers have been designed and mapped owing to the rapid evolution of "next-generation sequencing" technologies. However, access to the complete set of markers designed worldwide is limited because the data are not uniform and therefore hard to use. The Pea Marker Database was designed to combine the information about pea markers in the form of a user-friendly and practical online tool. Version 1 (PMD1) comprises information about 2484 genic markers, including their locations in linkage groups, the sequences of the corresponding pea transcripts and the names of the related genes in M. truncatula. Version 2 (PMD2) is an updated version comprising 15944 pea markers in the same format with several advanced features. To test the performance of the PMD, fine mapping of the pea symbiotic genes Sym13 and Sym27 in linkage groups VII and V, respectively, was carried out. The results of mapping allowed us to propose the Sen1 gene (a homologue of the SEN1 gene of Lotus japonicus (Regel) K. Larsen) as the best candidate gene for Sym13, and to narrow the list of possible candidate genes for Sym27 to ten, thus proving PMD to be useful for pea gene mapping and cloning. All information contained in PMD1 and PMD2 is available at www.peamarker.arriam.ru.

  13. QMM – A Quarterly Macroeconomic Model of the Icelandic Economy. Version 2.0

    DEFF Research Database (Denmark)

    Ólafsson, Tjörvi

    This paper documents and describes Version 2.0 of the Quarterly Macroeconomic Model of the Central Bank of Iceland (QMM). QMM and the underlying quarterly database have been under construction since 2001 at the Research and Forecasting Division of the Economics Department at the Bank, and QMM was first implemented in the forecasting round for the Monetary Bulletin 2006/1 in March 2006. QMM is used by the Bank for forecasting and various policy simulations and therefore plays a key role as an organisational framework for viewing the medium-term future when formulating monetary policy at the Bank. This paper...

  14. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (UNIX VERSION)

    Science.gov (United States)

    Donnell, B.

    1994-01-01

    COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs, including the ability to generate stand-alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous versions of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases. COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler.
Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the

  15. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (MACINTOSH VERSION)

    Science.gov (United States)

    Riley, G.

    1994-01-01

    COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs, including the ability to generate stand-alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous versions of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases. COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler.
Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the

  16. NETS - A NEURAL NETWORK DEVELOPMENT TOOL, VERSION 3.0 (MACINTOSH VERSION)

    Science.gov (United States)

    Phillips, T. A.

    1994-01-01

    allows the user to generate C code to implement the network loaded into the system. This permits the placement of networks as components, or subroutines, in other systems. In short, once a network performs satisfactorily, the Generate C Code option provides the means for creating a program separate from NETS to run the network. Other features: files may be stored in binary or ASCII format; multiple input propagation is permitted; bias values may be included; capability to scale data without writing scaling code; quick interactive testing of network from the main menu; and several options that allow the user to manipulate learning efficiency. NETS is written in ANSI standard C language to be machine independent. The Macintosh version (MSC-22108) includes code for both a graphical user interface version and a command line interface version. The machine independent version (MSC-21588) only includes code for the command line interface version of NETS 3.0. The Macintosh version requires a Macintosh II series computer and has been successfully implemented under System 7. Four executables are included on these diskettes, two for floating point operations and two for integer arithmetic. It requires Think C 5.0 to compile. A minimum of 1Mb of RAM is required for execution. Sample input files and executables for both the command line version and the Macintosh user interface version are provided on the distribution medium. The Macintosh version is available on a set of three 3.5 inch 800K Macintosh format diskettes. The machine independent version has been successfully implemented on an IBM PC series compatible running MS-DOS, a DEC VAX running VMS, a SunIPC running SunOS, and a CRAY Y-MP running UNICOS. Two executables for the IBM PC version are included on the MS-DOS distribution media, one compiled for floating point operations and one for integer arithmetic. The machine independent version is available on a set of three 5.25 inch 360K MS-DOS format diskettes (standard
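The C code NETS generates for a trained network implements an ordinary feedforward pass. A compact sketch of that computation in Python (the layer sizes, weights and biases below are invented for illustration, not NETS output):

```python
import math

def forward(x, weights, biases):
    """One-hidden-layer feedforward pass with sigmoid units, i.e. the
    computation a generated stand-alone network routine would perform."""
    def layer(inp, W, b):
        return [1.0 / (1.0 + math.exp(-(sum(w * v for w, v in zip(row, inp)) + bb)))
                for row, bb in zip(W, b)]
    hidden = layer(x, weights[0], biases[0])
    return layer(hidden, weights[1], biases[1])

W = [[[0.5, -0.2], [0.1, 0.8]],   # input -> hidden (2 units, 2 inputs each)
     [[1.0, -1.0]]]               # hidden -> output (1 unit, 2 inputs)
b = [[0.0, 0.1], [0.2]]
out = forward([0.3, 0.9], W, b)
print(out)  # single sigmoid output, strictly between 0 and 1
```

Scaling inputs before this pass (and un-scaling outputs after) corresponds to the "scale data without writing scaling code" feature the record mentions.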

  17. Conversion and distribution of bibliographic information for further use on microcomputers with database software such as CDS/ISIS

    International Nuclear Information System (INIS)

    Nieuwenhuysen, P.; Besemer, H.

    1990-05-01

    This paper describes methods of working on microcomputers with data obtained from bibliographic and related databases distributed by online data banks, on CD-ROM or on tape. We also mention some user reactions to this technique, and list the different types of software needed to perform these services. Afterwards, we report on our development of software to convert data so that they can be entered into UNESCO's CDS/ISIS program (Version 2.3) for local database management on IBM microcomputers or compatibles; this software preserves the structure of the source data in records, fields, subfields and field occurrences. (author). 10 refs, 1 fig
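Preserving records, fields, subfields and occurrences through conversion is the crux of such software. A sketch of parsing one tagged field line into CDS/ISIS-style subfields (CDS/ISIS really does mark subfields with '^'; the field tag and sample values are invented, and real source formats vary by data bank):

```python
def parse_field(line):
    """Split 'TAG ^a...^b...' into (tag, {subfield_code: value}).

    CDS/ISIS uses '^' followed by a one-character code as the subfield
    delimiter; the sample record below is illustrative only.
    """
    tag, _, body = line.partition(" ")
    subfields = {}
    for chunk in body.split("^")[1:]:   # text before the first '^' is ignored
        subfields[chunk[0]] = chunk[1:]
    return tag, subfields

tag, subs = parse_field("245 ^aNuclear data services^bA review")
print(tag, subs)  # 245 {'a': 'Nuclear data services', 'b': 'A review'}
```

Repeating the same tag for another line would model a repeated field occurrence, which a converter must keep distinct rather than merge.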

  18. User Manual for the NASA Glenn Ice Accretion Code LEWICE: Version 2.0

    Science.gov (United States)

    Wright, William B.

    1999-01-01

    A research project is underway at NASA Glenn to produce a computer code which can accurately predict ice growth under a wide range of meteorological conditions for any aircraft surface. This report presents a description of the code inputs and outputs from version 2.0 of this code, which is called LEWICE. This version differs from previous releases in its robustness and its ability to reproduce results accurately for different spacing and time step criteria across computing platforms. It also differs in the extensive effort undertaken to compare the results against the database of ice shapes generated in the NASA Glenn Icing Research Tunnel (IRT). This report describes only the features of the code related to the use of the program; it does not describe the inner workings of the code or the physical models used. That information is available in the form of several unpublished documents which will be collectively referred to as the Programmers Manual for LEWICE in this report. This report is intended as an update/replacement for all previous user manuals of LEWICE. In addition to describing the changes and improvements made for this version, information from previous manuals may be duplicated so that the user will not need to consult previous manuals to use this code.

  19. Preliminary site description Forsmark area - version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-03-01

    This report presents the interim version (model version 1.1) of the preliminary Site Descriptive Model for Forsmark. The basis for this interim version is quality-assured, geoscientific and ecological field data from Forsmark that were available in the SKB databases SICADA and GIS at April 30, 2003 as well as version 0 of the Site Descriptive Model. The new data acquired during the initial site investigation phase to the date of data freeze 1.1 constitute the basis for the updating of version 0 to version 1.1. These data originate from surface investigations on the candidate area with its regional environment and from drilling and investigations in boreholes. The surface-based data sets were rather extensive whereas the data sets from boreholes were limited to information from one 1,000 m deep cored borehole (KFM01A) and eight 150 to 200 m deep percussion-drilled boreholes in the Forsmark candidate area. Discipline specific models are developed for a selected regional and local model volume and these are then integrated into a site description. The current methodologies for developing the discipline specific models and the integration of these are documented in methodology reports or strategy reports. In the present work, the guidelines given in those reports were followed to the extent possible with the data and information available at the time for data freeze for model version 1.1. Compared with version 0 there are considerable additional features in the version 1.1, especially in the geological description and in the description of the near surface. The geological models of lithology and deformation zones are based on borehole information and much higher resolution surface data. The existence of highly fractured sub-horizontal zones has been verified and these are now part of the model of the deformation zones. A discrete fracture network (DFN) model has also been developed. The rock mechanics model is based on strength information from SFR and an empirical

  20. Database Description - DMPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information of database: Database name: DMPD. Alternative name: Dynamic Macrophage Pathway CSML Database. DOI: 10.18908/lsdba.nbdc00558-000. Creator: Masao Naga..., University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639, Tel: +81-3-5449-5615, FAX: +83-3-5449-5442. Organism: Taxonomy ID: 9606; Taxonomy Name: Mammalia, Taxonomy ID: 40674. Database description: DMPD collects...

  1. Database Dump - fRNAdb | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available fRNAdb Database Dump. Data name: Database Dump. DOI: 10.18908/lsdba.nbdc00452-002. Description: dump data (tab-separated text). Data file: File name: Database_Dump; File URL: ftp://ftp.biosciencedbc.jp/archive/frnadb/LATEST/Database_Dump; File size: 673 MB. Number of data entries: 4 files.
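A dump distributed as tab-separated text can be consumed with the standard csv module. The column names and rows below are invented for illustration; the actual layout of the fRNAdb Database_Dump files is documented in the archive itself:

```python
import csv
import io

# Illustrative three-column slice of a tab-separated dump (not the real
# fRNAdb column layout). A real consumer would open the downloaded file.
dump = ("id\tsymbol\tlength\n"
        "FR000001\tmiR-21\t72\n"
        "FR000002\tXIST\t19275\n")

rows = list(csv.DictReader(io.StringIO(dump), delimiter="\t"))
# Example filter: keep entries longer than 200 nt (a common lncRNA cutoff).
long_rnas = [r["symbol"] for r in rows if int(r["length"]) > 200]
print(long_rnas)  # ['XIST']
```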

  2. The NEA International FEP Database and its use in support of regulatory review

    International Nuclear Information System (INIS)

    Sumerling, T.

    1999-01-01

    A working group of the Nuclear Energy Agency has developed a database of features, events and processes (FEPs) relevant to the assessment of the long-term safety of radioactive waste disposal facilities. The Environment Agency participated in this work as described in a previous report, R&D Technical Report P97. This report describes work done in order to (i) provide an electronic version of the NEA International FEP Database in a convenient form suited to the Agency's needs; and (ii) determine procedures for use of the Database in support of the Agency's review of an applicant's safety case for solid radioactive waste disposal, and in other appropriate Agency activities. Section 1 of the report outlines the objectives and work done. Section 2 gives an overview of the current status, development and international use of the NEA FEP Database. Alternative uses of the Database by the Agency, and procedures for use, are discussed in Section 3. Two alternative procedures for use of the Database in scientific and technical review of an applicant's safety case are outlined and compared; these provide a framework for the orderly identification and discussion of technical issues within the review. It is concluded that the way in which the Database is used will depend on the circumstances and also on the aims and preferences of the Agency. Detailed procedures for the use of the Database are best defined for the specific circumstances, taking account of the level of information available from the applicant, and the time and resources which the Agency may wish to devote to a given phase of review. The NEA FEP Database has been developed as a tool to assist in performing or reviewing safety assessments of radioactive waste disposal facilities. The principle of the Database, and also the software framework, may be equally applicable to other technical or scientific assessments, e.g. of landfill facilities or river catchment pollution studies.
Since the Database is now available to the Agency

  3. SPECTR-W3 online database on atomic properties of atoms and ions

    International Nuclear Information System (INIS)

    Faenov, A.Ya.; Magunov, A.I.; Pikuz, T.A.; Skobelev, I.Yu.; Loboda, P.A.; Bakshayev, N.N.; Gagarin, S.V.; Komosko, V.V.; Kuznetsov, K.S.; Markelenkov, S.A.; Petunin, S.A.; Popova, V.V.

    2002-01-01

Recent progress in the novel information technologies based on the World-Wide Web (WWW) gives a new possibility for a worldwide exchange of atomic spectral and collisional data. This facilitates joint efforts of the international scientific community in basic and applied research, promising technological developments, and university education programs. Special-purpose atomic databases (ADBs) are needed for an effective employment of large-scale datasets. The ADB SPECTR developed at MISDC of VNIIFTRI has been used during the last decade in several laboratories in the world, including RFNC-VNIITF. The DB SPECTR accumulates a considerable amount of atomic data (about 500,000 records). These data were extracted from publications on experimental and theoretical studies in atomic physics, astrophysics, and plasma spectroscopy during the last few decades. The information for atoms and ions comprises the ionization potentials, the energy levels, the wavelengths and transition probabilities, and, to a lesser extent, also the autoionization rates and the electron-ion collision cross-sections and rates. The data are supplied with source references and comments elucidating the details of computations or measurements. Our goal is to create an interactive WWW information resource based on the extended and updated Web-oriented database version SPECTR-W3 and its further integration into the family of specialized atomic databases on the Internet. The version will incorporate novel experimental and theoretical data. An appropriate revision of the previously accumulated data will be performed from the viewpoint of their consistency with the current state of the art. We are particularly interested in cooperation for storing the atomic collision data. Presently, a software shell with an up-to-date Web interface is being developed to work with the SPECTR-W3 database. The shell would include the subsystems of information retrieval, input, update, and output in/from the database and

  4. Spectr-W3 Online Database On Atomic Properties Of Atoms And Ions

    Science.gov (United States)

    Faenov, A. Ya.; Magunov, A. I.; Pikuz, T. A.; Skobelev, I. Yu.; Loboda, P. A.; Bakshayev, N. N.; Gagarin, S. V.; Komosko, V. V.; Kuznetsov, K. S.; Markelenkov, S. A.

    2002-10-01

Recent progress in the novel information technologies based on the World-Wide Web (WWW) gives a new possibility for a worldwide exchange of atomic spectral and collisional data. This facilitates joint efforts of the international scientific community in basic and applied research, promising technological developments, and university education programs. Special-purpose atomic databases (ADBs) are needed for an effective employment of large-scale datasets. The ADB SPECTR developed at MISDC of VNIIFTRI has been used during the last decade in several laboratories in the world, including RFNC-VNIITF. The DB SPECTR accumulates a considerable amount of atomic data (about 500,000 records). These data were extracted from publications on experimental and theoretical studies in atomic physics, astrophysics, and plasma spectroscopy during the last few decades. The information for atoms and ions comprises the ionization potentials, the energy levels, the wavelengths and transition probabilities, and, to a lesser extent, also the autoionization rates and the electron-ion collision cross-sections and rates. The data are supplied with source references and comments elucidating the details of computations or measurements. Our goal is to create an interactive WWW information resource based on the extended and updated Web-oriented database version SPECTR-W3 and its further integration into the family of specialized atomic databases on the Internet. The version will incorporate novel experimental and theoretical data. An appropriate revision of the previously accumulated data will be performed from the viewpoint of their consistency with the current state of the art. We are particularly interested in cooperation for storing the atomic collision data. Presently, a software shell with an up-to-date Web interface is being developed to work with the SPECTR-W3 database. The shell would include the subsystems of information retrieval, input, update, and output in/from the database and

  5. Cross-cultural adaptation of an environmental health measurement instrument: Brazilian version of the health-care waste management • rapid assessment tool

    Directory of Open Access Journals (Sweden)

    Eliana Napoleão Cozendey-Silva

    2016-09-01

Full Text Available Abstract. Background: Periodic assessment is one of the recommendations for improving health-care waste management worldwide. This study aimed at translating and adapting the Health-Care Waste Management - Rapid Assessment Tool (HCWM-RAT), proposed by the World Health Organization, to a Brazilian Portuguese version, and resolving its cultural and legal issues. The work focused on the evaluation of the concept, item and semantic equivalence between the original tool and the Brazilian Portuguese version. Methods: A cross-cultural adaptation methodology was used, including: initial translation to Brazilian Portuguese; back translation to English; synthesis of these translation versions; formation of an expert committee to achieve consensus about the preliminary version; and evaluation of the target audience's comprehension. Results: Both the translated and the original versions' concept, item and semantic equivalence are presented. The constructs in the original instrument were considered relevant and applicable to the Brazilian context. The Brazilian version of the tool has the potential to generate indicators, develop an official database, provide feedback and subsidize political decisions at many geographical and organizational levels, strengthening the monitoring and evaluation (M&E) mechanism. Moreover, the cross-cultural translation expands the usefulness of the instrument to Portuguese-speaking countries in developing regions. Conclusion: The translated and original versions presented concept, item and semantic equivalence and can be applied to Brazil.

  6. DOT Online Database

    Science.gov (United States)

Document Database Website provided by MicroSearch, giving full-text Web search access to databases of Advisory Circulars and related records, including data collection and distribution policies.

  7. TEJAS - TELEROBOTICS/EVA JOINT ANALYSIS SYSTEM VERSION 1.0

    Science.gov (United States)

    Drews, M. L.

    1994-01-01

    The primary objective of space telerobotics as a research discipline is the augmentation and/or support of extravehicular activity (EVA) with telerobotic activity; this allows increased emplacement of on-orbit assets while providing for their "in situ" management. Development of the requisite telerobot work system requires a well-understood correspondence between EVA and telerobotics that to date has been only partially established. The Telerobotics/EVA Joint Analysis Systems (TEJAS) hypermedia information system uses object-oriented programming to bridge the gap between crew-EVA and telerobotics activities. TEJAS Version 1.0 contains twenty HyperCard stacks that use a visual, customizable interface of icon buttons, pop-up menus, and relational commands to store, link, and standardize related information about the primitives, technologies, tasks, assumptions, and open issues involved in space telerobot or crew EVA tasks. These stacks are meant to be interactive and can be used with any database system running on a Macintosh, including spreadsheets, relational databases, word-processed documents, and hypermedia utilities. The software provides a means for managing volumes of data and for communicating complex ideas, relationships, and processes inherent to task planning. The stack system contains 3MB of data and utilities to aid referencing, discussion, communication, and analysis within the EVA and telerobotics communities. The six baseline analysis stacks (EVATasks, EVAAssume, EVAIssues, TeleTasks, TeleAssume, and TeleIssues) work interactively to manage and relate basic information which you enter about the crew-EVA and telerobot tasks you wish to analyze in depth. Analysis stacks draw on information in the Reference stacks as part of a rapid point-and-click utility for building scripts of specific task primitives or for any EVA or telerobotics task. Any or all of these stacks can be completely incorporated within other hypermedia applications, or they can be

  8. Establishment of nuclear knowledge and information infrastructure; establishment of web-based database system for nuclear events

    Energy Technology Data Exchange (ETDEWEB)

    Park, W. J.; Kim, K. J. [Korea Atomic Energy Research Institute , Taejeon (Korea); Lee, S. H. [Korea Institute of Nuclear Safety, Taejeon (Korea)

    2001-05-01

Nuclear event data reported by nuclear power plants are useful for preventing nuclear accidents at the power plant, by examining the causes of initiating events and removing weak points in operational safety, and for improving nuclear safety in the design and operation stages by backfitting operational experiences and practices. The 'Nuclear Event Evaluation Database (NEED)' system, previously distributed on CD-ROM media, has been upgraded to the NEED-Web (Web-based Nuclear Event Evaluation Database) version to manage event data in a network-based database system; the event data and statistics are provided to authorized users through the Nuclear Portal Site and to the public through Internet Web services. The efforts to establish the NEED-Web system will improve the integrity of event data from Korean nuclear power plants and the usability of data services, and enhance confidence building and transparency to the public in nuclear safety. 11 refs., 27 figs. (Author)

  9. PharmMapper 2017 update: a web server for potential drug target identification with a comprehensive target pharmacophore database.

    Science.gov (United States)

    Wang, Xia; Shen, Yihang; Wang, Shiwei; Li, Shiliang; Zhang, Weilin; Liu, Xiaofeng; Lai, Luhua; Pei, Jianfeng; Li, Honglin

    2017-07-03

The PharmMapper online tool is a web server for potential drug target identification by reverse pharmacophore matching of the query compound against an in-house pharmacophore model database. The original version of PharmMapper includes more than 7000 target pharmacophores derived from complex crystal structures with corresponding protein target annotations. In this article, we present a new version of the PharmMapper web server, of which the backend pharmacophore database is six times larger than the earlier one, with a total of 23 236 proteins covering 16 159 druggable pharmacophore models and 51 431 ligandable pharmacophore models. The expanded target data cover 450 indications and 4800 molecular functions compared to 110 indications and 349 molecular functions in our last update. In addition, the new web server provides a statistically meaningful ranking of the identified drug targets, achieved through the use of standard scores. It also features an improved user interface. The proposed web server is freely available at http://lilab.ecust.edu.cn/pharmmapper/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
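The abstract states that target hits are ranked via standard scores. As an illustration only (this is not PharmMapper's actual implementation, and the target names and raw fit values below are hypothetical), raw pharmacophore fit scores can be converted to z-scores so that hits from different pharmacophore models become comparable on one scale:

```python
from statistics import mean, stdev

def standard_scores(raw):
    """Convert raw fit scores into z-scores: (score - mean) / stdev.
    Targets fitted against different models then share one scale."""
    mu, sigma = mean(raw.values()), stdev(raw.values())
    return {target: (s - mu) / sigma for target, s in raw.items()}

# Hypothetical raw fit scores for three candidate targets.
raw = {"targetA": 4.2, "targetB": 2.1, "targetC": 3.0}
z = standard_scores(raw)
ranked = sorted(z, key=z.get, reverse=True)   # best-scoring target first
```

Ranking by z-score preserves the raw order within a single model but makes the magnitude of each hit interpretable relative to the whole score distribution.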

  10. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  11. HANFORD TANK WASTE OPERATIONS SIMULATOR VERSION DESCRIPTION DOCUMENT

    International Nuclear Information System (INIS)

    ALLEN, G.K.

    2003-01-01

This document describes the software version controls established for the Hanford Tank Waste Operations Simulator (HTWOS). It defines: the methods employed to control the configuration of HTWOS; the version of each of the 26 separate modules for version 1.0 of HTWOS; the numbering rules for incrementing the version number of each module; and a requirement to include module version numbers in the documentation of each case's results. Version 1.0 of HTWOS is the first version under formal software version control. HTWOS carries separate revision numbers for each of its 26 modules. Individual module version numbers do not reflect the version number of the configured major HTWOS release.

  12. Simpevarp - site descriptive model version 0

    International Nuclear Information System (INIS)

    2002-11-01

    During 2002, SKB is starting detailed investigations at two potential sites for a deep repository in the Precambrian rocks of the Fennoscandian Shield. The present report concerns one of those sites, Simpevarp, which lies in the municipality of Oskarshamn, on the southeast coast of Sweden, about 250 kilometres south of Stockholm. The site description will have two main components: a written synthesis of the site, summarising the current state of knowledge, as documented in the databases containing the primary data from the site investigations, and one or several site descriptive models, in which the collected information is interpreted and presented in a form which can be used in numerical models for rock engineering, environmental impact and long-term safety assessments. SKB maintains two main databases at the present time, a site characterisation database called SICADA and a geographic information system called SKB GIS. The site descriptive model will be developed and presented with the aid of the SKB GIS capabilities, and with SKBs Rock Visualisation System (RVS), which is also linked to SICADA. The version 0 model forms an important framework for subsequent model versions, which are developed successively, as new information from the site investigations becomes available. Version 0 is developed out of the information available at the start of the site investigation. In the case of Simpevarp, this is essentially the information which was compiled for the Oskarshamn feasibility study, which led to the choice of that area as a favourable object for further study, together with information collected since its completion. This information, with the exception of the extensive data base from the nearby Aespoe Hard Rock Laboratory, is mainly 2D in nature (surface data), and is general and regional, rather than site-specific, in content. 
Against this background, the present report consists of the following components: an overview of the present content of the databases

  13. Preliminary site description Forsmark area - version 1.1

    International Nuclear Information System (INIS)

    2004-03-01

    This report presents the interim version (model version 1.1) of the preliminary Site Descriptive Model for Forsmark. The basis for this interim version is quality-assured, geoscientific and ecological field data from Forsmark that were available in the SKB databases SICADA and GIS at April 30, 2003 as well as version 0 of the Site Descriptive Model. The new data acquired during the initial site investigation phase to the date of data freeze 1.1 constitute the basis for the updating of version 0 to version 1.1. These data originate from surface investigations on the candidate area with its regional environment and from drilling and investigations in boreholes. The surface-based data sets were rather extensive whereas the data sets from boreholes were limited to information from one 1,000 m deep cored borehole (KFM01A) and eight 150 to 200 m deep percussion-drilled boreholes in the Forsmark candidate area. Discipline specific models are developed for a selected regional and local model volume and these are then integrated into a site description. The current methodologies for developing the discipline specific models and the integration of these are documented in methodology reports or strategy reports. In the present work, the guidelines given in those reports were followed to the extent possible with the data and information available at the time for data freeze for model version 1.1. Compared with version 0 there are considerable additional features in the version 1.1, especially in the geological description and in the description of the near surface. The geological models of lithology and deformation zones are based on borehole information and much higher resolution surface data. The existence of highly fractured sub-horizontal zones has been verified and these are now part of the model of the deformation zones. A discrete fracture network (DFN) model has also been developed. The rock mechanics model is based on strength information from SFR and an empirical

  14. The comparison of CAP88-PC version 2.0 versus CAP88-PC version 1.0

    International Nuclear Information System (INIS)

    Yakubovich, B.A.; Klee, K.O.; Palmer, C.R.; Spotts, P.B.

    1997-12-01

    40 CFR Part 61 (Subpart H of the NESHAP) requires DOE facilities to use approved sampling procedures, computer models, or other approved procedures when calculating Effective Dose Equivalent (EDE) values to members of the public. Currently version 1.0 of the approved computer model CAP88-PC is used to calculate EDE values. The DOE has upgraded the CAP88-PC software to version 2.0. This version provides simplified data entry, better printing characteristics, the use of a mouse, and other features. The DOE has developed and released version 2.0 for testing and comment. This new software is a WINDOWS based application that offers a new graphical user interface with new utilities for preparing and managing population and weather data, and several new decay chains. The program also allows the user to view results before printing. This document describes a test that confirmed CAP88-PC version 2.0 generates results comparable to the original version of the CAP88-PC program

  15. Database Description - eSOL | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Database Description: General information of database. Database name: eSOL. Alternative nam...eator Affiliation: The Research and Development of Biological Databases Project, National Institute of Genet...nology, 4259 Nagatsuta-cho, Midori-ku, Yokohama, Kanagawa 226-8501 Japan. Email: Tel.: +81-45-924-5785. Database... classification: Protein sequence databases - Protein properties. Organism Taxonomy Name: Escherichia coli; Taxonomy ID: 562. Database...i U S A. 2009 Mar 17;106(11):4201-6. External Links: Original website information; Database maintenance site

  16. "Mr. Database": Jim Gray and the History of Database Technologies.

    Science.gov (United States)

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e. g. leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  17. Mathematics for Databases

    NARCIS (Netherlands)

    ir. Sander van Laar

    2007-01-01

    A formal description of a database consists of the description of the relations (tables) of the database together with the constraints that must hold on the database. Furthermore the contents of a database can be retrieved using queries. These constraints and queries for databases can very well be
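The abstract's framing — a database as relations plus constraints, with queries to retrieve its contents — can be sketched concretely. A minimal example using Python's built-in sqlite3 module, with an invented two-table schema, declares domain and referential constraints and then runs one query:

```python
import sqlite3

# In-memory database: relations (tables) plus declarative constraints.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE department (
        dept_id INTEGER PRIMARY KEY,
        name    TEXT NOT NULL UNIQUE
    );
    CREATE TABLE employee (
        emp_id  INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        salary  REAL CHECK (salary > 0),           -- domain constraint
        dept_id INTEGER REFERENCES department      -- referential constraint
    );
""")
con.execute("INSERT INTO department VALUES (1, 'Research')")
con.execute("INSERT INTO employee VALUES (10, 'Ada', 5200.0, 1)")
con.execute("INSERT INTO employee VALUES (11, 'Ben', 4100.0, 1)")

# A query retrieving derived content: headcount and average salary per department.
rows = con.execute("""
    SELECT d.name, COUNT(*), AVG(e.salary)
    FROM employee e JOIN department d ON e.dept_id = d.dept_id
    GROUP BY d.name
""").fetchall()
```

The CHECK and REFERENCES clauses are exactly the kind of constraints a formal database description captures; the SELECT is the query side of the same formalism.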

  18. Preliminary surficial geologic map database of the Amboy 30 x 60 minute quadrangle, California

    Science.gov (United States)

    Bedford, David R.; Miller, David M.; Phelps, Geoffrey A.

    2006-01-01

    The surficial geologic map database of the Amboy 30x60 minute quadrangle presents characteristics of surficial materials for an area approximately 5,000 km2 in the eastern Mojave Desert of California. This map consists of new surficial mapping conducted between 2000 and 2005, as well as compilations of previous surficial mapping. Surficial geology units are mapped and described based on depositional process and age categories that reflect the mode of deposition, pedogenic effects occurring post-deposition, and, where appropriate, the lithologic nature of the material. The physical properties recorded in the database focus on those that drive hydrologic, biologic, and physical processes such as particle size distribution (PSD) and bulk density. This version of the database is distributed with point data representing locations of samples for both laboratory determined physical properties and semi-quantitative field-based information. Future publications will include the field and laboratory data as well as maps of distributed physical properties across the landscape tied to physical process models where appropriate. The database is distributed in three parts: documentation, spatial map-based data, and printable map graphics of the database. Documentation includes this file, which provides a discussion of the surficial geology and describes the format and content of the map data, a database 'readme' file, which describes the database contents, and FGDC metadata for the spatial map information. Spatial data are distributed as Arc/Info coverage in ESRI interchange (e00) format, or as tabular data in the form of DBF3-file (.DBF) file formats. Map graphics files are distributed as Postscript and Adobe Portable Document Format (PDF) files, and are appropriate for representing a view of the spatial database at the mapped scale.

  19. The Surveillance Database Development of Risk Factor for Dengue Fever in Mataram District Health Office

    Directory of Open Access Journals (Sweden)

    Sinawan Sinawan

    2015-05-01

Full Text Available The DHF epidemiological surveillance system currently running in the Mataram District Health Office has not been able to provide information about DHF incidence based on risk factors. In addition, data processing and analysis were still done manually, so data consistency and accuracy were low. This research aimed to develop a surveillance database of risk factors for DHF incidence. This is action research, conducted at the Mataram District Health Office, NTB province, from April 2014 until August 2014. The informants in this study consist of three (3) members, namely the Head of the P2PB Section, the DHF P2 Program Manager and Surveillance Staff. The data used are primary and secondary data. Database design includes logical and physical design. The logical design involved normalizing the data and creating relationships between data, illustrated in an entity relationship diagram (ERD); the physical design proceeded to create a prototype database using the Epi Info software application for Windows, version 3.5.1. Trials involved two (2) of the informants. The trial evaluation of the surveillance database of risk factors for DHF incidence assessed the ease, speed, accuracy and completeness of the resulting data. The result of this study is a new surveillance database of risk factors for DHF incidence that can be used easily and quickly and yields more accurate information. Keywords: DHF, surveillance, risk factor, database.
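The normalization step described in the logical design can be illustrated with a toy sketch. All field names and values below are invented for illustration; the idea is simply that facts repeated across flat case records get factored into their own table, keyed by an identifier, as an ERD would show:

```python
# Flat surveillance rows as reported, with patient facts repeated per case
# (hypothetical field names and values).
flat = [
    {"case_id": 1, "patient_id": "P01", "village": "Ampenan",     "month": "2014-04"},
    {"case_id": 2, "patient_id": "P01", "village": "Ampenan",     "month": "2014-06"},
    {"case_id": 3, "patient_id": "P02", "village": "Cakranegara", "month": "2014-06"},
]

# Normalization: patient/location facts move to their own table so each fact
# is stored once; case records keep only the patient_id foreign key.
patients = {r["patient_id"]: {"village": r["village"]} for r in flat}
cases = [{"case_id": r["case_id"], "patient_id": r["patient_id"],
          "month": r["month"]} for r in flat]
```

Storing the village once per patient rather than once per case is what removes the inconsistency risk the abstract attributes to manual data handling.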

  20. PrionScan: an online database of predicted prion domains in complete proteomes.

    Science.gov (United States)

    Espinosa Angarica, Vladimir; Angulo, Alfonso; Giner, Arturo; Losilla, Guillermo; Ventura, Salvador; Sancho, Javier

    2014-02-05

Prions are a particular type of amyloids related to a large variety of important processes in cells, but also responsible for serious diseases in mammals and humans. The number of experimentally characterized prions is still low and corresponds to a handful of examples in microorganisms and mammals. Prion aggregation is mediated by specific protein domains with a remarkable compositional bias towards glutamine/asparagine and against charged residues and prolines. These compositional features have been used to predict new prion proteins in the genomes of different organisms. Despite these efforts, there are only a few available data sources containing prion predictions at a genomic scale. Here we present PrionScan, a new database of predicted prion-like domains in complete proteomes. We have previously developed a predictive methodology to identify and score prionogenic stretches in protein sequences. In the present work, we exploit this approach to scan all the protein sequences in public databases and compile a repository containing relevant information on proteins bearing prion-like domains. The database is updated regularly alongside UniprotKB and in its present version contains approximately 28000 predictions in proteins from different functional categories in more than 3200 organisms from all the taxonomic subdivisions. PrionScan can be used in two different ways: database query and analysis of protein sequences submitted by the users. In the first mode, simple queries allow the user to retrieve a detailed description of the properties of a defined protein. Queries can also be combined to generate more complex and specific searching patterns. In the second mode, users can submit and analyze their own sequences. It is expected that this database will provide relevant insights on prion functions and regulation from a genome-wide perspective, allowing researchers to perform cross-species prion biology studies. 
Our database might also be useful for guiding experimentalists
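The compositional bias the abstract describes — enrichment in glutamine/asparagine, depletion of charged residues and prolines — can be illustrated with a toy sliding-window scorer. This is not PrionScan's scoring function; the window size, residue sets and scoring rule are arbitrary choices made only to show the general idea:

```python
def qn_bias_score(seq, window=40):
    """Score each window by Q/N enrichment minus a penalty for charged
    residues and prolines; return the best window's start and score.
    A toy stand-in for real prion-domain predictors."""
    favored = set("QN")
    penalized = set("DEKRP")        # charged residues and proline
    best_start, best = 0, 0.0
    for i in range(max(1, len(seq) - window + 1)):
        w = seq[i:i + window]
        score = (sum(aa in favored for aa in w)
                 - sum(aa in penalized for aa in w)) / len(w)
        if score > best:
            best_start, best = i, score
    return best_start, best

# A Q/N-rich stretch flanked by ordinary sequence should score highest.
seq = "MKLV" * 10 + "QNQQNNQQQN" * 4 + "ADEK" * 10
start, score = qn_bias_score(seq, window=40)
```

Real predictors use statistically derived compositional models rather than a flat bonus/penalty, but the sliding-window structure is the same.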

  1. A comparison of the Space Station version of ASTROMAG with two free-flyer versions

    International Nuclear Information System (INIS)

    Green, M.A.

    1992-06-01

This report compares the Space Station version of ASTROMAG with free-flyer versions of ASTROMAG which could fly on an Atlas IIa rocket and a Delta rocket. Launch with either free-flyer imposes severe weight limits on the magnet and its cryogenic system. Neither free-flyer version of the ASTROMAG magnet has to be charged more than once during the mission. This permits one to simplify the charging system and the cryogenic system. The helium II pump loop which supplies helium to the gas-cooled electrical leads can be eliminated in both of the free-flyer versions of the ASTROMAG magnet. This report describes the superconducting dipole moment correction coils which are necessary for the magnet to operate on a free-flying satellite

  2. DB2 11 the ultimate database for cloud, analytics, and mobile

    CERN Document Server

    Campbell, John; Jones, Gareth; Parekh, Surekha; Yothers, Jay

    2014-01-01

Building on the prior book "DB2 11: The Database for Big Data and Analytics," published in 2013, this book is written particularly for new and existing DB2 for z/OS customers and users who want to learn as much as they can about the new software version before migrating their organizations to DB2 11 for z/OS. The book begins with a technical overview of DB2 11 features and explains how the new functions in DB2 11 can help enterprise customers address the challenges they face with the explosion of data and information. There has been rapid growth in the variety, volume, and velocity of data

  3. Versioning of printed products

    Science.gov (United States)

    Tuijn, Chris

    2005-01-01

    During the definition of a printed product in an MIS system, a lot of attention is paid to the production process. The MIS systems typically gather all process-related parameters at such a level of detail that they can determine what the exact cost will be to make a specific product. This information can then be used to make a quote for the customer. Considerably less attention is paid to the content of the products since this does not have an immediate impact on the production costs (assuming that the number of inks or plates is known in advance). The content management is typically carried out either by the prepress systems themselves or by dedicated workflow servers uniting all people that contribute to the manufacturing of a printed product. Special care must be taken when considering versioned products. With versioned products we here mean distinct products that have a number of pages or page layers in common. Typical examples are comic books that have to be printed in different languages. In this case, the color plates can be shared over the different versions and the black plate will be different. Other examples are nation-wide magazines or newspapers that have an area with regional pages or advertising leaflets in different languages or currencies. When considering versioned products, the content will become an important cost factor. First of all, the content management (and associated proofing and approval cycles) becomes much more complex and, therefore, the risk that mistakes will be made increases considerably. Secondly, the real production costs are very much content-dependent because the content will determine whether plates can be shared across different versions or not and how many press runs will be needed. In this paper, we will present a way to manage different versions of a printed product. First, we will introduce a data model for version management. Next, we will show how the content of the different versions can be supplied by the customer

  4. Time Series Discord Detection in Medical Data using a Parallel Relational Database

    Energy Technology Data Exchange (ETDEWEB)

    Woodbridge, Diane; Rintoul, Mark Daniel; Wilson, Andrew T.; Goldstein, Richard

    2015-10-01

Recent advances in sensor technology have made continuous real-time health monitoring available in both hospital and non-hospital settings. Since high-frequency medical sensors produce huge amounts of data, storing and processing continuous medical data is an emerging big data area. Detecting anomalies in real time is especially important for detecting and preventing patient emergencies. A time series discord is a subsequence that has the maximum distance to its nearest non-self match among all subsequences of the time series, meaning that it shows abnormal or unusual data trends. In this study, we implemented two versions of time series discord detection algorithms on a high performance parallel database management system (DBMS) and applied them to 240 Hz waveform data collected from 9,723 patients. The initial brute force version of the discord detection algorithm takes each possible subsequence and calculates a distance to the nearest non-self match to find the biggest discords in the time series. For the heuristic version of the algorithm, a combination of an array and a trie structure was applied to order the time series data for better time efficiency. The study results showed efficient data loading, decoding and discord searches in a large amount of data, benefiting from the time series discord detection algorithm and the architectural characteristics of the parallel DBMS including data compression, data pipelining, and task scheduling.
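The brute-force version of discord detection described in the abstract — take each subsequence, find its distance to the nearest non-self match, report the subsequence whose nearest match is farthest — can be sketched in a few lines. This is a plain in-memory illustration of the algorithm's definition, not the authors' parallel-DBMS implementation, and the array/trie heuristic is omitted:

```python
import math

def brute_force_discord(series, m):
    """Return (index, score) of the top discord: the length-m subsequence
    whose Euclidean distance to its nearest non-self match is largest."""
    n = len(series) - m + 1
    best_idx, best_score = -1, -1.0
    for i in range(n):
        nearest = math.inf
        for j in range(n):
            if abs(i - j) < m:        # skip self-matches (overlapping windows)
                continue
            nearest = min(nearest, math.dist(series[i:i + m], series[j:j + m]))
        if nearest > best_score:      # discord = max over nearest distances
            best_idx, best_score = i, nearest
    return best_idx, best_score

# A flat signal with one spike: the discord window should cover the spike.
data = [0.0] * 40
data[20] = 5.0
idx, score = brute_force_discord(data, 4)
```

The double loop is O(n^2) in the number of subsequences, which is why the paper orders subsequences with an array-plus-trie heuristic and leans on the parallel DBMS for larger data.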

  5. Establishment of database for Japan Sea parameters on marine environment and radioactivity (JASPER). Volume 2. Radiocarbon and oceanographic properties

    International Nuclear Information System (INIS)

    Otosaka, Shigeyoshi; Suzuki, Takashi; Ito, Toshimichi; Kobayashi, Takuya; Kawamura, Hideyuki; Togawa, Orihiko; Tanaka, Takayuki; Minakawa, Masayuki; Aramaki, Takafumi; Senjyu, Tomoharu

    2010-02-01

    The database for Japan Sea Parameters on Marine Environment and Radionuclides (JASPER) has been established by the Japan Atomic Energy Agency as a product of the Japan Sea Expeditions. In the previous volume of the database, data for representative anthropogenic radionuclides (strontium-90, cesium-137, and plutonium-239,240) were made public. Now, data for radiocarbon and fundamental oceanographic properties (salinity, temperature, dissolved oxygen) including nutrients (silicate, phosphate, nitrate and nitrite) are released as the second volume of the database. At the beginning of this report (chapter 1), the background, objectives and a brief overview of the report are given as an introduction. Specifications of the database and the methodology for obtaining the concentration data are described in chapter 2. The data stored in the database are presented in tabular and figure form in chapter 3. Finally, chapter 4 gives concluding remarks. This second volume stores 20,292 data records, including 2,695 for temperature, 2,883 for salinity, 2,109 for dissolved oxygen, 11,051 for the nutrients, and 1,660 for radiocarbon. The database will be a powerful tool for continuous monitoring of contamination by anthropogenic radionuclides, for studies of biogeochemical cycles, and for the development and validation of models for numerical simulations of the sea. (author)

  6. Database development and management

    CERN Document Server

    Chao, Lee

    2006-01-01

    Introduction to Database Systems; Functions of a Database; Database Management System; Database Components; Database Development Process; Conceptual Design and Data Modeling; Introduction to Database Design Process; Understanding Business Process; Entity-Relationship Data Model; Representing Business Process with Entity-Relationship Model; Table Structure and Normalization; Introduction to Tables; Table Normalization; Transforming Data Models to Relational Databases; DBMS Selection; Transforming Data Models to Relational Databases; Enforcing Constraints; Creating Database for Business Process; Physical Design and Database

  7. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (IBM PC VERSION)

    Science.gov (United States)

    Donnell, B.

    1994-01-01

    COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs, including the ability to generate stand-alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous versions of CLIPS is no longer supported; the capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases. COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler. Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the

  8. Database reliability engineering designing and operating resilient database systems

    CERN Document Server

    Campbell, Laine

    2018-01-01

    The infrastructure-as-code revolution in IT is also affecting database administration. With this practical book, developers, system administrators, and junior to mid-level DBAs will learn how the modern practice of site reliability engineering applies to the craft of database architecture and operations. Authors Laine Campbell and Charity Majors provide a framework for professionals looking to join the ranks of today’s database reliability engineers (DBRE). You’ll begin by exploring core operational concepts that DBREs need to master. Then you’ll examine a wide range of database persistence options, including how to implement key technologies to provide resilient, scalable, and performant data storage and retrieval. With a firm foundation in database reliability engineering, you’ll be ready to dive into the architecture and operations of any modern database. This book covers: Service-level requirements and risk management Building and evolving an architecture for operational visibility ...

  9. Interactive fluka: a world wide web version for a simulation code in proton therapy

    International Nuclear Information System (INIS)

    Garelli, S.; Giordano, S.; Piemontese, G.; Squarcia, S.

    1998-01-01

    We considered the possibility of using the simulation code FLUKA in the framework of TERA. We provide a World Wide Web interface in which an interactive version of the code is available. The user can find installation instructions, an on-line FLUKA manual, and interactive windows for entering, in a very simple way, all the data required by the configuration running file. The choice of a database allows more versatile use for data verification and update, recall of old simulations, and comparison with selected examples. A completely new tool for geometry drawing under Java has also been developed. (authors)

  10. Verification of ICESat-2/ATLAS Science Receiver Algorithm Onboard Databases

    Science.gov (United States)

    Carabajal, C. C.; Saba, J. L.; Leigh, H. W.; Magruder, L. A.; Urban, T. J.; Mcgarry, J.; Schutz, B. E.

    2013-12-01

    NASA's ICESat-2 mission will fly the Advanced Topographic Laser Altimetry System (ATLAS) instrument on a 3-year mission scheduled to launch in 2016. ATLAS is a single-photon detection system transmitting at 532 nm with a laser repetition rate of 10 kHz and a six-spot pattern on the Earth's surface. A set of onboard receiver algorithms will perform signal processing to reduce the data rate and data volume to acceptable levels. These algorithms distinguish surface echoes from the background noise, limit the daily data volume, and allow the instrument to telemeter only a small vertical region about the signal. For this purpose, three onboard databases are used: a Surface Reference Map (SRM), a Digital Elevation Model (DEM), and Digital Relief Maps (DRMs). The DEM provides minimum and maximum heights that limit the signal search region of the onboard algorithms, including a margin for errors in the source databases and onboard geolocation. Since the surface echoes will be correlated while noise will be randomly distributed, the signal location is found by histogramming the received event times and identifying the histogram bins with statistically significant counts. Once the signal location has been established, the DRMs are used to determine the vertical width of the telemetry band about the signal. The University of Texas Center for Space Research (UT-CSR) is developing the ICESat-2 onboard databases, which are currently being tested using preliminary versions and equivalent representations of elevation ranges and relief more recently developed at Goddard Space Flight Center (GSFC). Global and regional elevation models have been assessed in terms of their accuracy using ICESat geodetic control, and have been used to develop equivalent representations of the onboard databases for testing against the UT-CSR databases, with special emphasis on the ice sheet regions. A series of verification checks have been implemented, including
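
The histogram-based signal finding described in this abstract can be sketched simply: bin the photon events and flag bins whose counts are statistically significant against a Poisson background. The bin width, the 5-sigma threshold, and the synthetic data below are illustrative assumptions, not ATLAS flight parameters.

```python
import random

# Synthetic data: uniform background noise plus a correlated surface echo.
random.seed(42)
background = [random.uniform(0.0, 100.0) for _ in range(1000)]  # random noise
echo = [random.gauss(42.0, 0.5) for _ in range(200)]            # surface return
events = background + echo

# Histogram the events into unit-width bins.
bin_width = 1.0
counts = {}
for e in events:
    b = int(e // bin_width)
    counts[b] = counts.get(b, 0) + 1

# Flag bins well above the expected Poisson background level.
expected = len(background) / 100.0             # mean background per bin
threshold = expected + 5 * expected ** 0.5     # Poisson std ~ sqrt(mean)
significant = sorted(b for b, c in counts.items() if c > threshold)
```

Because the echo is concentrated in one or two bins while the noise is spread evenly, only the bins around the echo exceed the threshold, which is the property the onboard algorithms exploit.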

  11. Solving Relational Database Problems with ORDBMS in an Advanced Database Course

    Science.gov (United States)

    Wang, Ming

    2011-01-01

    This paper introduces how to use the object-relational database management system (ORDBMS) to solve relational database (RDB) problems in an advanced database course. The purpose of the paper is to provide a guideline for database instructors who desire to incorporate the ORDB technology in their traditional database courses. The paper presents…

  12. NETS - A NEURAL NETWORK DEVELOPMENT TOOL, VERSION 3.0 (MACHINE INDEPENDENT VERSION)

    Science.gov (United States)

    Baffes, P. T.

    1994-01-01

    allows the user to generate C code to implement the network loaded into the system. This permits the placement of networks as components, or subroutines, in other systems. In short, once a network performs satisfactorily, the Generate C Code option provides the means for creating a program separate from NETS to run the network. Other features: files may be stored in binary or ASCII format; multiple input propagation is permitted; bias values may be included; capability to scale data without writing scaling code; quick interactive testing of network from the main menu; and several options that allow the user to manipulate learning efficiency. NETS is written in ANSI standard C language to be machine independent. The Macintosh version (MSC-22108) includes code for both a graphical user interface version and a command line interface version. The machine independent version (MSC-21588) only includes code for the command line interface version of NETS 3.0. The Macintosh version requires a Macintosh II series computer and has been successfully implemented under System 7. Four executables are included on these diskettes, two for floating point operations and two for integer arithmetic. It requires Think C 5.0 to compile. A minimum of 1Mb of RAM is required for execution. Sample input files and executables for both the command line version and the Macintosh user interface version are provided on the distribution medium. The Macintosh version is available on a set of three 3.5 inch 800K Macintosh format diskettes. The machine independent version has been successfully implemented on an IBM PC series compatible running MS-DOS, a DEC VAX running VMS, a SunIPC running SunOS, and a CRAY Y-MP running UNICOS. Two executables for the IBM PC version are included on the MS-DOS distribution media, one compiled for floating point operations and one for integer arithmetic. The machine independent version is available on a set of three 5.25 inch 360K MS-DOS format diskettes (standard

  13. NOAA Climate Data Record (CDR) of Ocean Heat Fluxes, Version 1.0 (Version Superseded)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Please note, this dataset has been superseded by a newer version (see below). Users should not use this version except in rare cases (e.g., when reproducing previous...

  14. Generalized Database Management System Support for Numeric Database Environments.

    Science.gov (United States)

    Dominick, Wayne D.; Weathers, Peggy G.

    1982-01-01

    This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…

  15. libChEBI: an API for accessing the ChEBI database.

    Science.gov (United States)

    Swainston, Neil; Hastings, Janna; Dekker, Adriano; Muthukrishnan, Venkatesh; May, John; Steinbeck, Christoph; Mendes, Pedro

    2016-01-01

    ChEBI is a database and ontology of chemical entities of biological interest. It is widely used as a source of identifiers to facilitate unambiguous reference to chemical entities within biological models, databases, ontologies and literature. ChEBI contains a wealth of chemical data, covering over 46,500 distinct chemical entities, and related data such as chemical formula, charge, molecular mass, structure, synonyms and links to external databases. Furthermore, ChEBI is an ontology, and thus provides meaningful links between chemical entities. Unlike many other resources, ChEBI is fully human-curated, providing a reliable, non-redundant collection of chemical entities and related data. While ChEBI is supported by a web service for programmatic access and a number of download files, it does not have an API library to facilitate the use of ChEBI and its data in cheminformatics software. To provide this missing functionality, libChEBI, a comprehensive API library for accessing ChEBI data, is introduced. libChEBI is available in Java, Python and MATLAB versions from http://github.com/libChEBI, and provides full programmatic access to all data held within the ChEBI database through a simple and documented API. libChEBI is reliant upon the (automated) download and regular update of flat files that are held locally. As such, libChEBI can be embedded in both on- and off-line software applications. libChEBI allows better support of ChEBI and its data in the development of new cheminformatics software. Covering three key programming languages, it allows for the entirety of the ChEBI database to be accessed easily and quickly through a simple API. All code is open access and freely available.

  16. ONKALO rock mechanics model (RMM). Version 2.3

    Energy Technology Data Exchange (ETDEWEB)

    Haekkinen, T.; Merjama, S.; Moenkkoenen, H. [WSP Finland, Helsinki (Finland)

    2014-07-15

    The Rock Mechanics Model of the ONKALO rock volume includes the most important rock mechanics features and parameters at the Olkiluoto site. The main objective of the model is to be a tool to predict rock properties, rock quality and hence provide an estimate for the rock stability of the potential repository at Olkiluoto. The model includes a database of rock mechanics raw data and a block model in which the rock mechanics parameters are estimated through block volumes based on spatial rock mechanics raw data. In this version 2.3, special emphasis was placed on refining the estimation of the block model. The model was divided into rock mechanics domains which were used as constraints during the block model estimation. During the modelling process, a display profile and toolbar were developed for the GEOVIA Surpac software to improve visualisation and access to the rock mechanics data for the Olkiluoto area. (orig.)

  17. GENII Version 2 Users’ Guide

    Energy Technology Data Exchange (ETDEWEB)

    Napier, Bruce A.

    2004-03-08

    The GENII Version 2 computer code was developed for the Environmental Protection Agency (EPA) at Pacific Northwest National Laboratory (PNNL) to incorporate the internal dosimetry models recommended by the International Commission on Radiological Protection (ICRP) and the radiological risk estimating procedures of Federal Guidance Report 13 into updated versions of existing environmental pathway analysis models. The resulting environmental dosimetry computer codes are compiled in the GENII Environmental Dosimetry System. The GENII system was developed to provide a state-of-the-art, technically peer-reviewed, documented set of programs for calculating radiation dose and risk from radionuclides released to the environment. The codes were designed with the flexibility to accommodate input parameters for a wide variety of generic sites. Operation of a new version of the codes, GENII Version 2, is described in this report. Two versions of the GENII Version 2 code system are available, a full-featured version and a version specifically designed for demonstrating compliance with the dose limits specified in 40 CFR 61.93(a), the National Emission Standards for Hazardous Air Pollutants (NESHAPS) for radionuclides. The only differences lie in the limitation of the capabilities of the user to change specific parameters in the NESHAPS version. This report describes the data entry, accomplished via interactive, menu-driven user interfaces. Default exposure and consumption parameters are provided for both the average (population) and maximum individual; however, these may be modified by the user. Source term information may be entered as radionuclide release quantities for transport scenarios, or as basic radionuclide concentrations in environmental media (air, water, soil). For input of basic or derived concentrations, decay of parent radionuclides and ingrowth of radioactive decay products prior to the start of the exposure scenario may be considered. A single code run can

  18. License - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Trypanoso... Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database as follows: Trypanoso...nse Update History of This Database Site Policy | Contact Us License - Trypanosomes Database | LSDB Archive ...

  19. A constructive version of AIP revisited

    NARCIS (Netherlands)

    Barros, A.; Hou, T.

    2008-01-01

    In this paper, we review a constructive version of the Approximation Induction Principle. This version states that bisimilarity of regular processes can be decided by observing only a part of their behaviour. We use this constructive version to formulate a complete inference system for the Algebra

  20. Effect of regional anesthesia on the success rate of external cephalic version: a systematic review and meta-analysis.

    Science.gov (United States)

    Goetzinger, Katherine R; Harper, Lorie M; Tuuli, Methodius G; Macones, George A; Colditz, Graham A

    2011-11-01

    To estimate whether the use of regional anesthesia is associated with increased success of external cephalic version. We searched MEDLINE, EMBASE, the Cochrane Library, and clinical trial registries. Electronic databases were searched from 1966 through April 2011 for published, randomized controlled trials in the English language comparing regional anesthesia with no regional anesthesia for external cephalic version. The primary outcome was external cephalic version success. Secondary outcomes included cesarean delivery, maternal discomfort, and adverse events. Pooled risk ratios (relative risk) were calculated using a random-effects model. Heterogeneity was assessed using Cochran's Q statistic and quantified using the I² statistic. Six randomized controlled trials met criteria for study inclusion. Regional anesthesia was associated with a higher external cephalic version success rate compared with intravenous or no analgesia (59.7% compared with 37.6%; pooled relative risk 1.58; 95% confidence interval [CI] 1.29-1.93). This significant association persisted when the data were stratified by type of regional anesthesia (spinal compared with epidural). The number needed to treat with regional anesthesia to achieve one additional successful external cephalic version was five. There was no evidence of statistical heterogeneity (P=.32, I²=14.9%) or publication bias (Harbord test P=.78). There was no statistically significant difference in the risk of cesarean delivery comparing regional anesthesia with intravenous or no analgesia (48.4% compared with 59.3%; pooled relative risk 0.80; 95% CI 0.55-1.17). Adverse events were rare and not significantly different between the two groups. Regional anesthesia is associated with a higher success rate of external cephalic version.
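
The number needed to treat reported in this abstract follows directly from the two success rates. A quick check of the arithmetic (using the convention that NNT is rounded up):

```python
import math

# Success rates quoted above: 59.7% with regional anesthesia vs 37.6% without.
p_regional = 0.597
p_control = 0.376

risk_difference = p_regional - p_control   # absolute risk difference = 0.221
nnt = math.ceil(1 / risk_difference)       # NNT is conventionally rounded up
print(nnt)  # 5, matching the review's "number needed to treat ... was five"
```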

  1. Model-based version management system framework

    International Nuclear Information System (INIS)

    Mehmood, W.

    2016-01-01

    In this paper we present a model-based version management system. A version management system (VMS), a branch of software configuration management (SCM), aims to provide a controlling mechanism for the evolution of software artifacts created during the software development process. Controlling the evolution requires many activities, such as construction and creation of versions, identification of differences between versions, conflict detection, and merging. Traditional VMSs are file-based and treat software systems as sets of text files. File-based VMSs are not adequate for performing software configuration management activities such as version control on software artifacts produced in earlier phases of the software life cycle. New challenges of model differencing, merging, and evolution control arise when models are the central artifact. The goal of this work is to present a generic model-based VMS framework which can be used to overcome the problems of traditional file-based VMSs and provide model versioning services. (author)
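
One of the activities listed in this abstract, identification of differences between versions, can be sketched as a toy model diff. The model representation here (element id mapped to properties) and the names are invented for illustration; real model-based VMSs diff richer graph structures.

```python
# Compare two versions of a model and report additions, deletions, and
# changes, keyed by element identifier.
def diff_models(old, new):
    added = {k: new[k] for k in new.keys() - old.keys()}
    removed = {k: old[k] for k in old.keys() - new.keys()}
    changed = {k: (old[k], new[k]) for k in old.keys() & new.keys()
               if old[k] != new[k]}
    return added, removed, changed

# Two versions of a tiny class model: ClassA gains an attribute,
# ClassB is deleted, ClassC is introduced.
v1 = {"ClassA": ("name",), "ClassB": ()}
v2 = {"ClassA": ("name", "id"), "ClassC": ()}
added, removed, changed = diff_models(v1, v2)
print(sorted(added), sorted(removed), sorted(changed))
# ['ClassC'] ['ClassB'] ['ClassA']
```

Diffing at the level of model elements rather than text lines is what lets such a system detect conflicts that a file-based VMS would miss.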

  2. Federal databases

    International Nuclear Information System (INIS)

    Welch, M.J.; Welles, B.W.

    1988-01-01

    Accident statistics on all modes of transportation are available as risk assessment analytical tools through several federal agencies. This paper reports on an examination of the accident databases, conducted through personal contact with the federal staff responsible for administering the database programs. This activity, sponsored by the Department of Energy through Sandia National Laboratories, is an overview of the national accident data on highway, rail, air, and marine shipping. For each mode, the definition or reporting requirements of an accident are determined and the method of entering the accident data into the database is established. Availability of the database to others, ease of access, costs, and whom to contact were prime questions put to each of the database program managers. Additionally, how each agency uses the accident data was of major interest

  3. The YH database: the first Asian diploid genome database

    DEFF Research Database (Denmark)

    Li, Guoqing; Ma, Lijia; Song, Chao

    2009-01-01

    genome consensus. The YH database is currently one of only three personal genome databases, organizing the original data and analysis results in a user-friendly interface, an endeavor toward the fundamental goal of establishing personalized medicine. The database is available at http://yh.genomics.org.cn....

  4. Database Description - tRNADB-CE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available switchLanguage; BLAST Search Image Search Home About Archive Update History Data List Contact us tRNAD...B-CE Database Description General information of database Database name tRNADB-CE Alter...CC BY-SA Detail Background and funding Name: MEXT Integrated Database Project Reference(s) Article title: tRNAD... 2009 Jan;37(Database issue):D163-8. External Links: Article title: tRNADB-CE 2011: tRNA gene database curat...n Download License Update History of This Database Site Policy | Contact Us Database Description - tRNADB-CE | LSDB Archive ...

  5. China Energy Databook -- User Guide and Documentation, Version 7.0

    Energy Technology Data Exchange (ETDEWEB)

    Fridley, David, Ed.; Aden, Nathaniel, Ed.; Lu, Hongyou, Ed.; Zheng, Nina, Ed.

    2008-10-01

    Since 2001, China's energy consumption has grown more quickly than expected by Chinese or international observers. This edition of the China Energy Databook traces the growth of the energy system through 2006. As with version six, the Databook covers a wide range of energy-related information, including resources and reserves, production, consumption, investment, equipment, prices, trade, environment, economy, and demographic data. These data provide an extensive quantitative foundation for understanding China's growing energy system. In addition to providing updated data through 2006, version seven includes revised energy and GDP data back to the 1990s. In the 2005 China Energy Statistical Yearbook, China's National Bureau of Statistics (NBS) published revised energy production, consumption, and usage data covering the years 1998 to 2003. Most of these revisions related to coal production and consumption, though natural gas data were also adjusted. In order to accommodate underestimated service sector growth, the NBS also released revised GDP data in 2005. Beyond the inclusion of historical revisions in the seventh edition, no attempt has been made to rectify known or suspected issues in the official data. The purpose of this volume is to provide a common basis for understanding China's energy system. In order to broaden understanding of China's energy system, the Databook includes information from industry yearbooks, periodicals, and government websites in addition to data published by NBS. Rather than discarding discontinued data series, information that is no longer possible to update has been placed in C section tables and figures in each chapter. As with previous versions, the data are presented in digital database and tabular formats. The compilation of updated data is the result of tireless work by Lu Hongyou and Nina Zheng.

  6. Determining Optimal Decision Version

    Directory of Open Access Journals (Sweden)

    Olga Ioana Amariei

    2014-06-01

    Full Text Available In this paper we start from the calculation of the product cost, applying the hour-machine cost method (THM) on each of three cutting machines, namely: the plasma cutting machine, the combined (plasma and water jet) cutting machine, and the water-jet cutting machine. Following the cost calculation, and taking into account the manufacturing precision of each machine as well as the quality of the processed surface, the optimal decision version for manufacturing the product must be determined. To determine it, we first calculate the optimal version under each criterion and then overall, using multi-attribute decision methods.

  7. SNPpy--database management for SNP data from genome wide association studies.

    Directory of Open Access Journals (Sweden)

    Faheem Mitha

    Full Text Available BACKGROUND: We describe SNPpy, a hybrid script database system using the Python SQLAlchemy library coupled with the PostgreSQL database to manage genotype data from Genome-Wide Association Studies (GWAS). This system makes it possible to merge study data with HapMap data and merge across studies for meta-analyses, including data filtering based on the values of phenotype and Single-Nucleotide Polymorphism (SNP) data. SNPpy and its dependencies are open source software. RESULTS: The current version of SNPpy offers utility functions to import genotype and annotation data from two commercial platforms. We use these to import data from two GWAS studies and the HapMap Project. We then export these individual datasets to standard data format files that can be imported into statistical software for downstream analyses. CONCLUSIONS: By leveraging the power of relational databases, SNPpy offers integrated management and manipulation of genotype and phenotype data from GWAS studies. The analysis of these studies requires merging across GWAS datasets as well as patient and marker selection. To this end, SNPpy enables the user to filter the data and output the results as standardized GWAS file formats. It performs flexible, low-level data validation, including validation of patient data. SNPpy is a practical and extensible solution for investigators who seek to deploy central management of their GWAS data.
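
The relational idea behind SNPpy, genotype rows joined to patient phenotypes and filtered in SQL before export, can be sketched with a minimal example. SNPpy itself uses SQLAlchemy on PostgreSQL; this sketch uses the standard-library sqlite3 module instead, and the schema, table, and column names are invented for illustration.

```python
import sqlite3

# Toy schema: patients with a phenotype, and genotype calls per patient.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patient (id INTEGER PRIMARY KEY, phenotype TEXT);
    CREATE TABLE genotype (patient_id INTEGER REFERENCES patient(id),
                           snp TEXT, allele TEXT);
""")
conn.executemany("INSERT INTO patient VALUES (?, ?)",
                 [(1, "case"), (2, "control"), (3, "case")])
conn.executemany("INSERT INTO genotype VALUES (?, ?, ?)",
                 [(1, "rs123", "AA"), (2, "rs123", "AG"), (3, "rs123", "AA")])

# Filter on phenotype and SNP values, as SNPpy does before exporting the
# selected subset to standard GWAS file formats.
rows = conn.execute("""
    SELECT p.id, g.allele FROM patient p
    JOIN genotype g ON g.patient_id = p.id
    WHERE p.phenotype = 'case' AND g.snp = 'rs123'
    ORDER BY p.id
""").fetchall()
print(rows)  # [(1, 'AA'), (3, 'AA')]
```

Pushing patient and marker selection into SQL like this is what makes merging across datasets and filtering by phenotype a single declarative step rather than ad-hoc script logic.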

  8. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  9. Human Ageing Genomic Resources: Integrated databases and tools for the biology and genetics of ageing

    Science.gov (United States)

    Tacutu, Robi; Craig, Thomas; Budovsky, Arie; Wuttke, Daniel; Lehmann, Gilad; Taranukha, Dmitri; Costa, Joana; Fraifeld, Vadim E.; de Magalhães, João Pedro

    2013-01-01

    The Human Ageing Genomic Resources (HAGR, http://genomics.senescence.info) is a freely available online collection of research databases and tools for the biology and genetics of ageing. HAGR now features several databases with high-quality manually curated data: (i) GenAge, a database of genes associated with ageing in humans and model organisms; (ii) AnAge, an extensive collection of longevity records and complementary traits for >4000 vertebrate species; and (iii) GenDR, a newly incorporated database containing both gene mutations that interfere with dietary restriction-mediated lifespan extension and consistent gene expression changes induced by dietary restriction. Since its creation about 10 years ago, major efforts have been undertaken to maintain the quality of data in HAGR, while continuing to develop, improve and extend it. This article briefly describes the content of HAGR and details the major updates since its previous publications, in terms of both structure and content. The completely redesigned interface, more intuitive and more integrative of HAGR resources, is also presented. Altogether, we hope that through its improvements, the current version of HAGR will continue to provide users with the most comprehensive and accessible resources available today in the field of biogerontology. PMID:23193293

  10. The version control service for ATLAS data acquisition configuration files (DAQ; configuration; OKS; XML)

    CERN Document Server

    Soloviev, Igor; The ATLAS collaboration

    2012-01-01

    To configure a data-taking session, the ATLAS systems and detectors store more than 160 MB of data acquisition related configuration information in OKS XML files. The total number of files exceeds 1300, and they are updated by many system experts. In the past, such updates occasionally caused problems due to XML syntax errors or files left in a state inconsistent with the overall ATLAS configuration. It was not always possible to know who made the modification causing a problem, or how to go back to a previous version of the modified file. A few years ago a special service addressing these issues was implemented and deployed on ATLAS Point-1. It excludes direct write access to XML files stored in a central database repository. Instead, for an update the files are copied into a user repository, validated after modification, and committed using a version control system. The system's callback updates the central repository. It also keeps track of all modifications providi...
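
The validate-before-commit gate described in this abstract can be sketched simply: a modified configuration file is accepted only if it parses as well-formed XML. The real ATLAS service also checks consistency against the overall configuration and commits via a version control system; the element names below are invented for illustration.

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """Accept a configuration update only if it is well-formed XML."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

good = "<partition name='ATLAS'><segment id='DAQ'/></partition>"
bad = "<partition name='ATLAS'><segment></partition>"  # mismatched tags
print(is_well_formed(good), is_well_formed(bad))  # True False
```

Rejecting the malformed file before it reaches the central repository is exactly the failure mode the abstract describes: a syntax error introduced by one expert can no longer break the shared configuration.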

  11. Database Description - TMFunction | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available ...residue (or mutant) in a protein. The experimental data are collected from the literature both by searching the ... the sequence database UniProt, the structural database PDB, and the literature database

  12. PrimateLit Database

    Science.gov (United States)

    Primate Info Net, Related Databases, NCRR. PrimateLit: a bibliographic database for primatology. ... any problems with this service. We welcome your feedback. The PrimateLit database is no longer being ... Resources, National Institutes of Health. The database is a collaborative project of the Wisconsin Primate ...

  13. LHCb: LHCb Software and Conditions Database Cross-Compatibility Tracking: a Graph-Theory Approach

    CERN Multimedia

    Cattaneo, M; Shapoval, I

    2012-01-01

    The Conditions Database (CondDB) of the LHCb experiment provides versioned, time-dependent geometry and conditions data for all LHCb data processing applications (simulation, high-level trigger, reconstruction, analysis). The evolution of the CondDB and of the LHCb applications is a weakly-homomorphic process. This means that compatibility between a CondDB state and an LHCb application state may not be preserved across different database and application generations. Moreover, a CondDB state by itself belongs to a complex three-dimensional phase space which evolves according to certain CondDB self-compatibility criteria, so it is sometimes difficult even to determine a self-consistent CondDB state. These compatibility issues may lead to various kinds of problems in LHCb production, ranging from unexpected application crashes to incorrect data processing results. Thus, there is a need to define a well-established set of compatibility criteria between the above-mentioned entities, together with developing a compatibil...

  14. LHCb Software and Conditions Database Cross-Compatibility Tracking System: a Graph-Theory Approach

    CERN Document Server

    Cattaneo, M; Shapoval, I

    2012-01-01

    The Conditions Database (CondDB) of the LHCb experiment provides versioned, time-dependent geometry and conditions data for all LHCb data processing applications (simulation, high-level trigger, reconstruction, analysis). The evolution of the CondDB and of the LHCb applications is a weakly-homomorphic process. This means that compatibility between a CondDB state and an LHCb application state may not be preserved across different database and application generations. Moreover, a CondDB state by itself belongs to a complex three-dimensional phase space which evolves according to certain CondDB self-compatibility criteria, so it is sometimes difficult even to determine a self-consistent CondDB state. These compatibility issues may lead to various kinds of problems in LHCb production, ranging from unexpected application crashes to incorrect data processing results. Thus, there is a need to define a well-established set of compatibility criteria between the above-mentioned entities, together with developing a compatibil...
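The compatibility-tracking idea can be illustrated with a toy graph in which nodes are application releases and CondDB states, and an edge marks a validated compatible pair. All names and pairs below are invented; the actual LHCb system tracks a richer three-dimensional CondDB phase space.

```python
# Toy graph model of cross-compatibility: intersecting the neighbourhoods
# of several application releases yields the CondDB states usable by all
# of them at once.  Node names are invented for illustration.
from collections import defaultdict

class CompatibilityGraph:
    def __init__(self):
        self._adj = defaultdict(set)  # undirected adjacency sets

    def add_compatible(self, app: str, db_state: str) -> None:
        self._adj[app].add(db_state)
        self._adj[db_state].add(app)

    def compatible_with(self, node: str) -> set:
        return set(self._adj[node])

    def common_db_states(self, apps) -> set:
        """CondDB states compatible with every application in `apps`."""
        apps = list(apps)
        common = self.compatible_with(apps[0])
        for app in apps[1:]:
            common &= self.compatible_with(app)
        return common

g = CompatibilityGraph()
g.add_compatible("Brunel-v42", "conddb-2012-03")
g.add_compatible("Brunel-v42", "conddb-2012-04")
g.add_compatible("DaVinci-v31", "conddb-2012-04")
print(g.common_db_states(["Brunel-v42", "DaVinci-v31"]))  # prints {'conddb-2012-04'}
```

Answering "which database state can this whole production campaign use?" then reduces to a set intersection over the graph.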

  15. OAP- OFFICE AUTOMATION PILOT GRAPHICS DATABASE SYSTEM

    Science.gov (United States)

    Ackerson, T.

    1994-01-01

    The Office Automation Pilot (OAP) Graphics Database system offers the IBM PC user assistance in producing a wide variety of graphs and charts. OAP uses a convenient database system, called a chartbase, for creating and maintaining data associated with the charts, and twelve different graphics packages are available to the OAP user. Each of the graphics capabilities is accessed in a similar manner. The user chooses creation, revision, or chartbase/slide show maintenance options from an initial menu. The user may then enter or modify data displayed on a graphic chart. The cursor moves through the chart in a "circular" fashion to facilitate data entries and changes. Various "help" functions and on-screen instructions are available to aid the user. The user data is used to generate the graphics portion of the chart. Completed charts may be displayed in monotone or color, printed, plotted, or stored in the chartbase on the IBM PC. Once completed, the charts may be put in a vector format and plotted for color viewgraphs. The twelve graphics capabilities are divided into three groups: Forms, Structured Charts, and Block Diagrams. There are eight Forms available: 1) Bar/Line Charts, 2) Pie Charts, 3) Milestone Charts, 4) Resources Charts, 5) Earned Value Analysis Charts, 6) Progress/Effort Charts, 7) Travel/Training Charts, and 8) Trend Analysis Charts. There are three Structured Charts available: 1) Bullet Charts, 2) Organization Charts, and 3) Work Breakdown Structure (WBS) Charts. The Block Diagram available is an N x N Chart. Each graphics capability supports a chartbase. The OAP graphics database system provides the IBM PC user with an effective means of managing data which is best interpreted as a graphic display. The OAP graphics database system is written in IBM PASCAL 2.0 and assembler for interactive execution on an IBM PC or XT with at least 384K of memory, and a color graphics adapter and monitor. Printed charts require an Epson, IBM, OKIDATA, or HP Laser

  16. TRMM Version 7 Level 3 Gridded Monthly Accumulations of GPROF Precipitation Retrievals

    Science.gov (United States)

    Stocker, E. F.; Kelley, O. A.

    2012-01-01

    In July 2011, improved versions of the retrieval algorithms were approved for TRMM. All data starting with June 2011 are produced only with the version 7 code. At the same time, version 7 reprocessing of all TRMM mission data was started. By the end of August 2011, the 14+ years of reprocessed mission data became available online to users. This reprocessing provided the opportunity to redo and enhance an analysis of V7 impacts on L3 data accumulations that was presented at the 2010 EGU General Assembly. This paper will discuss the impact of algorithm changes made in the GPROF retrieval on the Level 2 swath products. Perhaps the most important change in that retrieval was the replacement of a model-based a priori database with one created from Precipitation Radar (PR) and TMI brightness temperature (Tb) data. The radar plays a major role in V7 GPROF (GPROF2010) in determining the existence of rain. The level 2 retrieval algorithm also introduced a field providing the probability of rain. This combined use of the PR has some impact on the retrievals and created areas, particularly over ocean, where many areas of low-probability precipitation are retrieved, whereas in version 6 these areas contained zero rain rates. This paper will discuss how these impacts translate to the space/time averaged monthly products that use the GPROF retrievals. The level 3 products discussed are the gridded text product 3G68 and the standard 3A12 and 3B31 products. The paper provides an overview of the changes and an explanation of how the level 3 products dealt with the change in the retrieval approach. Using the 0.25 deg x 0.25 deg grid, the paper will show that agreement between the swath product and the level 3 products remains very high. It will also present comparisons of V6 and V7 GPROF retrievals as seen both at the swath level and in the level 3 time/space gridded accumulations. It will show that the various L3 products based on GPROF level 2 retrievals are in close agreement.
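The space/time averaging behind such level 3 accumulations can be sketched as a simple binning of swath samples onto a 0.25-degree grid. This is a minimal illustration of the gridding concept only; the real 3G68/3A12/3B31 processing involves far more (orbit handling, multiple sensors, quality flags, monthly accumulation).

```python
# Minimal sketch: average point (lat, lon, rain_rate) swath samples into
# 0.25-degree lat/lon cells.  Cell indexing is a simplifying assumption,
# not the actual TRMM product grid definition.
from collections import defaultdict

RES = 0.25  # grid cell size in degrees

def grid_swath(samples):
    """Return {(lat_index, lon_index): mean rain rate} for the samples.

    lat_index counts 0.25-deg cells north from -90, lon_index east from -180.
    """
    total = defaultdict(float)
    count = defaultdict(int)
    for lat, lon, rate in samples:
        cell = (int((lat + 90.0) / RES), int((lon + 180.0) / RES))
        total[cell] += rate
        count[cell] += 1
    return {cell: total[cell] / count[cell] for cell in total}
```

Two samples of 2.0 and 4.0 mm/h falling in the same cell average to 3.0 mm/h; cells with no samples are simply absent rather than zero, mirroring the V7 distinction between "no rain" and "no observation".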

  17. Detailed analysis of the Japanese version of the Rapid Dementia Screening Test, revised version.

    Science.gov (United States)

    Moriyama, Yasushi; Yoshino, Aihide; Muramatsu, Taro; Mimura, Masaru

    2017-11-01

    The number-transcoding task on the Japanese version of the Rapid Dementia Screening Test (RDST-J) requires mutual conversion between Arabic and Chinese numerals (209 to ..., 4054 to ..., ... to 681, ... to 2027). In this task, the question and answer styles of Chinese numerals are written horizontally. We investigated the impact of changing the task so that Chinese numerals are written vertically. Subjects were 211 patients with very mild to severe Alzheimer's disease and 42 normal controls. Mini-Mental State Examination scores ranged from 26 to 12, and Clinical Dementia Rating scores ranged from 0.5 to 3. Scores on all four subtasks of the transcoding task significantly improved in the revised version compared with the original version. The sensitivity and specificity of total scores ≥9 on the RDST-J for discriminating between controls and subjects with Clinical Dementia Rating scores of 0.5 were 63.8% and 76.6% on the original version and 60.1% and 85.8% on the revised version. The revised RDST-J total score had lower sensitivity and higher specificity than the original RDST-J for discriminating subjects with Clinical Dementia Rating scores of 0.5 from controls. © 2017 Japanese Psychogeriatric Society.

  18. License - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available Yeast Interacting Proteins Database: License to Use This Database. Last updated: 2010/02/15. You may use this database ... License described below. The Standard License specifies the license terms regarding the use of this database and the requirements you must follow in using this database. The Additional ... the Standard License. Standard License: The Standard License for this database is the license specified in the Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database ...

  19. NoSQL database scaling

    OpenAIRE

    Žardin, Norbert

    2017-01-01

    NoSQL database scaling is a decision where system resources or financial expenses are traded for database performance or other benefits. By scaling a database, database performance and resource usage might increase or decrease, and such changes might have a negative impact on an application that uses the database. In this work it is analyzed how database scaling affects database resource usage and performance. As a result, calculations are acquired, using which database scaling types and differe...

  20. Incident and Trafficking Database: New Systems for Reporting and Accessing State Information

    International Nuclear Information System (INIS)

    Dimitrovski, D.; Kittley, S.

    2015-01-01

    The IAEA's Incident and Trafficking Database (ITDB) is the Agency's authoritative source for information on incidents in which nuclear and other radioactive material is out of national regulatory control. It was established in 1995 and, as of June 2014, 126 States participate in the ITDB programme. Currently, the database contains over 2500 confirmed incidents, of which 21% involve nuclear material, 62% radioactive sources and 17% radioactively contaminated material. In recent years, the system for States to report incidents to the ITDB has been evolving, moving from fax-based reporting to secure email and most recently to secure on-line reporting. A beta version of the on-line system was rolled out this June, offering a simple yet secure communication channel for Member States to provide information. The system also serves as a central hub for information related to official communication of the IAEA with Member States, so that communication traditionally shared by e-mail does not get lost when ITDB counterparts change. In addition, the new reporting system incorporates optional features that allow multiple Member State users to collaboratively contribute toward an INF. States are also being given secure on-line access to a streamlined version of the ITDB. This improves States' capabilities to retrieve and analyze information for their own purposes. In addition, on-line access to ITDB statistical information on incidents is available to States through an ITDB Dashboard. The dashboard contains aggregate information on the number and types of incidents and the material involved, as well as other statistics related to the ITDB that are typically provided in the ITDB Quarterly reports. (author)

  1. Version 2 of RSXMULTI

    International Nuclear Information System (INIS)

    Heinicke, P.; Berg, D.; Constanta-Fanourakis, P.; Quigg, E.K.

    1985-01-01

    MULTI is a general-purpose, high-speed data acquisition and data investigation system for high energy physics that runs on PDP-11 and VAX architectures. This paper describes the latest version of MULTI, which runs under RSX-11M version 4.1 and supports a modular approach to the separate tasks that interface to it, allowing the same system to be used in single-CPU test beam experiments as well as large-scale experiments with multiple interconnected CPUs. MULTI uses CAMAC (IEEE-583) for control and monitoring of an experiment, and is written in FORTRAN-77 and assembler. The design of this version, which simplified the interface between tasks and eliminated the need for a hard-to-maintain homegrown I/O system, is also discussed.

  2. ForC: a global database of forest carbon stocks and fluxes.

    Science.gov (United States)

    Anderson-Teixeira, Kristina J; Wang, Maria M H; McGarvey, Jennifer C; Herrmann, Valentine; Tepley, Alan J; Bond-Lamberty, Ben; LeBauer, David S

    2018-06-01

    Forests play an influential role in the global carbon (C) cycle, storing roughly half of terrestrial C and annually exchanging with the atmosphere more than five times the carbon dioxide (CO 2 ) emitted by anthropogenic activities. Yet, scaling up from field-based measurements of forest C stocks and fluxes to understand global scale C cycling and its climate sensitivity remains an important challenge. Tens of thousands of forest C measurements have been made, but these data have yet to be integrated into a single database that makes them accessible for integrated analyses. Here we present an open-access global Forest Carbon database (ForC) containing previously published records of field-based measurements of ecosystem-level C stocks and annual fluxes, along with disturbance history and methodological information. ForC expands upon the previously published tropical portion of this database, TropForC (https://doi.org/10.5061/dryad.t516f), now including 17,367 records (previously 3,568) representing 2,731 plots (previously 845) in 826 geographically distinct areas. The database covers all forested biogeographic and climate zones, represents forest stands of all ages, and currently includes data collected between 1934 and 2015. We expect that ForC will prove useful for macroecological analyses of forest C cycling, for evaluation of model predictions or remote sensing products, for quantifying the contribution of forests to the global C cycle, and for supporting international efforts to inventory forest carbon and greenhouse gas exchange. A dynamic version of ForC is maintained on GitHub (https://GitHub.com/forc-db), and we encourage the research community to collaborate in updating, correcting, expanding, and utilizing this database. ForC is an open access database, and we encourage use of the data for scientific research and education purposes. Data may not be used for commercial purposes without written permission of the database PI. Any publications using For

  3. The Genomes OnLine Database (GOLD) v.5: a metadata management system based on a four level (meta)genome project classification

    Science.gov (United States)

    Reddy, T.B.K.; Thomas, Alex D.; Stamatis, Dimitri; Bertsch, Jon; Isbandi, Michelle; Jansson, Jakob; Mallajosyula, Jyothi; Pagani, Ioanna; Lobos, Elizabeth A.; Kyrpides, Nikos C.

    2015-01-01

    The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Here we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards. PMID:25348402

  4. The Genomes OnLine Database (GOLD) v.5: a metadata management system based on a four level (meta)genome project classification

    Energy Technology Data Exchange (ETDEWEB)

    Reddy, Tatiparthi B. K. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Thomas, Alex D. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Stamatis, Dimitri [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Bertsch, Jon [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Isbandi, Michelle [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Jansson, Jakob [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Mallajosyula, Jyothi [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Pagani, Ioanna [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Lobos, Elizabeth A. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Kyrpides, Nikos C. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); King Abdulaziz Univ., Jeddah (Saudi Arabia)

    2014-10-27

    The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Within this paper, we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. Lastly, GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards.

  5. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (DEC VAX VMS VERSION)

    Science.gov (United States)

    Donnell, B.

    1994-01-01

    ...COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs, including the ability to generate stand-alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous versions of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases. COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler. Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the...

  6. Carbon dynamics of mature and regrowth tropical forests derived from a pantropical database (TropForC-db).

    Science.gov (United States)

    Anderson-Teixeira, Kristina J; Wang, Maria M H; McGarvey, Jennifer C; LeBauer, David S

    2016-05-01

    Tropical forests play a critical role in the global carbon (C) cycle, storing ~45% of terrestrial C and constituting the largest component of the terrestrial C sink. Despite their central importance to the global C cycle, their ecosystem-level C cycles are not as well-characterized as those of extra-tropical forests, and knowledge gaps hamper efforts to quantify C budgets across the tropics and to model tropical forest-climate interactions. To advance understanding of C dynamics of pantropical forests, we compiled a new database, the Tropical Forest C database (TropForC-db), which contains data on ground-based measurements of ecosystem-level C stocks and annual fluxes along with disturbance history. This database currently contains 3568 records from 845 plots in 178 geographically distinct areas, making it the largest and most comprehensive database of its type. Using TropForC-db, we characterized C stocks and fluxes for young, intermediate-aged, and mature forests. Relative to existing C budgets of extra-tropical forests, mature tropical broadleaf evergreen forests had substantially higher gross primary productivity (GPP) and ecosystem respiration (Reco), their autotrophic respiration (Ra) consumed a larger proportion (~67%) of GPP, and their woody stem growth (ANPPstem) represented a smaller proportion of net primary productivity (NPP, ~32%) or GPP (~9%). In regrowth stands, aboveground biomass increased rapidly during the first 20 years following stand-clearing disturbance, with slower accumulation following agriculture and in deciduous forests, and continued to accumulate at a slower pace in forests aged 20-100 years. Most other C stocks likewise increased with stand age, while the potential to describe age trends in C fluxes was generally data-limited. We expect that TropForC-db will prove useful for model evaluation and for quantifying the contribution of forests to the global C cycle.
The database version associated with this publication is archived in Dryad (DOI

  7. Computer programme for control and maintenance and object oriented database: application to the realisation of an particle accelerator, the VIVITRON

    International Nuclear Information System (INIS)

    Diaz, A.

    1996-01-01

    The command and control system for the Vivitron, a new-generation electrostatic particle accelerator, has been implemented using workstations and VME-based front-end computers, within a UNIX/VxWorks environment. This architecture is distributed over an Ethernet network. Measurements and commands for the different sensors and actuators are concentrated in the front-end computers. The development of a second version of the software, giving better performance and more functionality, is described. X11-based communication is used to transmit all the information needed to display parameters from the front-end computers on the graphic screens. All other communication between processes uses the Remote Procedure Call (RPC) method. The design of the system is based largely on the object-oriented database O2, which integrates a full description of the equipment and the code necessary to manage it. This code is generated by the database. This innovation permits easy maintenance of the system and removes the need for a specialist when adding new equipment. The new version of the command and control system has been progressively installed since August 1995. (author)

  8. NOAA Climate Data Record (CDR) of Sea Surface Temperature - WHOI, Version 1.0 (Version Superseded)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Please note, this dataset has been superseded by a newer version (see below). Users should not use this version except in rare cases (e.g., when reproducing previous...

  9. NOAA Climate Data Record (CDR) of Ocean Near Surface Atmospheric Properties, Version 1 (Version Superseded)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Please note, this dataset has been superseded by a newer version (see below). Users should not use this version except in rare cases (e.g., when reproducing previous...

  10. Systems analysis programs for hands-on integrated reliability evaluations (SAPHIRE), Version 5.0

    International Nuclear Information System (INIS)

    Russell, K.D.; Kvarfordt, K.J.; Hoffman, C.L.

    1995-10-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. The Graphical Evaluation Module (GEM) is a special application tool designed for the evaluation of operational occurrences using the Accident Sequence Precursor (ASP) program methods. GEM provides the capability for an analyst to quickly and easily perform conditional core damage probability (CCDP) calculations. The analyst can then use the CCDP calculations to determine whether the occurrence of an initiating event or a condition adversely impacts safety. It uses models and data developed in SAPHIRE specifically for the ASP program. GEM requires more data than is normally provided in SAPHIRE and will not perform properly with other models or databases. This is the first release of GEM, and its developers welcome user comments and feedback that will generate ideas for improvements to future versions. GEM is designated as version 5.0 to track the GEM code along with the other SAPHIRE codes, as GEM relies on the same shared database structure.
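The CCDP idea, conditioning a risk model on an observed failure, can be sketched with a toy minimal-cut-set model: quantify the model once nominally and once with the observed failures forced to probability 1. The cut sets and probabilities below are invented for illustration and use the min-cut-set upper bound; SAPHIRE's actual ASP models and quantification are far more elaborate.

```python
# Toy conditional core damage probability (CCDP) sketch.  Cut sets and
# basic-event probabilities are invented for illustration.

def top_event_prob(cut_sets, probs):
    """Min-cut-set upper bound: 1 - prod(1 - P(cut set))."""
    result = 1.0
    for cut_set in cut_sets:
        p = 1.0
        for basic_event in cut_set:
            p *= probs[basic_event]
        result *= 1.0 - p
    return 1.0 - result

def ccdp(cut_sets, probs, failed):
    """Condition on observed failures by setting those basic events to 1."""
    conditioned = dict(probs)
    for basic_event in failed:
        conditioned[basic_event] = 1.0
    return top_event_prob(cut_sets, conditioned)

cut_sets = [{"pump_A", "pump_B"}, {"pump_A", "valve"}]   # invented model
probs = {"pump_A": 1e-3, "pump_B": 1e-2, "valve": 5e-3}
nominal = top_event_prob(cut_sets, probs)        # baseline risk
conditional = ccdp(cut_sets, probs, {"pump_A"})  # pump A observed failed
```

The gap between `conditional` and `nominal` is what makes a precursor event interesting: observing the pump failure raises the core damage probability by orders of magnitude even though nothing else changed.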

  11. Reliability and validity of the Japanese version of the Resilience Scale and its short version

    Directory of Open Access Journals (Sweden)

    Kondo Maki

    2010-11-01

    Full Text Available Abstract. Background: The clinical relevance of resilience has received considerable attention in recent years. The aim of this study is to demonstrate the reliability and validity of the Japanese version of the Resilience Scale (RS) and the short version of the RS (RS-14). Findings: The original English version of the RS was translated to Japanese and the Japanese version was confirmed by back-translation. Participants were 430 nursing and university psychology students. The RS, Center for Epidemiologic Studies Depression Scale (CES-D), Rosenberg Self-Esteem Scale (RSES), Social Support Questionnaire (SSQ), Perceived Stress Scale (PSS), and Sheehan Disability Scale (SDS) were administered. Internal consistency, convergent validity and factor loadings were assessed at initial assessment. Test-retest reliability was assessed using data collected from 107 students at 3 months after baseline. The mean score on the RS was 111.19. Cronbach's alpha coefficients for the RS and RS-14 were 0.90 and 0.88, respectively. The test-retest correlation coefficients for the RS and RS-14 were 0.83 and 0.84, respectively. Both the RS and RS-14 were negatively correlated with the CES-D and SDS, and positively correlated with the RSES, SSQ and PSS (all p ...). Conclusions: This study demonstrates that the Japanese version of the RS has psychometric properties with high degrees of internal consistency, high test-retest reliability, and relatively low concurrent validity. The RS-14 was equivalent to the RS in internal consistency, test-retest reliability, and concurrent validity. Low scores on the RS, a positive correlation between the RS and perceived stress, and a relatively low correlation between the RS and depressive symptoms in this study suggest that the validity of the Japanese version of the RS might be relatively low compared with the original English version.
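The two reliability statistics reported above, Cronbach's alpha for internal consistency and a Pearson correlation for test-retest reliability, can be computed as follows. The functions are a generic sketch; the toy scores in any example are invented, not data from the study.

```python
# Generic sketch of the two reliability statistics named above.
from statistics import mean, pvariance

def cronbach_alpha(items):
    """items: one list of scores per questionnaire item (same respondents).

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(pvariance(scores) for scores in items)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

def pearson_r(x, y):
    """Pearson correlation, e.g. between test and retest total scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

As a sanity check of the formulas: perfectly parallel items give an alpha of exactly 1.0, and perfectly linearly related test and retest scores give r = 1.0.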

  12. Global marine radioactivity database (GLOMARD)

    International Nuclear Information System (INIS)

    2000-06-01

    The GLOMARD stores all available data on marine radioactivity in seawater, suspended matter, sediments and biota. The database provides critical input to the evaluation of the environmental radionuclide levels in regional seas and the world's oceans. It can be used as a basis for the assessment of the radiation doses to local, regional and global human populations and to marine biota. It also provides information on temporal trends of radionuclide levels in the marine environment and identifies gaps in available information. The database contains information on the sources of the data; the laboratories performing radionuclide analysis; the type of samples (seawater, sediment, biota) and associated details (such as volume and weight); the sample treatment, analytical methods, and measuring instruments; and the analysed results (such as radionuclide concentrations, uncertainties, temperature, salinity, etc.). The current version of the GLOMARD allows the input, maintenance and extraction of data for the production of various kinds of maps using external computer programs. Extracted data are processed by these programs to produce contour maps representing radionuclide distributions in studied areas. To date, development work has concentrated on the Barents and Kara Seas in the Arctic and the Sea of Japan in the northwest Pacific Ocean, in connection with the investigation of radioactive waste dumping sites, as well as on marine radioactivity assessment of the Mururoa and Fangataufa nuclear weapons tests sites in French Polynesia. Further data inputs and evaluations are being carried out for the Black and Mediterranean Seas. In the framework of the project on Worldwide Marine Radioactivity Studies, background levels of ³H, ⁹⁰Sr, ¹³⁷Cs and ²³⁹,²⁴⁰Pu in water, sediment and biota of the world's oceans and seas will be established

  13. Neuraxial analgesia to increase the success rate of external cephalic version: a systematic review and meta-analysis of randomized controlled trials.

    Science.gov (United States)

    Magro-Malosso, Elena Rita; Saccone, Gabriele; Di Tommaso, Mariarosaria; Mele, Michele; Berghella, Vincenzo

    2016-09-01

    External cephalic version is a medical procedure in which the fetus is externally manipulated to assume the cephalic presentation. The use of neuraxial analgesia for facilitating the version has been evaluated in several randomized clinical trials, but its potential effects are still controversial. The objective of the study was to evaluate the effectiveness of neuraxial analgesia as an intervention to increase the success rate of external cephalic version. Searches were performed in electronic databases with the use of a combination of text words related to external cephalic version and neuraxial analgesia from the inception of each database to January 2016. We included all randomized clinical trials of women, with a gestational age ≥36 weeks and breech or transverse fetal presentation, undergoing external cephalic version who were randomized to neuraxial analgesia, including spinal, epidural, or combined spinal-epidural techniques (ie, intervention group) or to a control group (either intravenous analgesia or no treatment). The primary outcome was the successful external cephalic version. The summary measures were reported as relative risk or as mean differences with a 95% confidence interval. Nine randomized clinical trials (934 women) were included in this review. Women who received neuraxial analgesia had a significantly higher incidence of successful external cephalic version (58.4% vs 43.1%; relative risk, 1.44, 95% confidence interval, 1.27-1.64), cephalic presentation in labor (55.1% vs 40.2%; relative risk, 1.37, 95% confidence interval, 1.08-1.73), and vaginal delivery (54.0% vs 44.6%; relative risk, 1.21, 95% confidence interval, 1.04-1.41) compared with those who did not. Women who were randomized to the intervention group also had a significantly lower incidence of cesarean delivery (46.0% vs 55.3%; relative risk, 0.83, 95% confidence interval, 0.71-0.97), maternal discomfort (1.2% vs 9.3%; relative risk, 0.12, 95% confidence interval, 0
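The summary measure used in the review, a relative risk with a 95% confidence interval, can be computed for a single 2x2 table with the standard log-RR Wald interval. The counts in any example are invented; the pooled estimates quoted above combine several such tables with meta-analytic weighting.

```python
# Relative risk with a 95% CI for one 2x2 table, via the log-RR Wald
# interval.  A single-table sketch only; pooled meta-analytic estimates
# additionally weight and combine tables across trials.
import math

def relative_risk(events_trt, n_trt, events_ctl, n_ctl):
    """Return (RR, ci_lower, ci_upper) for one treatment/control table."""
    rr = (events_trt / n_trt) / (events_ctl / n_ctl)
    # Standard error of log(RR)
    se_log = math.sqrt(1 / events_trt - 1 / n_trt
                       + 1 / events_ctl - 1 / n_ctl)
    lower = math.exp(math.log(rr) - 1.96 * se_log)
    upper = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lower, upper
```

For invented counts of 50/100 events in the treatment arm versus 25/100 in the control arm, the point estimate is RR = 2.0 with a CI of roughly 1.35 to 2.96; an interval excluding 1 corresponds to a statistically significant effect, as with the successful-version outcome above.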

  14. Energy Consumption Database

    Science.gov (United States)

    The California Energy Commission has created this on-line database for informal reporting of energy consumption by various classifications. The database also provides easy downloading of energy consumption data into Microsoft Excel (XLSX…

  15. Using a Semi-Realistic Database to Support a Database Course

    Science.gov (United States)

    Yue, Kwok-Bun

    2013-01-01

    A common problem for university relational database courses is to construct effective databases for instructions and assignments. Highly simplified "toy" databases are easily available for teaching, learning, and practicing. However, they do not reflect the complexity and practical considerations that students encounter in real-world…

  16. Artemis and ACT: viewing, annotating and comparing sequences stored in a relational database.

    Science.gov (United States)

    Carver, Tim; Berriman, Matthew; Tivey, Adrian; Patel, Chinmay; Böhme, Ulrike; Barrell, Barclay G; Parkhill, Julian; Rajandream, Marie-Adèle

    2008-12-01

    Artemis and Artemis Comparison Tool (ACT) have become mainstream tools for viewing and annotating sequence data, particularly for microbial genomes. Since its first release, Artemis has been continuously developed and supported with additional functionality for editing and analysing sequences based on feedback from an active user community of laboratory biologists and professional annotators. Nevertheless, its utility has been somewhat restricted by its limitation to reading and writing from flat files. Therefore, a new version of Artemis has been developed, which reads from and writes to a relational database schema, and allows users to annotate more complex, often large and fragmented, genome sequences. Artemis and ACT have now been extended to read and write directly to the Generic Model Organism Database (GMOD, http://www.gmod.org) Chado relational database schema. In addition, a Gene Builder tool has been developed to provide structured forms and tables to edit coordinates of gene models and edit functional annotation, based on standard ontologies, controlled vocabularies and free text. Artemis and ACT are freely available (under a GPL licence) for download (for MacOSX, UNIX and Windows) at the Wellcome Trust Sanger Institute web sites: http://www.sanger.ac.uk/Software/Artemis/ http://www.sanger.ac.uk/Software/ACT/

  17. Schefferville Permafrost Temperature Database, Version 1

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set consists of ground temperature data from 192 boreholes in the area of Schefferville, Quebec (54°48'N, 66°50'W), located within the discontinuous...

  18. Comparison of Indian Council for Medical Research and Lunar Databases for Categorization of Male Bone Mineral Density.

    Science.gov (United States)

    Singh, Surya K; Patel, Vivek H; Gupta, Balram

    2017-06-19

    The mainstay of diagnosis of osteoporosis is the dual-energy X-ray absorptiometry (DXA) scan measuring areal bone mineral density (BMD) (g/cm²). The aim of the present study was to compare the Indian Council of Medical Research database (ICMRD) and the Lunar ethnic reference database of DXA scans in the diagnosis of osteoporosis in male patients. In this retrospective study, all male patients who underwent a DXA scan were included. The areal BMD (g/cm²) was measured at either the lumbar spine (L1-L4) or the total hip using the Lunar DXA machine (software version 8.50) manufactured by GE Medical Systems (Shanghai, China). The Indian Council of Medical Research published reference data for BMD in the Indian population, derived from a population-based study conducted in healthy Indian individuals, which were used to analyze the BMD results from the Lunar DXA scan. The two results were compared for various values using the statistical software SPSS for Windows (version 16; SPSS Inc., Chicago, IL). A total of 238 male patients with a mean age of 57.2 yr (standard deviation ±15.9) were included. Overall, 26.4% (66/250) and 2.8% (7/250) of the scan sites were classified in the osteoporosis group according to the Lunar database and the ICMRD, respectively. Out of the 250 DXA scan sites, 28.8% (19/66) and 60.0% (40/66) of the cases classified as osteoporosis by the Lunar database were reclassified as normal and osteopenia by the ICMRD, respectively. In conclusion, the Indian Council of Medical Research data underestimated the degree of osteoporosis in male subjects, which might result in deferral of treatment. In view of the discrepancy, the decision on the treatment of osteoporosis should be based on multiple fracture risk factors and less on the BMD T-score alone. Copyright © 2017 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
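    The discrepancy between reference databases comes down to the mean and SD used in the T-score, T = (BMD − young-adult reference mean)/reference SD, combined with the WHO cutoffs. A minimal sketch, using illustrative reference values (not the actual ICMR or Lunar figures):

```python
def t_score(bmd, ref_mean, ref_sd):
    """T-score: standard deviations from the young-adult reference mean."""
    return (bmd - ref_mean) / ref_sd

def who_category(t):
    """WHO DXA classification by T-score."""
    if t >= -1.0:
        return "normal"
    if t > -2.5:
        return "osteopenia"
    return "osteoporosis"

# Hypothetical reference values; real databases supply sex-, site- and
# ethnicity-specific means and SDs, which is exactly what the study compares
ref_mean, ref_sd = 1.20, 0.12   # lumbar spine areal BMD in g/cm^2 (illustrative)
for bmd in (1.15, 1.02, 0.85):
    t = t_score(bmd, ref_mean, ref_sd)
    print(f"BMD {bmd:.2f} g/cm^2 -> T = {t:+.1f} ({who_category(t)})")
```

    A shift in `ref_mean` or `ref_sd` moves patients across the −1.0 and −2.5 cutoffs, which is how the same scan can be osteoporotic under one database and normal under another.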

  19. Artificial Radionuclides Database in the Pacific Ocean: HAM Database

    Directory of Open Access Journals (Sweden)

    Michio Aoyama

    2004-01-01

    Full Text Available The database “Historical Artificial Radionuclides in the Pacific Ocean and its Marginal Seas”, or HAM database, has been created. The database includes 90Sr, 137Cs, and 239,240Pu concentration data from the seawater of the Pacific Ocean and its marginal seas, with measurements spanning from the sea surface to the bottom. The data in the HAM database were collected from about 90 literature citations, which include published papers; annual reports by the Hydrographic Department, Maritime Safety Agency, Japan; and unpublished data provided by individuals. The concentration data for 90Sr, 137Cs, and 239,240Pu cover the period 1957–1998. The present HAM database includes 7737 records for 137Cs concentration data, 3972 records for 90Sr concentration data, and 2666 records for 239,240Pu concentration data. The spatial distribution of sampling stations in the HAM database is heterogeneous: more than 80% of the data for each radionuclide are from the Pacific Ocean and the Sea of Japan, while a relatively small portion of the data is from the South Pacific. The HAM database will allow these radionuclides to be used as significant chemical tracers for oceanographic study, as well as for the assessment of the environmental effects of anthropogenic radionuclides over these five decades. Furthermore, these radionuclides can be used to verify oceanic general circulation models on the time scale of several decades.

  20. Development of an updated phytoestrogen database for use with the SWAN food frequency questionnaire: intakes and food sources in a community-based, multiethnic cohort study.

    Science.gov (United States)

    Huang, Mei-Hua; Norris, Jean; Han, Weijuan; Block, Torin; Gold, Ellen; Crawford, Sybil; Greendale, Gail A

    2012-01-01

    Phytoestrogens, heterocyclic phenols found in plants, may benefit several health outcomes. However, epidemiologic studies of the health effects of dietary phytoestrogens have yielded mixed results, in part due to challenges inherent in estimating dietary intakes. The goal of this study was to improve the estimates of dietary phytoestrogen consumption using a modified Block Food Frequency Questionnaire (FFQ), a 137-item FFQ created for the Study of Women's Health Across the Nation (SWAN) in 1994. To expand the database of sources from which phytonutrient intakes were computed, we conducted a comprehensive PubMed/Medline search covering January 1994 through September 2008. The expanded database included 4 isoflavones, coumestrol, and 4 lignans. The new database estimated isoflavone content of 105 food items (76.6%) vs. 14 (10.2%) in the 1994 version and computed coumestrol content of 52 food items (38.0%), compared to 1 (0.7%) in the original version. Newly added were lignans; values for 104 FFQ food items (75.9%) were calculated. In addition, we report here the phytonutrient intakes for each racial and language group in the SWAN sample and present major food sources from which the phytonutrients came. This enhanced ascertainment of phytoestrogens will permit improved studies of their health effects.

  1. Databases and their application

    NARCIS (Netherlands)

    Grimm, E.C.; Bradshaw, R.H.W; Brewer, S.; Flantua, S.; Giesecke, T.; Lézine, A.M.; Takahara, H.; Williams, J.W.,Jr; Elias, S.A.; Mock, C.J.

    2013-01-01

    During the past 20 years, several pollen database cooperatives have been established. These databases are now constituent databases of the Neotoma Paleoecology Database, a public domain, multiproxy, relational database designed for Quaternary-Pliocene fossil data and modern surface samples. The

  2. Updated database plus software for line-mixing in CO2 infrared spectra and their test using laboratory spectra in the 1.5-2.3 μm region

    International Nuclear Information System (INIS)

    Lamouroux, J.; Tran, H.; Laraia, A.L.; Gamache, R.R.; Rothman, L.S.; Gordon, I.E.; Hartmann, J.-M.

    2010-01-01

    In a previous series of papers, a model for the calculation of CO2-air absorption coefficients taking line-mixing into account, and the corresponding database/software package, were described and widely tested. In this study, we present an update of this package, based on the 2008 version of HITRAN, the latest currently available. The spectroscopic data for the seven most abundant isotopologues are taken from HITRAN. When the HITRAN data are not complete up to J''=70, the data files are augmented with spectroscopic parameters from the CDSD-296 database, and from the high-temperature CDSD-1000 if necessary. Previously missing spectroscopic parameters, the air-induced pressure shifts and the CO2 line-broadening coefficients with H2O, have been added. The quality of this new database is demonstrated by comparisons of calculated absorptions and measurements using CO2 high-pressure laboratory spectra in the 1.5-2.3 μm region. The influence of the imperfections and inaccuracies of the spectroscopic parameters from the 2000 version of HITRAN is clearly shown, as a marked improvement in the residuals is observed when using the new database. The very good agreement between calculated and measured absorption coefficients confirms the necessity of the update presented here and further demonstrates the importance of line-mixing effects, especially at the high pressures investigated here. The application of the updated database/software package to atmospheric spectra should result in increased accuracy in the retrieval of atmospheric CO2 amounts. This opens improved perspectives for the space-borne detection of carbon dioxide sources and sinks.
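    First-order (Rosenkranz) line-mixing is one common way such packages fold mixing into a sum of Lorentzian lines; the sketch below illustrates that idea with invented line parameters, and is not the actual model or HITRAN/CDSD data used by the package described above:

```python
import numpy as np

def absorption(nu, lines, mixing=True):
    """Sum of Lorentzian lines with a first-order (Rosenkranz) mixing term.

    lines: iterable of (center nu0, intensity S, half-width gamma, mixing Y).
    With Y = 0 this reduces to a plain sum of isolated Lorentzians.
    """
    alpha = np.zeros_like(nu, dtype=float)
    for nu0, S, gamma, Y in lines:
        d = nu - nu0
        y = Y if mixing else 0.0
        # The y*d term transfers intensity between overlapping lines
        alpha += (S / np.pi) * (gamma + y * d) / (d**2 + gamma**2)
    return alpha

# Illustrative (not HITRAN) parameters for two overlapping lines, in cm^-1 units
lines = [(2000.0, 1.0, 0.07, 0.05), (2000.3, 0.8, 0.07, -0.04)]
nu = np.linspace(1999.0, 2001.5, 500)
a_mix = absorption(nu, lines, mixing=True)
a_iso = absorption(nu, lines, mixing=False)
```

    The effect grows with pressure (through gamma and Y), which is consistent with the abstract's point that mixing matters most in the high-pressure spectra.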

  3. Database Optimizing Services

    Directory of Open Access Journals (Sweden)

    Adrian GHENCEA

    2010-12-01

    Full Text Available Almost every organization has a database at its centre. The database provides support for conducting different activities, whether production, sales and marketing, or internal operations. Every day, a database is accessed for help in strategic decisions. Meeting such needs therefore requires high-quality security and availability. These needs can be met using a DBMS (Database Management System), which is, in fact, software for managing a database. Technically speaking, it is software that uses a standard method of cataloguing and recovering data and of running different data queries. A DBMS manages the input data, organizes it, and provides ways for its users or other programs to modify or extract the data. Managing a database is an operation that requires periodic updates, optimization and monitoring.

  4. Update History of This Database - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available RED Update History of This Database. 2015/12/21: Rice Expression Database English archive site is opened. 2000/10/1: Rice Expression Database ( http://red.dna.affrc.go.jp/RED/ ) is opened.

  5. rSNPBase 3.0: an updated database of SNP-related regulatory elements, element-gene pairs and SNP-based gene regulatory networks.

    Science.gov (United States)

    Guo, Liyuan; Wang, Jing

    2018-01-04

    Here, we present the updated rSNPBase 3.0 database (http://rsnp3.psych.ac.cn), which provides human SNP-related regulatory elements, element-gene pairs and SNP-based regulatory networks. This database is the updated version of the SNP regulatory annotation databases rSNPBase and rVarBase. In comparison to the last two versions, there are both structural and data adjustments in rSNPBase 3.0: (i) The most significant new feature is the expansion of analysis scope from SNP-related regulatory elements to include regulatory element-target gene pairs (E-G pairs), so that it can provide SNP-based gene regulatory networks. (ii) The web functions were modified according to data content, and a new network search module is provided in rSNPBase 3.0 in addition to the previous regulatory SNP (rSNP) search module. The two search modules support data query for detailed information (related elements, element-gene pairs, and other extended annotations) on specific SNPs and SNP-related graphic networks constructed by interacting transcription factors (TFs), miRNAs and genes. (iii) The types of regulatory elements were modified and enriched. To the best of our knowledge, the updated rSNPBase 3.0 is the first data tool that supports SNP functional analysis from a regulatory network perspective; it will provide both a comprehensive understanding and concrete guidance for SNP-related regulatory studies. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. GRIP Database original data - GRIPDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available GRIPDB GRIP Database original data. Data name: GRIP Database original data. DOI: 10.18908/lsdba.nbdc01665-006. Description of data contents: consists of data tables and sequences. Data file name: gripdb_original_data.zip. File URL: ftp://ftp.biosciencedbc.jp/archive/gripdb/LATEST/…

  7. A new version of the ERICA tool to facilitate impact assessments of radioactivity on wild plants and animals.

    Science.gov (United States)

    Brown, J E; Alfonso, B; Avila, R; Beresford, N A; Copplestone, D; Hosseini, A

    2016-03-01

    A new version of the ERICA Tool (version 1.2) was released in November 2014; this constitutes the first major update of the Tool since release in 2007. The key features of the update are presented in this article. Of particular note are new transfer databases extracted from an international compilation of concentration ratios (CRwo-media) and the modification of 'extrapolation' approaches used to select transfer data in cases where information is not available. Bayesian updating approaches have been used in some cases to draw on relevant information that would otherwise have been excluded in the process of deriving CRwo-media statistics. All of these efforts have in turn led to the requirement to update Environmental Media Concentration Limits (EMCLs) used in Tier 1 assessments. Some of the significant changes with regard to EMCLs are highlighted. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Montage Version 3.0

    Science.gov (United States)

    Jacob, Joseph; Katz, Daniel; Prince, Thomas; Berriman, Graham; Good, John; Laity, Anastasia

    2006-01-01

    The final version (3.0) of the Montage software has been released. To recapitulate from previous NASA Tech Briefs articles about Montage: This software generates custom, science-grade mosaics of astronomical images on demand from input files that comply with the Flexible Image Transport System (FITS) standard and contain image data registered on projections that comply with the World Coordinate System (WCS) standards. This software can be executed on single-processor computers, multi-processor computers, and such networks of geographically dispersed computers as the National Science Foundation's TeraGrid or NASA's Information Power Grid. The primary advantage of running Montage in a grid environment is that computations can be done on a remote supercomputer for efficiency. Multiple computers at different sites can be used for different parts of a computation, a significant advantage in cases of computations for large mosaics that demand more processor time than is available at any one site. Version 3.0 incorporates several improvements over prior versions. The most significant improvement is that this version is accessible to scientists located anywhere, through operational Web services that provide access to data from several large astronomical surveys and construct mosaics on either local workstations or remote computational grids as needed.

  9. Fetomaternal hemorrhage during external cephalic version.

    Science.gov (United States)

    Boucher, Marc; Marquette, Gerald P; Varin, Jocelyne; Champagne, Josette; Bujold, Emmanuel

    2008-07-01

    To estimate the frequency and volume of fetomaternal hemorrhage during external cephalic version for term breech singleton fetuses and to identify risk factors involved with this complication. A prospective observational study was performed including all patients undergoing a trial of external cephalic version for a breech presentation of at least 36 weeks of gestation between 1987 and 2001 in our center. A search for fetal erythrocytes using the standard Kleihauer-Betke test was obtained before and after each external cephalic version. The frequency and volume of fetomaternal hemorrhage were calculated. Putative risk factors for fetomaternal hemorrhage were evaluated by the chi-square test and the Mann-Whitney U test. A Kleihauer-Betke test result was available before and after 1,311 trials of external cephalic version. The Kleihauer-Betke test was positive in 67 (5.1%) before the procedure. Of the 1,244 women with a negative Kleihauer-Betke test before external cephalic version, 30 (2.4%) had a positive Kleihauer-Betke test after the procedure. Ten (0.8%) had an estimated fetomaternal hemorrhage greater than 1 mL, and one (0.08%) had an estimated fetomaternal hemorrhage greater than 30 mL. The risk of fetomaternal hemorrhage was not influenced by parity, gestational age, body mass index, number of attempts at version, placental location, or amniotic fluid index. The risk of detectable fetomaternal hemorrhage during external cephalic version was 2.4%, with fetomaternal hemorrhage of more than 30 mL in less than 0.1% of cases. These data suggest that the performance of a Kleihauer-Betke test is unwarranted after uneventful external cephalic version and that in Rh-negative women, no further Rh immune globulin is necessary other than the routine 300-microgram dose at 28 weeks of gestation and postpartum. II.
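    The Kleihauer-Betke fetal-cell percentage is conventionally converted to an estimated bleed volume by simple proportion against an assumed maternal blood volume. The helper below is an illustrative sketch of that common approximation (the 5000 mL maternal blood volume is an assumption, and labs may apply further corrections, e.g. for fetal cell size), not the study's exact calculation:

```python
def fmh_ml(fetal_cells, maternal_cells, maternal_blood_volume_ml=5000):
    """Estimate fetomaternal hemorrhage (FMH) volume from a
    Kleihauer-Betke count: fetal-cell fraction times an assumed
    maternal blood volume."""
    fraction = fetal_cells / maternal_cells
    return fraction * maternal_blood_volume_ml

# e.g., 12 fetal cells per 2000 maternal cells -> 0.6% -> 30 mL,
# which is the threshold figure discussed in the abstract
print(f"{fmh_ml(12, 2000):.0f} mL")
```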

  10. Update History of This Database - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available RPD Update History of This Database. 2016/02/02: Rice Proteome Database English archive site is opened. 2003/01/07: Rice Proteome Database ( http://gene64.dna.affrc.go.jp/RPD/ ) is opened.

  11. Progress Towards AIRS Science Team Version-7 at SRT

    Science.gov (United States)

    Susskind, Joel; Blaisdell, John; Iredell, Lena; Kouvaris, Louis

    2016-01-01

    The AIRS Science Team Version-6 retrieval algorithm is currently producing level-3 Climate Data Records (CDRs) from AIRS that have been proven useful to scientists in understanding climate processes. CDRs are gridded level-3 products which include all cases passing AIRS Climate QC. SRT has made significant further improvements to AIRS Version-6. At the last Science Team Meeting, we described results using SRT AIRS Version-6.22. SRT Version-6.22 is now an official build at JPL called 6.2.4. Version-6.22 results are significantly improved compared to Version-6, especially with regard to water vapor and ozone profiles. We have adapted AIRS Version-6.22 to run with CrIS/ATMS, at the Sounder SIPS which processed CrIS/ATMS data for August 2014. JPL AIRS Version-6.22 uses the Version-6 AIRS tuning coefficients. AIRS Version-6.22 has at least two limitations which must be improved before finalization of Version-7: Version-6.22 total O3 has spurious high values in the presence of Saharan dust over the ocean; and Version-6.22 retrieved upper stratospheric temperatures are very poor in polar winter. SRT Version-6.28 addresses the first concern. John Blaisdell ran the analog of AIRS Version-6.28 in his own sandbox at JPL for the 14th and 15th of every month in 2014 and all of July and October for 2014. AIRS Version-6.28a is hot off the presses and addresses the second concern.

  12. Brasilia’s Database Administrators

    Directory of Open Access Journals (Sweden)

    Jane Adriana

    2016-06-01

    Full Text Available Database administration has gained an essential role in the management of new database technologies. Different data models are being created to support enormous data volumes, going beyond the traditional relational database. These new models are called NoSQL (Not only SQL) databases. The adoption of best practices and procedures has become essential for the operation of database management systems. Thus, this paper investigates some of the techniques and tools used by database administrators. The study highlights features and particularities of database administration in the area of Brasilia, the capital of Brazil. The results point to which new database-management technologies are currently the most relevant, as well as the central issues in this area.

  13. Intercomparison of ILAS-II version 1.4 and version 2 target parameters with MIPAS-Envisat measurements

    Directory of Open Access Journals (Sweden)

    A. Griesfeller

    2008-02-01

    Full Text Available This paper assesses the mean differences between the two ILAS-II data versions (1.4 and 2) by comparing them with MIPAS measurements made between May and October 2003. For comparison with ILAS-II results, MIPAS data processed at the Institut für Meteorologie und Klimaforschung, Karlsruhe, Germany (IMK) in cooperation with the Instituto de Astrofísica de Andalucía (IAA) in Granada, Spain, were used. Coincidence criteria of ±300 km in space and ±12 h in time were used for H2O, N2O, and CH4, and of ±300 km in space and ±6 h in time for ClONO2, O3, and HNO3. The ILAS-II data were separated into sunrise (= Northern Hemisphere) and sunset (= Southern Hemisphere) measurements. For the sunrise data, a clear improvement from version 1.4 to version 2 was observed for H2O, CH4, ClONO2, and O3. In particular, the ILAS-II version 1.4 mixing ratios of H2O and CH4 were unrealistically small, and those of ClONO2 above altitudes of 30 km unrealistically large. For N2O and HNO3, there were no large differences between the two versions. Contrary to the Northern Hemisphere, where some exceptional profiles deviated significantly from known climatology, no such outlying profiles were found in the Southern Hemisphere for either version. Generally, the ILAS-II version 2 data were in better agreement with the MIPAS data than version 1.4, and are recommended for quantitative analysis in the stratosphere. For H2O data in the Southern Hemisphere, further data quality evaluation is necessary.

  14. Mars Global Digital Dune Database; MC-1

    Science.gov (United States)

    Hayward, R.K.; Fenton, L.K.; Tanaka, K.L.; Titus, T.N.; Colaprete, A.; Christensen, P.R.

    2010-01-01

    It is beyond the scope of this report to measure all slipfaces. We attempted to include enough slipface measurements to represent the general circulation (as implied by gross dune morphology) and to give a sense of the complex nature of aeolian activity on Mars. The absence of slipface measurements in a given direction should not be taken as evidence that winds in that direction did not occur. When a dune field was located within a crater, the azimuth from crater centroid to dune field centroid was calculated, as another possible indicator of wind direction. Output from a general circulation model (GCM) is also included. In addition to polygons locating dune fields, the database includes THEMIS visible (VIS) and Mars Orbiter Camera Narrow Angle (MOC NA) images that were used to build the database. The database is presented in a variety of formats. It is presented as an ArcReader project, which can be opened using the free ArcReader software. The latest version of ArcReader can be downloaded at http://www.esri.com/software/arcgis/arcreader/download.html. The database is also presented in an ArcMap project. The ArcMap project allows fuller use of the data, but requires ESRI ArcMap® software. A fuller description of the projects can be found in the NP_Dunes_ReadMe file (NP_Dunes_ReadMe folder) and the NP_Dunes_ReadMe_GIS file (NP_Documentation folder). For users who prefer to create their own projects, the data are available in ESRI shapefile and geodatabase formats, as well as the open Geography Markup Language (GML) format. A printable map of the dunes and craters in the database is available as a Portable Document Format (PDF) document. The map is also included as a JPEG file (NP_Documentation folder). Documentation files are available in PDF and ASCII (.txt) formats. Tables are available in both Excel and ASCII (.txt) formats.

  15. Development of a Chinese version of the Oswestry Disability Index version 2.1.

    Science.gov (United States)

    Lue, Yi-Jing; Hsieh, Ching-Lin; Huang, Mao-Hsiung; Lin, Gau-Tyan; Lu, Yen-Mou

    2008-10-01

    Cross-cultural adaptation and cross-sectional psychometric testing in a convenience sample of patients with low back pain. To translate and culturally adapt the Oswestry Disability Index version 2.1 (ODI 2.1) into a Mandarin Chinese version and to assess its reliability and validity. The Chinese ODI 2.1 had not previously been developed and validated. The ODI 2.1 was translated and culturally adapted into the Chinese version. The validity of the translated Chinese version was assessed by examining the relationship between the ODI and other well-known measures. Test-retest reliability was examined in 52 of these patients, who completed a second questionnaire within 1 week. Internal consistency of the ODI 2.1 was excellent, with Cronbach's alpha = 0.903. The intraclass correlation coefficient of test-retest reliability was 0.89. The minimal detectable change was 12.8. The convergent validity of the Chinese ODI is supported by its high correlation with other physical functional status measures (Roland Morris Disability Questionnaire and SF-36 physical functioning subscale, r = 0.76 and -0.75, respectively), and moderate correlation with other measures (Visual Analogue Scale, r = 0.68) and certain SF-36 subscales (role-physical, bodily pain, and social functioning, r range: -0.49 to -0.57). As expected, the ODI was least correlated with nonfunctional measures (SF-36 mental subscale and role-emotional subscale, r = -0.25 and -0.33, respectively). The results of this study indicate that the Chinese version of the ODI 2.1 is a reliable and valid instrument for the measurement of functional status in patients with low back pain.
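    The minimal detectable change reported above is conventionally derived from the test-retest ICC via the standard error of measurement: SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM. A sketch of that calculation, where the ICC of 0.89 is from the abstract but the baseline SD of 13.9 ODI points is an assumed value chosen so the result lands near the reported 12.8:

```python
import math

def sem(sd, icc):
    """Standard error of measurement from test-retest reliability."""
    return sd * math.sqrt(1 - icc)

def mdc95(sd, icc):
    """Minimal detectable change at 95% confidence."""
    return 1.96 * math.sqrt(2) * sem(sd, icc)

# ICC from the abstract; SD is an illustrative assumption
print(f"MDC95 = {mdc95(13.9, 0.89):.1f}")
```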

  16. NREL: U.S. Life Cycle Inventory Database - About the LCI Database Project

    Science.gov (United States)

    The U.S. Life Cycle Inventory (LCI) Database is a publicly available database that allows users to objectively review and compare analysis results that are based on similar data. NREL maintains a central source of critically reviewed LCI data through its LCI Database Project. NREL's High-Performance…

  17. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  18. NIRS database of the original research database

    International Nuclear Information System (INIS)

    Morita, Kyoko

    1991-01-01

    Recently, library staff arranged and compiled the original research papers that have been written by researchers during the 33 years since the National Institute of Radiological Sciences (NIRS) was established. This paper describes how the internal database of original research papers was created. It is a small example of a hand-made database, accumulated by staff members with some knowledge of computers and programming. (author)

  19. Teaching Case: Adapting the Access Northwind Database to Support a Database Course

    Science.gov (United States)

    Dyer, John N.; Rogers, Camille

    2015-01-01

    A common problem encountered when teaching database courses is that few large illustrative databases exist to support teaching and learning. Most database textbooks have small "toy" databases that are chapter objective specific, and thus do not support application over the complete domain of design, implementation and management concepts…

  20. International Reactor Physics Handbook Database and Analysis Tool (IDAT) - IDAT user manual

    International Nuclear Information System (INIS)

    2013-01-01

    The IRPhEP Database and Analysis Tool (IDAT) was first released in 2013 and is included on the DVD. This database and its corresponding user interface allow easy access to handbook information. Selected information from each configuration was entered into IDAT, such as the measurements performed, benchmark values, calculated values and materials specifications of the benchmark. In many cases this is supplemented with calculated data such as neutron balance data, spectra data, k-eff nuclear data sensitivities, and spatial reaction rate plots. IDAT accomplishes two main objectives: 1. Allow users to search the handbook for experimental configurations that satisfy their input criteria. 2. Allow users to trend results and identify suitable benchmark experiments for their application. IDAT provides the user with access to several categories of calculated data, including: - 1-group neutron balance data for each configuration, with individual isotope contributions in the reactor system. - Flux and other reaction-rate spectra in a 299-group energy scheme. Plotting capabilities were implemented in IDAT, allowing the user to compare the spectra of selected configurations in the original fine energy structure or on any user-defined broader energy structure. - Sensitivity coefficients (percent changes of k-effective due to an elementary change of basic nuclear data) for the major nuclides and nuclear processes in a 238-group energy structure. IDAT is actively being developed. Those approved to access the online version of the handbook will also have access to an online version of IDAT. As May 2013 marks the first release, IDAT may contain data entry errors and omissions. The handbook remains the primary source of reactor physics benchmark data. A copy of the IDAT user's manual is attached to this document. A copy of the IRPhE Handbook can be obtained on request at http://www.oecd-nea.org/science/wprs/irphe/irphe-handbook/form.html

  1. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases such...

  2. Applications of GIS and database technologies to manage a Karst Feature Database

    Science.gov (United States)

    Gao, Y.; Tipping, R.G.; Alexander, E.C.

    2006-01-01

This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications, in GIS and in a Database Management System (DBMS), have been developed for the KFD of Minnesota. These applications were used to manage the KFD and to enhance its usability. Structured Query Language (SQL) was used to manipulate database transactions and to support the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology of designing this DBMS is applicable to developing GIS-based databases for analyzing and managing geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst, and the long-term goal is to expand this database to manage and study karst features at national and global scales.
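
The SQL-based transaction management described above can be illustrated with a minimal sketch using Python's built-in sqlite3 module. The schema, table name, and sample record below are hypothetical illustrations, not the actual KFD schema:

```python
import sqlite3

# Hypothetical, minimal schema for illustration only; the real KFD
# schema is not published in this abstract.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE karst_feature (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        type TEXT,   -- e.g. 'sinkhole', 'spring'
        lat  REAL,
        lon  REAL
    )""")

# Wrap inserts in a transaction so a failure rolls back cleanly;
# sqlite3's connection context manager commits on success and rolls
# back on exception.
try:
    with conn:
        conn.execute(
            "INSERT INTO karst_feature (name, type, lat, lon) "
            "VALUES (?, ?, ?, ?)",
            ("Example Sink", "sinkhole", 44.02, -92.47),
        )
except sqlite3.Error:
    pass

rows = conn.execute(
    "SELECT name, type FROM karst_feature WHERE type = ?", ("sinkhole",)
).fetchall()
```

Parameterized queries (the `?` placeholders) are the standard way to keep such transaction code safe from injection, regardless of the DBMS behind it.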

  3. ELIPGRID-PC: Upgraded version

    International Nuclear Information System (INIS)

    Davidson, J.R.

    1995-12-01

Evaluating the need for and the effectiveness of remedial cleanup at waste sites often includes finding average contaminant concentrations and identifying pockets of contamination called hot spots. The standard tool for calculating the probability of detecting such hot spots has been the ELIPGRID code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® personal computer (PC) or compatible. A new version of ELIPGRID-PC, incorporating Monte Carlo test results and simple graphics, is herein described. Various examples of how to use the program for both single and multiple hot spot cases are given. The code for an American National Standards Institute (ANSI) C version of the ELIPGRID algorithm is provided, and limitations and further work are noted. This version of ELIPGRID-PC reliably meets the goal of moving Singer's ELIPGRID algorithm to the PC.

  4. Time Series Discord Detection in Medical Data using a Parallel Relational Database [PowerPoint

    Energy Technology Data Exchange (ETDEWEB)

    Woodbridge, Diane; Wilson, Andrew T.; Rintoul, Mark Daniel; Goldstein, Richard H.

    2015-11-01

Recent advances in sensor technology have made continuous real-time health monitoring available in both hospital and non-hospital settings. Because high-frequency medical sensors produce very large volumes of data, storing and processing continuous medical data is an emerging big-data problem, and detecting anomalies in real time is especially important for detecting and preventing patient emergencies. A time series discord is a subsequence that has the maximum distance to its nearest non-self matching subsequence, meaning that it exhibits abnormal or unusual data trends. In this study, we implemented two versions of a time series discord detection algorithm on a high-performance parallel database management system (DBMS) and applied them to 240 Hz waveform data collected from 9,723 patients. The initial brute-force version of the discord detection algorithm takes each possible subsequence and calculates the distance to its nearest non-self match to find the biggest discords in the time series. For the heuristic version of the algorithm, a combination of an array and a trie structure was applied to order the time series data for better time efficiency. The study results showed efficient data loading, decoding and discord searches in a large amount of data, benefiting from the time series discord detection algorithm and the architectural characteristics of the parallel DBMS, including data compression, data pipelining, and task scheduling.
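
The brute-force variant described above (each subsequence scored by the distance to its nearest non-self match) can be sketched in a few lines. The window size and Euclidean distance below are illustrative assumptions, not the authors' exact parameters:

```python
import numpy as np

def brute_force_discord(series, window):
    """Return (start_index, distance) of the top discord.

    The discord is the subsequence whose distance to its nearest
    non-self match (no overlapping window) is largest.
    """
    n = len(series) - window + 1
    subseqs = np.array([series[i:i + window] for i in range(n)])
    best_idx, best_dist = -1, -1.0
    for i in range(n):
        nearest = np.inf
        for j in range(n):
            if abs(i - j) < window:   # skip overlapping self-matches
                continue
            d = np.linalg.norm(subseqs[i] - subseqs[j])
            nearest = min(nearest, d)
        if nearest > best_dist:
            best_idx, best_dist = i, nearest
    return best_idx, best_dist
```

This O(n²) scan is exactly what makes a heuristic ordering (the array-plus-trie combination the study mentions) attractive: a good ordering lets most inner-loop comparisons terminate early.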

  5. Reliability and validity of the Japanese version of the Resilience Scale and its short version.

    Science.gov (United States)

    Nishi, Daisuke; Uehara, Ritei; Kondo, Maki; Matsuoka, Yutaka

    2010-11-17

The clinical relevance of resilience has received considerable attention in recent years. The aim of this study was to demonstrate the reliability and validity of the Japanese version of the Resilience Scale (RS) and the short version of the RS (RS-14). The original English version of the RS was translated into Japanese, and the Japanese version was confirmed by back-translation. Participants were 430 nursing and university psychology students. The RS, Center for Epidemiologic Studies Depression Scale (CES-D), Rosenberg Self-Esteem Scale (RSES), Social Support Questionnaire (SSQ), Perceived Stress Scale (PSS), and Sheehan Disability Scale (SDS) were administered. Internal consistency, convergent validity and factor loadings were assessed at initial assessment. Test-retest reliability was assessed using data collected from 107 students 3 months after baseline. The mean score on the RS was 111.19. Cronbach's alpha coefficients for the RS and RS-14 were 0.90 and 0.88, respectively. The test-retest correlation coefficients for the RS and RS-14 were 0.83 and 0.84, respectively. Both the RS and RS-14 were negatively correlated with the CES-D and SDS, and positively correlated with the RSES, SSQ and PSS (all significant). The RS demonstrated good internal consistency and test-retest reliability, and relatively low concurrent validity. The RS-14 was equivalent to the RS in internal consistency, test-retest reliability, and concurrent validity. Low scores on the RS, a positive correlation between the RS and perceived stress, and a relatively low correlation between the RS and depressive symptoms in this study suggest that the validity of the Japanese version of the RS might be relatively low compared with the original English version.

  6. Open Geoscience Database

    Science.gov (United States)

    Bashev, A.

    2012-04-01

Currently there is an enormous number of various geoscience databases. Unfortunately, the only users of the majority of these databases are their elaborators. There are several reasons for that: incompatibility, specificity of tasks and objects, and so on. However, the main obstacles to wide usage of geoscience databases are complexity for elaborators and complication for users. The complexity of architecture leads to high costs that block public access; the complication prevents users from understanding when and how to use the database. Only databases associated with GoogleMaps don't have these drawbacks, but they could hardly be named "geoscience". Nevertheless, an open and simple geoscience database is necessary at least for educational purposes (see our abstract for ESSI20/EOS12). We developed a database and a web interface to work with it, and it is now accessible at maps.sch192.ru. In this database a result is a value of a parameter (no matter which) at a station with a certain position, associated with metadata: the date when the result was obtained; the type of station (lake, soil etc.); the contributor that sent the result. Each contributor has their own profile, which allows users to estimate the reliability of the data. The results can be represented on the GoogleMaps space image as a point at a certain position, coloured according to the value of the parameter. There are default colour scales, and each registered user can create their own scale. The results can also be extracted to a *.csv file. For both types of representation one can select the data by date, object type, parameter type, area and contributor. The data are uploaded in *.csv format: Name of the station; Latitude(dd.dddddd); Longitude(ddd.dddddd); Station type; Parameter type; Parameter value; Date(yyyy-mm-dd). The contributor is recognised on upload. This is the minimal set of features required to connect a value of a parameter with a position and see the results. All the complicated data
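
The upload format described above can be parsed with a short sketch. The semicolon delimiter and the field handling below are assumptions inferred from the way the column list is written, not a published specification:

```python
import csv
import io
import datetime

# Column order as listed in the abstract; names are our own labels.
FIELDS = ["station", "lat", "lon", "station_type",
          "param_type", "param_value", "date"]

def parse_results(text):
    """Parse ';'-delimited upload records into typed dicts."""
    rows = []
    for rec in csv.reader(io.StringIO(text), delimiter=";"):
        rec = [field.strip() for field in rec]
        row = dict(zip(FIELDS, rec))
        row["lat"] = float(row["lat"])            # dd.dddddd
        row["lon"] = float(row["lon"])            # ddd.dddddd
        row["param_value"] = float(row["param_value"])
        row["date"] = datetime.date.fromisoformat(row["date"])  # yyyy-mm-dd
        rows.append(row)
    return rows
```

Typing the latitude, longitude, value, and date at ingest time is what makes the later filtering by date, area, and parameter straightforward.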

  7. Identifying Primary Spontaneous Pneumothorax from Administrative Databases: A Validation Study

    Directory of Open Access Journals (Sweden)

    Eric Frechette

    2016-01-01

Introduction. Primary spontaneous pneumothorax (PSP) is a disorder commonly encountered in healthy young individuals. There is no differentiation between PSP and secondary pneumothorax (SP) in the current version of the International Classification of Diseases (ICD-10), which complicates the conduct of epidemiological studies on the subject. Objective. To validate the accuracy of an algorithm that identifies cases of PSP from administrative databases. Methods. The charts of 150 patients who consulted the emergency room (ER) with a recorded main diagnosis of pneumothorax were reviewed to define the type of pneumothorax that occurred. The corresponding hospital administrative data collected during previous hospitalizations and ER visits were processed through the proposed algorithm. The results were compared across two age groups. Results. There were 144 cases of pneumothorax correctly coded (96%). The results obtained from the PSP algorithm demonstrated a significantly higher sensitivity (97% versus 81%, p=0.038) and positive predictive value (87% versus 46%, p<0.001) in patients under 40 years of age than in older patients. Conclusions. The proposed algorithm is adequate to identify cases of PSP from administrative databases in the age group classically associated with the disease. This makes possible its use in large population-based studies.

  8. Software test plan/description/report (STP/STD/STR) for the enhanced logistics intratheater support tool (ELIST) global data segment. Version 8.1.0.0, Database Instance Segment Version 8.1.0.0, ...[elided] and Reference Data Segment Version 8.1.0.0 for Solaris 7; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.; Absil-Mills, M.; Jacobs, K.

    2002-01-01

This document is the Software Test Plan/Description/Report (STP/STD/STR) for the DII COE Enhanced Logistics Intratheater Support Tool (ELIST) mission application. It combines in one document the information normally presented separately in a Software Test Plan, a Software Test Description, and a Software Test Report; it also presents this information in one place for all the segments of the ELIST mission application. The primary purpose of this document is to show that ELIST has been tested by the developer and found, by that testing, to install, deinstall, and work properly. The information presented here is detailed enough to allow the reader to repeat the testing independently. The remainder of this document is organized as follows. Section 1.1 identifies the ELIST mission application. Section 2 is the list of all documents referenced in this document. Section 3, the Software Test Plan, outlines the testing methodology and scope, the latter by way of a concise summary of the tests performed. Section 4 presents detailed descriptions of the tests, along with the expected and observed results; that section therefore combines the information normally found in a Software Test Description and a Software Test Report. The remaining small sections present supplementary information. Throughout this document, the phrase ELIST IP refers to the Installation Procedures (IP) for the Enhanced Logistics Intratheater Support Tool (ELIST) Global Data Segment, Database Instance Segment, Database Fill Segment, Database Segment, Database Utility Segment, Software Segment, and Reference Data Segment.

  9. Introducing external cephalic version in a Malaysian setting.

    Science.gov (United States)

    Yong, Stephen P Y

    2007-02-01

To assess the outcome of external cephalic version for routine management of malpresenting foetuses at term. Prospective observational study. Tertiary teaching hospital, Malaysia. From September 2003 to June 2004, a study involving 41 pregnant women with malpresentation at term was undertaken. An external cephalic version protocol was implemented. Data were collected to identify characteristics associated with success or failure of external cephalic version. Maternal and foetal outcome measures included success rate of external cephalic version, maternal and foetal complications, and characteristics associated with success or failure: engagement of presenting part, placental location, direction of version, attempts at version, use of intravenous tocolytic agent, eventual mode of delivery, Apgar scores, birth weights, and maternal satisfaction with the procedure. Data were available for 38 women. External cephalic version was successful in 63% of patients, the majority (75%) of whom achieved a vaginal delivery. Multiparity (odds ratio=34.0; 95% confidence interval, 0.67-1730) and high amniotic fluid index (4.9; 1.3-18.2) were associated with successful external cephalic version. Engagement of the presenting part (odds ratio=0.0001; 95% confidence interval, 0.00001-0.001) and a need to resort to backward somersault (0.02; 0.00001-0.916) were associated with poor success rates. The emergency caesarean section rate for foetal distress directly resulting from external cephalic version was 8%, but there was no perinatal or maternal adverse outcome. The majority (74%) of women were satisfied with external cephalic version. External cephalic version has acceptable success rates. Multiparity, liquor volume, engagement of presenting part, and the need for backward somersault were strong predictors of outcome. External cephalic version is relatively safe, simple to learn and perform, and associated with maternal satisfaction. Modern obstetric units should routinely offer the

  10. Inleiding database-systemen

    NARCIS (Netherlands)

    Pels, H.J.; Lans, van der R.F.; Pels, H.J.; Meersman, R.A.

    1993-01-01

This article introduces the main concepts that play a role in databases, and it gives an overview of the objectives, functions and components of database systems. Although the function of a database is intuitively quite clear, it is nevertheless, in technological terms, a complex

  11. External cephalic version-related risks: a meta-analysis.

    Science.gov (United States)

    Grootscholten, Kim; Kok, Marjolein; Oei, S Guid; Mol, Ben W J; van der Post, Joris A

    2008-11-01

To systematically review the literature on external cephalic version-related complications and to assess whether the outcome of a version attempt is related to complications. In March 2007 we searched MEDLINE, EMBASE, and the Cochrane Central Register of Controlled Trials. Studies reporting on complications from an external cephalic version attempt for singleton breech pregnancies after 36 weeks of pregnancy were selected. We calculated odds ratios (ORs) from studies that reported both on complications and on the position of the fetus immediately after the procedure. We found 84 studies, reporting on 12,955 version attempts, that reported on external cephalic version-related complications. The pooled complication rate was 6.1% (95% confidence interval [CI] 4.7-7.8), 0.24% for serious complications (95% CI 0.17-0.34) and 0.35% for emergency cesarean deliveries (95% CI 0.26-0.47). Complications were not related to external cephalic version outcome (OR 1.2, 95% CI 0.93-1.7). External cephalic version is a safe procedure. Complications are not related to the fetal position after external cephalic version.
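
Odds ratios with 95% confidence intervals of the kind quoted above follow the standard log-odds formula. Below is a generic sketch with hypothetical 2x2 counts, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI from a 2x2 table:
         a = exposed with event,   b = exposed without,
         c = unexposed with event, d = unexposed without.
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) via the Woolf method.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

An OR whose CI spans 1.0, like the pooled OR 1.2 (0.93-1.7) above, is why the authors conclude complications are not related to version outcome.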

  12. Inclusion in the Workplace - Text Version | NREL

    Science.gov (United States)

This is the text version for the Inclusion: Leading by Example video. I'm Martin Keller. I'm the director of the laboratory. Another very important element in inclusion is diversity. Because if we have a

  13. Integration of Biodiversity Databases in Taiwan and Linkage to Global Databases

    Directory of Open Access Journals (Sweden)

    Kwang-Tsao Shao

    2007-03-01

The biodiversity databases in Taiwan were dispersed among various institutions and colleges, with limited amounts of data, until 2001. The Natural Resources and Ecology GIS Database sponsored by the Council of Agriculture, which is part of the National Geographic Information System planned by the Ministry of the Interior, was the most well established biodiversity database in Taiwan. This database, however, mainly collected distribution data of terrestrial animals and plants within the Taiwan area. In 2001, GBIF was formed, and Taiwan joined as an Associate Participant, starting the establishment and integration of animal and plant species databases so that TaiBIF could co-operate with GBIF. The information of the Catalog of Life, specimens, and alien species was integrated using the Darwin Core standard. These metadata standards allowed the biodiversity information of Taiwan to connect with global databases.

  14. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working, as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even if clients are geographically distributed, if data copies are located close to clients. Despite its advantages, replication is not a straightforward technique to apply, and

  15. Update History of This Database - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

PLACE Update History of This Database: 2016/08/22 The contact address is changed. 2014/10/20 The URLs of the database maintenance site and the portal site are changed. 2014/07/17 PLACE English archive site is opened.

  16. Database Publication Practices

    DEFF Research Database (Denmark)

    Bernstein, P.A.; DeWitt, D.; Heuer, A.

    2005-01-01

There has been a growing interest in improving the publication processes for database research papers. This panel reports on recent changes in those processes and presents an initial cut at historical data for the VLDB Journal and ACM Transactions on Database Systems.

  17. HIP2: An online database of human plasma proteins from healthy individuals

    Directory of Open Access Journals (Sweden)

    Shen Changyu

    2008-04-01

Background: With the introduction of increasingly powerful mass spectrometry (MS) techniques for clinical research, several recent large-scale MS proteomics studies have sought to characterize the entire human plasma proteome, with a general objective of identifying thousands of proteins leaked from tissues into the circulating blood. Understanding the basic constituents, diversity, and variability of the human plasma proteome is essential to the development of sensitive molecular diagnosis and treatment-monitoring solutions for future biomedical applications. Biomedical researchers today, however, do not have an integrated online resource in which they can search for plasma proteins collected from different mass spectrometry platforms, experimental protocols, and search software for healthy individuals. The lack of such a resource for comparisons has made it difficult to interpret proteomics profile changes in patients' plasma and to design protein biomarker discovery experiments. Description: To aid future protein biomarker studies of disease and health from human plasma, we developed an online database, HIP2 (Healthy Human Individual's Integrated Plasma Proteome). The current version contains 12,787 protein entries linked to 86,831 peptide entries identified using different MS platforms. Conclusion: This web-based database will be useful to biomedical researchers involved in biomarker discovery research. It has been developed to be the comprehensive collection of healthy human plasma proteins, with protein data captured in a relational database schema built to contain mappings of supporting peptide evidence from several high-quality and high-throughput MS experimental data sets. Users can search for plasma protein/peptide annotations, peptide/protein alignments, and experimental/sample conditions, with options for filter-based retrieval to achieve greater analytical power for discovery and validation.

  18. rasdaman Array Database: current status

    Science.gov (United States)

    Merticariu, George; Toader, Alexandru

    2015-04-01

rasdaman (Raster Data Manager) is a Free Open Source Array Database Management System which provides functionality for storing and processing massive amounts of raster data in the form of multidimensional arrays. The user can access, process and delete the data using SQL. The key features of rasdaman are: flexibility (datasets of any dimensionality can be processed with the help of SQL queries), scalability (rasdaman's distributed architecture enables it to run seamlessly on cloud infrastructures while performance increases with the available computation resources), performance (real-time access, processing, mixing and filtering of arrays of any dimensionality) and reliability (the legacy communication protocol has been replaced with a new one based on cutting-edge technology - Google Protocol Buffers and ZeroMQ). The data the system handles include 1D time series, 2D remote sensing imagery, 3D image time series, 3D geophysical data, and 4D atmospheric and climate data. Most of these datasets cannot be stored as raw arrays alone, because location information is also needed to geoposition the contents correctly on Earth; ISO 19123 defines such georeferenced arrays as coverage data. rasdaman provides coverage data support through the Petascope service. Extensions were added on top of rasdaman in order to provide support for the Geoscience community. The following OGC standards are currently supported: Web Map Service (WMS), Web Coverage Service (WCS), and Web Coverage Processing Service (WCPS). The Web Map Service is an extension which provides zoom and pan navigation over images provided by a map server. Starting with version 9.1, rasdaman supports WMS version 1.3. The Web Coverage Service provides capabilities for downloading multi-dimensional coverage data. Support is also provided for several extensions of this service: Subsetting Extension, Scaling Extension, and, starting with version 9.1, Transaction Extension, which
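
A WCS 2.0 GetCoverage request with subsetting, of the kind Petascope serves, can be composed as a plain key-value-pair URL. The endpoint and coverage name below are hypothetical placeholders, and the parameter set is a minimal sketch of the OGC standard rather than a rasdaman-specific API:

```python
from urllib.parse import urlencode

# Hypothetical endpoint; real deployments expose the Petascope
# servlet at a site-specific URL.
ENDPOINT = "https://example.org/rasdaman/ows"

def wcs_getcoverage_url(coverage_id, subsets, fmt="image/tiff"):
    """Build a WCS 2.0 GetCoverage request with axis subsetting."""
    params = [
        ("service", "WCS"),
        ("version", "2.0.1"),
        ("request", "GetCoverage"),
        ("coverageId", coverage_id),
        ("format", fmt),
    ]
    # One subset key-value pair per trimmed axis: axis(low,high),
    # following the WCS subsetting extension.
    for axis, (low, high) in subsets.items():
        params.append(("subset", f"{axis}({low},{high})"))
    return ENDPOINT + "?" + urlencode(params)
```

Trimming on the server side is the point of the subsetting extension: only the requested slab of a potentially multi-terabyte coverage crosses the network.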

  19. Third millenium ideal gas and condensed phase thermochemical database for combustion (with update from active thermochemical tables).

    Energy Technology Data Exchange (ETDEWEB)

    Burcat, A.; Ruscic, B.; Chemistry; Technion - Israel Inst. of Tech.

    2005-07-29

The thermochemical database of species involved in combustion processes has been available for free use for over 25 years. It was first published in print in 1984, approximately 8 years after it was first assembled, and contained 215 species at the time. This is the 7th printed edition and most likely the last one in print in the present format, which involves substantial manual labor. The database currently contains more than 1300 species, specifically organic molecules and radicals, but also inorganic species connected to combustion and air pollution. Since 1991 this database has been freely available on the internet, at the Technion-IIT ftp server, and it is continuously expanded and corrected. The database is mirrored daily at an official mirror site, and at random at about a dozen unofficial mirror and 'finger' sites. The present edition contains numerous corrections and many recalculations of provisional data by the G3//B3LYP method, a high-accuracy composite ab initio calculation. About 300 species are newly calculated and not yet published elsewhere. In anticipation of the full coupling, which is under development, the database has started incorporating the available (as yet unpublished) values from Active Thermochemical Tables. The electronic version now also contains an XML file of the main database to allow transfer to other formats and ease finding specific information of interest. The database is used by scientists, educators, engineers and students at all levels, dealing primarily with combustion and air pollution, jet engines, rocket propulsion, and fireworks, but also by researchers involved in upper atmosphere kinetics, astrophysics, abrasion metallurgy, etc. This introductory article contains explanations of the database and the means to use it, its sources, ways of calculation, and assessments of the accuracy of the data.

  20. Update History of This Database - DMPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

DMPD Update History of This Database: 2010/03/29 DMPD English archive site ( ...jp/macrophage/ ) is released.

  1. Maternal outcomes of term breech presentation delivery: impact of successful external cephalic version in a nationwide sample of delivery admissions in the United States.

    Science.gov (United States)

    Weiniger, Carolyn F; Lyell, Deirdre J; Tsen, Lawrence C; Butwick, Alexander J; Shachar, BatZion; Callaghan, William M; Creanga, Andreea A; Bateman, Brian T

    2016-07-08

We aimed to define the frequency and predictors of successful external cephalic version in a nationally representative cohort of women with breech presentations, and to compare maternal outcomes associated with successful external cephalic version versus persistent breech presentation. Using the Nationwide Inpatient Sample, a United States healthcare utilization database, we identified delivery admissions between 1998 and 2011 for women who had successful external cephalic version or persistent breech presentation (including unsuccessful or no external cephalic version attempt) at term. Multivariable logistic regression identified patient- and hospital-level factors associated with successful external cephalic version. Maternal outcomes were compared between women who had successful external cephalic version versus persistent breech. Our study cohort comprised 1,079,576 delivery admissions with breech presentation; 56,409 (5.2%) women underwent successful external cephalic version and 1,023,167 (94.8%) women had persistent breech presentation at the time of delivery. The rate of cesarean delivery was lower among women who had successful external cephalic version compared to those with persistent breech (20.2% vs. 94.9%). Women with successful external cephalic version were also less likely to experience several measures of significant maternal morbidity, including endometritis (adjusted Odds Ratio (aOR) = 0.36, 95% Confidence Interval (CI) 0.24-0.52), sepsis (aOR = 0.35, 95% CI 0.24-0.51) and length of stay > 7 days (aOR = 0.53, 95% CI 0.40-0.70), but had a higher risk of chorioamnionitis (aOR = 1.83, 95% CI 1.54-2.17). Overall a low proportion of women with breech presentation undergo successful external cephalic version, and it is associated with a significant reduction in the frequency of cesarean delivery and a number of measures of maternal morbidity. Increased external cephalic version use may be an important approach to mitigate the high rate of

  2. Standard Electronic Format Specification for Tank Characterization Data Loader Version 3.0

    International Nuclear Information System (INIS)

    ADAMS, M.R.

    1999-01-01

The purpose of this document is to describe the standard electronic format for data files that will be sent for entry into the Tank Characterization Database (TCD). Two file types are needed for each data load: Analytical Results and Sample Descriptions. The first record of each file must be a header record. The content of the first 5 fields is ignored; they were used previously to satisfy historic requirements that are no longer applicable. The sixth field of the header record must contain the Standard Electronic Format (SEF) version ID (SEF3.0). The remaining records will be formatted as specified below. Fields within a record will be separated using the "|" symbol. The "|" symbol must not appear anywhere in the file except when used as a delimiter.
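
The header rule above (the sixth "|"-delimited field must carry the SEF version ID) can be checked with a short sketch. The function name and error handling below are illustrative, not part of the specification:

```python
def check_sef_header(first_record, expected="SEF3.0"):
    """Validate the header record of a TCD data file.

    Fields are '|'-delimited; per the spec, the sixth field of the
    header must carry the SEF version ID, and the first five fields
    are ignored.
    """
    fields = first_record.rstrip("\n").split("|")
    if len(fields) < 6:
        raise ValueError("header record has fewer than 6 fields")
    if fields[5] != expected:
        raise ValueError(f"expected {expected}, got {fields[5]!r}")
    return True
```

Because "|" may not appear except as a delimiter, a plain `split("|")` is sufficient; no quoting or escaping rules need to be handled.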

  3. Development of a personalized training system using the Lung Image Database Consortium and Image Database resource Initiative Database.

    Science.gov (United States)

    Lin, Hongli; Wang, Weisheng; Luo, Jiawei; Yang, Xuedong

    2014-12-01

The aim of this study was to develop a personalized training system using the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) database, because collecting, annotating, and marking a large number of appropriate computed tomography (CT) scans, and providing the capability of dynamically selecting suitable training cases based on the performance levels of trainees and the characteristics of cases, are critical for developing an efficient training system. A novel approach is proposed for developing a personalized radiology training system for the interpretation of lung nodules in CT scans using the LIDC/IDRI database. The system provides a Content-Boosted Collaborative Filtering (CBCF) algorithm for predicting the difficulty level of each case for each trainee, so that suitable cases can be selected to meet individual needs, and a diagnostic simulation tool that enables trainees to analyze and diagnose lung nodules with the help of an image processing tool and a nodule retrieval tool. Preliminary evaluation of the system shows that developing a personalized training system for the interpretation of lung nodules is needed and useful for enhancing the professional skills of trainees. The approach of developing personalized training systems using the LIDC/IDRI database is a feasible solution to the challenges of constructing specific training programs in terms of cost and training efficiency. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
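
The general shape of content-boosted collaborative filtering (fill sparse ratings with content-based estimates, then predict by similarity-weighted averaging over other users) can be sketched as follows. This is a generic illustration of the CBCF idea with made-up names and features, not the authors' exact algorithm:

```python
import math

def content_estimate(case_features, weights):
    # Content-based pseudo-rating: a linear model over case features
    # (e.g. hypothetical nodule size, margin sharpness scores), used
    # to fill missing entries before collaborative filtering.
    return sum(w * f for w, f in zip(weights, case_features))

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def predict(target, others, boosted):
    """boosted[t] is trainee t's dense (content-boosted) difficulty
    vector; predict the target trainee's ratings as a similarity-
    weighted mean of the other trainees' vectors."""
    sims = {t: cosine(boosted[target], boosted[t]) for t in others}
    total = sum(abs(s) for s in sims.values()) or 1.0
    n = len(boosted[target])
    return [sum(sims[t] * boosted[t][i] for t in others) / total
            for i in range(n)]
```

The content boost is what makes this workable for new trainees with few rated cases: similarity is computed over dense vectors instead of the sparse raw ratings.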

  4. Update History of This Database - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

KOME Update History of This Database: 2014/10/22 The URL of the whole database ... 2003/07/18 KOME ( http://cdna01.dna.affrc.go.jp/cDNA/ ) is opened.

  5. Update History of This Database - PSCDB | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available PSCDB Update History of This Database. 2016/11/30: PSCDB English archive site is opened. 2011/11/13: PSCDB (http://idp1.force.cs.is.nagoya-u.ac.jp/pscdb/) is opened.

  6. The 2003 edition of geisa: a spectroscopic database system for the second generation vertical sounders radiance simulation

    Science.gov (United States)

    Jacquinet-Husson, N.; Lmd Team

    The GEISA (Gestion et Etude des Informations Spectroscopiques Atmosphériques: Management and Study of Atmospheric Spectroscopic Information) computer-accessible database system, in its former 1997 and 2001 versions, was updated in 2003 (GEISA-03). It has been developed by the ARA (Atmospheric Radiation Analysis) group at LMD (Laboratoire de Météorologie Dynamique, France) since 1974. This early effort implemented the so-called "line-by-line and layer-by-layer" approach to forward radiative transfer modelling. The GEISA 2003 system comprises three databases with their associated management software: a database of spectroscopic parameters required to adequately describe the individual spectral lines belonging to 42 molecules (96 isotopic species) in a spectral range from the microwave to the limit of the visible, where the featured molecules are of interest in studies of the terrestrial as well as the other planetary atmospheres, especially those of the Giant Planets; a database of absorption cross-sections of molecules, such as chlorofluorocarbons, which exhibit unresolvable spectra; and a database of refractive indices of basic atmospheric aerosol components. Illustrations will be given of the GEISA-03 data archiving method, contents, management software and Web access facilities at http://ara.lmd.polytechnique.fr. The performance of instruments like AIRS (Atmospheric Infrared Sounder; http://www-airs.jpl.nasa.gov) in the USA and IASI (Infrared Atmospheric Sounding Interferometer; http://smsc.cnes.fr/IASI/index.htm) in Europe, which have better vertical resolution and accuracy than the presently existing satellite infrared vertical sounders, is directly related to the quality of the spectroscopic parameters of the optically active gases, since these are essential inputs to the forward models used to simulate recorded radiance spectra. For these upcoming atmospheric sounders, the so-called GEISA/IASI sub-database system has been elaborated.

  7. CHIANTI—AN ATOMIC DATABASE FOR EMISSION LINES. XIII. SOFT X-RAY IMPROVEMENTS AND OTHER CHANGES

    International Nuclear Information System (INIS)

    Landi, E.; Young, P. R.; Dere, K. P.; Del Zanna, G.; Mason, H. E.

    2013-01-01

    The CHIANTI spectral code consists of two parts: an atomic database and a suite of computer programs in Python and IDL. Together, they allow the calculation of the optically thin spectrum of astrophysical objects and provide spectroscopic plasma diagnostics for the analysis of astrophysical spectra. The database includes atomic energy levels, wavelengths, radiative transition probabilities, collision excitation rate coefficients, ionization, and recombination rate coefficients, as well as data to calculate free-free, free-bound, and two-photon continuum emission. Version 7.1 has been released, which includes improved data for several ions, recombination rates, and element abundances. In particular, it provides a large expansion of the CHIANTI models for key Fe ions from Fe VIII to Fe XIV to improve the predicted emission in the 50-170 Å wavelength range. All data and programs are freely available at http://www.chiantidatabase.org and in SolarSoft, while the Python interface to CHIANTI can be found at http://chiantipy.sourceforge.net.
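    As a rough illustration of the kind of calculation an atomic database like this enables, the sketch below solves the simplest possible level-population balance: a two-level ion in the coronal approximation, where collisional excitation (rate n_e·C12) is balanced by radiative decay (A21). This is not CHIANTI or ChiantiPy code, and the rate values are purely illustrative.

```python
def two_level_population_ratio(n_e, c_12, a_21):
    """Coronal-approximation population ratio n2/n1 for a two-level ion.

    Balance: n1 * n_e * c_12 = n2 * a_21, hence n2/n1 = n_e * c_12 / a_21.
    n_e: electron density [cm^-3]; c_12: collisional excitation rate
    coefficient [cm^3 s^-1]; a_21: radiative decay rate [s^-1].
    All input values below are illustrative, not CHIANTI data.
    """
    return n_e * c_12 / a_21

# At coronal densities the upper level stays weakly populated, so line
# emissivity scales as n_e^2 -- the basis of density diagnostics.
ratio = two_level_population_ratio(n_e=1e9, c_12=1e-8, a_21=1e2)
```

    Codes like CHIANTI generalize this balance to many levels and processes by solving a full rate matrix built from the database's transition probabilities and collision rates.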

  8. Comparing two versions of the Karolinska Sleepiness Scale (KSS).

    Science.gov (United States)

    Miley, Anna Åkerstedt; Kecklund, Göran; Åkerstedt, Torbjörn

    2016-01-01

    The Karolinska Sleepiness Scale (KSS) is frequently used to study sleepiness in various contexts. However, it exists in two versions: one with labels on every other step (version A), and one with labels on every step (version B) of the 9-point scale. To date, there are no studies examining whether these versions can be used interchangeably. The two versions were compared here in a 24 h wakefulness study of 12 adults. KSS ratings were obtained every hour, alternating versions A and B. Results indicated that the two versions are highly correlated, do not have different response distributions on labeled and unlabeled steps, and that the distributions across all steps have a high level of correspondence (Kappa = 0.73). It was concluded that the two versions are quite similar.
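    The agreement statistic reported above (Kappa = 0.73) can be computed for any pair of rating series with Cohen's kappa. A minimal sketch follows; the study may have used a weighted variant, so this is a generic illustration, and the example ratings are invented.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two paired sets of categorical ratings."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: fraction of exact matches.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement under independence, from marginal frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Invented example: four paired KSS-style ratings, three of which agree.
kappa = cohens_kappa([1, 2, 3, 1], [1, 2, 3, 2])
```

    Kappa corrects the raw agreement rate for the agreement expected by chance alone, which is why it is preferred over simple percent agreement for scale-comparison studies like this one.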

  9. Logical database design principles

    CERN Document Server

    Garmany, John; Clark, Terry

    2005-01-01

    INTRODUCTION TO LOGICAL DATABASE DESIGN: Understanding a Database; Database Architectures; Relational Databases; Creating the Database; System Development Life Cycle (SDLC); Systems Planning: Assessment and Feasibility; System Analysis: Requirements; System Analysis: Requirements Checklist; Models Tracking and Schedules; Design Modeling; Functional Decomposition Diagram; Data Flow Diagrams; Data Dictionary; Logical Structures and Decision Trees; System Design: Logical. SYSTEM DESIGN AND IMPLEMENTATION: The ER Approach; Entities and Entity Types; Attribute Domains; Attributes; Set-Valued Attributes; Weak Entities; Constraint

  10. Artemis and ACT: viewing, annotating and comparing sequences stored in a relational database

    Science.gov (United States)

    Carver, Tim; Berriman, Matthew; Tivey, Adrian; Patel, Chinmay; Böhme, Ulrike; Barrell, Barclay G.; Parkhill, Julian; Rajandream, Marie-Adèle

    2008-01-01

    Motivation: Artemis and Artemis Comparison Tool (ACT) have become mainstream tools for viewing and annotating sequence data, particularly for microbial genomes. Since its first release, Artemis has been continuously developed and supported with additional functionality for editing and analysing sequences based on feedback from an active user community of laboratory biologists and professional annotators. Nevertheless, its utility has been somewhat restricted by its limitation to reading and writing from flat files. Therefore, a new version of Artemis has been developed, which reads from and writes to a relational database schema, and allows users to annotate more complex, often large and fragmented, genome sequences. Results: Artemis and ACT have now been extended to read and write directly to the Generic Model Organism Database (GMOD, http://www.gmod.org) Chado relational database schema. In addition, a Gene Builder tool has been developed to provide structured forms and tables to edit coordinates of gene models and edit functional annotation, based on standard ontologies, controlled vocabularies and free text. Availability: Artemis and ACT are freely available (under a GPL licence) for download (for MacOSX, UNIX and Windows) at the Wellcome Trust Sanger Institute web sites: http://www.sanger.ac.uk/Software/Artemis/ http://www.sanger.ac.uk/Software/ACT/ Contact: artemis@sanger.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:18845581

  11. Specialist Bibliographic Databases.

    Science.gov (United States)

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls.

  12. Specialist Bibliographic Databases

    Science.gov (United States)

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  13. Directory of IAEA databases

    International Nuclear Information System (INIS)

    1992-12-01

    This second edition of the Directory of IAEA Databases has been prepared within the Division of Scientific and Technical Information (NESI). Its main objective is to describe the computerized information sources available to staff members. The directory contains all databases produced at the IAEA, including databases stored on the mainframe, LANs and PCs. All IAEA Division Directors have been requested to register the existence of their databases with NESI. For the second edition, database owners were requested to review the existing entries for their databases and answer four additional questions. These concerned the type of database (e.g. bibliographic, text, statistical, etc.), the category of database (e.g. administrative, nuclear data, etc.), the available documentation and the type of media used for distribution. In the individual entries on the following pages, the answers to the first two questions (type and category) are always listed, but the answers to the second two questions (documentation and media) are listed only when the information has been made available.

  14. The Swedish Family-Cancer Database: Update, Application to Colorectal Cancer and Clinical Relevance

    Directory of Open Access Journals (Sweden)

    Hemminki Kari

    2005-01-01

    Full Text Available Abstract The Swedish Family-Cancer Database has been used for almost 10 years in the study of familial risks at all common sites. In the present paper we describe some main features of version VI of this Database, assembled in 2004. This update included all Swedes born in 1932 and later (offspring) with their biological parents, a total of 10.5 million individuals. Cancer cases were retrieved from the Swedish Cancer Registry from 1958-2002, including over 1.2 million first and multiple primary cancers and in situ tumours. Compared to previous versions, only 6.0% of deceased offspring with a cancer diagnosis lack any parental information. We show one application of the Database in the study of familial risks in colorectal adenocarcinoma, with defined age-group and anatomic site specific analyses. Familial standardized incidence ratios (SIRs) were determined for offspring when parents or siblings were diagnosed with colon or rectal cancer. As a novel finding, it was shown that risks for siblings were higher than those for offspring of affected parents. The excess risk was limited to colon cancer and particularly to right-sided colon cancer. The SIRs for colon cancer in age-matched populations were 2.58 when parents were probands and 3.81 when siblings were probands; for right-sided colon cancer the SIRs were 3.66 and 7.53, respectively. Thus the familial excess (SIR-1.00) was more than two-fold higher for right-sided colon cancer. Colon and rectal cancers appeared to be distinguished between high-penetrant and recessive conditions that only affect the colon, whereas low-penetrant familial effects are shared by the two sites. Epidemiological studies can be used to generate clinical estimates for familial risk, conditioned on numbers of affected family members and their ages of onset. Useful risk estimates have been developed for familial breast and prostate cancers. Reliable risk estimates for other cancers should also be seriously considered for

  15. The Porcelain Crab Transcriptome and PCAD, the Porcelain Crab Microarray and Sequence Database

    Energy Technology Data Exchange (ETDEWEB)

    Tagmount, Abderrahmane; Wang, Mei; Lindquist, Erika; Tanaka, Yoshihiro; Teranishi, Kristen S.; Sunagawa, Shinichi; Wong, Mike; Stillman, Jonathon H.

    2010-01-27

    Background: With the emergence of a completed genome sequence of the freshwater crustacean Daphnia pulex, construction of genomic-scale sequence databases for additional crustacean sequences is important for comparative genomics and annotation. Porcelain crabs, genus Petrolisthes, have been powerful crustacean models for environmental and evolutionary physiology with respect to thermal adaptation and understanding responses of marine organisms to climate change. Here, we present a large-scale EST sequencing and cDNA microarray database project for the porcelain crab Petrolisthes cinctipes. Methodology/Principal Findings: A set of ~30K unique sequences (UniSeqs) representing ~19K clusters were generated from ~98K high-quality ESTs from a set of tissue-specific non-normalized and mixed-tissue normalized cDNA libraries from the porcelain crab Petrolisthes cinctipes. Homology for each UniSeq was assessed using BLAST, InterProScan, GO and KEGG database searches. Approximately 66% of the UniSeqs had homology in at least one of the databases. All EST and UniSeq sequences along with annotation results and coordinated cDNA microarray datasets have been made publicly accessible at the Porcelain Crab Array Database (PCAD), a feature-enriched version of the Stanford and Longhorn Array Databases. Conclusions/Significance: The EST project presented here represents the third largest sequencing effort for any crustacean, and the largest effort for any crab species. Our assembly and clustering results suggest that our porcelain crab EST data set is equally diverse to the much larger EST set generated in the Daphnia pulex genome sequencing project, and thus will be an important resource to the Daphnia research community. Our homology results support the pancrustacea hypothesis and suggest that Malacostraca may be ancestral to Branchiopoda and Hexapoda. Our results also suggest that our cDNA microarrays cover as much of the transcriptome as can reasonably be captured in

  16. Update History of This Database - SAHG | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available SAHG Update History of This Database. 2016/05/09: SAHG English archive site is opened. 2009/10: SAHG (http://bird.cbrc.jp/sahg) is opened.

  17. Update History of This Database - RMOS | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available RMOS Update History of This Database. 2015/10/27: RMOS English archive site is opened. ...12: RMOS (http://cdna01.dna.affrc.go.jp/RMOS/) is opened.

  18. The World Bacterial Biogeography and Biodiversity through Databases: A Case Study of NCBI Nucleotide Database and GBIF Database

    Directory of Open Access Journals (Sweden)

    Okba Selama

    2013-01-01

    Full Text Available Databases are an essential tool and resource within the field of bioinformatics. The primary aim of this study was to generate an overview of global bacterial biodiversity and biogeography using available data from the two largest public online databases, NCBI Nucleotide and GBIF. The secondary aim was to highlight the contribution each geographic area makes to each database. The basis for the data analysis of this study was the metadata provided by both databases, mainly the taxonomy and the geographic area of origin of isolation of each microorganism (record). These were obtained directly from GBIF through the online interface, while E-utilities and Python were used, in combination with programmatic web service access, to obtain data from the NCBI Nucleotide Database. Results indicate that the American continent, and more specifically the USA, is the top contributor, while Africa and Antarctica are less well represented. This highlights the imbalance of exploration within these areas rather than any reduction in biodiversity. This study describes a novel approach to generating global-scale patterns of bacterial biodiversity and biogeography and indicates that the Proteobacteria are the most abundant and widely distributed phylum within both databases.
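    The programmatic NCBI access described above (E-utilities with Python) boils down to issuing ESearch requests. The sketch below builds such a request URL using the real E-utilities endpoint and its standard parameters; the example query term is hypothetical and no network call is made.

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(db, term, retmax=100, retstart=0):
    """Build an NCBI E-utilities ESearch URL for programmatic record retrieval.

    retstart/retmax allow paging through large result sets, which is how
    database-wide metadata surveys like the one above are assembled.
    """
    params = {"db": db, "term": term,
              "retmax": retmax, "retstart": retstart, "retmode": "json"}
    return f"{EUTILS}/esearch.fcgi?{urlencode(params)}"

# Example: search nucleotide records for a bacterial phylum.
url = esearch_url("nucleotide", "Proteobacteria[Organism]")
```

    Fetching the URL returns a JSON result whose ID list can then be passed to EFetch or ESummary for the per-record metadata.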

  19. An Interoperable Cartographic Database

    OpenAIRE

    Slobodanka Ključanin; Zdravko Galić

    2007-01-01

    The concept of producing a prototype of interoperable cartographic database is explored in this paper, including the possibilities of integration of different geospatial data into the database management system and their visualization on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relation database, spatial analysis, definition and visualization of the database content in the form of a map on t...

  20. Software listing: CHEMTOX database

    International Nuclear Information System (INIS)

    Moskowitz, P.D.

    1993-01-01

    Initially launched in 1983, the CHEMTOX Database was among the first microcomputer databases containing hazardous chemical information. The database is used in many industries and government agencies in more than 17 countries. Updated quarterly, the CHEMTOX Database provides detailed environmental and safety information on 7500-plus hazardous substances covered by dozens of regulatory and advisory sources. This brief listing describes the method of accessing data and provides ordering information for those wishing to obtain the CHEMTOX Database

  1. A new version of the European tsunami catalogue: updating and revision

    Directory of Open Access Journals (Sweden)

    S. Tinti

    2001-01-01

    Full Text Available A new version of the European catalogue of tsunamis is presented here. It differs in some important aspects from the latest release of the catalogue, which was produced in 1998 and is known as the GITEC tsunami catalogue. In the first place, it is a database built on the Visual FoxPro 6.0 DBMS that can be used and maintained under the PC operating systems currently available, whereas the GITEC catalogue was compatible only with Windows 95 and older PC platforms. In the second place, it is enriched by new facilities and a new type of data, such as a database of pictures that can be accessed easily from the main screen of the catalogue. Thirdly, it has been updated to include newly published references. A minute and painstaking search for new data has been undertaken to re-evaluate cases that were not included in the GITEC catalogue, though they were mentioned in previous catalogues; the exclusion was motivated by a lack of data. This last work has focused so far on Italian cases of the last two centuries. The result is that at least two events have been found which deserve inclusion in the new catalogue: one occurred in 1809 in the Gulf of La Spezia, and the other occurred in 1940 in the Gulf of Palermo. Two further events are presently under investigation.

  2. Database Dictionary for Ethiopian National Ground-Water DAtabase (ENGDA) Data Fields

    Science.gov (United States)

    Kuniansky, Eve L.; Litke, David W.; Tucci, Patrick

    2007-01-01

    Introduction This document describes the data fields that are used for both field forms and the Ethiopian National Ground-water Database (ENGDA) tables associated with information stored about production wells, springs, test holes, test wells, and water level or water-quality observation wells. Several different words are used in this database dictionary and in the ENGDA database to describe a narrow shaft constructed in the ground. The most general term is borehole, which is applicable to any type of hole. A well is a borehole specifically constructed to extract water from the ground; however, for this data dictionary and for the ENGDA database, the words well and borehole are used interchangeably. A production well is defined as any well used for water supply and includes hand-dug wells, small-diameter bored wells equipped with hand pumps, or large-diameter bored wells equipped with large-capacity motorized pumps. Test holes are borings made to collect information about the subsurface with continuous core or non-continuous core and/or where geophysical logs are collected. Test holes are not converted into wells. A test well is a well constructed for hydraulic testing of an aquifer in order to plan a larger ground-water production system. A water-level or water-quality observation well is a well that is used to collect information about an aquifer and not used for water supply. A spring is any naturally flowing, local, ground-water discharge site. The database dictionary is designed to help define all fields on both field data collection forms (provided in attachment 2 of this report) and for the ENGDA software screen entry forms (described in Litke, 2007). The data entered into each screen entry field are stored in relational database tables within the computer database. The organization of the database dictionary is designed based on field data collection and the field forms, because this is what the majority of people will use. After each field, however, the

  3. Update History of This Database - SSBD | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available SSBD Update History of This Database. 2016/07/25: SSBD English archive site is opened. 2013/09/03: SSBD (http://ssbd.qbic.riken.jp/) is opened.

  4. THPdb: Database of FDA-approved peptide and protein therapeutics.

    Directory of Open Access Journals (Sweden)

    Salman Sadullah Usmani

    Full Text Available THPdb (http://crdd.osdd.net/raghava/thpdb/) is a manually curated repository of Food and Drug Administration (FDA)-approved therapeutic peptides and proteins. The information in THPdb has been compiled from 985 research publications, 70 patents and other resources like DrugBank. The current version of the database holds a total of 852 entries, providing comprehensive information on 239 US-FDA approved therapeutic peptides and proteins and their 380 drug variants. The information on each peptide and protein includes their sequence, chemical properties, composition, disease area, mode of activity, physical appearance, category or pharmacological class, pharmacodynamics, route of administration, toxicity, target of activity, etc. In addition, we have annotated the structure of most of the proteins and peptides. A number of user-friendly tools have been integrated to facilitate easy browsing and data analysis. To assist the scientific community, a web interface and a mobile app have also been developed.

  5. The CTBTO Link to the database of the International Seismological Centre (ISC)

    Science.gov (United States)

    Bondar, I.; Storchak, D. A.; Dando, B.; Harris, J.; Di Giacomo, D.

    2011-12-01

    The CTBTO Link to the database of the International Seismological Centre (ISC) is a project to provide access to seismological data sets maintained by the ISC using specially designed interactive tools. The Link is open to National Data Centres and to the CTBTO. By means of graphical interfaces and database queries tailored to the needs of the monitoring community, the users are given access to a multitude of products. These include the ISC and ISS bulletins, covering the seismicity of the Earth since 1904; nuclear and chemical explosions; the EHB bulletin; the IASPEI Reference Event list (ground truth database); and the IDC Reviewed Event Bulletin. The searches are divided into three main categories: The Area Based Search (a spatio-temporal search based on the ISC Bulletin), the REB search (a spatio-temporal search based on specific events in the REB) and the IMS Station Based Search (a search for historical patterns in the reports of seismic stations close to a particular IMS seismic station). The outputs are HTML based web-pages with a simplified version of the ISC Bulletin showing the most relevant parameters with access to ISC, GT, EHB and REB Bulletins in IMS1.0 format for single or multiple events. The CTBTO Link offers a tool to view REB events in context within the historical seismicity, look at observations reported by non-IMS networks, and investigate station histories and residual patterns for stations registered in the International Seismographic Station Registry.

  6. An Interoperable Cartographic Database

    Directory of Open Access Journals (Sweden)

    Slobodanka Ključanin

    2007-05-01

    Full Text Available The concept of producing a prototype of interoperable cartographic database is explored in this paper, including the possibilities of integration of different geospatial data into the database management system and their visualization on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relation database, spatial analysis, definition and visualization of the database content in the form of a map on the Internet. 

  7. STOPGAP: a database for systematic target opportunity assessment by genetic association predictions.

    Science.gov (United States)

    Shen, Judong; Song, Kijoung; Slater, Andrew J; Ferrero, Enrico; Nelson, Matthew R

    2017-09-01

    We developed the STOPGAP (Systematic Target OPportunity assessment by Genetic Association Predictions) database, an extensive catalog of human genetic associations mapped to effector gene candidates. STOPGAP draws on a variety of publicly available GWAS associations, linkage disequilibrium (LD) measures, functional genomic and variant annotation sources. Algorithms were developed to merge the association data, partition associations into non-overlapping LD clusters, map variants to genes and produce a variant-to-gene score used to rank the relative confidence among potential effector genes. This database can be used for a multitude of investigations into the genes and genetic mechanisms underlying inter-individual variation in human traits, as well as supporting drug discovery applications. Shell, R, Perl and Python scripts and STOPGAP R data files (version 2.5.1 at publication) are available at https://github.com/StatGenPRD/STOPGAP . Some of the most useful STOPGAP fields can be queried through an R Shiny web application at http://stopgapwebapp.com . matthew.r.nelson@gsk.com. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
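    The variant-to-gene ranking step described above can be illustrated with a toy version: collect (variant, gene, score) evidence within one LD cluster and rank candidate effector genes by their best score. This is a simplified stand-in for STOPGAP's actual scoring; the variant IDs, gene names and scores are invented.

```python
def rank_effector_genes(variant_gene_scores):
    """Rank candidate effector genes for one LD cluster by their best
    variant-to-gene score (a simplified stand-in for STOPGAP's scoring)."""
    best = {}
    for variant, gene, score in variant_gene_scores:
        if score > best.get(gene, float("-inf")):
            best[gene] = score
    # Highest-confidence effector gene candidates first.
    return sorted(best.items(), key=lambda kv: -kv[1])

# Invented evidence lines for one LD cluster.
evidence = [
    ("rs1", "GENE_A", 0.9),  # e.g. a coding variant
    ("rs1", "GENE_B", 0.3),  # e.g. an eQTL link for a variant in LD
    ("rs2", "GENE_B", 0.6),
]
ranking = rank_effector_genes(evidence)
```

    The real database additionally merges association sources, partitions variants into non-overlapping LD clusters, and weights multiple functional annotation classes before producing the ranking.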

  8. A web-based data visualization tool for the MIMIC-II database.

    Science.gov (United States)

    Lee, Joon; Ribey, Evan; Wallace, James R

    2016-02-04

    Although MIMIC-II, a public intensive care database, has been recognized as an invaluable resource for many medical researchers worldwide, becoming a proficient MIMIC-II researcher requires knowledge of SQL programming and an understanding of the MIMIC-II database schema. These are challenging requirements especially for health researchers and clinicians who may have limited computer proficiency. In order to overcome this challenge, our objective was to create an interactive, web-based MIMIC-II data visualization tool that first-time MIMIC-II users can easily use to explore the database. The tool offers two main features: Explore and Compare. The Explore feature enables the user to select a patient cohort within MIMIC-II and visualize the distributions of various administrative, demographic, and clinical variables within the selected cohort. The Compare feature enables the user to select two patient cohorts and visually compare them with respect to a variety of variables. The tool is also helpful to experienced MIMIC-II researchers who can use it to substantially accelerate the cumbersome and time-consuming steps of writing SQL queries and manually visualizing extracted data. Any interested researcher can use the MIMIC-II data visualization tool for free to quickly and conveniently conduct a preliminary investigation on MIMIC-II with a few mouse clicks. Researchers can also use the tool to learn the characteristics of the MIMIC-II patients. Since it is still impossible to conduct multivariable regression inside the tool, future work includes adding analytics capabilities. Also, the next version of the tool will aim to utilize MIMIC-III which contains more data.
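    The Compare feature described above amounts to running the same aggregate SQL query over two cohort predicates. The sketch below shows that idea with an in-memory SQLite table; the schema, column names and values are illustrative stand-ins, not the real MIMIC-II schema.

```python
import sqlite3

# Toy stand-in for a MIMIC-style admissions table (schema is illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE admissions (subject_id INT, age REAL, icu_los REAL, sex TEXT)")
conn.executemany(
    "INSERT INTO admissions VALUES (?, ?, ?, ?)",
    [(1, 70, 2.1, "F"), (2, 55, 5.0, "M"), (3, 80, 1.4, "F"), (4, 62, 3.3, "M")],
)

def cohort_summary(where_clause):
    """The 'Compare' idea: summarize one cohort selected by a SQL predicate."""
    row = conn.execute(
        f"SELECT COUNT(*), AVG(age), AVG(icu_los) FROM admissions WHERE {where_clause}"
    ).fetchone()
    return {"n": row[0], "mean_age": row[1], "mean_icu_los": row[2]}

# Two cohorts, one summary query each -- the visualization tool plots these side by side.
older = cohort_summary("age >= 65")
younger = cohort_summary("age < 65")
```

    A tool like the one described hides exactly this kind of query-building behind its interface, which is why it spares first-time users from writing SQL by hand.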

  9. FORM version 4.0

    Science.gov (United States)

    Kuipers, J.; Ueda, T.; Vermaseren, J. A. M.; Vollinga, J.

    2013-05-01

    We present version 4.0 of the symbolic manipulation system FORM. The most important new features are the manipulation of rational polynomials and the factorization of expressions. Many other new functions and commands have also been added; some of them are very general, while others are designed for building specific high-level packages, such as one for Gröbner bases. Also new is the checkpoint facility, which allows periodic backups during long calculations. Finally, FORM 4.0 has become available as open source under the GNU General Public License version 3. Program summary: Program title: FORM. Catalogue identifier: AEOT_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOT_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License, version 3. No. of lines in distributed program, including test data, etc.: 151 599. No. of bytes in distributed program, including test data, etc.: 1 078 748. Distribution format: tar.gz. Programming language: the FORM language; FORM itself is programmed in a mixture of C and C++. Computer: all. Operating system: UNIX, Linux, Mac OS, Windows. Classification: 5. Nature of problem: FORM defines a symbolic manipulation language in which the emphasis lies on fast processing of very large formulas. It has been used successfully for many calculations in quantum field theory and mathematics. In the speed and size of formulas that can be handled, it typically outperforms other systems by an order of magnitude. Special features of this version: version 4.0 contains many new features, most importantly factorization and rational arithmetic. The program has also become open source under the GPL. Solution method: see "Nature of problem" above. Additional comments: NOTE: the code in CPC is for reference; you are encouraged to download the most recent sources from www.nikhef.nl/form/formcvs.php because of frequent bug fixes.
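    FORM's two headline features, factorization and rational-polynomial arithmetic, can be illustrated in spirit with an analogous sketch using Python's sympy. This is not FORM code, and the expressions are invented for illustration only:

```python
# Illustrative analogue (not FORM itself): polynomial factorization and
# rational-function arithmetic, the two headline features of FORM 4.0,
# demonstrated with sympy.
from sympy import symbols, factor, cancel, together

x, y = symbols("x y")

# Factorization of an expression
factored = factor(x**4 - y**4)     # (x - y)*(x + y)*(x**2 + y**2)

# Rational arithmetic: combine and simplify rational polynomials
rat = 1/(x - y) + 1/(x + y)
combined = cancel(together(rat))   # 2*x/(x**2 - y**2)

print(factored)
print(combined)
```

    In FORM itself the corresponding operations are expressed through the new `Factorize` statement and the `PolyRatFun` declaration; sympy is used here only because it is widely available.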

  10. Extending Database Integration Technology

    National Research Council Canada - National Science Library

    Buneman, Peter

    1999-01-01

    Formal approaches to the semantics of databases and database languages can have immediate and practical consequences in extending database integration technologies to include a vastly greater range...

  11. The new Cloud Dynamics and Radiation Database algorithms for AMSR2 and GMI: exploitation of the GPM observational database for operational applications

    Science.gov (United States)

    Cinzia Marra, Anna; Casella, Daniele; Martins Costa do Amaral, Lia; Sanò, Paolo; Dietrich, Stefano; Panegrossi, Giulia

    2017-04-01

    Two new precipitation retrieval algorithms, for the Advanced Microwave Scanning Radiometer 2 (AMSR2) and for the GPM Microwave Imager (GMI), are presented. The algorithms are based on the Cloud Dynamics and Radiation Database (CDRD) Bayesian approach and represent an evolution of the previous version applied to Special Sensor Microwave Imager/Sounder (SSMIS) observations, used operationally within the EUMETSAT Satellite Application Facility on Support to Operational Hydrology and Water Management (H-SAF). The main innovation of these new products is the use of an extended, entirely empirical database derived from coincident radar and radiometer observations of the NASA/JAXA Global Precipitation Measurement Core Observatory (GPM-CO) (Dual-frequency Precipitation Radar, DPR, and GMI). The other new aspects are: 1) a new rain/no-rain screening approach; 2) the use of Empirical Orthogonal Functions (EOF) and Canonical Correlation Analysis (CCA) both in the screening approach and in the Bayesian algorithm; 3) the use of new meteorological and environmental ancillary variables to categorize the database and mitigate the non-uniqueness of the retrieval solution; 4) the development and implementation of specific modules to minimize computational time. The CDRD algorithms for AMSR2 and GMI are able to handle the extremely large observational database available from the GPM-CO and provide rainfall estimates with minimum latency, making them suitable for near-real-time hydrological and operational applications. For CDRD for AMSR2, a verification study over Italy using ground-based radar data, and over the MSG full-disk area using coincident GPM-CO/AMSR2 observations, has been carried out. Results show remarkable AMSR2 capabilities for rainfall rate (RR) retrieval over ocean (for RR > 0.25 mm/h) and good capabilities over vegetated land (for RR > 1 mm/h), while for coastal areas the results are less certain. Comparisons with NASA GPM products, and with
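    The core of a database Bayesian retrieval of this kind can be sketched as a likelihood-weighted average over database entries. The sketch below is hypothetical: the channel count, the Gaussian error model with a single scalar sigma, and all data are invented for illustration and do not reproduce the operational CDRD algorithm:

```python
# Hypothetical sketch of a Bayesian database retrieval in the spirit of
# CDRD: the retrieved rain rate is the database average weighted by a
# Gaussian likelihood of the observed brightness temperatures.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic database: N entries of (brightness temperatures, rain rate)
N, n_channels = 5000, 4
db_tb = rng.normal(250.0, 15.0, size=(N, n_channels))   # Kelvin
db_rr = rng.gamma(2.0, 1.5, size=N)                     # mm/h

def bayesian_retrieve(obs_tb, db_tb, db_rr, sigma=5.0):
    """Posterior-mean rain rate under a Gaussian observation-error model."""
    d2 = np.sum((db_tb - obs_tb) ** 2, axis=1)  # squared distance per entry
    w = np.exp(-0.5 * d2 / sigma**2)            # unnormalized likelihood
    return np.sum(w * db_rr) / np.sum(w)

obs = db_tb[0] + rng.normal(0.0, 2.0, size=n_channels)  # noisy observation
retrieved = bayesian_retrieve(obs, db_tb, db_rr)
print(f"retrieved rain rate: {retrieved:.2f} mm/h")
```

    The EOF/CCA and ancillary-variable steps described in the abstract would act before this weighting, reducing the channel dimensionality and restricting the database subset over which the average is taken.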

  12. CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (MACINTOSH VERSION)

    Science.gov (United States)

    Culbert, C.

    1994-01-01

    The C Language Integrated Production System, CLIPS, is a shell for developing expert systems. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. The primary design goals for CLIPS are portability, efficiency, and functionality; for these reasons, the program is written in C. CLIPS meets or outperforms most micro- and minicomputer-based artificial intelligence tools. CLIPS is a forward-chaining rule-based language. The program contains an inference engine and a language syntax that provide a framework for the construction of an expert system. It also includes tools for debugging an application. CLIPS is based on the Rete algorithm, which enables very efficient pattern matching. The collection of conditions, and the actions to be taken if the conditions are met, is constructed into a rule network. As facts are asserted either prior to or during a session, CLIPS pattern-matches them against this rule network. Wildcards and variables are supported for both single and multiple fields. CLIPS syntax allows the inclusion of externally defined functions (functions written in a language other than CLIPS). CLIPS itself can be embedded in a program such that the expert system is available as a simple subroutine call. Advanced features found in CLIPS version 4.3 include an integrated microEMACS editor, the ability to generate C source code from a CLIPS rule base to produce a dedicated executable, binary load and save capabilities for CLIPS rule bases, and the utility program CRSV (Cross-Reference, Style, and Verification), designed to facilitate the development and maintenance of large rule bases. Five machine versions are available. Each machine version includes the source and the executable for that machine. The UNIX version includes the source and binaries for IBM RS/6000, Sun3 series, and Sun4 series computers. The UNIX, DEC VAX, and DEC RISC Workstation versions are line oriented. 
The PC version and the Macintosh
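    The forward-chaining idea behind CLIPS can be shown with a deliberately naive Python loop. This is only a toy: the fact and rule names are invented, and real CLIPS avoids this rescan-everything loop by compiling conditions into a shared Rete match network:

```python
# Toy forward-chaining inference loop, in the style that CLIPS implements
# far more efficiently with the Rete algorithm. Rules are (conditions,
# conclusion) pairs; a rule fires when all its conditions are known facts.
def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions are all satisfied."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)   # assert the newly derived fact
                changed = True
    return facts

rules = [
    ({"temperature-high", "pressure-low"}, "storm-likely"),
    ({"storm-likely"}, "issue-warning"),
]
result = forward_chain({"temperature-high", "pressure-low"}, rules)
print(sorted(result))
```

    Note how the second rule fires only because the first one asserted a new fact, which is the chaining the abstract describes.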

  13. Database specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB)

    Energy Technology Data Exchange (ETDEWEB)

    Faby, E.Z.; Fluker, J.; Hancock, B.R.; Grubb, J.W.; Russell, D.L. [Univ. of Tennessee, Knoxville, TN (United States); Loftis, J.P.; Shipe, P.C.; Truett, L.F. [Oak Ridge National Lab., TN (United States)

    1994-03-01

    This Database Specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB) describes the database organization and storage allocation, provides the detailed data model of the logical and physical designs, and provides information for the construction of parts of the database such as tables, data elements, and associated dictionaries and diagrams.
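    The kind of physical design such a database specification describes, tables, data elements, and associated dictionaries, can be sketched with a small relational schema. All table and column names below are invented for illustration and are not taken from the actual ICDB specification:

```python
# Hypothetical sketch of a physical database design of the kind a database
# specification documents: tables, data elements, and a data dictionary.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE port (
    port_code   TEXT PRIMARY KEY,   -- data element: worldwide port code
    port_name   TEXT NOT NULL
);
CREATE TABLE cargo (
    cargo_id    INTEGER PRIMARY KEY,
    port_code   TEXT REFERENCES port(port_code),
    weight_kg   REAL
);
-- A minimal data dictionary, mirroring the specification's dictionaries
CREATE TABLE data_dictionary (
    element     TEXT PRIMARY KEY,
    definition  TEXT
);
""")
conn.execute("INSERT INTO port VALUES ('USNYC', 'New York')")
conn.execute("INSERT INTO cargo VALUES (1, 'USNYC', 1250.0)")
row = conn.execute(
    "SELECT p.port_name, c.weight_kg FROM cargo c JOIN port p USING (port_code)"
).fetchone()
print(row)
```

    The specification's "detailed data model of the logical and physical designs" would pin down exactly such tables, their data elements, and the constraints between them.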

  14. Earth in Space: A CD-ROM Version for Pre-College Teachers

    Science.gov (United States)

    Pedigo, P.

    2003-12-01

    Earth in Space, a magazine about the Earth and space sciences for pre-college science teachers, was published by AGU between 1987 and 2001 (9 issues each year). The goal of Earth in Space was to make research at the frontiers of the geosciences accessible to teachers and students and engage them in thinking about scientific careers. Each issue contained two or three recent research articles, rewritten for a high school level audience from the original version published in peer-reviewed AGU journals, which were supplemented with short news items and biographic information about the authors. As part of a 2003 summer internship with AGU, sponsored by the AGU Committee on Education and Human Resources (CEHR) and the American Institute of Physics, this collection of Earth in Space magazines was converted into an easily accessible electronic resource for K-12 teachers and students. Every issue was scanned into a PDF file. The entire collection of articles was cataloged in a database indexed to key topic terms (e.g., volcanoes, global climate change, space weather). A front-page was designed in order to facilitate rapid access to articles concerning specific topics within the Earth and space sciences of particular interest to high school students. A compact CD-ROM version of this resource will be distributed to science teachers at future meetings of the National Science Teachers Association and will be made available through AGU's Outreach and Research Support program.

  15. Specialist Bibliographic Databases

    OpenAIRE

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A.; Trukhachev, Vladimir I.; Kostyukova, Elena I.; Gerasimov, Alexey N.; Kitas, George D.

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and d...

  16. Evaluated and estimated solubility of some elements for performance assessment of geological disposal of high-level radioactive waste using updated version of thermodynamic database

    International Nuclear Information System (INIS)

    Kitamura, Akira; Doi, Reisuke; Yoshida, Yasushi

    2011-01-01

    The Japan Atomic Energy Agency (JAEA) established a thermodynamic database (JAEA-TDB) for the performance assessment of geological disposal of high-level radioactive waste (HLW) and TRU waste. Twenty-five elements important for the performance assessment of geological disposal were selected for the database. JAEA-TDB enhances the reliability of solubility evaluation and estimation by selecting the latest and most reliable thermodynamic data available at present. We evaluated and estimated the solubility of the 25 elements in the simulated porewaters established in the 'Second Progress Report for Safety Assessment of Geological Disposal of HLW in Japan' using the JAEA-TDB and compared the results with those obtained using the previous thermodynamic database (JNC-TDB). Most of the evaluated and estimated solubility values did not change drastically, but for some elements the solubility and the speciation of the dominant aqueous species differed between the JAEA-TDB and the JNC-TDB. We discussed how to provide reliable solubility values for the performance assessment. (author)
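    How a thermodynamic database feeds a solubility estimate can be sketched for the simplest possible case. The reaction, the log K value, and the assumption of a single dominant aqueous species below are all hypothetical; real TDB-based assessments sum the contributions of many aqueous complexes and account for ionic-strength corrections:

```python
# Hedged sketch of a solubility evaluation from thermodynamic data.
# Hypothetical dissolution reaction of a trivalent-metal hydroxide:
#   M(OH)3(s) + 3 H+  =  M3+ + 3 H2O,   log K = 8.0 (invented value)
LOG_K = 8.0

def solubility_molal(pH, log_k=LOG_K):
    """[M3+] in mol/kg, assuming ideal behavior and one dominant species.

    From the mass-action law: log[M3+] = log K + 3 * log[H+] = log K - 3*pH.
    """
    return 10.0 ** (log_k - 3.0 * pH)

for pH in (5.0, 7.0, 9.0):
    print(f"pH {pH}: [M3+] = {solubility_molal(pH):.3e} mol/kg")
```

    The three-orders-of-magnitude drop per pH unit shows why the choice of selected log K values in a TDB, the point of the comparison between JAEA-TDB and JNC-TDB, can shift calculated solubilities so strongly.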

  17. NOAA Climate Data Record (CDR) of AVHRR Daily and Monthly Aerosol Optical Thickness over Global Oceans, Version 2.0 (Version Superseded)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Version 2 of the dataset has been superseded by a newer version. Users should not use version 2 except in rare cases (e.g., when reproducing previous studies that...

  18. NOAA Climate Data Record (CDR) of AVHRR Daily and Monthly Aerosol Optical Thickness over Global Oceans, Version 1.0 (Version Superseded)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Version 1 of the dataset has been superseded by a newer version. Users should not use version 1 except in rare cases (e.g., when reproducing previous studies that...

  19. Nuclear power economic database

    International Nuclear Information System (INIS)

    Ding Xiaoming; Li Lin; Zhao Shiping

    1996-01-01

    The nuclear power economic database (NPEDB), based on ORACLE V6.0, consists of three parts: an economic database of nuclear power stations, an economic database of the nuclear fuel cycle, and an economic database of nuclear power planning and the nuclear environment. The economic database of nuclear power stations includes data on general economics, technology, capital cost, and benefit. The economic database of the nuclear fuel cycle includes technical data and nuclear fuel prices. The economic database of nuclear power planning and the nuclear environment includes data on energy history, forecasts, energy balance, electric power, and energy facilities.

  20. Keyword Search in Databases

    CERN Document Server

    Yu, Jeffrey Xu; Chang, Lijun

    2009-01-01

    It has become highly desirable to provide users with flexible ways to query and search information in databases as simply as with a Google-style keyword search. This book surveys recent developments in keyword search over databases and focuses on finding structural information among objects in a database using a set of keywords. The structural information to be returned can be either trees or subgraphs representing how the objects that contain the required keywords are interconnected in a relational or XML database. Structural keyword search is completely different from
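    One common heuristic for structural keyword search can be sketched as a graph problem: treat database objects as nodes, foreign-key links as edges, and look for a root object minimizing the total distance to one match per keyword. The graph, node names, and keyword assignments below are a toy, not an algorithm from this book:

```python
# Minimal sketch of structural keyword search over a "tuple graph":
# pick the root node minimizing the summed hop distance to the nearest
# object containing each keyword. Real systems rank and enumerate many
# candidate answer trees; this finds only one root score.
from collections import deque

def bfs_distances(graph, source):
    """Hop distances from source in an undirected adjacency-list graph."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def best_root(graph, keyword_nodes):
    """Root minimizing summed distance to the nearest match of each keyword."""
    best, best_score = None, float("inf")
    for root in graph:
        dist = bfs_distances(graph, root)
        score = 0
        for nodes in keyword_nodes.values():
            reachable = [dist[n] for n in nodes if n in dist]
            if not reachable:
                break                # some keyword unreachable from this root
            score += min(reachable)
        else:
            if score < best_score:
                best, best_score = root, score
    return best, best_score

# Toy tuple graph: edges follow foreign-key links between rows
graph = {
    "author1": {"paper1"},
    "paper1": {"author1", "conf1"},
    "conf1": {"paper1", "paper2"},
    "paper2": {"conf1", "author2"},
    "author2": {"paper2"},
}
result = best_root(graph, {"smith": {"author1"}, "vldb": {"conf1"}})
print(result)
```

    The answer tree itself would then be read off the BFS parent pointers from the chosen root to each keyword match, which is how the interconnection structure mentioned in the abstract is materialized.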